ESP8266 EEPROM Address Overflows

So I’ve finally been putting some time into using the ESP8266 chips I bought (several kinds, and many of them) over the past month. I had some serious trouble getting them to work initially, due to a problem I’ll be explaining in depth in another blog post. Hint: the problem wasn’t what I thought it was (or what most people think it is, for that matter).

This post is about what happens when you write a program larger than 256kB, designed for the 1024kB version of the ESP-01 module, to a 512kB version of the ESP-01 module.

This is what happens:

 ets Jan  8 2013,rst cause:1, boot mode:(3,6)

load 0x4010f000, len 1264, room 16 
tail 0
chksum 0x42
csum 0x42
~ld
"@ªrjrA(!‹SËq‹Ðšv›X%.ù+Ðyêeh©."8át.ÍɪNøêI*A¬1‚-...)DЪ.B!ûþA¬!.þNé*E"HHIîA.T[MºöA,T[ͯQá"@ªRjRA(!‹SËu‹ÐšÖ›X%(ý«ÐYêeH).".át.íͪNøêÉ*E¬1‚-.1.©ÄЪ.Â!ûþA¬!.þNé*A"HhiîA.T[MºöA,T[ͯQá

This is what is supposed to happen:

.
 ets Jan  8 2013,rst cause:1, boot mode:(3,7)

load 0x4010f000, len 1264, room 16 
tail 0
chksum 0x42
csum 0x42
~ld
Reset reason: REASON_DEFAULT_RST
Normal, first power-on bootup.
Flash real   id: 001340C8
Flash real size: 524288

Flash IDE  size: 524288
Flash IDE speed: 40000000
Flash IDE  mode: QIO
Flash Chip configuration ok.

Mounting SPIFFS. (Attempt: 00)
Configuration file exists.
Config file size: 118
Created buffer.
Read the config file.
Created DynamicJsonBuffer.
JSON parsed correctly.
gps
1351824120
48.76
2.30

The problem, as far as I can understand it, is that the EEPROM chip is programmed incorrectly. I suspect the addresses are wrapping around somehow, or the SPIFFS area is marked as being too large. In the 1024kB memory map, the 256kB SPIFFS image gets written starting at 0xbb000, which is the 748kB boundary. But if you try to burn this same image to a 512kB EEPROM chip, I think the addresses wrap around and overwrite the code you are trying to run, which starts at 0x00000.
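A quick back-of-the-envelope check (assuming a plain modulo wraparound, which is my guess, not something I’ve verified):

0xbb000 mod 0x80000 = 0x3b000 = 236kB

So on a 512kB (0x80000) chip, the wrapped SPIFFS write would start landing at the 236kB mark, which is exactly where the tail end of a ~240kB+ sketch lives. That lines up with things breaking once my code size crossed the 256kB boundary.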

Either way, garbage input = garbage output.

Worse, the failure behaves inconsistently, and for novice programmers it may not be apparent that the cause is hardware-based, not software-based. Keep in mind, as well, that a minimal firmware pulling in all of the Arduino code already starts at nearly 240kB of code. The only reason I thought harder about EEPROM size as a potential source of problems was that adding the ArduinoJson library pushed the final code size over the 256kB boundary, which is when the garbage output and chip hangs started.

One other note: there’s an example called CheckFlashConfig in the esp8266 library examples for Arduino. I highly recommend using it to check for a flash size mismatch at runtime. ESP8266 modules don’t have consistent specs, so two ESP-01 modules can have different EEPROM chip sizes, and identical code can error out on one but not the other.
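The heart of that example is just a couple of calls on the ESP object; a minimal version (my condensed take on it, not the verbatim example) looks like this:

void setup() {
  Serial.begin(115200);
}

void loop() {
  // Size reported by the flash chip itself.
  uint32_t realSize = ESP.getFlashChipRealSize();
  // Size the IDE build settings assumed at compile time.
  uint32_t ideSize = ESP.getFlashChipSize();

  Serial.printf("Flash real size: %u\n", realSize);
  Serial.printf("Flash IDE  size: %u\n", ideSize);
  Serial.println(ideSize == realSize ? "Flash Chip configuration ok."
                                     : "Flash Chip configuration wrong!");
  delay(5000);
}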

Final note: the SPIFFS implementation seems to hang if the SPIFFS.begin() call cannot finish. I tried calling .begin() in a loop, printing Attempt: xx strings along the way, but the implementation always hung if it failed to mount the SPIFFS block on the first try.
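For what it’s worth, my retry loop was shaped roughly like this (a reconstructed sketch, not the exact code):

#include <FS.h>

bool mountSPIFFS() {
  for (int attempt = 0; attempt < 10; attempt++) {
    Serial.printf("Mounting SPIFFS. (Attempt: %02d)\n", attempt);
    if (SPIFFS.begin()) {
      return true;  // mounted fine
    }
    delay(250);
  }
  // In practice this was never reached: a failed first mount hung inside begin().
  return false;
}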

Smooth Video for Intel Graphics on Ubuntu

Not sure why the makers of Ubuntu always opt for stupid defaults. I had to do some digging to figure out why the graphics drivers on my little media computer were causing tearing and horizontal refresh flickering when I was watching video.

Come on guys, vertical sync (vsync) should be ON by default on Intel graphics drivers. No one is using these things for games, and it’d improve the gaming experience anyway. This is a no-brainer.

Put this in /etc/X11/xorg.conf.d/20-intel.conf [source]:

Section "Device"
   Identifier  "Intel Graphics"
   Driver      "intel"
   Option      "TearFree"    "true"
EndSection

sprintf / snprintf Problem on Arduino.

What a day. Spent the day hunting a bug in my code, only to find out that it wasn’t in my code. (Update: Turns out my code was wrong in two ways, more below the original entry.) (Update 2: There are options to set this in the Arduino IDE Preferences)

There’s an error in the sprintf and snprintf implementation on Arduino that occurs when more than 8 varargs are passed in after the format specifier.
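Concretely, I mean a call shaped like this, where more than 8 arguments follow the format string (values are illustrative, not my actual code):

char buf[64];
snprintf(buf, sizeof(buf), "%d,%d,%d,%d,%d,%d,%d,%d,%d",
         1, 2, 3, 4, 5, 6, 7, 8, 9);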


ffmpeg: Stripping Audio and Scaling Down Video

Rather than use animated GIFs when I’m trying to show a video without sound, I prefer to use ffmpeg to strip out the audio and scale down the video. I’ve looked this command up way too many times:

ffmpeg -i test.mp4 -c:v libx264 -profile:v baseline -vf scale=640:-1 -an test-640.mp4

Update, 7 October 2018

Twitter has some video requirements, such as a maximum framerate, that need to be accounted for:

ffmpeg -i test.mp4 -c:v libx264 -profile:v baseline -vf scale=540:-1 -t 30 -r:v:0 30 test-540.mp4

The above command rescales the video to a quarter of Full HD resolution in portrait orientation (540px wide), sets a duration of 30 seconds (the -t option), and sets the framerate of the first output video stream (-r:v:0) to 30 frames per second.

Update, 2 July 2019

ffmpeg -i input.mp4 -c:v libx264 -profile:v baseline -vf scale=1280:-1 -an -ss 5.5 test-1280.mp4

Set the start time at 5.500 seconds and scale to 1280px width.

ffmpeg -i input.mp4 -vframes 1 -vf scale=1280:-1 "poster.jpg"

Create a poster image for the video file.

Update, 30 August 2019

Measure-Command { ffmpeg -i video-nonoisereduct-noedgeenhance-with-log-medium.mp4 -c:v ffvhuff -an -filter:v "scale=640:-1" test-output-ffvhuff-640.mkv }

Measuring the amount of time ffmpeg takes to transcode on Windows using PowerShell (similar to the time command on Linux).

This was a test of ffvhuff as a codec, and pushing the file down into a proxy-file sized format, something I’ll be learning more about in the future.

Update, 11 December 2019

This one had a bit of audio hum in it that I wanted to remove using Audacity.

Step 1: Copy and Trim Off 15s

ffmpeg -i "input.avi" -c:a copy -c:v copy -ss 15 output-1.mkv

Step 2: Copy audio out

ffmpeg -i output-1.mkv -c:a copy output-2.wav

Step 3: Edit in Audacity

Amplify the track.

Remove the 60Hz hum using https://wiki.audacityteam.org/wiki/Nyquist_Effect_Plug-ins#Hum_Remover.

Remove the remaining noise with the Noise Reduction effect.

Step 4: Remerge the audio (replacing the existing track), re-encode the video, and deinterlace.

ffmpeg -i output-1.mkv -i output-3.wav -ar 32k -c:a aac -b:a 128k -c:v libx264 -profile:v main -pix_fmt yuv420p -movflags +faststart -preset veryslow -crf 17 -vf "yadif=mode=1" -map 0:v:0 -map 1:a:0 -t 10 output-4.mp4

Update, 30 May 2020

GitHub Markdown only allows images at the moment, so I actually do use animated GIFs there.

You can convert a video to GIF directly, with a generated palette based on the most-used colors in the video, which is important to make the GIF look good.

ffmpeg -i test.mp4 -an -filter_complex "[0:v] palettegen [palette]; [0:v][palette] paletteuse" test.gif

The filter_complex filtergraph feeds the video stream from test.mp4 into palettegen, which writes its result to the [palette] pad; the filtergraph then feeds the same video stream plus the palette into paletteuse, which applies the palette to the output frames to create the GIF.

Animated GIFs are most useful for things like command-line captures:

Animated GIF using a palette generated by ffmpeg; this one comes from the GoodParallel project.

Update, 8 June 2020 (Screencast Edition)

When doing the initial recordings with OBS, use -crf 0 -preset ultrafast to ensure that the screencast is losslessly captured at the full refresh rate, along with the BT.709 color space and Full color range.

On my ancient Intel i7-3770 CPU, I can capture ~150 frames/sec using the ultrafast preset, which is more than enough to do 1440p60 screencasts and prevent dropped frames.

Note that the files produced with this method will not play back directly in the Windows 10 Movie Player. I use ffplay from the command line to view them. The files are also enormous, because we are trading storage for CPU time during capture.
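To spot-check a capture (filename is a placeholder):

ffplay recording.mkv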

For shrinking and archiving those videos, use -crf 0 -preset veryslow -color_primaries bt709 -color_trc bt709 -colorspace bt709. The output is again lossless and we tell ffmpeg to preserve the original colorspace information, which otherwise is dropped.
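Put together, the archival command is something like this (a sketch; filenames are placeholders):

ffmpeg -i recording.mkv -c:v libx264 -crf 0 -preset veryslow -color_primaries bt709 -color_trc bt709 -colorspace bt709 -c:a copy archive.mkv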

Using the veryslow preset, we trade CPU time for storage and use more computationally-expensive bidirectional predictor frames to eliminate as much redundant information from the input file as possible.

Storage savings of up to 90% are possible and the output remains lossless. I haven’t yet experimented with -c:v libx265 to see if that offers even better lossless archival compression. For screencast captures, which usually have lots of low-entropy regions, it should be reasonable to expect a high compression ratio.
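If I do get to it, I’d expect the incantation to look something like this (untested; x265 uses an explicit lossless parameter rather than CRF 0):

ffmpeg -i input.mkv -c:v libx265 -x265-params lossless=1 -preset slow -c:a copy output-x265.mkv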

For publishing, I might use -vf "scale=1920:-2" -crf 17 -preset veryslow -color_primaries bt709 -color_trc bt709 -colorspace bt709 or similar, though generation loss could be an issue if running the video again through a transcoder at YouTube, et al.

Remember to always provide the colorspace flags, and check with mediainfo whether the source material carries this information.
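A plain invocation is enough; look for the Color primaries, Transfer characteristics, and Matrix coefficients lines in its output:

mediainfo input.mp4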

Different encoders yield wildly different colorspace info.

Here’s a sample of 2160p30 video from my smartphone (Qualcomm Snapdragon 835 + Adreno 540); it uses BT.601 NTSC:

Video
ID : 2
Format : AVC
Format/Info : Advanced Video Codec
Format profile : High@L5.1
Format settings : CABAC / 1 Ref Frames
Format settings, CABAC : Yes
Format settings, ReFrames : 1 frame
Format settings, GOP : M=1, N=30
Codec ID : avc1
Codec ID/Info : Advanced Video Coding
Duration : 57 s 319 ms
Source duration : 57 s 318 ms
Bit rate : 48.0 Mb/s
Width : 3 840 pixels
Height : 2 160 pixels
Display aspect ratio : 16:9
Frame rate mode : Variable
Frame rate : 30.000 FPS
Minimum frame rate : 29.479 FPS
Maximum frame rate : 30.364 FPS
Standard : NTSC
Color space : YUV
Chroma subsampling : 4:2:0
Bit depth : 8 bits
Scan type : Progressive
Bits/(Pixel*Frame) : 0.193
Stream size : 328 MiB (100%)
Source stream size : 328 MiB (100%)
Title : VideoHandle
Language : English
Encoded date : UTC 2020-06-09 11:49:52
Tagged date : UTC 2020-06-09 11:49:52
Color range : Full
Color primaries : BT.601 NTSC
Transfer characteristics : BT.601
Matrix coefficients : BT.601

mdhd_Duration : 57319

And here’s a sample of the 720p timelapse video information from the same smartphone; it uses BT.601 PAL (weirdly, the normal and timelapse modes use different colorspaces):

Video
ID : 1
Format : AVC
Format/Info : Advanced Video Codec
Format profile : High@L3.1
Format settings : CABAC / 1 Ref Frames
Format settings, CABAC : Yes
Format settings, ReFrames : 1 frame
Format settings, GOP : M=1, N=30
Codec ID : avc1
Codec ID/Info : Advanced Video Coding
Duration : 54 s 700 ms
Bit rate : 12.0 Mb/s
Width : 1 280 pixels
Height : 720 pixels
Display aspect ratio : 16:9
Frame rate mode : Constant
Frame rate : 30.000 FPS
Standard : NTSC
Color space : YUV
Chroma subsampling : 4:2:0
Bit depth : 8 bits
Scan type : Progressive
Bits/(Pixel*Frame) : 0.434
Stream size : 78.3 MiB (100%)
Title : VideoHandle
Language : English
Encoded date : UTC 2020-06-09 13:55:33
Tagged date : UTC 2020-06-09 13:55:33
Color range : Full
Color primaries : BT.601 PAL
Transfer characteristics : BT.601
Matrix coefficients : BT.601

And here’s the 1440p60 screencast using OBS as the recording software; it uses BT.709:

Video
ID : 1
Format : AVC
Format/Info : Advanced Video Codec
Format profile : High 4:4:4 Predictive@L5.1
Format settings : 1 Ref Frames
Format settings, CABAC : No
Format settings, ReFrames : 1 frame
Codec ID : V_MPEG4/ISO/AVC
Duration : 15 min 10 s
Width : 2 560 pixels
Height : 1 440 pixels
Display aspect ratio : 16:9
Frame rate mode : Constant
Frame rate : 60.000 FPS
Color space : YUV
Chroma subsampling : 4:2:0
Bit depth : 8 bits
Scan type : Progressive
Writing library : x264 core 157 r2945 72db437
Encoding settings : cabac=0 / ref=1 / deblock=0:0:0 / analyse=0:0 / me=dia / subme=0 / psy=0 / mixed_ref=0 / me_range=16 / chroma_me=1 / trellis=0 / 8x8dct=0 / cqm=0 / deadzone=21,11 / fast_pskip=0 / chroma_qp_offset=0 / threads=12 / lookahead_threads=2 / sliced_threads=0 / nr=0 / decimate=1 / interlaced=0 / bluray_compat=0 / constrained_intra=0 / bframes=0 / weightp=0 / keyint=250 / keyint_min=25 / scenecut=0 / intra_refresh=0 / rc=cqp / mbtree=0 / qp=0
Default : Yes
Forced : No
Color range : Full
Color primaries : BT.709
Transfer characteristics : BT.709
Matrix coefficients : BT.709

Update, 10 June 2020

Not sure yet if it makes sense to push all input videos into the same colorspace, but here’s how to convert BT.601 to BT.709 and output to the Ut Video editing codec.

ffmpeg -t 10 -i input.mp4 -vf "scale=2560:-2:in_color_matrix=bt601:out_color_matrix=bt709" -map_metadata -1 -c:v utvideo -c:a copy output.mkv

Update, 12 June 2020

Extracting the last frame in a video (seek near the end with -sseof, then let -update 1 keep overwriting the output image so the final frame wins), or a frame at a specific time offset:

ffmpeg -sseof -3 -i input.mp4 -update 1 -frames:v 1 -q:v 1 output.jpg

ffmpeg -ss 5:00 -i input.mp4 -frames:v 1 -q:v 1 output.png

Selecting and merging video from one file and audio from another; this is handy if you only need to rework a small section of audio and don’t want to re-render the entire video:

ffmpeg -i video.mkv -i audio.wav -map 0:v -map 1:a -c:v copy -c:a copy output.mkv

To normalize the audio using two passes, use ffmpeg-normalize as follows:

ffmpeg-normalize -o output.mp4 -p -c:a aac -b:a 128K -ar 48000 -e="-color_primaries bt709" -e="-color_trc bt709" -e="-colorspace bt709" -e="-movflags +faststart" input.mkv

This runs the ffmpeg loudnorm filter, figures out the right normalization, applies it, and encodes the result to AAC at 128Kbit/s.

Update, 20 October 2020

Converting MP4 files to APNG is similar to the GIF conversion; you still want to do a most-used-color palette analysis:

ffmpeg -ss 2.5 -i test.mp4 -an -filter_complex "[0:v] palettegen [palette]; [0:v][palette] paletteuse" -r 6 -t 5.5 -plays 0 test.apng

Update, 30 October 2020

Archiving losslessly-captured screencasts:

ffmpeg -benchmark -i input.mkv -crf 0 -preset veryslow -c:a copy -color_primaries bt709 -color_trc bt709 -colorspace bt709 output.mkv

Android SDK via the Command Line

Mucking around with a build / Continuous Integration server on DigitalOcean. Here’s the command to get the Android SDK installed:

./android update sdk --no-ui --filter tools,platform-tools,build-tools-23.0.1,android-23,extra-android-m2repository,extra-android-support,extra-google-m2repository --dry-mode

Which says:

# ./android update sdk --no-ui --filter tools,platform-tools,build-tools-23.0.1,android-23,extra-android-m2repository,extra-android-support,extra-google-m2repository --dry-mode
Refresh Sources:
  Fetching https://dl.google.com/android/repository/addons_list-2.xml
  Validate XML
  Parse XML
  Fetched Add-ons List successfully
  Refresh Sources
  Fetching URL: https://dl.google.com/android/repository/repository-11.xml
  [...]
Packages selected for install:
- Android SDK Tools, revision 24.4
- Android SDK Platform-tools, revision 23.0.1
- Android SDK Build-tools, revision 23.0.1
- SDK Platform Android 6.0, API 23, revision 1
- Android Support Repository, revision 22
- Android Support Library, revision 23.0.1
- Google Repository, revision 22

Dry mode is on so nothing is actually being installed.

Remove the --dry-mode flag, and it should be good to go.
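That is:

./android update sdk --no-ui --filter tools,platform-tools,build-tools-23.0.1,android-23,extra-android-m2repository,extra-android-support,extra-google-m2repository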