Embedded C++11 FTW

This is a rant.

Need Moar C++11 In Embedded Software

We really need to embrace the use of C++11 in embedded software development with proper, type-safe, interrupt-driven events and callbacks, and we need to do this as natively as possible.

Since IAR finally released a C++11-compatible compiler this year, all of the “Big 3” embedded toolchains (IAR, Keil, and GCC) now support it, and the goal should be to actually make use of C++11.

(Shame on you, IAR, for taking this long to make this possible.)

I’d really like to see a clean C++ API on embedded chips taking advantage of C++11 features, running as close to the metal as possible.

mbed is close, but I’ve definitely seen many odd behaviors where it wraps the low-level APIs provided by vendors and ends up blocking, crashing, or doing other stupid things the mbed API never anticipated.

Which leads me to my next point.

Procedural, Embedded C With Blocking APIs Needs To Die

We should stop writing procedural C APIs for microcontrollers. Apple’s IOKit object-oriented driver architecture shows us a better way, and it was invented decades ago, even if not that many people ended up using it.

We should also stop using blocking APIs on as many microcontrollers as possible.

The bare-metal HALs and SDKs that ship these kinds of code examples should remove them, because they encourage bad programming practices even when an RTOS is in use (most microcontroller RTOSes don’t properly handle event-driven wakeup on I/O and leave it to the user anyway).

I’ve seen too many examples of code where reading a serial port involves:

  1. Setting up the serial port read.
  2. Running a while() loop.
  3. Waiting until the serial port indicates that a single byte of data is available.
  4. Exiting the loop and reading the byte.
  5. Going back to step 1.

Think about this: in one millisecond, a 12 MHz processor can run roughly 12,000 instructions. Let’s say it takes an unskilled programmer 20 instructions to set up the registers to receive a byte on the serial port and enter a while loop.

The processor will then busy-wait for the remaining 11,980 cycles, at full power, accomplishing nothing. This is idiotic.

The primary benefit of thinking in events and callbacks is power-related: we really should avoid sleeps that are not real sleeps.

It kills me to see people still using superloops without proper sleep / WFI / WFE calls. Pretty much all Arduino code falls into this category.

My experience with mbed is also that its power consumption is atrocious. Insanely, it does not yet have a tickless scheduler in its RTX RTOS codebase, making it essentially useless for battery-powered devices.

Goals Hardware Manufacturers Should Pursue

Every hardware manufacturer has something slightly odd about its software offerings, and I’ve never understood this. NXP has LPCOpen, a set of C SDKs for their LPC microcontrollers, but each chip has its own SDK ZIP-file blob which may or may not have a consistent API with other LPC microcontrollers. STM’s STMCube software is in a similar position. Atmel Studio is pretty good, but Atmel’s chips often seem to be pretty expensive, and the IDE mostly just automates away some of these issues via clever wizards. Cypress and TI have decent hardware, but the PSoC C APIs never seemed all that easy to get around, and I have no idea who pays for Code Composer Studio.

NXP has a decent set of development tools. STM does not make its own (seriously, WTF), and neither does Nordic Semiconductor, which just leaves you to puzzle through its horrendously-architected nRF5 SDK, fight multiple times with various poorly-documented headers just to get printf over RTT working, and struggle to integrate multiple example projects to get the various types of hardware working together.

Silicon Labs has a fantastic development environment, with great debugging, energy measurement (AEM is amazing), and low-power state management (set up EM4!), but no one really uses them. So sad.

But at the end of the day, I feel like I’m always porting, even between chips in the same family, from the same company.

So here’s what I wish these damned hardware manufacturers would actually do:

  • No more crappy SDKs with slight differences in APIs and implementations per chip.
  • No more wrapping crappy, blocking C APIs.
  • No more always needing to pass a pointer to a C struct as the first parameter of an API call.
  • No more writing delay loops that are while(delay) { NOP() }.
  • No more abusing the C preprocessor to generate or to validate code.
  • No more needing to copy and paste source files with minor modifications to use a second instance of hardware.
  • No more violations of Don’t Repeat Yourself (DRY), where a chip feature is controlled by a preprocessor flag in an sdk_config.h file, and by a statically-defined variable, and by a variable in a configuration struct pointer somewhere. (I’m looking at you, Nordic Semiconductor.)
  • No more writing code without measuring power consumption in your labs.
  • Use decent, modern, C++11 with Dependency Injection and well-architected objects that receive base addresses as parameters from your CMSIS microprocessor definitions to represent the peripherals in your microcontrollers. No more hard-to-change dependencies on your company’s CMSIS definitions, please!

Why is this so hard? Baseline functionality should have me doing less plumbing and more work on my own application, and it is high time, in 2017, to get this right.

The rest of the programming world moved to object orientation years ago, and the resources available on today’s microcontrollers should allow us to place greater value on programmer time and programmer effectiveness.

We need to stop chasing the smallest possible firmware and start chasing the most bang-per-line-of-code and bang-per-minute-of-programmer-time.

Postscript

Whenever I see the standard Arduino example code, with its blocking delay() calls, I want to punch someone at Arduino for wasting so much electricity.

Majority-Circuits Are Good

Pretty interesting writeup of a hack on the SWIFT banking system last year: http://baesystemsai.blogspot.de/2016/04/two-bytes-to-951m.html

One thing that astounds me is that the SWIFT network seems to rely on human bankers double-checking paper receipts, and that the system checks rules on the individual client computers: endpoint nodes, in other words.

I would have expected that given the volumes of money involved, there would be majority-circuit systems in place: i.e. one Windows machine, one Linux machine, one Mac machine, which all have to validate transactions using identical input. Inconsistencies in output would then indicate whether some kind of hack was occurring on one of the three systems.

This would prevent any single point of failure from causing invalid transfers to occur and it would mean anyone wanting to crack your system would have to find 0-day vulnerabilities on at least two heterogeneous machines.

On aircraft, this kind of multiple redundancy means that many critical electronics units have two or possibly three systems processing the same information. Sometimes this also means multiple power supplies, multiple AFDX networking switches, even potentially using two different CPU architectures and two different compilers to guarantee that bugs can be identified and mitigated in these foundational pieces.

Surely, I would hope money transfer systems have this kind of multi-layer defense-in-depth?

Goddamnit, Microsoft; Goddamnit, Realtek

Next time you do a goddamn driver update, don’t fuck with my microphone settings such that my Mom can’t hear me anymore when we Skype.

I didn’t touch anything in the sound settings, but somehow, after this driver update, she can’t hear me anymore. AWESOME.

Realtek, fix your goddamn settings too:

Beamforming doesn’t work, Acoustic Echo Cancellation doesn’t work, Keystroke Suppression doesn’t work. All of these things just turn the volume down and make it impossible for the other side to hear me.

How do I know? Because when I call my Mom on Skype, she can’t hear me.

Do you actually test your software? Like actually sit people down and have them Skype with your default settings?

Can you imagine the number of people who suddenly could not talk to their loved ones because of your boneheaded update?

The only way I managed to make the volume on the other side comprehensible was to disable all of this extra crap, which I had already previously disabled.