Apple vs. Microsoft Strategy

Apple pulled a huge surprise out of the hat yesterday, by launching the latest version of their desktop operating system as a free upgrade for anyone owning a system from ~2007 forward. This is a brilliant move.

  1. It makes their hardware more sticky, by extending the life of older systems and generating goodwill among existing users. The memory compression feature and battery life improvements mean that even older systems will continue to be usable for a few more years.
  2. It increases the second-hand value of older systems, as they can run a top-notch, supported operating system and gain access to all the new development that Apple’s been paying for. Compared to the scattered development lifecycle and the utter impossibility of an easy and cheap OS upgrade on the Microsoft side, the Apple strategy pays off immensely.
  3. It moves all of their customers from the three previous operating system generations (10.6, 10.7, 10.8) to the latest generation (10.9) and removes any disincentive to upgrade. Apple also adds the last holdout customers to the App Store ecosystem, and the revenue from content and media sales covers the costs of operating system development. Microsoft needs to ask itself: Is it worth trying to squeeze a large amount of cash out of OEMs and end users every few years, or is it better to capture small amounts of cash from end users many times per year?
  4. It allows Apple to focus solely on improving the current operating system codebase once the majority of users are onboard, which will happen very quickly. With no need to waste resources on supporting older OS X versions, Apple can shift those resources to new development. That means more features reaching more customers, which is something they need in order to stay abreast of Google.

By supporting older systems, Apple confirms to some extent that desktop computers have not needed significantly more powerful processors and graphics for the past 6 years, while also repudiating the Microsoft demand that users upgrade their hardware with every new Windows release. It sends a signal to potential buyers (and people looking to switch from Windows) that they can amortize the higher initial hardware cost across several years of low-cost (and now, free) operating system updates. In concrete terms, it means you can max out a 2007 iMac with $150 of today’s RAM and solid-state disk upgrades (improving it even beyond the original top-of-the-line 2007 specs) and easily get years more life out of the system.

Apple made the same hardware upgrade demands on early iPhone adopters, but smartphone specifications are also starting to plateau. So it may be the case that in 2016, when iOS X (?!) is released, the operating system will reach back and support the iPhone 5S as well.

Or, more likely, smartphones and desktops will converge within the next few generations. This is probably the only thing that will keep the hardware evolving and actually require new hardware purchases, since the current hardware can’t quite do everything that would be necessary to support this convergence.

There are a few moves Apple is already making to support this in advance:

  1. Apple is now making their productivity applications available for free when you buy their hardware. Combined with iCloud support, this means being able to work seamlessly across multiple hardware devices. And it gives them a few years’ lead time to work out any remaining issues.
  2. iOS 7 added support for Bluetooth mice, meaning that precision work will be possible in certain apps, certainly in future desktop-style apps.
  3. iTunes streaming already allows you to stream movies to a connected Apple TV device, but one could just as easily imagine a way of doing this with a computer monitor. Indeed, one of the new features listed in OS X Mavericks is to stream the desktop to an Apple TV device.

Apple knows it needs to start moving in this direction, and that if it doesn’t, Google and Amazon probably will.

The current overall strategy that Apple seems to be signaling is the utter stability of the ecosystem: that what you buy today will run the content that you bought yesterday and that you will buy tomorrow, that Apple will retain current customers and gain new customers by offering them smooth, pain-free upgrades, and that investments in expensive hardware can be considered viable for the longer term.

Microsoft

On the Microsoft side of things: Is Ballmer gone yet?

Because Microsoft really needed new management years ago. But more to the point, because by April 8, 2014, something new will happen that will make Microsoft feel even worse.

As Windows 8.1 begins making its way onto desktops around the world, it might be useful to take a look at the approaching train wreck that is the end-of-life for Windows XP.

It will be a free-for-all on two fronts.

First, malware and spyware developers are going to rush to find more exploits that they know will never be patched. Whoever captures the first zero-day exploits (and the many exploits past that point) will have essentially unlimited capacity to set up botnets. There will be very little that antivirus software providers will be able to do once these exploits make it into the wild. These bits of malware represent the fatal blow to the Windows XP platform, and it’s unlikely Microsoft will do anything to stop them. But without providing a viable upgrade path, Microsoft allows the value of those existing systems and existing sales relationships to plummet to zero. Not a great strategy at a time when Apple is signaling that it is trying to preserve the value of existing systems by making them as modern and secure as possible, while continuing to grow its installed base.

Second, IT departments will have to consider replacing these insecure systems. Will they choose Microsoft this time around, knowing that the total cost of ownership of a Mac system (factoring in hardware and operating system upgrades, less hardware diversity to manage, security updates, etc.) is within striking distance? In addition, when Windows 8 behaves as oddly as it does, with fullscreen Modern UI apps competing with traditional windowed apps to annoy the user, IT might decide that if they’re going to have to retrain users anyway, they might be willing to give the Mac a try.

There’s a blog post at the AMD community website called “The End of Windows XP: An Opportunity for PC Resurgence” that makes the argument that the loss of support for Windows XP for “its approximately 500 million users” might provide a “much-needed uptick for PCs”. But my feeling is that that’s 500 million desktops that are theirs to lose to other form factors. The usage patterns, the way people interact with data, have changed significantly over the past decade and Microsoft’s attempts to retain either a hardware or software lock on the market have been pretty unimpressive. (Except for Excel, of course.)

Microsoft needs to consider:

  1. Dropping support for native 32-bit environments entirely going forward, relying instead on the Windows-on-Windows 64 (WOW64) software layer to run older 32-bit applications inside a native 64-bit environment. It’s not as though this is new technology; it’s been field-proven over the past 10 years already, so the idea of supporting a pure 32-bit environment is entirely anachronistic and pointless.
  2. Dropping the distinction between the different licenses. Rolling all of the features into a single codebase would make more sense and would allow developers to concentrate on improving that codebase. The only real distinction worth keeping between Windows versions is the set of additional multiple-system management tools offered to corporate customers as opposed to home users.
  3. Validating a version of Windows 8 with reduced hardware requirements for machines that once ran XP, and pricing the licenses accordingly. This hurts new PC shipments from its OEM partners, which Microsoft is loath to do, but the decreasing volume of PC shipments means that ship has likely sailed anyway, and Microsoft will need to find a way to generate incremental revenue from existing PC users looking to upgrade. As Apple just proved, hardware requirements are not the issue here.
  4. Dropping support for Windows RT and ARM. Seriously, it makes no sense to bifurcate the platform. Instead, work with Intel’s latest Atom processors, running in 64-bit mode (so you can compete with iOS 7 with a single codebase), and optimize heavily to solve the battery-life issues. (I just ran into another post today about Windows having terrible battery life.)
  5. Investing in kernel efficiency improvements and software optimization from the lowest levels of Windows (MinWin) to the topmost, user-facing features. Efficiency is an area in Windows that has been sorely lacking for years, and which is killing their efforts to build usable tablet computers and other alternative form-factor devices. There need to be platform-wide bounties and goals for improving energy efficiency, I/O performance, networking performance, security, and so on.
  6. Open-sourcing the Internet Explorer core technologies. Browser development is charity work these days, and it’s not even Microsoft’s core business, so what is the point of singularly shouldering the development costs? It would be to their own benefit to let independent programmers review the source code looking for ways to improve it.

Not making these kinds of moves means that Microsoft will be stuck supporting a bloated ecosystem full of duplicated effort, with minimal resources dedicated to improving their situation. Sound familiar? It might, because that’s essentially the way the Linux ecosystem has been run for years. This is not an ecosystem they should want to emulate.

The current overall strategy that Microsoft seems to be signaling is the utter instability of their ecosystem: that what you buy today will not run the content that you bought yesterday and that you will buy tomorrow (Windows Phone 7.2, PlaysForSure, anyone?), that the way you expect to interact with your computer will also utterly change underneath you, that Microsoft will neither retain current customers nor gain new customers by offering them smooth, pain-free upgrades, and that even investments in inexpensive hardware cannot be considered viable for the longer term because Windows upgrades are so costly that no one bothers.

If Microsoft wants to remain relevant, they need to start thinking up ways to put their best work in front of the most people, all of the time, as part of the package, as a benefit of membership in the Windows ecosystem, as it were. Because this is the world we now live in: Apple and Google push their latest developments to consumers as soon as possible and at no additional cost, while Microsoft sits on its hands, hoping they will both go away.

ndk-configure.sh

A way to run GNU autoconf configure scripts using the Android NDK platform headers and libraries, improving on this:
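A minimal sketch of what such a wrapper can look like, assuming the r9 NDK is unpacked under $HOME and targeting the android-9 ARM platform (both of those choices are assumptions, not the original script):

#!/bin/sh
# ndk-configure.sh (sketch): point an autoconf configure script at the
# Android NDK cross-compiler and platform sysroot. Paths and API level
# below are assumptions; adjust for your NDK install and target.
NDK=$HOME/android-ndk-r9
TOOLCHAIN=$NDK/toolchains/arm-linux-androideabi-4.8/prebuilt/linux-x86_64/bin
SYSROOT=$NDK/platforms/android-9/arch-arm

export CC="$TOOLCHAIN/arm-linux-androideabi-gcc --sysroot=$SYSROOT"
export CXX="$TOOLCHAIN/arm-linux-androideabi-g++ --sysroot=$SYSROOT"
export CPPFLAGS="-I$SYSROOT/usr/include"
export LDFLAGS="-L$SYSROOT/usr/lib"

./configure --host=arm-linux-androideabi "$@"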

References
1. http://stackoverflow.com/questions/5908581/is-hash-map-part-of-the-stl
2. http://protobuf.googlecode.com/svn/trunk/m4/stl_hash.m4
3. http://stackoverflow.com/questions/9201521/g-4-6-issue-no-bits-cconfig-h-file-as-required-by-the-header-cstring

Ubuntu 13.10 chroot on Android

Summary

Old smartphones make perfect microservers: they’ve got more than enough horsepower for server tasks, plenty of RAM, and, once rooted, full configurability. They’re durable embedded systems, honed to perfection by mass-market production, with no moving parts and the possibility of decent storage expansion. They’re also extremely resource-light, typically drawing only a few watts, which means the built-in battery doubles as a battery backup.

Ubuntu 13.10 on an old smartphone

I had some problems getting various chroot instructions to work completely smoothly on my phone. This writeup is a synthesis of instructions from the references listed at the bottom of this post.

I used gparted to create a clean ext3-formatted Linux partition on an 8 gigabyte SD card. I mounted that partition on the host machine under /mnt and ran the debootstrap command:

HOSTNAME=ubuntu-armhf sudo debootstrap --arch=armhf --variant=minbase --foreign saucy /mnt

Then, I installed the rest of the system in emulation:

sudo apt-get install qemu-user-static qemu-system
sudo cp /usr/bin/qemu-arm-static /mnt/usr/bin
LANG=C sudo chroot mnt /usr/bin/qemu-arm-static -cpu cortex-a9 /bin/bash ./debootstrap/debootstrap --second-stage

After putting the SD card back into the smartphone, Android complained that the card was messed up:

Damaged SD Card

Which was fixable by logging into the phone’s shell as root and running e2fsck, which checked and fixed filesystem issues it detected on the card:

localhost block # e2fsck /dev/block/mmcblk0p1
e2fsck 1.41.12 (17-May-2010)
chroot-drive contains a file system with errors, check forced.
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
chroot-drive: 8180/489600 files (1.2% non-contiguous), 100332/1957120 blocks
localhost block # e2fsck /dev/block/mmcblk0p1
e2fsck 1.41.12 (17-May-2010)
chroot-drive: clean, 8180/489600 files, 100332/1957120 blocks

After doing this once, the Android system checked the SD card again at the next reboot, and added the following mounts to the /proc/mounts table after each subsequent reboot:

/dev/block/vold/179:1 on /mnt/sdcard type ext4 (rw,dirsync,nosuid,nodev,noexec,relatime,barrier=1,data=ordered,noauto_da_alloc)
/dev/block/vold/179:1 on /mnt/secure/asec type ext4 (rw,dirsync,nosuid,nodev,noexec,relatime,barrier=1,data=ordered,noauto_da_alloc)
tmpfs on /mnt/sdcard/.android_secure type tmpfs (ro,relatime,size=0k,mode=000)

Then I set up a script to log into the chroot environment:
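The script boils down to bind-mounting the kernel filesystems into the chroot and starting a shell. This is a sketch rather than the original; the mount point is an assumption (use wherever the ext partition ends up mounted on your phone):

#!/system/bin/sh
# Sketch of a chroot-login script. CHROOT is an assumed mount point.
CHROOT=/data/local/ubuntu

# Bind the host's kernel filesystems into the chroot. As noted below,
# this naive loop stacks duplicate mounts each time you re-enter.
for f in dev dev/pts proc sys; do
    mount -o bind /$f $CHROOT/$f
done

chroot $CHROOT /bin/bash -l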

This worked fine, but when running inside the chroot environment the apt-get and apt-key commands didn’t work, because the Domain Name Servers were not specified. (Note: The mount command in the for loop probably also needs some work, as it will mount things multiple times when I leave and reenter the chroot environment.)

I edited the /etc/resolv.conf file and pointed it to Google’s nameservers:
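Inside the chroot, the file then contains just the two public Google DNS entries:

# /etc/resolv.conf (inside the chroot)
nameserver 8.8.8.8
nameserver 8.8.4.4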

The sources.list file uses ports.ubuntu.com, which might be because the main archive servers don’t offer armhf packages. It wasn’t quite clear to me how Ubuntu manages their ports across the various versions.
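For illustration, the entry looks something like this (the component list here is a guess; use whatever debootstrap wrote):

deb http://ports.ubuntu.com/ubuntu-ports/ saucy main universe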

Once the APT source has been added, apt-get update and apt-get install will work fine. It’s a blank slate from there.

squid-deb-proxy-client

For the initial downloads, the following setting can be placed in the APT configuration folder to manually specify an APT package proxy, which might speed up installations:

 
# vi /mnt/sdcard/etc/apt/apt.conf.d/30proxy
Acquire::http { Proxy "http://192.168.0.132:8000"; };

If the proxy address never changes, you can just leave it at that. If the address does change, you’ll probably want to install the squid-deb-proxy-client package.
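Installing it inside the chroot is just:

apt-get install squid-deb-proxy-client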

References

1. ChrootOnAndroid
2. How to install Ubuntu 13.04 on your Android phone
3. DebootstrapChroot
4. Raspbian Benchmarking – armel vs armhf
5. Installation mit debootstrap
6. Index of /dists/saucy/main
7. Best way to cache apt downloads on a LAN?

C++11 support on Android with g++ 4.8

Summary

The C++11 support on Android when using g++ 4.8 is pretty good. The important concurrency features that were missing from g++ 4.6 are now available in the Standard Template Library.

Testing libgnustl_shared.so

After examining the LLVM libc++ library on Android, I wanted to check which C++11 features are currently available using g++ 4.8 and the GNU Standard Template Library (libstdc++). g++ 4.8 improved support for C++11 significantly (4.6 vs. 4.8), so the question was whether that support also made it into the Android-specific compilers. With the LLVM libc++ library, a lot of platform-specific code, such as support for atomic operations, was stubbed out, which would cascade down into problems with the mutual exclusion mechanisms and threads.

Based on a review of an existing codebase, the features most interesting to me were contained in the following headers: atomic, chrono, condition_variable (condition_variable), memory (shared_ptr, unique_ptr), mutex (lock_guard, mutex, unique_lock), and thread (thread). Also of interest was compiler support for lambdas and the auto keyword.

So let’s see what’s available, by modifying the android-ndk-r9/samples/test-libstdc++ native executable to utilize a number of C++11 features to perform the same race condition-prone task. Namely, the program generates a random number of threads which each attempt to simultaneously increment some global variable by one, one hundred times.

C++ gives you a few ways to solve the problem, and I’ve tried each of them here. The logic of the incrementation isn’t exactly identical across the variants, and in some cases it’s incredibly inefficient (the condition_variable case basically has all threads waiting for a single thread to complete before they can acquire a shared resource mutex and begin executing). In any case, this is more a proof of library completeness than a proof of algorithmic efficiency.

The test source code looks like this:

I added an Application.mk file to the jni folder. Debugging is activated, so that the assert() macro isn’t disabled.

APP_ABI    := armeabi
APP_STL    := gnustl_shared
APP_CFLAGS := --std=c++11
APP_OPTIM  := debug

NDK_TOOLCHAIN_VERSION := 4.8

Building works fine:

$ ndk-build
Compile++ thumb  : test-libstl /sources/cxx-stl/gnu-libstdc++/4.8/libs/armeabi/
Executable     : test-libstl
Install        : test-libstl => libs/armeabi/test-libstl
Install        : libgnustl_shared.so => libs/armeabi/libgnustl_shared.so
$ adb push libs/armeabi/libgnustl_shared.so /data/tmp
366 KB/s (804484 bytes in 2.143s)
$ adb push libs/armeabi/test-libstl /data/tmp
284 KB/s (50536 bytes in 0.173s)

After pushing the test-libstl executable to the device, make sure the /data/tmp folder is added to the LD_LIBRARY_PATH:

localhost tmp # LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/data/tmp

Then just run the executable, and you should see:

localhost tmp # ./test-libstl 
std::atomic + std::random + std::thread + auto + ranged-for test
  -- std::random rolled: 11
  -- atomicInt final value: 1100
std::condition_variable + std::mutex + std::unique_lock + std::random + std::thread + lambda test
  -- std::random rolled: 15
  -- nonAtomicInt final value: 1500
std::lock_guard + std::mutex + std::random + std::thread + lambda test
  -- std::random rolled: 7
  -- lockGuardInt final value: 700
std::unique_ptr + std::shared_ptr + std::make_shared tests
  --  PtrTest(): std::unique_ptr
  --  PtrTest(): std::shared_ptr
  -- ~PtrTest(): std::shared_ptr
  -- ~PtrTest(): std::unique_ptr

To run the tests, I used a rooted phone running Android 2.3.7.

In real deployments where you’re building a loadable shared-library using the Java Native Interface, you’ll need to follow the instructions from the NDK’s CPLUSPLUS-SUPPORT.html file and call System.loadLibrary on libgnustl_shared.so before loading the eventual shared library form of your app. (Managing standalone executable lifecycles tends to be a pain in the ass inside of Android apps and services, at least based on the experiences we had at my previous company.)

Overall, I’m pretty positive about the support for concurrency in g++ 4.8 on Android, and am thrilled to see that it may now be possible to write a single modern C++ codebase across all of the major desktop and mobile platforms.

Mucking around with libc++ on Android

tl;dr

libc++ on Android doesn’t work yet as of October 2013.

Getting the test suite running

The LLVM project-created libc++ library isn’t yet fully supported on Android, but I wanted to see how much of its test suite would currently work on an Android phone. Considering the weaknesses of all the other C++ Standard Template Libraries out there, libc++ becoming available would be a huge boon to developing a modern codebase. There are a few notes about this in one long-running feature request, and according to the git logs there are maybe 2 people actually working full-time on the NDK at Google, but there’s not much else to note.

Easier said than done, as usual, and it’s not really in Google’s interest to get this working, since everyone is supposed to write things specifically in Java. But it is the one stumbling block that prevents a company from writing its core logic a single time in modern C++11 (with conveniences like support for atomics, threads, condition variables, and mutexes) and having that run across all existing desktop and mobile platforms. I’ll have to take a look at whether the g++ 4.8 STL support is complete enough to cover the C++11 cases I want to use. Everyone else (Microsoft, Apple) has a usable STL implementation on their desktop and mobile platforms. A functioning libc++ might also let people avoid pulling in the entirety of a library like Qt to plug the gaps, and Qt’s support for Android and iOS is itself still at an alpha stage.

Anyway, these are some of the notes I took while getting the test suite going on a rooted smartphone.

Clone the latest Google NDK repository sources in $HOME:

git clone https://android.googlesource.com/platform/ndk git-ndk

Grab a copy of the prepackaged NDK from the Android developer website, at the time of this writing version r9. Unpack this in $HOME.

Create a symbolic link from the sources/cxx-stl/llvm-libc++ folder in the Git checkout to the sources/cxx-stl folder in the officially packaged NDK folder. This saves you from having to build the cross-compilers yourself.

ln -s git-ndk/sources/cxx-stl/llvm-libc++ android-ndk-r9/sources/cxx-stl/llvm-libc++

Edit the android-ndk-r9/sources/cxx-stl/llvm-libc++/test/../../gabi++/include/gabixx_config.h file to use the __typeof__ operator instead of typeof, because of Problems 3 and 4 (detailed below) encountered in --std=c++11 mode:
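The change amounts to a one-word substitution, which could be scripted like this (a sketch; editing the header by hand works just as well):

sed -i 's/\btypeof\b/__typeof__/g' \
    android-ndk-r9/sources/cxx-stl/gabi++/include/gabixx_config.h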

After making this change, go to the android-ndk-r9/tests/device/test-libc++ folder and try building the libc++ test executable, but limit the build to armeabi; otherwise ndk-build will try to build 8 versions of the libc++ libraries, (static, shared) x (armeabi, armeabi-v7a, mips, x86):

android-ndk-r9/tests/device/test-libc++$ ndk-build APP_ABI=armeabi

The code compiles, but there’s still a ton of stuff in the library that hasn’t been implemented yet, and you’ll see a lot of #warning Not implemented messages wherever functionality has been stubbed out.

[...]
Compile++ thumb  : c++_static  libs/armeabi/test_1_static

OK, so the library builds. Now to get the testit_android test runner working, in android-ndk-r9/sources/cxx-stl/llvm-libc++/libcxx/test:

Set up the libc++ library include path to look like:
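Judging from the compile lines testit_android prints later, the include flags end up looking roughly like this (the variable name CXX_INCLUDES is illustrative; LIBCXX_ROOT is the libcxx directory the script already knows about):

CXX_INCLUDES="-I$LIBCXX_ROOT/test/support \
              -I$LIBCXX_ROOT/include \
              -I$HOME/android-ndk-r9/sources/android/support/include"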

The sysroot setting needs to be fixed up in the testit_android script, in this case I set it to the android-9 platform, which reaches back to roughly the Android 2.2 release:
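Something along these lines, matching the android-9 ARM platform directory inside the packaged NDK (the exact variable plumbing depends on the script):

ANDROID_SYSROOT=$HOME/android-ndk-r9/platforms/android-9/arch-arm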

Add ANDROID_SYSROOT to the two compilation commands:
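However the script assembles its two compile lines, the key addition is a --sysroot flag pointing at that platform directory, so the compiler can find the Android headers, C runtime objects, and libraries. Illustratively (the file name and exact flag order are just an example):

arm-linux-androideabi-g++ --sysroot=$ANDROID_SYSROOT -std=c++11 \
    $CXX_INCLUDES -L$LIBCXX_LIBS example.pass.cpp -lc++_shared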

Now connect a rooted Android phone to your development computer.

Running testit_android should work, first pushing a copy of the libc++ shared library to the smartphone, then compiling, pushing, and running the test executables on the smartphone:

android-ndk-r9/sources/cxx-stl/llvm-libc++/libcxx/test$ ./testit_android --abi=armeabi
passed 1 tests in /home/builder/android-ndk-r9/sources/cxx-stl/llvm-libc++/libcxx/test
passed 1 tests in /home/builder/android-ndk-r9/sources/cxx-stl/llvm-libc++/libcxx/test/algorithms
passed 1 tests in /home/builder/android-ndk-r9/sources/cxx-stl/llvm-libc++/libcxx/test/algorithms/alg.c.library
passed 1 tests in /home/builder/android-ndk-r9/sources/cxx-stl/llvm-libc++/libcxx/test/algorithms/alg.modifying.operations
[...]

Unfortunately, after running several thousand tests for over an hour, testit_android seemed to hang when attempting to run tests within the support.dynamic test suite:

[...]
failed 1 tests in /home/builder/android-ndk-r9/sources/cxx-stl/llvm-libc++/libcxx/test/language.support/support.dynamic/alloc.errors/set.new.handler
passed 1 tests in /home/builder/android-ndk-r9/sources/cxx-stl/llvm-libc++/libcxx/test/language.support/support.dynamic/alloc.errors/set.new.handler

real	68m19.512s
user	0m0.004s
sys	0m0.000s

I removed the llvm-libc++/libcxx/test/support.dynamic test suite and reran testit_android, which resulted in the full suite completing:

****************************************************
Results for /home/builder/android-ndk-r9/sources/cxx-stl/llvm-libc++/libcxx/test:
using arm-linux-androideabi-g++ (GCC) 4.8
Copyright (C) 2013 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
with -std=c++11 -I/home/builder/android-ndk-r9/sources/cxx-stl/llvm-libc++/libcxx/test/support -I/home/builder/android-ndk-r9/sources/cxx-stl/llvm-libc++/libcxx/include -I/home/builder/android-ndk-r9/sources/android/support/include -L/home/builder/android-ndk-r9/tests/device/test-libc++/obj/local/armeabi
----------------------------------------------------
sections without tests   : 0
sections with failures   : 199
sections without failures: 861
                       +   ----
total number of sections : 1060
----------------------------------------------------
number of tests failed   : 683
number of tests passed   : 3716
                       +   ----
total number of tests    : 4399
****************************************************

real	141m23.980s
user	67m18.716s
sys	21m46.932s

As usual, your mileage may vary, but this is the state of libc++ on Android in October 2013.

Notes

1. How about using Clang, since it should work better?

According to the NDK page, you can specify the compiler to use via the NDK_TOOLCHAIN_VERSION variable:

“Fixed the ndk-build script to enable you to specify a version of Clang as a command line option (e.g., NDK_TOOLCHAIN_VERSION=clang3.2). Previously, only specifying the version as an environment variable worked.”

So this works too:

ndk-build NDK_TOOLCHAIN_VERSION=clang3.3 APP_ABI=armeabi

And produces output like:

[...]
Compile++ thumb  : c++_shared 

The problem is that the testit_android script is currently hard-coded to use arm-linux-androideabi-g++ to build the test suite executables. The LLVM compiler that ships with Android NDK r9 seems to be an all-in-one cross-compiler, which can generate code for ARM, MIPS, and x86 depending on which config.mk files are supplied to configure the current build. So getting testit_android to build ARM executables with Clang would be a bit more work, though probably not that much more work.

2. How the Android.mk files currently relate to each other

android-ndk-r9/tests/device/test-libc++/jni/Android.mk (immediately imports test executable makefile) -> 
android-ndk-r9/sources/cxx-stl/llvm-libc++/test/Android.mk (builds the test executables) -> 
android-ndk-r9/sources/cxx-stl/llvm-libc++/Android.mk (builds the actual libc++ static and shared libraries)

Problems encountered

1. Attempting to build the tests in the Git working directory (git-ndk) without first creating the symbolic link in the release NDK directory (android-ndk-r9)

git-ndk/tests/device/test-libc++$ ndk-build
jni/Android.mk:1: /home/builder/android-ndk-r9/sources/cxx-stl/llvm-libc++/test/Android.mk: No such file or directory
jni/Android.mk:1: /home/builder/android-ndk-r9/sources/cxx-stl/llvm-libc++/test/Android.mk: No such file or directory
jni/Android.mk:1: /home/builder/android-ndk-r9/sources/cxx-stl/llvm-libc++/test/Android.mk: No such file or directory
jni/Android.mk:1: /home/builder/android-ndk-r9/sources/cxx-stl/llvm-libc++/test/Android.mk: No such file or directory
/home/builder/android-ndk-r9/build/core/build-all.mk:89: Android NDK: WARNING: There are no modules to build in this project!    
make: *** No rule to make target `/home/builder/android-ndk-r9/sources/cxx-stl/llvm-libc++/test/Android.mk'.  Stop.

2. Attempting to build the tests by specifying NDK_ROOT with ndk-build on the PATH

git-ndk/tests/device/test-libc++$ ndk-build NDK_ROOT=$HOME/git-ndk
Android NDK: Could not find platform files (headers and libraries)    
Android NDK: Please run build/tools/build-platforms.sh to build the corresponding directory.    
/home/builder/git-ndk/build/core/init.mk:471: *** Android NDK: Aborting    .  Stop.

Problems 1 and 2 are solved by creating the symbolic link as described above and running the ndk-build command from within the android-ndk-r9 folder structure.

3. Attempting to build with GNU g++ fails due to the use in GAbi++ of a GNU-specific 'typeof' keyword which isn't recognized in --std=c++11 mode

Compile++ thumb  : c++_shared 

4. Attempting to build with Clang 3.3 fails due to the use in GAbi++ of a GNU-specific 'typeof' keyword which isn't recognized in --std=c++11 mode

Compile++ thumb  : c++_shared 

typeof is a GNU compiler-specific keyword extension to the C standard, meaning that other compilers won't like it very much. Darn it. The compilation commands for g++ and clang++ use --std=c++11, which does not allow the use of the typeof keyword.

However, according to the Clang user manual:

The parser recognizes "asm" and "typeof" as keywords in gnu* modes; the variants "__asm__" and "__typeof__" are recognized in all modes.

The same appears to be true of the GNU compiler, therefore replacing typeof with __typeof__ fixes the problem on both.

Running testit_android

1. Missing prebuilt library

android-ndk-r9/sources/cxx-stl/llvm-libc++/libcxx/test$ ./testit_android --abi=armeabi
ERROR: Missing prebuilt library: /home/builder/android-ndk-r9/sources/cxx-stl/llvm-libc++/libs/armeabi/libc++_static.a
Please run: build/tools/build-cxx-stl.sh --stl=libc++

But the library is already built here:

android-ndk-r9/sources/cxx-stl/llvm-libc++/libs/armeabi/libc++_static.a

So, the testit_android script just needs to be modified, where the line:

LIBCXX_LIBS=$(cd $LIBCXX_ROOT/.. && pwd)/libs/$TARGET_ABI

needs to be changed to:

LIBCXX_LIBS=$(cd $LIBCXX_ROOT/../../../.. && pwd)/tests/device/test-libc++/obj/local/$TARGET_ABI

2. Missing C++ compiler

./testit_android --abi=armeabi
ERROR: Missing C++ compiler: arm-linux-androideabi-g++

Now, put that compiler into the PATH:

export PATH=$HOME/android-ndk-r9/toolchains/arm-linux-androideabi-4.8/prebuilt/linux-x86_64/bin:$PATH

3. Missing platform headers and libraries

The build command doesn't know which Android platform to compile and link against:

/home/builder/android-ndk-r9/sources/cxx-stl/llvm-libc++/libcxx/test/containers/container.adaptors/queue/queue.defn/assign_move.pass.cpp failed to compile
Compile line was: ( cd /home/builder/android-ndk-r9/sources/cxx-stl/llvm-libc++/libcxx/test/containers/container.adaptors/queue/queue.defn && arm-linux-androideabi-g++ -std=c++11 -I/home/builder/android-ndk-r9/sources/cxx-stl/llvm-libc++/libcxx/test/support -I/home/builder/android-ndk-r9/sources/cxx-stl/llvm-libc++/libcxx/include -I/home/builder/android-ndk-r9/sources/android/support/include -L/home/builder/android-ndk-r9/tests/device/test-libc++/obj/local/armeabi assign_move.pass.cpp -lc++_shared )
android-ndk-r9/sources/cxx-stl/llvm-libc++/libcxx/test$ cd /home/builder/android-ndk-r9/sources/cxx-stl/llvm-libc++/libcxx/test/containers/container.adaptors/queue/queue.defn && arm-linux-androideabi-g++ -std=c++11 -I/home/builder/android-ndk-r9/sources/cxx-stl/llvm-libc++/libcxx/test/support -I/home/builder/android-ndk-r9/sources/cxx-stl/llvm-libc++/libcxx/include -I/home/builder/android-ndk-r9/sources/android/support/include -L/home/builder/android-ndk-r9/tests/device/test-libc++/obj/local/armeabi assign_move.pass.cpp -lc++_shared
In file included from /home/builder/android-ndk-r9/sources/cxx-stl/llvm-libc++/libcxx/include/queue:168:0,
                 from assign_move.pass.cpp:14:
/home/builder/android-ndk-r9/sources/cxx-stl/llvm-libc++/libcxx/include/__config:99:21: fatal error: endian.h: No such file or directory
 # include <endian.h>
                     ^
compilation terminated.
android-ndk-r9/sources/cxx-stl/llvm-libc++/libcxx/test$ cd /home/builder/android-ndk-r9/sources/cxx-stl/llvm-libc++/libcxx/test/algorithms/alg.modifying.operations/alg.copy && arm-linux-androideabi-g++ -std=c++11 -I/home/builder/android-ndk-r9/sources/cxx-stl/llvm-libc++/libcxx/test/support -I/home/builder/android-ndk-r9/sources/cxx-stl/llvm-libc++/libcxx/include -I/home/builder/android-ndk-r9/sources/android/support/include -I/home/builder/android-ndk-r9/sources/android/support/../../../platforms/android-9/arch-arm/usr/include -L/home/builder/android-ndk-r9/tests/device/test-libc++/obj/local/armeabi copy_n.pass.cpp -lc++_shared
/home/builder/android-ndk-r9/toolchains/arm-linux-androideabi-4.8/prebuilt/linux-x86_64/bin/../lib/gcc/arm-linux-androideabi/4.8/../../../../arm-linux-androideabi/bin/ld: error: cannot open crtbegin_dynamic.o: No such file or directory
/home/builder/android-ndk-r9/toolchains/arm-linux-androideabi-4.8/prebuilt/linux-x86_64/bin/../lib/gcc/arm-linux-androideabi/4.8/../../../../arm-linux-androideabi/bin/ld: error: cannot open crtend_android.o: No such file or directory
/home/builder/android-ndk-r9/toolchains/arm-linux-androideabi-4.8/prebuilt/linux-x86_64/bin/../lib/gcc/arm-linux-androideabi/4.8/../../../../arm-linux-androideabi/bin/ld: error: cannot find -lstdc++
/home/builder/android-ndk-r9/toolchains/arm-linux-androideabi-4.8/prebuilt/linux-x86_64/bin/../lib/gcc/arm-linux-androideabi/4.8/../../../../arm-linux-androideabi/bin/ld: error: cannot find -lm
/home/builder/android-ndk-r9/toolchains/arm-linux-androideabi-4.8/prebuilt/linux-x86_64/bin/../lib/gcc/arm-linux-androideabi/4.8/../../../../arm-linux-androideabi/bin/ld: error: cannot find -lc
/home/builder/android-ndk-r9/toolchains/arm-linux-androideabi-4.8/prebuilt/linux-x86_64/bin/../lib/gcc/arm-linux-androideabi/4.8/../../../../arm-linux-androideabi/bin/ld: error: cannot find -ldl
/tmp/cc55fvWI.o:copy_n.pass.cpp:function void test, output_iterator >(): error: undefined reference to 'memset'
/tmp/cc55fvWI.o:copy_n.pass.cpp:function void test, output_iterator >(): error: undefined reference to '__assert2'
/tmp/cc55fvWI.o:copy_n.pass.cpp:function void test, output_iterator >(): error: undefined reference to '__assert2'
/tmp/cc55fvWI.o:copy_n.pass.cpp:function void test, input_iterator >(): error: undefined reference to 'memset'
/tmp/cc55fvWI.o:copy_n.pass.cpp:function void test, input_iterator >(): error: undefined reference to '__assert2'
/tmp/cc55fvWI.o:copy_n.pass.cpp:function void test, input_iterator >(): error: undefined reference to '__assert2'
/tmp/cc55fvWI.o:copy_n.pass.cpp:function void test, forward_iterator >(): error: undefined reference to 'memset'
/tmp/cc55fvWI.o:copy_n.pass.cpp:function void test, bidirectional_iterator >(): error: undefined reference to 'memset'
/tmp/cc55fvWI.o:copy_n.pass.cpp:function void test(): error: undefined reference to 'memmove'
collect2: error: ld returned 1 exit status

As specified above, this problem is fixed by the addition of the ANDROID_SYSROOT variable to the build command, to indicate which Android platform includes and libraries the compiler should use.