Google App Engine WTFs

“Bad argument: Multiple entries with same key”

If you get the error "Bad argument: Multiple entries with same key" when you’re trying to get modules working in Google App Engine, it’s because you’ve either:

  1. Forgotten to include a <module> tag in more than one of your appengine-web.xml files, meaning that now you (implicitly) have more than one module named “default”.
  2. Named more than one of your modules the same name (probably a copy / paste mistake after duplicating a WAR folder).

Unfortunately, the error message from AppCfg does nothing to point you toward either cause.


Another annoying error is the ClassNotFoundException when you forget to add a package statement to your Servlet source code file:

W 2014-03-26 19:26:23.632 EXCEPTION java.lang.ClassNotFoundException: com.test.SomeServlet at
E 2014-03-26 19:26:23.634 javax.servlet.ServletContext log: unavailable javax.servlet.UnavailableException: com.test.SomeServlet at org.mortbay.jetty.servlet.Holde
W 2014-03-26 19:26:23.709 Failed startup of context{/,/base/data/home/apps/s~sometest/someservlet:1.
C 2014-03-26 19:26:23.713 Uncaught exception from servlet javax.servlet.UnavailableException: Initialization failed. at
I 2014-03-26 19:26:23.715 This request caused a new process to be started for your application, and thus caused your application code to be loaded for the first time. This requ
E 2014-03-26 19:26:23.715 Process terminated because it failed to respond to the start request with an HTTP status code of 200-299 or 404.

If you leave the package statement out, the CLASSPATH will be correct, but your class idiotically won’t be found, even though its path in the .ear file appears to be correct. Any decent IDE should catch this immediately, but in my case I didn’t pick up the mistake because I was using emacs and mvn directly, which didn’t have the smarts to know what was going on.

Update 1: “WARNING: /_ah/start: javax.servlet.UnavailableException: java.lang.IllegalAccessException: Class org.mortbay.jetty.servlet.Holder can not access a member of class”

This one means that the Servlet class you defined for that module is not declared “public”. Oops.
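Both mistakes are avoided by a servlet skeleton like the following (the package and class names here are just illustrative, not taken from a real project):

```java
// Declares its package (so the class is actually found on the CLASSPATH)
// and is public (so Jetty's Holder can instantiate it).
package com.test;

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class SomeServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        resp.setContentType("text/plain");
        resp.getWriter().println("Hello from SomeServlet");
    }
}
```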

400 Bad Request: Invalid runtime or the current user is not authorized to use it.

You may tear your hair out over this one. Make sure to run mvn install before running mvn appengine:update in the EAR folder. This kind of bullshit makes me hate Maven.

Grails, Groovy, Scala, and Akka

The interop between these software components is somewhat unexplored territory, but they seem to work alright together. Once you figure out what the Grails conventions are for calling into Scala and where to put the Akka application.conf file (right next to the file in the same directory), everything starts up just fine.

The Grails dependency injection mechanism for getting references to Services doesn’t work with the approach I’m about to describe. I may need to revisit this in the future, if the convenience of injection ever outweighs keeping everything actor-related in Scala. I have no desire to wrap Akka Actors in Groovy; eventually I’ll need to create a common Java interface that Scala can use to call into the Groovy side, and pass a reference for that to the Scala instance. But that comes later.

To get the various pieces working together, the Akka dependencies and the Scala plugin for Grails need to be added to the BuildConfig.groovy:
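A sketch of what that looked like; the exact version numbers, repository URL, and plugin coordinates here are my assumptions from the Akka 2.3 / Scala 2.10 era, not a verified configuration:

```groovy
// BuildConfig.groovy (fragment) — versions are illustrative
grails.project.dependency.resolution = {
    repositories {
        grailsCentral()
        mavenCentral()
        mavenRepo "http://repo.typesafe.com/typesafe/releases/"
    }
    dependencies {
        // Akka actor + cluster support, built against Scala 2.10
        compile "com.typesafe.akka:akka-actor_2.10:2.3.0"
        compile "com.typesafe.akka:akka-cluster_2.10:2.3.0"
    }
    plugins {
        // The Grails Scala plugin compiles src/scala as part of the build;
        // the version here is a guess
        compile ":scala:1.0"
    }
}
```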

BootStrap.groovy looks something like:
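Here is a sketch of the idea (the service class and package names are made up for illustration; it just constructs the Scala service and starts it when the app boots, matching the “Starting AkkaClusterService.” line in the output below):

```groovy
// grails-app/conf/BootStrap.groovy — a sketch
import com.example.AkkaClusterService

class BootStrap {
    def akkaClusterService

    def init = { servletContext ->
        // Bring up the Akka cluster node when Grails starts
        akkaClusterService = new AkkaClusterService()
        akkaClusterService.start()
    }

    def destroy = {
        akkaClusterService?.stop()
    }
}
```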

AkkaClusterService.scala looks something like:
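A sketch of the Scala side; the structure and listener are assumptions, but the actor system name must match the “AkkaCluster” name that shows up in the logs and in application.conf:

```scala
// src/scala/AkkaClusterService.scala — a sketch, Akka 2.3-era API
import akka.actor.{Actor, ActorSystem, Props}
import akka.cluster.Cluster
import akka.cluster.ClusterEvent._

// Logs cluster membership changes, like the "Member is Up" lines below
class ClusterListener extends Actor {
  override def preStart(): Unit =
    Cluster(context.system).subscribe(self, classOf[MemberUp])

  override def postStop(): Unit =
    Cluster(context.system).unsubscribe(self)

  def receive = {
    case MemberUp(member) => println(s"Member is Up: ${member.address}")
    case _                => // ignore other cluster events
  }
}

class AkkaClusterService {
  private var system: ActorSystem = _

  def start(): Unit = {
    println("Starting AkkaClusterService.")
    // The system name must match application.conf and the seed-node URIs
    system = ActorSystem("AkkaCluster")
    system.actorOf(Props[ClusterListener], "clusterListener")
  }

  def stop(): Unit = if (system != null) system.shutdown()
}
```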

application.conf looks something like:
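Something along these lines; the hostname and ports are placeholders (the 127.0.0.1:2551 seed address does appear, URL-encoded, in the failure log further down):

```
# application.conf — a sketch; host/port values are placeholders
akka {
  actor {
    provider = "akka.cluster.ClusterActorRefProvider"
  }
  remote {
    netty.tcp {
      hostname = "127.0.0.1"
      port = 0   # 0 = pick any free port
    }
  }
  cluster {
    seed-nodes = [
      "akka.tcp://AkkaCluster@127.0.0.1:2551"
    ]
  }
}
```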

Now, as long as you have a seed node for your cluster application up and listening, the cluster setup should work just fine when you fire off grails run-app.

Starting AkkaClusterService.
[INFO] [03/25/2014 12:10:55.964] [localhost-startStop-1] [Remoting] Starting remoting
[INFO] [03/25/2014 12:10:56.379] [localhost-startStop-1] [Remoting] Remoting started; listening on addresses :[akka.tcp://AkkaCluster@]
[INFO] [03/25/2014 12:10:56.412] [localhost-startStop-1] [Cluster(akka://AkkaCluster)] Cluster Node [akka.tcp://AkkaCluster@] - Starting up...
[INFO] [03/25/2014 12:10:56.502] [localhost-startStop-1] [Cluster(akka://AkkaCluster)] Cluster Node [akka.tcp://AkkaCluster@] - Registered cluster JMX MBean [akka:type=Cluster]
[INFO] [03/25/2014 12:10:56.502] [localhost-startStop-1] [Cluster(akka://AkkaCluster)] Cluster Node [akka.tcp://AkkaCluster@] - Started up successfully
[INFO] [03/25/2014 12:10:56.551] [] [Cluster(akka://AkkaCluster)] Cluster Node [akka.tcp://AkkaCluster@] - Metrics will be retreived from MBeans, and may be incorrect on some platforms. To increase metric accuracy add the 'sigar.jar' to the classpath and the appropriate platform-specific native libary to 'java.library.path'. Reason: java.lang.ClassNotFoundException: org.hyperic.sigar.Sigar
[INFO] [03/25/2014 12:10:56.558] [] [Cluster(akka://AkkaCluster)] Cluster Node [akka.tcp://AkkaCluster@] - Metrics collection has started successfully
| Server running. Browse to http://localhost:8080/grailshelloworld
| Application loaded in interactive mode. Type 'stop-app' to shutdown.
| Enter a script name to run. Use TAB for completion:

If it worked, you’ll see something like:

[INFO] [03/25/2014 12:51:40.910] [] [Cluster(akka://AkkaCluster)] Cluster Node [akka.tcp://AkkaCluster@] - Welcome from [akka.tcp://AkkaCluster@]
[INFO] [03/25/2014 12:51:40.948] [] [akka.tcp://AkkaCluster@$a] Member is Up: akka.tcp://AkkaCluster@
[INFO] [03/25/2014 12:51:41.108] [] [akka.tcp://AkkaCluster@$a] Member is Up: akka.tcp://AkkaCluster@

If it didn’t work, you’ll see something like:

[WARN] [03/25/2014 12:10:57.912] [AkkaCluster-akka.remote.default-remote-dispatcher-7] [akka.tcp://AkkaCluster@] Association with remote system [akka.tcp://AkkaCluster@] has failed, address is now gated for [5000] ms. Reason is: [Association failed with [akka.tcp://AkkaCluster@]].
[INFO] [03/25/2014 12:10:57.929] [] [akka://AkkaCluster/deadLetters] Message [akka.cluster.InternalClusterAction$InitJoin$] from Actor[akka://AkkaCluster/system/cluster/core/daemon/joinSeedNodeProcess#877024691] to Actor[akka://AkkaCluster/deadLetters] was not delivered. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
[INFO] [03/25/2014 12:10:57.929] [] [akka://AkkaCluster/system/endpointManager/reliableEndpointWriter-akka.tcp%3A%2F%2FAkkaCluster%40127.0.0.1%3A2551-0/endpointWriter] Message [$Timer] from Actor[akka://AkkaCluster/deadLetters] to Actor[akka://AkkaCluster/system/endpointManager/reliableEndpointWriter-akka.tcp%3A%2F%2FAkkaCluster%40127.0.0.1%3A2551-0/endpointWriter#1345637125] was not delivered. [2] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.

That’s it. It should now be possible to define case classes in Scala and to pass them along w/o any need to write Groovy adapters. Or, if you’re feeling adventurous, write Groovy classes using the @Scalify annotation to pass to Scala.

Wash, rinse, and repeat.

Setting up Groovy Environment Manager on Windows

If you have msysgit set up on your Windows machine, then you have the two requirements to install GVM: bash and curl. Unfortunately, the MinGW environment provided by Git doesn’t completely work. GVM installs fine, but when you go to install Groovy or Grails, you get an error like:

$ gvm install groovy

Downloading: groovy 2.2.2

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100 28.3M  100 28.3M    0     0  2861k      0  0:00:10  0:00:10 --:--:-- 3033k

Installing: groovy 2.2.2
Done installing!

Do you want groovy 2.2.2 to be set as default? (Y/n): y

Setting groovy 2.2.2 as default.
ln: creating symbolic link `/c/Users/Test/.gvm/groovy/current' to `/c/Users/Test/.gvm/groovy/2.2.2': Permission denied

And for Grails, it’s the same:

Setting grails 2.3.7 as default.
ln: creating symbolic link `/c/Users/Test/.gvm/grails/current' to `/c/Users/Test/.gvm/grails/2.3.7': Permission denied

The files are in place, they just need to be properly symlinked. The PATH is ready to go, thanks to the file that is sourced by GVM.

On Windows, just crack open a normal Command Prompt window and do:

C:\Users\Test>mklink /j .gvm\groovy\current .gvm\groovy\2.2.2
Junction created for .gvm\groovy\current > .gvm\groovy\2.2.2

C:\Users\Test>mklink /j .gvm\grails\current .gvm\grails\2.3.7
Junction created for .gvm\grails\current > .gvm\grails\2.3.7

Creating a Directory Junction is functionally equivalent to creating a symbolic link here, and easier, because Windows 7 Home Premium complains if I try the latter (fixing that would require messing with the security policy, which is a pain in the ass for something that should Just Work):

C:\Users\Test>mklink /d .gvm\grails\current .gvm\grails\2.3.7
You do not have sufficient privilege to perform this operation.

Unfortunately, the gvm current command will not work, but groovy and grails will work fine.

$ gvm c
sh.exe": readlink: command not found
sh.exe": readlink: command not found
sh.exe": readlink: command not found
sh.exe": readlink: command not found
sh.exe": readlink: command not found
sh.exe": readlink: command not found
sh.exe": readlink: command not found
sh.exe": readlink: command not found
sh.exe": readlink: command not found
No candidates are in use

Speedy Disk Imaging

One way to stress test the hell out of a computer is to use an Ubuntu Live USB stick to create a disk image:

  1. Install pigz with sudo add-apt-repository "deb $(lsb_release -sc) main universe restricted multiverse" && sudo apt-get update && sudo apt-get install pigz
  2. Create and compress a disk image using all processor cores at once with dd if=/dev/sda bs=1M | pigz -9cv > disk-image.gz

Watch as all cores saturate with work!


So 6 cores can compress at ~95MB/second:

<stdin> to <stdout> 57241+1 records in
57241+1 records out
60022480896 bytes (60 GB) copied, 634.959 s, 94.5 MB/s

Resulting in a 6:1 compression ratio:

-rw------- 1 ubuntu ubuntu 9308731557 Mar 16 19:12 disk-image.gz

Power Hungry Desktops

I’ve been mucking about with a Linux desktop again, and doing electrical power measurements to figure out how efficient it is. Most home users probably aren’t thinking about this, as the difference between 100W and 200W is inconsequential to them. But I’m curious about processing capacity per unit power, or perhaps processing capacity per CPU core. When you consider that it takes about one pound of coal to produce a kilowatt-hour of electricity (enough to run a 100W computer for 10 hours), the difference is no longer inconsequential over even normal periods of operation.

At the moment, my usage pattern bounces between two systems: a Macbook Pro from 2009, and a Dell desktop from 2011.

The Macbook Pro has an Intel Core 2 Duo P8400 processor, which according to this performs at an abstract level of 1484. That works out to a performance level of 742 per processor core. It does feel slower using this system, when I’m developing and compiling software, but then it uses half the power of the bigger system (100W).

The Dell desktop has an AMD Phenom II X6 1055T Processor, which according to this performs at an abstract level of 5059. This works out to a performance level of 843 per processor core. The system uses 250W overall, to run everything.

But let’s say I’ve been thinking about buying a new Macbook Pro with Retina Display. The late-2013 model uses an Intel Core i5-4258U processor, which according to this performs at an abstract level of 4042, or 2021 per processor core. If its cores are roughly 2.7 times the performance of my current Macbook Pro’s, and at least twice the speed of the Dell desktop’s, there’s a good chance that for many single-threaded apps the overall experience of using the device would be better anyway. And let’s face it, most of the time the user interface is running on a single thread anyway. If the system also draws only 100W at idle (likely less, given the improvement in process technologies), then it offers almost the same performance at half the energy consumption, which is a huge win.

The trouble with all existing processors is that they can’t completely shut off processor cores when they aren’t needed. If I’m idle at the computer 99% of the time, and it’s able to handily process everything I’m doing, then the power spent keeping the extra cores running, even at the lowest C-state, seems like a terrible waste.

Power Hungry GPUs

One other thing that struck me as a bit odd is the fact that when I hook up a second monitor to the desktop, the power utilization measured at the wall jumps from 128W (idle) to 200W (idle). Powering each monitor uses about 20W, so I can only assume that the graphics card is chewing up the 50W difference, but I don’t understand how the GPU architecture can be so power hungry or the drivers can be so poor. It doesn’t make sense to me that the difference between driving one monitor and two is a 60% increase in total power consumption.

In a nutshell, this desktop system is burning 2 pounds of coal every 10 hours, which seems a bit much since it spends 99% of its time idling.