Detailed Error Emails For Django In Production Mode

Sometimes when you’re trying to figure out an issue in a Django production environment, the default exception tracebacks just don’t cut it. There’s not enough scope information for you to figure out what parameters or variable values caused something to go wrong, or even for whom it went wrong.

It’s frustrating as a developer, because you have to infer what went wrong from a near-empty stacktrace.

In order to be able to produce more detailed error reports for Django when running on the production server, I did a bit of searching and found a few examples like this one, but rewriting a piece of core functionality seemed a bit weird to me. If the underlying function changes significantly, the rewrite won’t be able to keep up.

So I came up with something different: a mixin-style function redirection that adds the extra step I want (emailing me a detailed report) and then calls the original handler to perform the default behavior:
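
Something along these lines (a minimal sketch, assuming the Django 1.x-era BaseHandler.handle_uncaught_exception hook, which has moved in later Django versions; ExceptionReporter and mail_admins are standard Django pieces, the rest of the names are mine):

    # Run this once at startup, e.g. from an app's __init__.py.
    from django.core.handlers.base import BaseHandler
    from django.core.mail import mail_admins
    from django.views.debug import ExceptionReporter

    # Keep a reference to the original handler.
    _original_handler = BaseHandler.handle_uncaught_exception

    def _detailed_handler(self, request, resolver, exc_info):
        """Email a DEBUG-style technical report, then run the default."""
        try:
            reporter = ExceptionReporter(request, *exc_info)
            mail_admins(
                'Detailed traceback for %s' % request.path,
                'See the HTML part of this message.',
                fail_silently=True,
                html_message=reporter.get_traceback_html(),
            )
        except Exception:
            # The extra report must never break the real error handling.
            pass

        # Perform the default behavior (the usual generic error email).
        return _original_handler(self, request, resolver, exc_info)

    BaseHandler.handle_uncaught_exception = _detailed_handler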

Note that by using this code, you do end up with two emails: the usual generic error report, and a highly detailed one containing the information you’d normally only see when hitting an error while developing with settings.DEBUG == True. The two emails arrive within milliseconds of one another. The ultimate benefit is that none of the original code of the Django base classes is touched, which I think is a good idea.

Another thing to keep in mind is that you probably want to put all of your OAuth secrets and deployment-specific values in a file other than settings.py, because the values in settings get spilled into the detailed report that is emailed. (Django does scrub settings whose names contain strings like SECRET, PASS, or KEY, but everything else appears in the clear.)

One final note: I am continuously amazed by Python. The fact that first-class functions and dynamic attributes let you hack in functionality in ways the original software designers didn’t foresee is fantastic. It really lets you get around problems that would require more tedious solutions in other languages.

Python Parametrized Unit Tests

I’ve been testing some image downloading code on Tandem Exchange, trying to make sure that we properly download a profile image for new users when they sign in using one of our social network logins. As I was writing my unit tests, I found myself doing a bit of copy and paste between the class definitions, because I wanted multiple test cases to check the same behaviors with different inputs. Taking this as a sure sign that I was doing something inefficiently, I started looking for ways to parametrize the test cases.

Google pointed me towards one way to do it, though it seemed like more work than necessary and involved some fiddling with classes at runtime. Python supports this, of course, but it felt a bit messy.

The simpler way, which doesn’t offer quite as much flexibility but involves less complexity (and no fiddling with the class at runtime), was to use Python’s mixin facility to compose unit test classes with the instance parameters I wanted.

So let’s say I expect the same conditions to hold true after I download and process any type of image:

  1. I want the processed image to be stored somewhere on disk.
  2. I want the processed image to be converted to JPEG format, in truecolor mode, and scaled to 256 x 256 pixels.
  3. I want to retrieve the processed image from the web address where I’ve published it, and make sure it is identical to the image data I’ve stored on disk (round trip test).

Here’s what that code might look like:
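
(A reconstruction of the shape of it: process_and_store() and published_url() are hypothetical stand-ins for the real image-pipeline helpers, and the avatar URLs are placeholders.)

    import os
    import unittest

    import requests          # assumed available for the round-trip check
    from PIL import Image    # assumed: Pillow, for format/mode/size checks

    # Hypothetical helpers standing in for the real pipeline:
    # process_and_store(url) downloads, converts, and saves an image,
    # returning its path on disk; published_url(path) returns the web
    # address the processed image is served from.
    from exchange.images import process_and_store, published_url


    class StandardTestsMixin(object):
        # Shared checks; the composed classes only pick the input image.

        def test_stored_on_disk(self):
            path = process_and_store(self.source_url)
            self.assertTrue(os.path.exists(path))

        def test_jpeg_truecolor_256x256(self):
            image = Image.open(process_and_store(self.source_url))
            self.assertEqual(image.format, 'JPEG')
            self.assertEqual(image.mode, 'RGB')
            self.assertEqual(image.size, (256, 256))

        def test_round_trip(self):
            path = process_and_store(self.source_url)
            response = requests.get(published_url(path))
            with open(path, 'rb') as stored:
                self.assertEqual(response.content, stored.read())


    class GoodAvatar(StandardTestsMixin, unittest.TestCase):
        source_url = 'https://example.com/avatars/good.png'


    class HugeAvatar(StandardTestsMixin, unittest.TestCase):
        source_url = 'https://example.com/avatars/huge.png'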

So what ends up happening is that each composed class simply specifies which image it wants the test functions to run against, and the inherited tests run as usual against that input parameter.

One thing readers might notice is the seemingly backwards class inheritance. It turns out (you learn something every day!) that Python reads class inheritance declarations from right to left, meaning that in the above examples, unittest.TestCase is the root of the inheritance chain. Another way to look at it: GoodAvatar instances will search StandardTestsMixin first and then unittest.TestCase for inherited methods.
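
You can see that lookup order directly, using the classes from the sketch above:

    # The leftmost base is searched first; TestCase sits at the root.
    print(GoodAvatar.__mro__)
    # -> (GoodAvatar, StandardTestsMixin, unittest.TestCase, object)
    #    (class names abbreviated)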

Google Spreadsheet Geocoding Macro

I’ve been doing a bit of nerding around with a side project, which involves editing a bunch of addresses in Google Sheets and having to geocode them into raw lat/lng coordinate pairs.

[Image: google-sheets-geocode-macro]

I went ahead and coded up a quick Apps Script macro for Google Sheets that lets you select a 3-column-wide swath of the spreadsheet and geocode a text address into coordinates.

Update 10 January 2016:

The opposite now works too: you can take latitude, longitude pairs and reverse-geocode them to the nearest known address. Make sure you use the same column order as in the image above: it should always be Location, Latitude, Longitude.

I’ve moved the source to GitHub here:
https://github.com/nuket/google-sheets-geocoding-macro

It’s pretty easy to add to your Google Sheets via Tools -> Script Editor: copy and paste the code into the editor, save, and reload your sheet. A new “Geocode” menu should appear after the reload.

Update 15 March 2021:

I’ve added code to allow for reverse geocoding from latitude, longitude pairs to the individual address components (street number, street, neighborhood, city, county, state, country).

Python Code Coverage and cron

Every now and then, it’s useful to get a sense of assurance about the code you’re writing. In fact, it might be a primary goal of your organization to have functional code. Who knows?

Although I began development of Tandem Exchange following a test-first development process, the pace of change was too rapid. It’s not that I didn’t appreciate the value of testing. At the very beginning, I did implement a large number of tests. It’s just that those tests were written against soon-to-be-obsolete code and I didn’t have the time to develop new functionality and write unit tests simultaneously. Before the prototyping phase had ended, I learned the hard way that it didn’t really make sense to write many of those tests, when such a huge fraction of early functional code ended up in the dustbin.

Once things settled down, I started to leverage the Python coverage module alongside newly written unit tests, made simple by using the nose test runner, which is a fantastic tool for test auto-discovery.

I then added the nose test runs to the development-site crontab, to generate coverage and unit test statistics on a regular basis:

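# Nightly: run the test modules under nose with branch coverage and
# publish the HTML report to a web-served directory. (Wrapped with
# backslashes here for readability; cron wants each entry's command
# on a single line.)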
@daily  /usr/local/bin/python2.7 /path/to/nosetests -v --with-coverage \
        --cover-package=exchange --cover-erase \
        --cover-html --cover-html-dir=/path/to/webdir/coverage --cover-branches \
        exchange.search_tests exchange.models_tests

All you have to do is specify a handful of extra options on the nosetests command line; it’s practically a freebie. Especially useful are the --cover-html and --cover-html-dir options, which tell nosetests to write the coverage reports into a specific directory.

In our case, I created a directory on the webhost where I can log in and check the report results, which look something like this:

[Image: coverage-clipping]

The coverage reports show which Python statements (lines) have been exercised by the unit tests. Green lines have been run at least once, red lines have not been run at all, and yellow lines mark branches whose outcomes haven’t all been exercised: an “if” statement has to be tested for both its True and False outcomes to count as fully covered. (This is branch, or decision, coverage, which the --cover-branches flag enables; it’s a lighter-weight cousin of Modified Condition / Decision Coverage.) Note, however, that a coverage test does not prove that a piece of code behaves the way you expect, only that it has been run. The unit tests are the bits exclusively responsible for proving behavior.
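
As a tiny illustration (hypothetical code, not from the Tandem Exchange sources): if the tests only ever call the function below with in-range values, the two if lines show up yellow and the early returns show up red.

    def clamp(value, low, high):
        # Constrain value to the range [low, high].
        if value < low:       # yellow: only the False outcome exercised
            return low        # red: never executed by any test
        if value > high:      # yellow: likewise tested only one way
            return high       # red: never executed
        return value          # green: covered

    # A suite that only ever calls clamp(5, 0, 10) leaves the
    # out-of-range branches untested.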

In any case, I’ve already isolated two issues through the unit tests and am now assured that they will never come back. And as the percentage of statements covered by unit tests continues to increase, I’m sure any remaining issues will shake out. Which is the whole point, isn’t it?

Terrible Connectors

Pretend you’re the biggest manufacturer of gearshifts and mechanical accessories for bicycles. You invent a fantastic range of hub-mounted electric generators for bicycles, which are intended to power the lights as a person pedals along. Your products are reliable, long-lasting, and mostly free of required maintenance. But you decide to skimp on a sensible mechanical connector for the electrical output from your generator products, instead asking the dumbest junior engineer in the office to design a connector for you. What would that look like?

Probably something like this (from source):

[Image: terrible-connections]

A friction-fit cable connection, where you pray that the wires don’t cross and that they’re thick enough to press tightly against the generator contacts.

Are you kidding me, Shimano? Please, please, please, for the love of God, talk to these people.