Detailed Error Emails For Django In Production Mode

Sometimes when you’re trying to figure out an issue in a Django production environment, the default exception tracebacks just don’t cut it. There’s not enough scope information for you to figure out what parameters or variable values caused something to go wrong, or even for whom it went wrong.

It’s frustrating as a developer, because you have to infer what went wrong from a near-empty stacktrace.

To get more detailed error reports out of Django on the production server, I did a bit of searching and found a few examples like this one, but rewriting a piece of core functionality wholesale seemed a bit fragile to me: if the underlying function changes significantly, the rewrite won’t be able to keep up.

So I came up with something different: a mixin-style function redirection that adds the extra step I want (emailing me a detailed report) and then calls the original handler to perform the default behavior:
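As a sketch of the pattern, here is a minimal, self-contained stand-in: the Handler class and the detailed_reports list below are my own illustrative names, not Django’s API. In the real project, the same wrap-and-delegate trick is applied to Django’s uncaught-exception handler, with the extra step emailing the output of django.views.debug.ExceptionReporter.

```python
# Stand-in for Django's request handler; in the real project you would wrap
# the framework's uncaught-exception handler in exactly the same way.
class Handler:
    def handle_uncaught_exception(self, request, exc_info):
        # Stock behavior: the generic admin email and plain 500 page.
        return "default 500 response"

# Keep a reference to the original, untouched implementation.
_original = Handler.handle_uncaught_exception
detailed_reports = []  # stands in for "email me a detailed report"

def handle_with_detailed_report(self, request, exc_info):
    # Extra step first: capture/email a DEBUG-style detailed report...
    detailed_reports.append((request, exc_info))
    # ...then fall through to the original handler for the default behavior.
    return _original(self, request, exc_info)

# Redirect the method; none of the original class's code is modified.
Handler.handle_uncaught_exception = handle_with_detailed_report
```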

Note that by using this code, you do end up with two emails: the usual generic error report and a highly detailed one containing the information you’d normally see when hitting an error while developing with settings.DEBUG == True. The two emails arrive within milliseconds of one another. The ultimate benefit is that none of the original code of the Django base classes is touched, which I think is a good idea.

Another thing to keep in mind is that you probably want to put all of your OAuth secrets and deployment-specific values in a file other than settings.py, because the values in settings get spilled into the detailed report that is emailed.

One final note is that I am continuously amazed by Python. The fact that first-class functions and dynamic attributes let you hack functionality in, in ways the original software designers didn’t foresee, is fantastic. It really lets you get around problems that would require more tedious solutions in other languages.

Django Testing: Creating And Removing Test Users

During the development of Tandem Exchange, I wanted to write some test routines to check the validity of the core search functions that figure out which of the users would be good matches for one another as language tandem partners.

There are a number of ways to do this. Django’s built-in unit test infrastructure is a bit limited in that it attempts to create and drop a new database for each run, to give the tests an independent and clean database in which to work. But this isn’t possible with most shared web hosts, for obvious reasons, so the Django unit test infrastructure is less than useful in those cases.

But you don’t really need to create/drop databases all the time. In my case, the schema stays the same throughout; all I want to do is add and remove users on the non-production database and check that the search functions work.

Here’s how I am planning to do it, via the unittest.TestCase setUp() and tearDown() methods:

Then all you need to do for any particular unit test is to remove the users created in the interim. This isn’t something you can use against a production database, since there is a natural race condition between the setUp() and tearDown() calls. But it should work just fine in a non-production environment, where no one’s signing up while you’re running tests.

Update: Here’s what the unittest.TestCase code looked like, in the end. Note that you must evaluate the QuerySet expression immediately in setUp() and tearDown(); failing to do so causes both to be lazily evaluated at the users_to_remove assignment, which gives you an empty set.

Locale Fun

Peter and I have been updating the Tandem Exchange site translations over the past few weeks, and are now able to announce support for Arabic, Japanese, Korean, Simplified Chinese, and Traditional Chinese as first-class UI languages on the website in addition to the original German, Spanish, French, Italian, Portuguese, and Russian translations that we did first.

There was a moment on Monday when I let slip that the website was starting to take on the feel of a “real” website, and that is true. We now have 12 full translations of the website, which Google can grok for each of its region-specific search indices, and which we can serve to a wide swath of users around the world.

One of the weirder things about multiple-language support that I learned along the way is how the HTTP Accept-Language header formats a user’s UI language preference vs. how GNU gettext specifies the same information vs. how the Unicode Consortium specifies it vs. how Facebook, Google, and Twitter specify it, and then how Django uses that information when setting the site’s presentation language and looking up the correct gettext catalogs.


The HTTP Accept-Language header usually looks something like “Accept-Language: en-US,en;q=0.8”, where the language codes are specified by a massive, long-winded standard called BCP 47. The standard defines language codes that look like en-US or zh-CN or zh-TW, but it doesn’t really bother giving a list of common language codes, leaving that instead to the combinatorial explosion that results from mixing together one entry from each of the categories of the IANA Language Subtag Registry.
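To make the q-weighted format concrete, here’s a tiny hand-rolled parser (a sketch, not a fully RFC-compliant implementation):

```python
def parse_accept_language(header):
    """Split an Accept-Language value into (code, weight) pairs, best first."""
    prefs = []
    for item in header.split(","):
        parts = [p.strip() for p in item.split(";")]
        quality = 1.0  # per the spec, a missing q parameter means q=1
        for param in parts[1:]:
            if param.startswith("q="):
                quality = float(param[2:])
        prefs.append((parts[0], quality))
    return sorted(prefs, key=lambda pair: -pair[1])

print(parse_accept_language("en-US,en;q=0.8"))
# [('en-US', 1.0), ('en', 0.8)]
```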


In the Django settings file, you specify the list of languages the website is supposed to support via settings.LANGUAGES, using language-code strings where the first entry in each tuple looks roughly like BCP 47:

    LANGUAGES = [
        ('ar', 'العربية'),
        ('de', 'Deutsch'),
        ('en', 'English'),
        ('es', 'Español'),
        ('fr', 'Français'),
        ('it', 'italiano'),
        ('ja', '日本語'),
        ('ko', '한국어'),
        ('pt', 'português'),
        ('ru', 'ру́сский'),
        ('zh-cn', '简体中文'),
        ('zh-tw', '繁體中文'),
    ]

Note, however, that the language code here is strictly lowercase and uses a hyphen as the separator.


When building the translation catalogs for Django, you have to use gettext’s locale-naming format, which looks something like en_US or zh_CN.

So when you’re looking at your app’s locale/ subfolder, it will end up containing directories looking something like:

$ ls locale
ar  de  en  es  fr  it  ja  ko  pt  ru  zh_CN  zh_TW

Note the difference in the separator (hyphens vs. underscores) and the region code (lower vs. upper case). On a case-sensitive filesystem, you have to get this exactly right.
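The conversion is mechanical, and Django ships a helper for it (django.utils.translation.to_locale). A hand-rolled sketch covering the simple two-letter-region case looks like this:

```python
def to_locale(language_code):
    """Convert a Django language code ('zh-cn') to a gettext locale ('zh_CN')."""
    if "-" not in language_code:
        return language_code
    lang, _, region = language_code.partition("-")
    return lang + "_" + region.upper()

print(to_locale("zh-cn"))  # zh_CN
print(to_locale("de"))     # de
```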

Unicode Consortium CLDR

But let’s say you’re also using the Unicode Consortium’s Common Locale Data Repository to generate some information instead of writing it all out yourself. CLDR uses a slightly different set of language/region/locale identifiers, which are generally sensible, but for the Chinese languages it uses zh_Hans (Simplified) and zh_Hant (Traditional) in its filenames and language identifiers:

ee_TG.xml       fr_CF.xml        kw_GB.xml       sah_RU.xml       yo.xml
ee.xml          fr_CG.xml        kw.xml          sah.xml          zh_Hans_CN.xml
el_CY.xml       fr_CH.xml        ky_KG.xml       saq_KE.xml       zh_Hans_HK.xml
el_GR.xml       fr_CI.xml        ky.xml          saq.xml          zh_Hans_MO.xml
el.xml          fr_CM.xml        lag_TZ.xml      sbp_TZ.xml       zh_Hans_SG.xml
en_150.xml      fr_DJ.xml        lag.xml         sbp.xml          zh_Hans.xml
en_AG.xml       fr_DZ.xml        lg_UG.xml       se_FI.xml        zh_Hant_HK.xml
en_AS.xml       fr_FR.xml        lg.xml          seh_MZ.xml       zh_Hant_MO.xml
en_AU.xml       fr_GA.xml        ln_AO.xml       seh.xml          zh_Hant_TW.xml
en_BB.xml       fr_GF.xml        ln_CD.xml       se_NO.xml        zh_Hant.xml
en_BE.xml       fr_GN.xml        ln_CF.xml       ses_ML.xml       zh.xml
en_BM.xml       fr_GP.xml        ln_CG.xml       ses.xml          zh.xml~
en_BS.xml       fr_GQ.xml        ln.xml          se.xml           zu.xml
en_BW.xml       fr_HT.xml        lo_LA.xml       sg_CF.xml        zu_ZA.xml
en_BZ.xml       fr_KM.xml        lo.xml          sg.xml


Facebook passes back the locale of its users like so: ar_AR, zh_CN, zh_TW, and there’s a list of their supported languages and locales here. Of the bunch, they’re the most consistent, sticking to a [2-letter language code + 2-letter ISO country code] combo.


Google passes back the locale of its users like so: ar, zh-CN, zh-TW, and there’s a list of their supported languages and locales here. They mix and match pure 2-letter language codes with [2-letter language code + 2-letter ISO country code] and even [2-letter language code + 3-letter ISO region code] combos. Sigh.


Twitter generally passes back the locale of their users using 2-letter language codes such as ar, de, en, etc., but for Chinese they pass back zh-cn and zh-tw for Simplified and Traditional Chinese. Of course, this is pure speculation, because the most detailed available info about this comes by reading the source of their Tweet button generator.


Needless to say, it all gets a bit confusing. But the point is this:

  1. In the Django settings.LANGUAGES list, strip your language codes down to 2 letters where possible; use all lowercase, with a hyphen separating the [language code – region code] pair, or Django will complain.
  2. On disk, make sure your translation files are in directories like locale/en, locale/zh_CN, and so on, with underscores and capital letter region codes where necessary.
  3. And if you’re ever using OAuth to authenticate your incoming users, make sure to process the locale information coming from Facebook, Google, or Twitter, into the lower-case, hyphen-separated form used by Django, before you write it into the user’s profile.
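For point 3, the normalization helper might look like this (my own sketch, not a Django utility):

```python
def normalize_locale(code):
    """Funnel provider locale variants down to Django's lowercase-hyphen form."""
    code = code.replace("_", "-").lower()
    lang, _, region = code.partition("-")
    # Collapse redundant repeats like Facebook's ar_AR down to plain 'ar'.
    if region == lang:
        return lang
    return code

for raw in ("ar_AR", "zh_CN", "zh-TW", "en"):
    print(normalize_locale(raw))
# ar
# zh-cn
# zh-tw
# en
```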

One Final Thing

It was definitely interesting to see what kind of changes to the styling and layout were necessary to support Arabic. One of the tricky things about styling properly in right-to-left mode is that the CSS float and text-align properties do not take on mirrored meanings when you set the text direction on the body content.

So in our case, we had a panel that relied on floats to style properly.

In the left-to-right case, the panel’s contents float to the left; in the right-to-left case, they float to the right.

To get it to do this, we had to add a {% if LANGUAGE_BIDI %}rtl{% endif %} class to each of the elements that needed the explicit float: change, then specify those styles like so:
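The stylesheet side of that looks roughly like this (the class names are hypothetical stand-ins for ours):

```css
/* Default, left-to-right styling. */
.panel-photo { float: left; }
.panel-text  { text-align: left; }

/* Mirrored rules, used when the template adds the "rtl" class. */
.panel-photo.rtl { float: right; }
.panel-text.rtl  { text-align: right; }
```

The template then emits class="panel-photo {% if LANGUAGE_BIDI %}rtl{% endif %}" on each affected element.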

And that’s it, for now.

Django Forms Need To Die

There are so many Web 1.0 holdovers built into Django’s form handling that seriously impede rapid Web 2.0 development.


Partially saving forms? Good luck with that. You cannot save parts of a form easily, and you cannot validate and save individual fields easily. It’s an all-or-nothing save, or you have to roll everything yourself (again).

But that’s a whole other point. I don’t understand why web frameworks even care whether a whole form is “complete” and “valid” before saving it off to the database. Other parts of the application logic can determine whether all of the data they need is available, and react accordingly. In my case, I just added an is_user_valid() method, which checks that certain fields are in fact valid before letting a user perform certain actions. (Robustness pretty much dictates that whatever processing code you have later on will need to handle incorrect data anyway.)

Why does a web framework make it possible for a user to lose the majority of the things they’ve entered into a form, only because one single field was wrong?

Why isn’t it the other way around? Why doesn’t the framework save all of the form data that it can to the database model backend, and then prompt the user to correct the invalid field? Django’s bound forms are just accidents waiting to happen, user-experience-wise. I know this because I’ve seen it during development, over and over again: you type something in, expect that it was saved (or worse, reload the page), and whoops, everything is gone. How stupid is that?

The design of Django’s form handling puts the majority of data at risk of being lost, due to the minority of data that is incorrect. This is just wrong-headed.

To solve the problem, I’ve been working on AJAX-based form-saving routines, but again, Django just gets in the way, and chokes whenever it sees an invalid input on a form you’re submitting.

Goddamn it Django, don’t worry about the whole form, just save what you can all the time and go for eventual consistency.

Update: The Django docs say this, way down at the bottom:

“Changed in Django 1.5: Django used to remove the cleaned_data attribute entirely if there were any errors in the form. Since version 1.5, cleaned_data is present even if the form doesn’t validate, but it contains only field values that did validate.”

But this is useless if you’re using ModelForms: the stupid thing still raises a ValueError when you call save() on a form that didn’t validate.

Update: OK, it looks like one workaround for this is this one, which I’ll repeat here:

Before saving the form, just set all of its fields to required = False, like so:

Update: Actually, this whole form-processing mess makes me think we’ve forgotten the adage by the late Jon Postel: “Be liberal in what you accept, and conservative in what you send.” Anything that keeps people from redundantly retyping things should be placed above the form processor’s need for perfect data before it will do anything at all with that data.

Damnit, Django: has_usable_password()

Let’s say you have an account that you’ve marked as having an unusable password. The built-in Django password-change forms won’t let you just set an initial password on that account. You have to write a new view to handle the idea that someone with an alternative authentication mechanism might also want to set a password on their account, so they can log in locally with a username + password combo.

Update: Ahh, there it is: “class SetPasswordForm, A form that lets a user change his/her password without entering the old password.” It’s in the docs after all, but not exactly the principle of least surprise at work here.