As many people have pointed out, my comic about tweets outrunning seismic waves seems to have been widely verified in yesterday’s earthquake:

It’s always nice to see real-life confirmation of your calculations! The quake started in Virginia at 13:51:04 EST, where most of my family lives.  Texts from my brother in Charlottesville (25 miles from the epicenter) were slowed down by the spike in cell traffic, but I got an IRC message from my brother in Newport News, VA at 13:52:09. Based on USArray/EarthScope detector readings posted at Bad Astronomy, his message overtook the seismic waves outside Philadelphia, and reached New England over a minute before the quake was felt there.
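For the curious, the race itself is easy to sketch in code. This is a back-of-the-envelope version only: the wave speed and messaging delay below are round-number assumptions, not the actual USArray measurements.

```javascript
// Rough sketch: when does a near-instant message "overtake" the seismic waves?
// Both constants are ballpark assumptions for illustration.
const S_WAVE_SPEED_KM_S = 3.5; // typical crustal shear-wave speed (approximate)
const MESSAGE_DELAY_S = 20;    // assumed human + network delay before the message goes out

// Seconds for the shaking to reach a point at the given distance from the epicenter
function quakeArrival(distanceKm) {
  return distanceKm / S_WAVE_SPEED_KM_S;
}

// Head start (in seconds) the message has over the shaking at that distance;
// positive means the message wins
function messageLead(distanceKm) {
  return quakeArrival(distanceKm) - MESSAGE_DELAY_S;
}

console.log(messageLead(150).toFixed(0)); // roughly Philadelphia-scale distance: message just barely ahead
console.log(messageLead(700).toFixed(0)); // New England-scale distance: ahead by minutes
```

The crossover point is simply the distance where the wave's travel time equals the messaging delay — with these assumptions, about 70 km from the epicenter.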

I once heard a story (originally told by Kevin Young) about Gerson Goldhaber, who was a physicist at Lawrence Berkeley National Lab. He was talking on the phone with another physicist at SLAC near Stanford University near the end of the day on Tuesday, October 17, 1989. The SLAC physicist suddenly interrupted with, “Gerson, I have to go! There’s a very big earthquake happening!” and then hung up. So Gerson stepped out into a group of people in the hall, made a big show of yawning and checking his watch, then said, “Aren’t we about due for an earthquake?” Before anyone could respond, the Loma Prieta earthquake reached Berkeley, and he became a legend.

My best friend from college is from Mineral, VA, a town of a few hundred people and one stoplight, which was at the epicenter of yesterday’s quake. A few years ago, he moved to Sendai, Japan, where he got an apartment just a few miles from the coast. Fortunately, he survived the March earthquake, tsunami, and nuclear meltdown. Last I heard from him, he was moving back home. He really can’t catch a break. Fortunately, it sounds like there’s not too much damage. (Though from what I remember of Mineral, I can’t help but wonder—if the quake did cause damage, how would you tell?)


I’m at a family reunion, where a YouTube-watching party inspired today’s comic. I woke up to find several emails letting me know, to my dismay, that the comic Doghouse Diaries has already done a similar strip about the same experience.

I linked to their site last year when I posted my color survey results, but I confess I hadn’t read through their archives, so I think this was just a case of parallel inspiration. Still, I really like their version and I’m sorry for the repetition!

Happy 4th of July, and to those of you spending it with family, enjoy sitting through your parents watching Double Rainbow for the first time!

Family Illness

Last fall I posted about a family illness, but didn’t give a lot of details.

In October my fiancée was diagnosed with stage III breast cancer. It’s rare for young women to get breast cancer, and she’s otherwise healthy and has no family history, so it was a real bolt from the blue.

She’s been in nonstop treatment for the last eight months, which has been an emotional and physical ordeal that’s hard to describe. We both have all the support we could ask for—including an incredible medical team—and we’ve had some really good moments during these months, but it’s still a terrifying and isolating experience. Treatment is ongoing, and there’s no well-defined end point; things are going to continue to be scary and difficult for a while.

I’m usually pretty private about my personal life, but I wanted to explain why I’ve missed some midnight comic deadlines and have been particularly hard to reach lately. I’ve also spent a lot of these eight months immersed in cancer science, and I want to be free to talk (and draw comics) about stuff I’m learning without the unexplained subject matter leaving everyone worried and guessing.

Thank you so much for your patience, kind words, and all the little flash games you all sent. And all the best to those of you who are also caring for someone with cancer, or who are struggling with cancer yourselves.

Answering Ben Stein's Question

Ben Stein published a pretty awful editorial defending Dominique Strauss-Kahn, the IMF head arrested for sexual assault. Now, I don’t disagree with him about the presumption of innocence, but the rest of the article effectively argues that smart, rich people simply don’t commit crimes. In particular, he says this:

In life, events tend to follow patterns. People who commit crimes tend to be criminals, for example. Can anyone tell me any economists who have been convicted of violent sex crimes?

On a whim, I just did a little research, and couldn’t believe what I found.  Guess who holds an economics degree?

Paul Bernardo.

For those not familiar with the case, Bernardo is one of the nastiest serial killers in history. He and his wife drugged, raped, and tortured to death a number of schoolgirls in the late ’80s and early ’90s. The story is the stuff of nightmares.

I’ll leave the debate over the rest of Mr. Stein’s article to others. But as for his suggestion that studying economics precludes becoming a violent sex criminal, it seems history provides one hell of a counterexample.

Edit: James Urbaniak has a list of some other economists involved in sex crimes.

Michael Bay's Scenario

Last year I drew a comic about the oil spill in which Michael Bay spun an over-the-top worst-case disaster scenario. One of the panels was actually slightly more plausible than the others. It was based on a real disaster which almost happened in 1973, and in two weeks it may come closer to happening than ever before.

I learned about this from John McPhee’s The Control of Nature (adapted from this article), a book that my mom gave me as a kid (Happy Mother’s Day!).  I’m not any sort of an expert on the subject, but here’s what I’ve learned so far:

Every thousand years or so, the lower Mississippi changes course.  It piles up enough silt at its delta that it ‘spills over’ to a new shortest path to the ocean. At times, the outlet has been anywhere from Texas to the Florida Panhandle.

Since the early 20th century, the Mississippi has been trying to change course again—sending its main flow down the Atchafalaya river, which offers a much shorter, steeper path to the ocean.  The Army Corps of Engineers was ordered by Congress to keep that from happening.  The center of their effort is the Old River Control Structure, which limits the flow down the Atchafalaya to 30%.

Every now and then there’s a massive flood which stresses the system. The fear is that if the Mississippi ever broke through the ORCS and the main flow was captured by the Atchafalaya, it would be very hard or virtually impossible to return it to its old route. This would devastate the people and industries around Baton Rouge and New Orleans who depend on the river (as if they haven’t had enough problems lately).  This almost happened in 1973, when a massive flood undermined the structure; this was the subject of John McPhee’s book.

They’ve since strengthened the structure, but the coming flood is quite a bit larger than the one in 1973.  In order to save New Orleans and Baton Rouge, they have to send some of the floodwaters down the Atchafalaya.

Here is the working plan for routing the water from a nightmare flood:

The Mississippi River Commission document outlining the plan is here.

This plan, put together after the devastating 1927 floods, is based on an estimate of the largest possible flood the Mississippi could ever experience.  In theory, the system is capable of handling such a flood, although much of it has never been put to the test.

The current flood moving down the Mississippi is going to stress this system to near its limit.  Here’s a version of that map with the current flow rates, with the approximate expected coming flood shown at the top:

This is based on the diagram at the ACOE Mississippi River page, which is updated daily with new flow rates.

The floods above the system are expected to crest 6′ higher than in the 1927 flood, the highest in recorded history, and 7′ higher than the 1973 flood that almost destroyed the ORCS.  Here’s the gauge just above the structure as of noon on May 8th:

The current Natchez gauge can be seen here.

The Morganza spillway has only been opened once (to take the stress off the failing ORCS in 1973), and then only partly. It’s fairly clear at this point that the Morganza spillway and the Bonnet Carré spillway will both be fully opened to route the flow away from New Orleans (which is expected to crest just a few feet below the tops of the levees there).

I have no idea how likely the Old River Control and Morganza structures are to fail, or whether a rerouting of the Mississippi through a new channel would be irreversible.  You can read some speculation on this here.

Additional resources:

Wunderground blogger Barefootontherocks maintains a page full of resources on the current Mississippi flood, and there’s a lot of information in the comments.  The excellent Jeff Masters will probably have a post on the subject in the next few days. You can see more gauges and a ton of information at the NWS page on the lower Mississippi.

Michael Bay can be reached here.

Radiation Chart Update

Ellen and I made our radiation chart in the early days of the Fukushima disaster. I intended it to provide context for radiation exposure levels reported in the media.  I included a few example doses from monitoring sites around Fukushima (the only ones I could find at the time). But our main goal was to give people a better understanding of what different radiation levels meant.  It wasn’t a guide to what was happening at Fukushima because neither of us had hard data on that.

I’ve recently corrected a few things on the chart (the old version is available here). In particular, I’ve changed the mammogram dose from 3 mSv to 0.4 mSv, based on figures from this paper.  The other figures seem to hold up, and I’ve made only small corrections elsewhere.  I’ve added a few more Fukushima-related doses where I could find data, but they’re examples only—not full coverage of the effects.  Specifically, I added total exposure figures over the weeks following the accident for Tokyo, a typical spot in the Exclusion Zone, and a station on the northwest edge of the zone that got a particularly heavy dose. Those data came from here (Google cache of now-dead MEXT page) and here.

Unfortunately, the disaster has progressed beyond simple radiation releases—there’s some amount of contaminated water, and radioactive material potentially getting into food. When radioactive material is ingested, the effects get a lot more complicated, and depend on what isotopes are there and how they’re processed by the body. Ellen’s page has a bit more information about that.

For reliable information on what’s happening in Japan, including discussions of the contamination levels, there are two sites Ellen and I recommend. One is the MIT Nuclear Science and Engineering hub, which posts periodic articles explaining aspects of the disaster, and the other is the International Atomic Energy Agency’s Fukushima Accident Update Log, which has detailed measurements from a variety of sources.

Note: Some people questioned the side-by-side comparison of short- and long-term doses.  It’s true that they’re not always the same, and I mentioned this in the intro note on the chart. Combining the two sacrificed precision for simplicity, but I don’t think it was a huge stretch—most regulatory dose limits are specified in terms of a total yearly (or quarterly) dose, which is a combination of all types of exposures.  And for those low doses, the comparison is pretty good; the place where duration becomes important is up in the red and orange zones on the chart.

Radiation Chart

There’s a lot of discussion of radiation from the Fukushima plants, along with comparisons to Three Mile Island and Chernobyl. Radiation levels are often described as “<X> times the normal level” or “<Y>% over the legal limit,” which can be pretty confusing.

Ellen, a friend of mine who’s a student at Reed and Senior Reactor Operator at the Reed Research Reactor, has been spending the last few days answering questions about radiation dosage virtually nonstop (I’ve actually seen her interrupt them with “brb, reactor”). She suggested a chart might help put different amounts of radiation into perspective, and so with her help, I put one together. She also made one of her own; it has fewer colors, but contains more information about what radiation exposure consists of and how it affects the body.

I’m not an expert in radiation and I’m sure I’ve got a lot of mistakes in here, but there’s so much wild misinformation out there that I figured a broad comparison of different types of dosages might be good anyway. I don’t include too much about the Fukushima reactor because the situation seems to be changing by the hour, but I hope the chart provides some helpful context.


Note that there are different types of ionizing radiation; the “sievert” unit quantifies the degree to which each type (gamma rays, alpha particles, etc) affects the body. You can learn more from my sources list. If you’re looking for expert updates on the nuclear situation, try the MIT NSE Hub. Ellen’s page on radiation is here.
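The unit arithmetic behind putting a reported reading "into perspective" is simple enough to sketch. The example numbers below are approximate illustrations, not values taken from the chart.

```javascript
// Converting a reported dose rate (microsieverts per hour) into a total dose
// over some period, so it can be compared against familiar benchmarks.
const USV_PER_MSV = 1000;

function totalDoseMsv(rateUsvPerHour, hours) {
  return (rateUsvPerHour * hours) / USV_PER_MSV;
}

// e.g. a monitoring station reading a steady 1 µSv/h for a full year:
const yearlyMsv = totalDoseMsv(1, 24 * 365);
console.log(yearlyMsv.toFixed(2) + " mSv"); // 8.76 mSv -- a few times typical background
```

This is exactly why "X times the normal level" is so confusing on its own: the same multiplier means very different total doses depending on how long the exposure lasts.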

Lastly, remember that while there’s a lot of focus on possible worst-case scenarios involving the nuclear plants, the tsunami was an actual disaster that’s already killed thousands. Hundreds of thousands more, including my best friend from college, are in shelters with limited access to basic supplies and almost no ability to contact the outside world. If you’re not sure how to help, Google’s Japan Crisis Resource page is a good place to start.

Edit: For people who asked about Japanese translations or other types of reprinting: you may republish this image anywhere without any sort of restriction; I place it in the public domain. I just suggest that you make sure to include a clear translation of the disclaimer that the author is not an expert, and that anyone potentially affected by Fukushima should always defer to the directives of regional health authorities.


Every now and then, I stumble on a Wikipedia passage that makes me smile. I don’t usually share them, since calling attention to them almost certainly means they’ll be rewritten or deleted, but in this case I can’t resist. The following is from the Bracket article:

Parentheses may also be nested (with one set (such as this) inside another set). This is not commonly used in formal writing [though sometimes other brackets (especially parentheses) will be used for one or more inner set of parentheses (in other words, secondary {or even tertiary} phrases can be found within the main sentence)].[citation needed]

To the three anonymous editors who together wrote this paragraph, thank you for brightening my day.

Distraction Affliction Correction Extension

Lots of people have asked me for the system I used to implement the restriction in the alt-text of today’s comic.

At various times, I thought of doing it with an X modification, a Firefox extension, a Chrome add-on, an irssi script, etc—but none of them worked too well (or involved a lot of sustained undistracted effort, which was sort of a Catch-22).  Then I hit on a much simpler solution:

I made it a rule that as soon as I finished any task, or got bored with it, I had to power off my computer.

I could turn it back on right away—this wasn’t about trying to use the computer less. The rule was just that the moment I finished (or lost interest in) the thing I was doing, and felt like checking Google News et al., before I had time to think too much, I’d start the shutdown process.  There was no struggle of willpower; I knew that after I hit the button, I could decide to do anything I wanted. But if I decided to look at a website, I’d have to wait through the startup, and once I was done, I’d have to turn it off again before doing anything else. (This works best if your ongoing activities are persistent online—for example, all my IRC chat is through irssi running in screen, so turning off my laptop doesn’t make me sign out.)

Other ‘honor system’ approaches have never worked for me.  Blocking the sites (or keeping the computer off) didn’t work—I could always find a way to argue with myself. I’d decide this day needed to be an exception for some reason, think of a project that required the computer, or just grow frustrated after a few hours and get really curious about something I’d seen on a website somewhere.  There’s some interesting research about novelty and dopamine, suggesting (tentatively) that for some people exposure to novelty may activate the same reward system that drug abuse does.  In my case, I felt like my problem was that whenever I was trying to focus on a (rewarding) project, these sites were always in the background offering a quicker and easier rush.  I’d sit down to write code, draw something, build something, or clean, and the moment I hit a little bump—math I wasn’t sure how to handle, a sentence I couldn’t word right, an electronic part I couldn’t find, or a sock without a mate—I’d find myself switching to one of these sites and refreshing.  Reward was briefly unavailable from the project, but constantly available from the internet.  Adding the time-delay removed the promise of instant novelty, and perhaps helped disconnect the action from the reward in my head.  Without that connection dominating my decisions, I could think more clearly about whether the task was really important to me.

Beyond that one rule, I put no other restrictions on myself.  Want to go read a 17-part Cracked article?  Fine!  Think you might have an important email?  Go check.  Feel like looking at Reddit for the 20th time today?  Go for it; you might find something interesting (hey, it’s where I found that dopamine article).  Want to play Manufactoria until your eyes bubble?  Absolutely.  The only catch is that you have to stare at a startup screen for 30-60 seconds first. (If you have one of those instant-boot laptops, you’re out of luck.)

It was remarkable how quickly the urges to constantly check those sites vanished. Also remarkable was that for the first time in years, I was keeping my room clean. Since the computer was no longer an instant novelty dispenser, when I got antsy or bored I’d look around my room for a distraction, and wind up picking up a random object and putting it away.

I’ve since relaxed this restriction; the family health situation I mentioned a while back has meant that I’ve had less free time lately, and when I do, mindless distractions have been welcome (thank you again to everyone who sent in games!). But just following this system for a short time was enough to break most of my distracting website habits completely, and when things return to normal around here I’ll probably start using it again.

There’s still a place for a browser extension, though.  A lot of people’s jobs require them to be on a computer running something all the time, or can’t shut down for other reasons, so my quick turning-the-computer-off trick won’t work for them.  None of my abortive attempts are worth building on, but if someone’s looking for a quick project, building an extension like this might be a good one.  It could let you impose a delay like this on loading a new page, or a page outside the current domain, or refreshing a page you’re already on (and no, just running the browser under Vista on a Pentium-133 doesn’t count).  If anyone makes a good one, I’d be happy to share it here.  Just post a link in the comments!
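If someone does take this on, the core decision rule is small. Here's a hypothetical sketch of just that rule (the function name and structure are my invention, and all the actual extension wiring—navigation listeners, the waiting overlay—is left out):

```javascript
// Hypothetical core logic for a "delay before loading a new page" extension:
// each request to load a page is gated behind a fixed wait, like staring at a
// boot screen. Only the decision rule is shown; the browser hooks are omitted.
const DELAY_MS = 45 * 1000; // a 30-60 second "startup screen" analogue

function makeDelayGate(delayMs, now = Date.now) {
  let readyAt = Infinity; // timestamp when the pending request becomes allowed
  return {
    // Called when the user asks for a new page. Returns milliseconds still to
    // wait; 0 means the page may load now.
    requestPage() {
      const t = now();
      if (readyAt === Infinity) readyAt = t + delayMs; // first ask starts the countdown
      if (t >= readyAt) {
        readyAt = Infinity; // wait consumed; the next page starts a fresh one
        return 0;
      }
      return readyAt - t;
    },
  };
}

// In a real extension, something like this would be consulted on each
// navigation event (e.g. when the requested page is outside the current domain):
const exampleGate = makeDelayGate(DELAY_MS);
```

The point of returning the remaining wait rather than just blocking is that the extension can show a countdown—the same "you can have it, you just have to wait" bargain as the power button.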

Trochee Chart

Here’s something I made as I drew today’s comic.  It’s a chart of Google results for “X Y” (in quotes) where X and Y are words from the first panel of the strip.  The first word is on the top, the second down the side (the opposite of the intuitive way, of course).

"Doctor Doctor" and "Jesus Jesus" are highest. The highest non-repeating combo is "Pirate Captain", followed by "Robot Monkey" and "Penguin Zombie".
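The chart-building step itself is just a grid of lookups. Here's a sketch of it with made-up hit counts standing in for the real API queries (the word list and every number below are fabricated for illustration; `getHitCount` is a stand-in, not the actual tool):

```javascript
// Build an X-by-Y grid of hit counts for the phrase "X Y" and find the
// highest-scoring non-repeating pair. Counts here are invented placeholders.
const words = ["doctor", "jesus", "pirate", "captain", "robot", "monkey"];

const fakeCounts = { "doctor doctor": 9000, "pirate captain": 5000,
                     "robot monkey": 3000, "captain pirate": 40 };
const getHitCount = (x, y) => fakeCounts[`${x} ${y}`] || 0;

// Scan the full grid, skipping repeated words like "doctor doctor"
let best = { pair: null, count: -1 };
for (const x of words) {
  for (const y of words) {
    const count = getHitCount(x, y);
    if (x !== y && count > best.count) best = { pair: `${x} ${y}`, count };
  }
}
console.log(best.pair); // "pirate captain"
```

Note that the full grid needs a query per ordered pair—with n words that's n² searches, which is exactly where a rate-limited API key starts to matter.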

I generated this using a Google API variable search tool developed by Eviltwin on #xkcd. (I’m not linking to the tool so as to avoid potentially getting his API key revoked.) Edit: He now offers the source and says it can be run without a key, and is happy to let people use it until Google does something. Not only is the API helpful in making these kinds of charts (which I spend more time doing than I care to admit), it also gives a roughly accurate count of results—in contrast to the Google search page.

The “number of results” count that Google gives when you search is clearly fabricated.  This is clear for a few reasons.  When Google says this:

Excellent!  That's a lot!

You can tell that it’s wrong first by scrolling to the end of the results.  When you get to page 32, it suddenly becomes:

I learned in AP Calculus that 316 is WAY less than 190,000.

This doesn’t usually matter, since nobody looks much past the first few pages of results, but it’s annoying if you’re trying to use the number of results as a measure of something.  When I was making the Numbers comic, I didn’t use the API, and there were a few graphs I had to throw out, crop, or put on an unnecessary log scale; otherwise, Google’s clumsy number-fudging made the graphs look nonsensical.  I can’t find a good example now (perhaps they’ve smoothed it out a bit) but when searching for things like “I was born in <X>”, the results for successive years would look something like this:

… 150 : 200 : 250 : 300 : 350 : 117,000 : 450 : 251,000 : 500 : 550 : 312,000 : 320,000 : 390,000 : 425,000 …

If you scrolled to the last page for each, you’d find that the smaller counts were roughly accurate, but the counts in the hundreds of thousands had no more actual results than their neighbors.
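That paging check is easy to mimic with a simulation. The mock below reproduces the mismatch from the screenshots above—a claimed 190,000 results that run out at 316—with `mockSearch` standing in for an actual search request:

```javascript
// Simulated illustration: the reported "total" can be wildly larger than what
// you find by paging to the last result page.
function mockSearch(query, page, perPage = 10) {
  // The query is ignored in this mock; a real search would use it.
  const actual = 316; // results that actually exist
  const start = page * perPage;
  const items = Math.max(0, Math.min(perPage, actual - start));
  return { estimatedTotal: 190000, items };
}

// Page through until the results run out, counting what's really there
function countByPaging(query) {
  let total = 0;
  for (let page = 0; ; page++) {
    const { items } = mockSearch(query, page);
    if (items === 0) break;
    total += items;
  }
  return total;
}

console.log(mockSearch("example", 0).estimatedTotal); // 190000
console.log(countByPaging("example"));                // 316
```

Against a real search engine you'd do the same loop over result pages, which is slow and rate-limited—part of why the API's more honest count is so handy.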

I suppose it’s remotely possible that these numbers are correct, there are no years with an in-between number of hits, and for some reason they’re just not showing you most of the promised pages when you try to flip through them.  But making this even less likely is the fact that the search API (which is apparently being deprecated and replaced right now) doesn’t return these bad numbers—it gives reasonable-looking results which seem to be roughly consistent with the number you come up with by navigating to the last search page.

So it really looks like there’s a certain threshold of result volume beyond which Google apparently says “screw it” and throws out a gigantic number.  I imagine this is probably due to incompetence rather than intentional deception; I’m sure it’s hard to generate pages quickly from many sources, and maybe for searches with a lot of results they don’t have time to get it all synced up.  So they fudge the numbers.  The fact that this makes it look like they have way more results than they do is presumably just an unintended bonus.

All in all, this isn’t a big deal and I don’t think there’s anything particularly evil about it. It does make it hard to use Google hits as an accurate gauge of anything, but I suppose if you’re trying to study something by seriously analyzing Google result counts, you have bigger methodological problems to worry about.

Edit: As Mankoff observes, it looks like the API sometimes *underestimates* the number of results, too.  For example, it still reports 0 results for “narwhal zombie”, when a regular search shows quite a few. Now, I notice, scrolling through them, that most either have some minor character/text in between the two words, or are related to the comic I just posted.  But at least one seems to date back to last year.