Boston officials investigating this week’s marketing campaign gone awry should include themselves in the scrutiny, asking whether they overreacted to the incident.


In case you missed the story, Cartoon Network, a division of Time Warner’s Turner Broadcasting, recently launched a “guerrilla marketing campaign” to promote its new adult-audience cartoon Aqua Teen Hunger Force. As part of the campaign, the network hired the New York marketing firm Interference Inc. to place notepad-sized, electronically lit signs of the show’s “Mooninite” characters in unusual locations around urban areas.


The campaign received little notice in New York, Los Angeles, Chicago, Atlanta, Seattle, Portland, San Francisco, Philadelphia, and Austin, Texas. But in Boston, public officials treated the signs as a possible terrorist threat, closing bridges, subway stations, roadways, and even part of the Charles River while bomb squads removed the signs.


Once the nature of the signs became known, Boston Mayor Thomas Menino issued a press release blasting the campaign:

It is outrageous, in a post‑9/11 world, that a company would use this type of marketing scheme. I am prepared to take any and all legal action against Turner Broadcasting and its affiliates for any and all expenses incurred during the response to today’s incidents.

Estimates for those expenses have already topped $1 million.


Boston officials’ initial concern is understandable and appropriate. An out-of-place object containing batteries, circuitry, and glowing lights is unsettling in these times, and it should be investigated. But at what point should Boston officials have realized that the signs posed no threat and called off the bomb squads?


This raises an issue that we often discuss here at Cato, and that has become especially important in the post‑9/11 era: should we be more concerned about Type‑1 errors (false positives) or Type‑2 errors (false negatives)?


Detection systems, whether mechanical (burglar alarms, ultrasounds) or human (analysts, emergency services workers), are rarely error-free. Often we must choose between a very sensitive system that will likely detect any real problem but also subjects us to Type‑1 errors, and a less sensitive system that won’t give us many false alarms but may miss a real problem, a Type‑2 error.
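To make the tradeoff concrete, here is a minimal sketch in Python (my illustration, not anything drawn from the campaign or from any real screening system; the score distributions, threshold values, and threat counts are all invented) that simulates a threshold-based detector scanning a population of mostly harmless objects:

```python
import random

random.seed(0)

# Hypothetical detector: every object gets a "suspiciousness" score.
# Real threats score higher on average, but the two distributions
# overlap, so no threshold separates them perfectly.
def suspicion_score(is_threat: bool) -> float:
    mean = 0.7 if is_threat else 0.3  # invented numbers, for illustration only
    return min(1.0, max(0.0, random.gauss(mean, 0.15)))

# 10 real threats hidden among 10,000 harmless objects.
objects = [i < 10 for i in range(10_000)]
scored = [(suspicion_score(t), t) for t in objects]

# A lower threshold means a more sensitive detector.
for threshold in (0.4, 0.5, 0.6):
    false_pos = sum(s >= threshold and not t for s, t in scored)  # Type-1: false alarms
    false_neg = sum(s < threshold and t for s, t in scored)       # Type-2: missed threats
    print(f"threshold {threshold}: {false_pos} false alarms, {false_neg} missed threats")
```

Running the sketch shows the bind: the most sensitive setting catches essentially every threat but floods responders with thousands of false alarms, while stricter settings cut false alarms dramatically at the price of occasionally missing a real threat. Neither error rate can be driven to zero without inflating the other.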


Boston officials’ bomb-squad response to the mooninite signs is a perfect example of a Type‑1 error produced by a highly sensitive detection system. I suspect that government officials would defend the high sensitivity, saying “it’s better to be safe than sorry.”


But Type‑1 errors can end up making us feel very sorry. The current Iraq War can be considered a Type‑1 error resulting from the Bush administration’s high sensitivity to the threat posed by Saddam Hussein’s regime.


Or consider the 2002 Beltway sniper attacks, during which local schools announced that they were in “lockdown mode” and keeping schoolchildren indoors: “better safe than sorry” in action. The snipers later told police that those announcements enticed them to try to kill a child, and they ultimately wounded a 13-year-old boy as he arrived at his middle school in Bowie, Md.


For an excellent discussion of why 9/11 should not lead us to be too accepting of Type‑1 errors, read Ohio State University national security professor John Mueller’s article “A False Sense of Insecurity?”