Despite all the rhetoric from Thomas Jefferson down to the latest self-important musings of journalists about journalism’s being the first, best hope for a healthy polity, your newspaper is lying to you. While assuring you that it provides precise information about public policy issues, in many cases it is only pushing speculation and rumor in the guise of fact. Most of the time you have no independent way to confirm its claims, so how can you tell when a newspaper is lying?

Here’s a hint: watch out for the numbers. Newspapers are filled with contextless reports of the latest things government officials have said or decided. But newspapers do like to throw in a number now and then to add verisimilitude to the tales they tell.

Knowledge of the media’s inability to get it straight, especially when dealing with numbers and statistics, has become widespread enough to inspire a widely reviewed book — Tainted Truth: The Manipulation of Fact in America by Cynthia Crossen. It has also given rise to a new magazine, the quarterly Forbes MediaCritic, the latest addition to the Forbes family of publications.

While ideologues of all persuasions like to blame media inaccuracies on political biases, the causes of journalism’s troubles are, unfortunately, inherent in the way daily newspapers, those first drafts of history, are written: hurriedly and by generalists who, even if they are unfailingly scrupulous (which can’t always be assumed), are often ignorant of the topics on which they write and depend blindly on what others tell them — and what others tell them is very often biased. Unfortunately, those first drafts of history are all most laypersons read.

The Problems with Numbers

Our intellectual culture is drunk on numbers, addicted to them: we need them in every situation, we feel utterly dependent on them. As sociologist Richard Gelles aptly put it in a July 25, 1994, Newsweek story on the media’s problems with numbers, “Reporters don’t ask, ‘How do you know it?’ They’re on deadline. They just want the figures so they can go back to their word processors.” The culture of the poll dominates: the foolish notion that not only every fact but every thought, whim, and emotion of the populace can be stated in scientifically valid and valuable numbers.

The lust for numbers can, at its best, lead people to do hard research and dig up interesting and useful information. More often, however, it leads to dignifying guesses with misleadingly precise numbers. For example, it wasn’t enough to know that people were dying in Somalia; as Michael Maren reports in the Fall 1994 Forbes MediaCritic, reporters felt it necessary to latch onto some relief workers’ guesses and repeat them over and over, only occasionally letting slip honest acknowledgments that no one really knew how many people were actually dying and that no one was taking the trouble to attempt accurate counts.

The obsession with numbers leads to particularly egregious errors in reports on economic figures and aggregates and the federal budget. Those errors include calling spending that doesn’t equal what had been planned a “cut” in spending, even if more is being spent than the year before; relying on static economic analysis, especially when calculating the effects of tax increases and their concomitant revenues (because they assume that people do not change their behavior when their taxes are raised, members of Congress and reporters make grievously wrong predictions about expected revenues); and relying uncritically on numerical tools such as the Consumer Price Index.
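
To see how the baseline convention turns an increase into a “cut,” here is a minimal Python sketch with invented dollar figures; nothing in it comes from any particular budget.

```python
# Hypothetical figures: spending rises from one year to the next,
# yet is reported as a "cut" because it falls short of the planned baseline.
spent_last_year = 100.0   # billions actually spent last year
planned_baseline = 110.0  # what had been planned for this year
spent_this_year = 105.0   # billions actually spent this year

growth = spent_this_year - spent_last_year      # +5.0 billion: spending grew
shortfall = spent_this_year - planned_baseline  # -5.0 billion: the reported "cut"

print(f"change versus last year: {growth:+.1f} billion")
print(f"change versus baseline:  {shortfall:+.1f} billion (the headline 'cut')")
```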

Especially during the 1992 election, “quintile” analysis of the effects of the Reagan-Bush years on income and tax-burden equality abounded, with hardly any explanation of the complications of such analyses. Those complications include the fact that when people in lower income quintiles become richer, they often move into a higher quintile rather than buoy the average of the lower one. Yet income added to the highest quintile can do nothing but increase that quintile’s average income. That creates a misleading picture of the rich getting much richer while the poor stagnate.

Quintile analysis is also static, but income mobility is common in America, so it’s not always the same people who languish in lower quintiles or whoop it up at the top. And quintile analysis often relies on households, not individuals — the top quintile can have more than 20 percent of Americans, the bottom less than 20 percent. But all of those complications are overlooked in the media’s craving for numbers to toss around.
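
To make the averaging asymmetry concrete, here is a small illustrative sketch; the ten household incomes are invented, and the point is only the arithmetic of quintile averages, not any real income data.

```python
import statistics

def quintile_means(incomes):
    """Sort incomes, split them into five equal-sized quintiles, average each."""
    s = sorted(incomes)
    n = len(s) // 5
    return [statistics.mean(s[i * n:(i + 1) * n]) for i in range(5)]

# Ten hypothetical household incomes, in thousands of dollars.
before = [10, 12, 20, 22, 30, 32, 40, 42, 80, 90]

after = list(before)
after[0] += 25   # the poorest household gains 25 and jumps out of the bottom quintile
after[-1] += 25  # a top household gains the same 25 and stays in the top quintile

print("quintile averages before:", quintile_means(before))
print("quintile averages after: ", quintile_means(after))
# Identical dollar gains, but the bottom quintile's average rises far less,
# because its gainer is reclassified upward and only the next-poorest households
# remain to be averaged, while the top quintile's average absorbs its member's
# entire gain. The snapshot makes the top look like the only winner.
```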

The media even ignore the fact that “counts” of macroeconomic variables can change retroactively — 1993 data on 1992 quantities can be different from 1994 data. As an example, in 1993 the Bureau of Labor Statistics listed Arkansas as the state with the highest percentage rise (3 percent) in nonfarm employment from July 1991 to July 1992. Candidate Clinton touted that percentage in campaign ads. But by March 1994 the facts had changed. Although Arkansas was then thought to have had a 3.7 percent rise in employment during the 1991–92 period, it ranked behind Montana’s 4.22 percent and Idaho’s 4.21.

Macroeconomic aggregates, such as gross national product, on which the media often rely for numerical ballast, are often riddled with conceptual problems, such as that of counting as additions to our national product any cash transactions, including the classic example of neighbors’ paying each other to mow each other’s lawns, and ignoring any noncash transaction that adds to economic well-being. Other economic numbers bandied about by the media, such as unemployment rates, job growth, and the “cost” of various tax increases or cuts, are often derived from random samplings, self-reported information, and guesswork. Economics is a study of human action, not of numbers; the press’s overdependence on frequently dubious aggregates helps disguise the problem and muddles readers’ understanding of what economics — and prosperity — is really about.

Where Do the Numbers Come From?

There are many ways to mislead while allegedly presenting accurate counts or measures to the public. The most sinister is to simply make up numbers or make completely bald-faced guesses. That happens more often than you might think. The demand for information has far outstripped the supply. Coming up with reliable numbers to support all the things that journalists want to say and the public wants to know is often prohibitively expensive, in money or effort, or both. But the misuse and misunderstanding of numbers lead to erroneous reporting.

The total number of breast cancer victims has become a matter of much concern since the National Cancer Institute and the American Cancer Society frightened the world with the declaration that American women face a one-in-eight chance of contracting breast cancer. That scary figure, however, applies only to women who have already managed to live to age 95; one out of eight of them will most likely contract breast cancer. According to the NCI’s own figures, a 25-year-old woman runs only a 1-in-19,608 risk.
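
The gap between the two figures is easier to see with a toy cumulative-risk calculation. The age-specific rates below are invented for illustration and are not the NCI’s; the point is only that a risk that is tiny in any single year can still compound into a sizable lifetime figure by age 95.

```python
# Illustrative, made-up annual risks by age band (not the NCI's figures).
annual_risk_by_band = {
    (25, 40): 0.0001,   # about 1 in 10,000 per year
    (40, 60): 0.001,    # about 1 in 1,000 per year
    (60, 95): 0.0035,   # about 1 in 285 per year
}

def cumulative_risk(bands):
    """Chance of at least one diagnosis across every year covered by the bands."""
    p_escape = 1.0
    for (start, end), annual in bands.items():
        p_escape *= (1.0 - annual) ** (end - start)
    return 1.0 - p_escape

print(f"toy lifetime risk to age 95: {cumulative_risk(annual_risk_by_band):.1%}")
# Roughly 13 percent under these invented rates, in the neighborhood of the
# famous one-in-eight figure, even though the risk in any single year at
# age 25 is minuscule.
```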

Those very precise figures are themselves based on a phony notion: that we know how many people have breast or any other cancer. As two journalists concerned about cancer admitted in the Nation (September 26, 1994), “Not only is there no central national agency to report cancer cases to … but there is no uniform way that cases are reported, no one specialist responsible for reporting the case.” So any discussion of cancer rates in the United States is based on guesswork, and one can only hope that the guesswork is based on some attempt to be true to the facts as they are known.

In the case of other health threats, such as AIDS, we know that isn’t the case. In The Myth of Heterosexual AIDS, journalist Michael Fumento documented the discrepancy between the rhetoric about the plaguelike threat of AIDS to the nongay and non-drug-using populace and official statistics on the actual prevalence of the syndrome, which indicated that no more than 0.02 percent of people who tested HIV positive were not in those risk groups. (And even such heterosexual AIDS cases as are recorded run into a self-reporting problem: many people may not want to admit to anyone that they have had gay sex or used drugs.) As Fumento explained, projections of the future growth of the AIDS epidemic (even ones that were not hysterical pure guesses tossed out by interest groups) were often based on straight extrapolations of earlier doubling times for the epidemic (which inevitably — for any disease — lead to the absurd result of everyone on the planet and then some dying of the disease) or cobbled together from guess piled on guess. Even when the Centers for Disease Control would lower earlier estimates on the basis of new information, or make clearly unofficial speculations about higher numbers, journalists would continue to report the higher and more alarming numbers.
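
A quick sketch shows why naive doubling-time extrapolation self-destructs: assume any starting caseload you like, let it keep doubling on a fixed schedule, and the projection soon exceeds the population of the planet. The starting numbers below are hypothetical.

```python
# Hypothetical starting point: 100,000 cases, doubling once a year.
cases = 100_000
world_population = 5_600_000_000  # rough mid-1990s world population

years = 0
while cases < world_population:
    cases *= 2
    years += 1

print(f"naive doubling overtakes the world's population after {years} years")
# 100,000 * 2**16 already exceeds 5.6 billion, so the straight extrapolation
# predicts more cases than there are people on Earth in about 16 years.
```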

In the case of figures about AIDS in Africa, even the most basic numbers are not to be trusted. Journalist Celia Farber documented in Spin magazine how African health officials inflate the number of deaths from the complications of AIDS, both because AIDS cases attract foreign aid money, whereas traditional African disease and death do not, and because there is no accurate method of counting.

One relief worker told Farber that counts of children orphaned by AIDS in an African village “were virtually meaningless, I made them up myself … then, to my amazement, they were published as official figures in the WHO [World Health Organization] … book on African AIDS.… The figure has more than doubled, based on I don’t know what evidence, since these people have never been here.… If people die of malaria, it is called AIDS, if they die of herpes it is called AIDS. I’ve even seen people die in accidents and it’s been attributed to AIDS. The AIDS figures out of Africa are pure lies.”

In his autobiography, novelist Anthony Burgess gives further insight into the generation of “official” figures. He tells of creating completely fraudulent records of the classes he supposedly taught fellow soldiers while stationed in Gibraltar during World War II. His bogus “statistics were sent to the War Office. These, presumably, got into official records which nobody read.” For the sake of accuracy, we can only hope so. But if a journalist got hold of those numbers, he’d be apt to repeat them.

Similarly farcical figures are taken completely seriously by journalists. For example, activist Mitch Snyder’s assertion that the United States suffered the presence of 3 million homeless people became common wisdom for the bulk of the 1980s. Snyder’s figure was made up; he simply assumed that 1 percent of Americans were homeless to get an initial number of 2.2 million in 1980, then arbitrarily decided that since he knew the problem was getting worse, the number would hit 3 million by 1983. He claimed to be working from extrapolations based on reports from fellow homeless activists around the country, but there was no counting, no surveying, no extrapolation behind his assertion. And yet most major American newspapers reported the number; it became part of our received cultural wisdom.

In her recent book, Who Stole Feminism? How Women Have Betrayed Women, Christina Hoff Sommers actually tried to track to their sources numbers spread by feminist activists. One of the much-reported stories she debunked was that 150,000 women a year die of anorexia, which an outraged Gloria Steinem reported in her popular book Revolution from Within. Steinem cited another popular feminist tome by Naomi Wolf as her source; Wolf cited a book about anorexia written by a women’s studies academic, which cited the American Anorexia and Bulimia Center. Sommers actually checked with that group and discovered that all they’d said was that many women are anorexic. Oops.

Another feminist canard is that domestic violence is responsible for more birth defects than all other causes combined. Time and many newspapers had ascribed that finding to a March of Dimes report. Sommers tracked the assertion back through three sources, beginning with the Time reporter, and discovered that it was the result of a misunderstanding of something that had been said in the introduction of a speaker at a 1989 conference — no such March of Dimes report existed. Still, the errors of Time and the Boston Globe and the Dallas Morning News are in more clip files and data banks than is Sommers’s debunking. The march of that particular error will doubtless continue.

A third famous feminist factoid is that Super Bowl Sunday sees a 40 percent rise in cases of wife beating. That claim, said to be supported by a university study, was made in an activist press conference. (The story was also spread by a group ironically named Fairness and Accuracy in Reporting.) Similar claims began coming from other sources. Ken Ringle of the Washington Post took the time to double-check them and found that the university study’s authors denied that their study said any such thing and that the other sources that claimed to have independent confirmation of the “fact” refused to disclose their data. When a concerned activist makes up a number, few bother to be skeptical, and credulous reporting tends to drown out the few debunkers.

Unfortunately, erroneous numbers in journalism are not always the result of sincere attempts to quantify the relevant data. If you can’t imagine someone’s making the effort to really count something, and if you can imagine any reason for the source’s having an ulterior motive, best take the number with a large grain of salt. This is not a call for ad hominem attacks; it is merely a warning about when to look especially askance at numbers. Even when one is following what seems on its face to be defensible standards of sample and extrapolation, ludicrous results can ensue. For example, Robert Rector of the Heritage Foundation wrote that 22,000 Americans below the poverty line had hot tubs, and many conservative publications uncritically trumpeted the figure. But Rector’s figure was “extrapolated” from one case in a survey sample. It’s disingenuous to claim that because one poor family in a sample of 10,000 has a hot tub, 22,000 poor families have hot tubs.
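
A rough sketch of why a single case in a sample is such thin evidence: the binomial arithmetic below uses the sample size quoted above but is otherwise a textbook calculation, not Rector’s method. Wildly different underlying rates are all quite consistent with seeing exactly one hot tub among 10,000 families.

```python
from math import comb

n = 10_000    # sample size quoted above
observed = 1  # one family with a hot tub

def prob_exactly_one(rate):
    """Binomial probability of exactly one hit in n draws at a given true rate."""
    return comb(n, observed) * rate**observed * (1 - rate) ** (n - observed)

# Candidate true rates, expressed per 10,000 poor families.
for per_10k in (0.2, 0.5, 1, 2, 3, 5):
    p = prob_exactly_one(per_10k / n)
    print(f"true rate {per_10k:>3} per 10,000 -> P(exactly 1 in the sample) = {p:.3f}")
# True rates ranging from a fifth of the extrapolated figure to several times it
# are all reasonably consistent with the single observed case, so a national
# count built on it is little more than a guess dressed up in digits.
```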

Another example of numbers being attached to the uncounted, and probably uncountable, is the debate over species extinctions. Economist Julian Simon has explained that the conventionally accepted figures on the number of species disappearing yearly are based on no counts and no extrapolations from past knowledge; they are based on guesses about the current rate of extinction, and that rate is arbitrarily increased to produce the frightening number of 40,000 per year. Norman Myers, one of the leading promulgators of that figure, admits that “we have no way of knowing the actual current rate of extinction in tropical forest, nor can we even make an accurate guess.” Yet he is willing to make guesses about future rates.

Another much-touted scare figure, on workplace violence, was recently debunked in the pages of the Wall Street Journal. Reporter Erik Larson found that reports and statistics on the prevalence of workplace violence were shoddy or misleading in various respects. One report, which concluded that workers have a one-in-four chance of being attacked or threatened at work, was based on the replies of only 600 workers, who represented only 29 percent of the people whom the survey had tried to reach, which made the group largely self-selected within the original sample. Statisticians frown, with reason, on self-selected samples, which are very likely to be biased.
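
A small simulation makes the self-selection problem vivid. The response propensities below are invented; the only point is that if workers who were attacked or threatened are more eager to mail back the questionnaire, the rate among respondents can run far above the true rate without anyone lying.

```python
import random

random.seed(0)

TRUE_RATE = 0.10             # assumed true share of workers attacked or threatened
RESPONSE_IF_AFFECTED = 0.60  # affected workers are eager to answer
RESPONSE_IF_NOT = 0.25       # everyone else mostly tosses the questionnaire

population = 20_000
respondents = []
for _ in range(population):
    affected = random.random() < TRUE_RATE
    answers = random.random() < (RESPONSE_IF_AFFECTED if affected else RESPONSE_IF_NOT)
    if answers:
        respondents.append(affected)

print(f"response rate: {len(respondents) / population:.0%}")
print(f"rate among respondents: {sum(respondents) / len(respondents):.0%} "
      f"(true rate: {TRUE_RATE:.0%})")
# With these made-up propensities the overall response rate lands near 29 percent,
# and the respondents report roughly double the true rate, with no one
# fabricating a single answer.
```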

Larson also found that a Bureau of Labor Statistics report, which said that homicide is the second most frequent cause of death in the workplace, far from referring to coworkers or disgruntled ex-coworkers blasting away at their comrades, showed that three-quarters of the deaths occurred during robberies, and that many others involved police or security guards, whose jobs obviously are dangerous. But the media, and an industry of self-serving workplace violence consultants, inspired by half-understood studies and vivid memories of crazed postal workers, created an aura of offices as the Wild, Wild West that caught the imagination of many. In this case, data were not so much bogus or warped as wildly misinterpreted.

Checking the Checkers

It might seem paradoxical to condemn journalists for incessantly parroting errors when it is journalists themselves who occasionally expose errors. After all, who else would? The problem is, they don’t do it nearly enough, and no one else ever does. Even though Larson’s story appeared in the October 13, 1994, Wall Street Journal, it’s a given that many other writers and TV reporters will have missed it and sometime in the future will again parrot false suppositions about the danger of mortal violence in the workplace.

The culture of journalism is based on the principle of the citation or quote: if someone else said it, or wrote it, it’s okay to repeat it. Almost any editor or writer would scoff at that brash formulation. After all, journalists pride themselves on their withering skepticism, their credo of “if your mother says she loves you, check it out.” But the reader would be terribly naive to believe that journalists, under the crush of daily deadlines, under the pressure of maintaining long-term relationships with sources, and occasionally under the spell of ideology, always meet that standard. In the future, you can count on it, someone will go back to some story about workplace violence, or the homeless, or wife beating, written before the debunking was done, and come to an incorrect conclusion. Dogged checking of sources is rare indeed.

I recently was intrigued by a figure in our self-styled paper of record, the New York Times. In an October 25 article about the miserable state of Iraq after years of international embargo, the author, Youssef M. Ibrahim, stated that, according to UNICEF, “in the last year there has been a 9 percent rise in malnutrition among Iraqi infants.”

That figure struck me as somewhat absurd, a foolhardy attempt to assert precise knowledge in a situation where obtaining it would be extremely difficult, if not impossible. I tried to track the figure back to its source through the UNICEF bureaucracy. (There is a practical reason why many journalists end up accepting things at face value: the tracking of figures, especially through international bureaucracies, can be harrying and time-consuming indeed.) I was rewarded; although my initial supposition — that any alleged count was probably of dubious value — is probably true, I discovered that the “paper of record” couldn’t even read the UNICEF report right.

What UNICEF had actually said, with even more absurd precision, was that the total rate of — not the increase in — malnutrition among infants under one year old was 9.2 percent — a figure that seems shockingly low for an essentially Third World country suffering under an international embargo. It turned out that the survey was not done by UNICEF, as the Times had reported, but by UNICEF in collaboration with the government of Iraq — as almost anything done in Iraq probably must be. Precise figures from lands with tyrannical governments should never be trusted. And it should be remembered that in any hierarchy, even if the person at the top doesn’t have the literal power of life and death over those on the bottom, there’s a general tendency to tell those higher up only what they want to hear.

Given the preceding examples, you’d think that constant checking and rechecking of the sources of claims would be the rule in journalism. Unfortunately, it is not. Nor, apparently, is it in science. In Betrayers of the Truth, William Broad and Nicholas Wade reported on fraud and deceit — and acceptance of the same — in the scientific establishment. They found that, like journalism’s conceit about checking on whether your mother loves you, science’s conceit of being built on an elaborate system of cross-checking and confirming the results of others is mostly a myth. Hardly anyone ever checks what other people claim to have found or done.

All too often readers assume that everyone is doing his work scrupulously and well, but unfortunately, that’s not always the case, as Broad and Wade, Sommers, Fumento, Larson, Farber, and others have shown. Readers should be much more skeptical than they are.

Almost every time I read a newspaper story about a topic of which I have personal knowledge, or about an event that I’ve witnessed, I find errors — sometimes in minor details, sometimes in key ones. Almost everyone I’ve asked about this says the same. But our knowledge of journalistic error in a few specific cases doesn’t translate into a strong general skepticism.

Total skepticism is probably impossible. But greater awareness of the sorts of errors journalists tend to make can only help. Watch out for macroeconomic aggregates; try to figure out where huge counts are coming from and how they are being made; try to check the methodology and phrasing of polls; check on the self-interest of the groups that promulgate scary numbers; and remember that scary stories make great copy and should be mistrusted all the more for that reason.

If journalism were merely entertainment, this wouldn’t be so important. But despite how bad they are at it, journalists’ conceit about their key role in public policy is, unfortunately, true. Bad information can only lead to bad policy. The first step in an intelligent approach to public policy is to get the facts as straight as we can, even when we don’t have precise numbers.