In the 1970s, the EPA proposed a risk assessment and regulations for a number of industrial solvents based on toxicity tests on laboratory animals (rats and mice). The EPA assumed that the animal results could simply be "scaled up" mathematically to estimate toxic levels for humans, and that those calculations would thus yield meaningful regulations. In addition, because the lab experiments entailed a degree of uncertainty about the toxicity levels for the broader rat and mouse populations, the EPA assumed the upper end of the uncertainty range when making those calculations.
Regulation requires that exposure levels for chemicals be kept low enough that the calculated risk (under uncertainty) of premature death is below 1-in-1 million over a lifetime. If one takes the average uncertainty of the animal experiments rather than the upper limit, this requirement implies a 1-in-10 million lifetime risk. A single dose of two cigarettes in a lifetime would satisfy that criterion! For another comparison, the risk that a wayward asteroid strikes the earth is somewhere between 100 and 1,000 times greater. Yet another comparison is the risk posed by a number of naturally occurring chemicals, such as arsenic or mercury; that risk is about 10,000 times the EPA's permitted level.
Ideally, we should take a cautious approach when regulating health and safety, and regulate in a pessimistic manner. If further research indicates that the calculated risk is too pessimistic, then the regulation could be relaxed accordingly. But that has not happened.
The risk assessment for industrial solvents was challenged by academics as early as 1979 as being arbitrary and capricious, and therefore illegal. In principle, industry could use that academic judgment to challenge the EPA in court, but industry does not have the stomach for such a suit.
If the 20 or so industrial solvents from 1979 are regulated in such an arbitrary manner, what about other chemicals? Currently, there is no good way to establish the risk posed by most of the 80,000 chemicals that are used in American commerce. Only a very few (20 or so) of those chemicals have been directly studied in people, so that those chemicals' exposure risk has been established with some reliability. Only a few thousand have been measured in laboratory animals, and only a few hundred measured carefully enough that a reasonable attempt can be made at using the animal results to estimate human toxicity levels. About the remaining 60,000 or so chemicals, which haven't been studied, we know almost nothing.
What should we do in the absence of such knowledge? And what has the EPA done? We can perceive the EPA's position if we consider its response to the U.S. decision over a decade ago to dispose of nerve gases such as sarin. The chosen disposal procedure was incineration. But what about the risks posed by burning some of the most dangerous chemicals known to man? To determine that risk, a test burn was conducted and the resulting combustion products examined. Of those products, only a handful were on the EPA's list of toxic chemicals (the Integrated Risk Information System), so the EPA set the risk from exposure to the burn chemicals at zero! Yet a moment's thought should tell us that the combustion products are likely to be more dangerous than anything on the EPA list. In 2002, I was asked by the U.S. Justice Department to defend that risk assessment in court; needless to say, I declined.
This discussion is especially topical today. The January chemical leak near Charleston, W.Va., that entered the Elk and Kanawha rivers affected hundreds of thousands of people. At least one of the chemicals involved is among the 60,000 for which the EPA has no meaningful risk assessment. As a result of the leak, the media devoted some attention to how little we know about the consequences of exposure to many substances used in commerce. But none of those reports called attention to the complete failure of the EPA and other agencies to address the lack of information logically and scientifically. To answer the public's question, "Is it safe?," with talk of a one-in-a-million lifetime risk of premature death, as if such a low lifetime risk estimate were even meaningful, is to mislead the public badly.
Of course, all of this raises an important question: what does it mean to be "safe enough"? The best response to that question can be found in the late Aaron Wildavsky's 1979 paper, "No Risk Is the Highest Risk of All" (American Scientist 67 [1]: 32–37). He argued that attempts to bring one risk down to zero inevitably involve actions that increase other risks. Whenever one discusses the future there is uncertainty, and that uncertainty must enter into the discussion. But the media duck this question, and official responses are no better.
Americans deserve better from their government. The EPA should have a sound, logical, and scientific justification for its chemical exposure regulations. As part of that, agency officials need to accept that they are sometimes wrong in their policymaking, and that they need to change defective assessments and regulations.
The first head of the EPA, William Ruckelshaus, appointed by President Richard Nixon, tried to get the agency to pursue such sound, logical, and scientific policymaking. When President Ronald Reagan brought him back to the agency, he redoubled those efforts, understanding the problems that the EPA was experiencing. But the bureaucracy has been hard to change.
Over the years, Regulation has published several papers on this subject; I agree with some and disagree with others. Outside of Regulation, I find few meaningful discussions. I applaud Regulation for raising this topical issue in print. Maybe those discussions will force a change. It may not be too late.