Public-interest advocates argue that OIRA is mostly attentive to the concerns of business and therefore attempts to weaken rules by delaying their implementation, diluting their provisions, and substituting its own judgment for that of the initiating agency. Conservatives, on the other hand, complain that OIRA merely rubber-stamps agency action, delaying but rarely vetoing rulemakings that would fail a cost-benefit test.
Neither side currently has overwhelming empirical evidence to support its position. What quantitative research there is on OIRA’s effect on proposed rules tends to focus on the final finding of the review, whether “approved without change,” “approved with change,” or (rarely) “returned” to the agency for further analysis. The Center for Progressive Reform, a pro-regulation group, found that OIRA changed up to 84 percent of health and safety rules. Regardless of whether such changes are decried for reducing safety or hailed for promoting efficiency, existing research offers some support for the view that OIRA does revise the content of proposed rules.
But those studies do not offer a means to assess the practical effect of the OIRA alterations. Simply recording percentages of rules “changed” by OIRA fails to account for what sort of changes are made. “Approved with change” is a broad category that could contain anything from minor technical tweaks to removal of entire provisions. Claims that OIRA consistently deflates benefits, inflates costs, and hollows out public health and safety measures ignore the diversity and complexity inherent in a system producing thousands of rules each year. While critics can point to notable examples when OIRA review did produce such changes, it is questionable whether those anecdotes constitute a pattern.
To develop a better understanding of OIRA’s effect on rulemaking, we conducted a review of the changes in cost and benefit estimates between proposed and final rulemaking stages. This study provides a better measure of the gravity of rule changes and adds empirical grounding to a debate driven overwhelmingly by competing anecdotes.
Changes in Cost / We analyzed 160 final rules (excluding routine Federal Aviation Administration regulations) published in 2012 and 2013 that underwent some form of review. OIRA reviewed 111 of those rules, while independent agencies produced (and reviewed) the other 49. According to our analysis, the average net change in cost between the proposed and final rules was an increase of $137.1 million. The average percent change was an increase of 401 percent. However, a few rules with dramatic cost increases artificially elevated the averages.
A plurality of rules had increased costs: 74 (46 percent) had higher costs in the final stage than when originally proposed; 46 (28 percent) had lower costs; and 40 (25 percent) had no change. In the aggregate, the positive changes represented increased costs of $35.6 billion (an average change of $481 million) while the negative changes decreased costs by $13.7 billion (an average decrease of $297 million). Thus, while cost estimates are more likely to increase than decrease during the rulemaking process, the direction of change is not exclusively positive.
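The aggregate figures above can be reconciled with a few lines of arithmetic. The sketch below uses only the counts and per-category averages reported in the text; small discrepancies with the published totals reflect rounding in those figures.

```python
# Reconciling the reported aggregates (dollar amounts in millions).
# Inputs are the counts and category averages stated in the text.
n_up, n_down, n_flat = 74, 46, 40
total_rules = n_up + n_down + n_flat           # 160 rules in the sample

increase_total = n_up * 481                    # ~$35.6 billion in added costs
decrease_total = n_down * 297                  # ~$13.7 billion in reduced costs
net_change = increase_total - decrease_total   # aggregate net increase

avg_net = net_change / total_rules             # ~$137 million per rule,
                                               # matching the reported $137.1 million
print(avg_net)
```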
Differences between the proposed and final rule can arise from a variety of factors. Changes may be caused by the addition of analyses not present in the proposed form (such as determining "major" rule status under the Congressional Review Act). Or analyses may have been present in the original version of the rule but were updated as new information became available. In either scenario, the actual content of the rule is unchanged, with the change in estimated costs reflecting procedural and analytical decisions by agencies and OIRA. Finally, cost and benefit changes could reflect substantive alterations to the actual content of the rule. Because of those different possibilities, it is unclear whether increasing cost estimates should be applauded as a more accurate accounting of the rules' effects or decried as a gradual expansion of rules' reach over time.
Cost Changes by Agency / When possible, net changes were considered on an agency-by-agency basis. Few agencies produced enough final rules in the period studied to allow for meaningful conclusions, but financial regulators (in the Consumer Financial Protection Bureau, Commodity Futures Trading Commission, Securities and Exchange Commission, and Department of the Treasury), health care regulators (in the Department of Health and Human Services and the Centers for Medicare and Medicaid Services), and the Environmental Protection Agency all had sufficiently large sample sizes (more than 25).
The first column of Table 1 presents the raw average percentage change in a proposed rule’s estimated cost. The next column, “adjusted average,” excludes the three rules with the largest percent changes in each category, so as to better capture the typical change.
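As a sketch of how such an adjusted average might be computed (the function name and the sample percent changes below are hypothetical illustrations, not taken from the study's data):

```python
def adjusted_average(pct_changes, drop=3):
    """Average percent change after excluding the `drop` largest values,
    mirroring the "adjusted average" column described for Table 1."""
    trimmed = sorted(pct_changes, reverse=True)[drop:]
    return sum(trimmed) / len(trimmed)

# Hypothetical percent changes for one category: a few extreme increases
# dominate the raw average but are excluded from the adjusted one.
sample = [5000, 1200, 900, 40, 25, 10, -15, -30]
print(adjusted_average(sample))  # averages only [40, 25, 10, -15, -30]
```

Dropping a fixed number of outliers is a crude form of trimming, but it illustrates why the raw and adjusted columns can diverge sharply when a category contains a handful of rules with enormous percent changes.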
The difference between the two columns confirms that the data are highly variable. The finance rules had the largest disparity in percentage changes, as shown by the dramatic drop from the raw average to the adjusted average. The dispersion in the other two areas was far less pronounced.