As a result, big companies experienced increased sales, employment, markups, and profitability during the period examined, while smaller ones experienced just the opposite: “The smaller the firm, the more competitively disadvantaged it gets, and vice-versa.” Various markets also saw fewer entrants, lower productivity, and less investment by the late 1990s and thereafter—right as smaller firms started taking a higher regulatory hit than their larger competitors. Overall, he finds that “increased regulations can explain 31–37% of the rise in market power” among large U.S. companies during the last few decades. Singla also documents the political side of rulemaking: “While large firms are opposed to regulations in general, they push for the passage of regulations that have an adverse impact on small firms. … Hence, they are willing to incur a cost that creates a competitive advantage for them.”
Someone tell Sen. Durbin!
Anyway, previous studies have come to similar conclusions on the chilling effects of regulation on market entry and innovation, often finding big business lobbying behind the scenes. Bailey and Thomas, for example, found in 2017 that “more-regulated industries experienced fewer new firm births” between 1998 and 2011, and that “large firms may even successfully lobby government officials to increase regulations to raise their smaller rivals’ costs.” Calomiris and colleagues in 2020 found that higher regulatory exposure generally harms firms’ financial performance, but that “these effects are mitigated for larger firms.” And Calcagno and Sobel (2014) calculated that regulation “appears to operate as a fixed cost” that favors larger firms over smaller ones. This, again, is common sense: If all companies pay around the same price for regulatory compliance, the companies that make more money will suffer relatively less.
Summing It All Up
There are, I think, plenty of reasons to be skeptical of new AI regulation. For example, various technologists have pushed back on the idea that the technology, even in much advanced form, will have the power and potential to pose an existential risk to humanity. (Many have rightly noted, moreover, that current technology really isn’t “AI” at all.) And, yes, malicious actors may abuse the technology, but there are obvious and non-obvious ways that others can work to counter such baddies—including via the same technology (sorta like how your spam filter battles spambots). For those interested, Adam Thierer of the R Street Institute has a great, frequently updated primer on much of this discourse.
Meanwhile, the potential benefits of the actual ChatGPT and similar technologies—not their demonic potential selves—could be seismic in a wide range of fields. As economist Tim Taylor recently detailed, for example, the early research on AI shows tremendous upside for the productivity of workers—especially less-skilled ones—and economic growth. (He also noted, as I did a few months ago, just how hysterical and overambitious past predictions of tech-related doom have been.) AEI’s Brent Orrell notes similar benefits, and the technology is already revolutionizing medicine in various ways (e.g., cancer detection). Regulation could—consistent with past research—stifle these gains, and past attempts to regulate scary stuff (e.g., nuclear weapons) haven’t fared very well.
Thierer adds, moreover, that the U.S. tech sector owes its world-beating status to a “permissionless innovation” approach that other nations, particularly in Europe, have eschewed to their detriment. And he’s right to note that countries like China aren’t going to slow down their AI efforts just because we have. Other observers, such as former FTC official and current “innovation evangelist” Neil Chilson, have pointed out that the same people who wanted a new “digital regulator” two or three years ago are doing the same dance today for AI—even with the same legislation. (Same goes for Section 230 and content moderation.) Sure looks like they’re just groping for problems that can justify pre-existing government solutions.
People are free to disagree, and—as already noted—it’s a complicated issue with reasonable counterarguments to some of the points above. As Eli Dourado notes, moreover, Sam Altman’s lack of a financial stake in OpenAI may mean he’s sincere in his views about the need for AI regulation (though he can still be wrong and can still have other, non-financial motivations for stifling his competition). But sincerity doesn’t make a position any less wrongheaded—and it certainly doesn’t justify new federal law. Let that be decided on the merits, not some absurd conception of big business and “history.”
Charts of the Week
Huh.