Artificial intelligence (AI) that equals human intelligence is probably as fanciful as Frankenstein’s creature, but it still raises fears. Elon Musk, founder of SpaceX and cofounder of Tesla and OpenAI, recently declared that AI is “summoning the demon,” that “robots will be able to do everything better than us,” and that “there should be some regulatory oversight, maybe at the national and international level.”

Regulation of uncertain technological and economic change is an old solution. Nearly a century ago, Rexford Guy Tugwell, a progressive economist fascinated by government planning, proposed to control the rise of any new industry. “New industries will not just happen as the automobile industry did,” he wrote in 1932; “they will have to be foreseen, to be argued for, to seem probably desirable features of the whole economy before they can be entered upon.” (See “Total Regulation for the Greater Whole,” Fall 2014.) He probably imagined committees of bureaucratic experts and assemblies of politicians exercising the “proactive regulation” that Musk is now calling for. What could go wrong?

No jobs? / The big fear for many AI critics is that robots could displace large numbers of human workers, resulting in massive unemployment and poverty. But future technological progress will probably resemble what has happened previously. Technological progress in a given industry increases labor productivity, machines incorporating the new technology partly substitute for workers, and—other things being equal—fewer workers are employed in that industry. However, displaced workers move to other industries, creating jobs elsewhere in the economy. Because higher productivity—producing more with the same resources—means higher living standards, more technology will generate higher incomes and wealth. Most people will benefit from technological advances, just as we have since the Industrial Revolution.

Contrary to Luddite fears, the experience thus far is that technological progress increases employment opportunities as it raises incomes. For example, close to 12 million Americans worked in agriculture in 1910 (the year when agricultural employment reached its peak), but only 2.5 million do so today. In the meantime, the total number of jobs in the American economy increased from 37 to 151 million.

More recently, the number of jobs in manufacturing dropped from its peak of nearly 29 million in 1979 to about 12 million today, while total employment in the economy increased from 99 to 151 million. Agricultural technology was a continuation of what can be called “the first machine age,” which followed the Industrial Revolution; recent computer technologies in manufacturing are part of the “second machine age,” as MIT economists Erik Brynjolfsson and Andrew McAfee have dubbed it. (See “Pinging the Robot Next Door,” Summer 2014.)

A recent paper by David Autor (MIT) and Anna Salomons (Utrecht University School of Economics) provides more general empirical evidence that computers and robots do not threaten employment. Using a dataset covering 19 countries (the United States, Japan, and several Western European countries) over 37 years (1970–2007), Autor and Salomons find that technological progress has raised labor productivity and reduced jobs in the industries directly affected, but the resulting higher incomes in those industries generated offsetting jobs in other industries. They conclude that “productivity growth has been employment-augmenting rather than employment-reducing.”

They do observe that since the 2000s the “virtuous relationship” between technology and jobs seems to have weakened. In a few countries—the United States, Japan, and the United Kingdom—technology has destroyed more jobs than it has created. They note, however, that the 1980s also departed from the virtuous relationship, which later reasserted itself. At any rate, the data for the 2000s only go up to 2007, a period marked by “unusual economic conditions leading up to the global financial crisis.” We might add that even with the 2008–2009 recession, 15 million more Americans are employed now than in 2000.

In fact, the main factor in employment growth is population growth. And we must not forget that what matter in individuals’ preferences are income (along with self-reliance or autonomy) and welfare, not jobs and sweat. Because technological progress allows people to get more by working less, it is a blessing, not a curse.

Inequality issue / It is true that the new jobs created by technological progress require more skills, mainly in terms of knowledge, than the ones that have been eliminated. As documented by Autor and Salomons, the result has been much lower growth for mid-skill (and, in most other countries, low-skill) employment than high-skill employment. One result, they remind us, is that “the real wages of less-educated workers in both the United States and Germany have fallen sharply over the last two or three decades.” This polarization of the labor market has increased economic inequality.

The inequality issue, however, should not be exaggerated. Any change generates disruption, and it is not surprising that digitization, automation, and the onset of AI should have significant effects. As time passes, individuals will invest more in their education and the problem of low-skilled, low-paid workers should solve itself. Progress happens through disruption. The intervention of government planners would only slow technological progress, reduce incomes (compared to what they could have been) for educated workers, reduce the incentives to invest in one’s human capital, and—to borrow a mantra of social justice warriors—harm future generations.

Like any fear, the fear of robots serves as an excuse for government intervention. One proposal has recycled the idea of a “basic guaranteed income,” presumably financed by a tax on robot owners and (because this proposal is very expensive) on educated and skilled workers. This would not help motivate people to invest more in their own human capital.

THE REAL PROBLEMS

As far as catastrophic scenarios go, the standard science fiction tale is that intelligent robots would control humans—like politicians and bureaucrats now do. It is not clear how government regulation would reduce alleged harms. On the contrary, decentralization and competition would foster new ideas for robot control and provide better protection if some robots turn against humans. Individuals, not the collective, should be the robots’ masters.

The imagery of menacing robots with arms and legs, as in the Terminator movies, may sometimes hinder clear thinking on this whole topic. Some robots do have arms and legs, but they are defined by the software that runs them. Government control of robots would mean government control of software development.

The most serious danger with technological change is the possibility that the state will control the new technologies. The bureaucratic and political committees overseeing technological change could very well be captured by the elite (as so many government activities are) and shield incumbent robot owners from competition. Government could cap the number of robots or impose a license requirement on robot owners—using the excuse of protecting workers and consumers, of course. Government intervention is more likely to create a class of robotless proletarians than to prevent it.

CONCLUSION

One specific danger requires further comment: the development of robots as machines of war (or, who knows, for federally subsidized SWAT teams). As Air Force general and current Joint Chiefs of Staff vice-chairman Paul Selva testified in a Senate Armed Services Committee hearing last July, “I don’t think it’s reasonable for us to put robots in charge of whether or not we take a human life … because we take our values to war.” He is right, of course, even if barbarous enemies do not follow the same rules. But this does not require politicians and bureaucrats to take control over all robots; it just requires control over governments that use robots.

A more mundane danger is that governments use intelligent software to increase surveillance and control over their citizens. Spy agencies have already been doing this, ostensibly for our own good. Some courts use a form of AI to identify criminals who might re-offend, a process that, despite all its statistical bells and whistles, comes close to punishing pre-crimes as in Steven Spielberg’s 2002 film Minority Report. Government control of private technological development would only compound these dangers.

In short, robots are much less scary by themselves than if they are controlled by politicians and bureaucrats. The Luddites are wrong; this new technology is no different from past innovations that they decried. And neither is the danger of government.