Once you start thinking about growth, it’s hard to think about anything else.
— Robert Lucas, Nobel prize–winning economist

Economic growth is transformative, yet only now are we beginning to understand it. Rising gross domestic product (GDP) can stem from an increase in population or hours worked. Rising GDP per capita can also be driven by more capital per worker or by growing markets with more specialization (sometimes called Smithian growth). But only technical change provides innovation and Schumpeterian growth, which gave us the Industrial Revolution and ever-rising living standards. Although this form of growth is the central source of our prosperity, our knowledge of it remains in its infancy. This paper describes seven stages in the understanding of growth driven by innovation.

The First Stage: Francis Bacon (1561–1626)

Francis Bacon, an English lawyer and politician who is still remembered as the first modern philosopher of science, was the person who—in an era when people still believed history was circular—first conceived the idea of progress, which he called “progression.”1 He defined this as the addition of new knowledge to old.

Bacon also suggested that advances in knowledge drive economic growth, a prescient suggestion to have made at the beginning of the 17th century. His intuition was confirmed empirically only during the 20th century, when economist Robert Solow, in a celebrated study, observed that the capital invested per U.S. worker between 1909 and 1949 barely increased, yet over those 40 years the United States’ per capita output doubled. Solow attributed this doubling of productivity largely to advancements in technological knowledge,2 and considerable empirical evidence has since accrued to confirm that economic growth flows mainly from technical advances (in the widest sense, including advancements in health, management techniques, and allied factors). In turn, technical advances flow in great part from research and development (R&D).
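The logic of that attribution can be sketched with a stylized growth-accounting decomposition (a textbook form, not Solow’s exact specification): output growth is split between the contributions of capital, labor, and a residual that is read as technical change,

$$\frac{\Delta Y}{Y} = \alpha\,\frac{\Delta K}{K} + (1-\alpha)\,\frac{\Delta L}{L} + \frac{\Delta A}{A},$$

where Y is output, K is capital, L is labor, α is capital’s share of income, and ΔA/A is the residual (total factor productivity). If capital per worker barely rises while output per worker doubles, almost all of the measured gain must be assigned to the residual, that is, to advances in knowledge.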

The Second Stage: Francis Bacon (Again)

Bacon described progress and growth as emerging from advances in knowledge, but in doing so, he also introduced a long-lasting error. Bacon wrote that “the benefits of discoveries may extend to the whole race of man … through all time,”3 which—while obviously true—also suggests that no individual could appropriate the full benefits of research in new technologies. Consequently, Bacon argued, no individual would pay for that research: “There is no ready money.”4 Only governments, Bacon suggested, would fund research: “There is not any part of good government more worthy than the further endowment of the world with sound and fruitful knowledge.”5 For Bacon, therefore, research was (to use modern language) a public good.

Several theoreticians, including Friedrich List6 and John Stuart Mill,7 echoed this argument over the ages. Today—updating Bacon—research is often characterized as being both nonrivalrous and only partially excludable. Consequently, theoreticians often depict research as a prisoner’s dilemma, where players’ dominant strategy is to defect or free ride.8
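A minimal sketch of that framing is below; the payoff numbers are purely illustrative assumptions, chosen only to give the prisoner’s-dilemma structure. Each of two firms chooses whether to fund research or to free ride on the other’s results, and free riding is the better reply to either choice by the rival.

```python
# Toy prisoner's-dilemma framing of research funding.
# All payoff numbers are illustrative assumptions, not estimates.

# payoffs[(my_choice, rival_choice)] = my payoff
payoffs = {
    ("fund", "fund"):           3,  # both pay for research, both benefit
    ("fund", "free_ride"):      0,  # I pay; the rival copies my results for free
    ("free_ride", "fund"):      5,  # the rival pays; I copy for free
    ("free_ride", "free_ride"): 1,  # nobody researches, little new knowledge
}

def best_response(rival_choice: str) -> str:
    """Return the choice that maximizes my payoff, given the rival's choice."""
    return max(("fund", "free_ride"), key=lambda c: payoffs[(c, rival_choice)])

# Free riding is the best response whatever the rival does: a dominant strategy.
assert best_response("fund") == "free_ride"
assert best_response("free_ride") == "free_ride"
```

On these assumed payoffs, mutual free riding is the equilibrium even though mutual funding would leave both firms better off; that is the formal version of Bacon’s claim that there is “no ready money” for research.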

The Third Stage: Adam Smith (1723–1790)

Unlike Bacon, Adam Smith experienced the early stages of the Agricultural Revolution and Industrial Revolution, and he saw that Bacon was empirically wrong. Great Britain subscribed to laissez faire (the state did not fund civil scientific or civil technological research), yet British technology reigned supreme. Why? Because the market supplied it:

If we go into the workplace of any manufacturer and … enquire concerning the machines, they will tell you that such or such a one was invented by a common workman.9

Smith’s argument was echoed over the next two centuries by other observers, including Karl Marx and Friedrich Engels, who, in their Manifesto of the Communist Party of 1848, wrote of the bourgeoisie (i.e., the market):

The bourgeoisie, during its rule of scarce one hundred years, has created more massive and more colossal productive forces than have all the preceding generations together.10

Meanwhile, Joseph Schumpeter in 1942 wrote the following:

Industrial mutation incessantly revolutionizes the economic structure from within [Schumpeter’s italics].11

The empiricists, therefore, found little evidence that governments need to fund research (at least not for economic reasons).

The Fourth Stage: The Second World War, the Cold War, and Vannevar Bush (1890–1974)

By 1941, the United States had been, for half a century, the richest and most technologically advanced country in the world, having overtaken British GDP per capita around 1890. And, like the United Kingdom, the United States was laissez faire in scientific and technological research, both of which it entrusted to the market and to civil society. In the United States, therefore, as in the United Kingdom, the empiricists were vindicated, for both countries were to be the world’s great technological and economic powers.

The empiricists were further vindicated by the relative failures of the dirigiste nations, including France and the German states, whose governments funded research systematically yet which failed—contrary to the lobbyists’ myths—to converge on the United Kingdom, let alone on the United States. In both 1914 and 1939, France and Germany each had significantly lower GDP per capita, GNP per capita, and rates of industrialization than either the United Kingdom or the United States.12

It was war that pushed the British and U.S. governments into funding research. For example, in 1941 the United States embarked on a vast government-funded program of scientific and technological defense research that culminated in the triumph of the Manhattan Project. Vannevar Bush, the head of the Office of Scientific Research and Development, oversaw that project and had no reason to hail the resumption of peace, for he feared it would lead to the demobilization of the vast corps of government-funded scientists he had recruited. In 1945, therefore, Bush wrote Science, the Endless Frontier to urge the federal government to fund science in peacetime as it had in wartime.

Bush sourced his arguments from Bacon’s 1605 The Advancement of Learning, in which Bacon suggests that pure science underpins applied science. But because no individual, Bacon argued, could appropriate its full benefits, pure science was particularly dependent on government money, so Bacon (and latterly Bush) proposed the linear model of economic growth:

Government funding → science → technology → economic growth

The Fifth Stage: The RAND Corporation, Richard Nelson, and Kenneth Arrow

The problem with Bush’s linear model was that the empirical evidence did not support it in 1945 any more than it had in 1605. Even in 1776, Smith, for example, on visiting the factories and universities of his day, had found that preexisting technology was the root of both new technology and new science:

The improvements which in modern times have been made in several different parts of philosophy [pure science] have not, the greater part of them, been made in universities [they’d been made in industry].13

The universities make a powerful lobby, and they have long persuaded the general public of the veracity of the linear model. Practitioners today, however, follow Smith in recognizing science and technology simply as different hubs within a complex network of knowledge, a network characterized by mutually reinforcing feedback loops between multiple sciences and multiple technologies, and one that, under laissez faire in both the United Kingdom and the United States, had been funded by the market.14

The historical evidence of scientific success under laissez faire in the United States was a problem for Bush, so in 1946 he recruited two allies, the Douglas Aircraft Company and the U.S. Army Air Forces, to found Project RAND, later the RAND Corporation, to join him in lobbying for the government funding of science. RAND turned from history (which was full of embarrassing facts) to economics (where the heyday of “blackboard economics,” later much criticized by economist Ronald Coase, had dawned).

RAND thus employed two talented economists, Richard R. Nelson and Kenneth Arrow, to argue that the very fact that private businesses invested in R&D proved that the government should fund it instead.15 In a foundational paper of postwar thinking on the subject, Nelson wrote:

The fact that industry laboratories do basic research at all is itself evidence that we [i.e., the government] should increase our expenditure on basic research [italics in the original].16

Nelson (a later critic of orthodox neoclassical economics and advocate of evolutionary approaches17) argued that only perfect markets achieve a Pareto optimum and that every deviation from market perfection is Pareto suboptimal. Therefore, if industry laboratories do research, it is because they seek market power, a situation incompatible with Pareto optimality. In Nelson’s words, organizations with market power must “undermine many of the economic arguments for a free-enterprise economy.”18 Through the distorting lens of contemporary economic theory, research expenditure by private businesses could thus be viewed as weakening rather than reinforcing the case for free enterprise.

But this argument is misleading because perfect markets in equilibrium are either static or merely growing at a rate sufficient to keep a rising population at a constant level of output per head. A rising standard of living requires either a continuous increase in the propensity to invest (which, with a given technology, runs into diminishing returns) or improvements in technology and organization (responsible for most of the increases in labor productivity observed historically). The latter can emerge only in markets that are not perfectly competitive. Technological change is associated with markets that are competitive in the classical sense of being rivalrous and innovative. These markets, where monopoly power is transitory and always subject to erosion, generate new knowledge and the growth associated with it.

The Sixth Stage: Paul Romer (1955–) and Endogenous Growth Theory

The Nelson-Arrow thesis was obviously misleading, so in an important corrective, economist Paul Romer developed mathematical models to show how research could be funded by private entities within monopolistically competitive markets.19 These models fed the theory known as endogenous growth.

Though endogenous growth theory was clearly an improvement on ideas of Pareto optimality under perfect markets, Romer’s mathematical models were as ahistorical as Nelson’s and Arrow’s, for they were not based on empirical studies of how research operates in practice. Rather, Romer’s models were purely theoretical, and he reiterated Bacon’s view that—if economic growth is to be optimized—governments need to subsidize research:

Too little human capital is devoted to research. The most obvious reason is that research has positive external effects. An additional design raises the productivity of all future individuals who do research, but because the benefit is non-excludable, it is not reflected at all in the market price for designs.20

Consequently, Romer argued that R&D required government funding to subsidize the development of new designs in order to correct for underprovision. Economist Charles Jones, Romer’s former colleague at Stanford, has suggested that optimal R&D investment is two to four times the recorded levels of actual investment.21
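The externality Romer describes can be made concrete with a stylized idea production function of the kind used in Romer-style endogenous growth models (a textbook sketch, not a quotation of Romer’s own equations):

$$\dot{A} = \delta\, H_A\, A,$$

where A is the stock of designs, H_A is the human capital devoted to research, and δ is a productivity parameter. Because the existing stock A enters the production of new designs, every additional design raises the productivity of all future researchers; but since that benefit is unpriced, private investors weigh only their own returns, and the decentralized level of H_A falls short of what a social planner would choose.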

Although explicitly dynamic, these approaches to the problem retain many of the characteristics of the static reasoning criticized above. They compare what actually exists with an ideal “balanced growth path” that might be chosen by a “social planner” somehow equipped with the necessary information. The analysis is thus entirely “institution-free.” Ideas emerge from “production functions” requiring resource inputs rather than from entrepreneurs competing in the face of radical uncertainty. In fact, ideas are generated within firms, clubs, networks of individuals, universities, charities, think tanks, and research foundations. These institutions are themselves private “civil society” responses to transactional hazards. If there really are large social gains available from more research effort, it might not be true that the social planner is in a better position than a competitive entrepreneur to discover them or that the transactional hazards of discovering them effectively rule out private responses. It is this closer attention to institutional change and the emergence of spontaneous cooperative effort in R&D that underlies recent work.

The Seventh Stage: The Contribution Good

From Bacon to Romer, theoreticians have argued that because knowledge is nonrivalrous but not fully excludable, actors will fail to appropriate its full benefits and will underinvest in it. We would therefore expect actors to seek excludability through secrecy; and before the Scientific Revolution and Industrial Revolution, researchers were indeed secretive. Thus, Bacon encouraged researchers to take an oath of secrecy. Moreover, researchers published secretly in anagrams: Hooke, for example, published his law of elasticity as “ceiiinosssttuv” (an anagram of ut tensio, sic vis). Other researchers lodged their findings with a lawyer or a university, revealing them to claim priority only when a competitor later published.

The Scientific Revolution and Industrial Revolution, however, were characterized by a move to cooperation and openness as researchers came together to share knowledge in societies such as the Royal Society (1660), the Society for the Encouragement of Arts, Manufactures, and Commerce (1754), and the Lunar Society (1765). In a further discrediting of the linear model, all three of those societies—which were typical of the host of other societies in Great Britain during the Industrial Revolution—extended their memberships to industrial technologists as well as to gentleman scientists.22

The purpose of those societies was to encourage competitors not to be secretive but, rather, to share knowledge. In 1909, for example, the British steelmaker Sir Gerard Muntz explained why his industry had created the Institute of Metals:

Each individual has some cherished bit of knowledge, some trade secret which he hoards carefully. Perhaps by sharing it with others he might impart useful information; but by an open discussion and interchange he would, almost for certain, learn a dozen things in exchange for the one given away. [Muntz’s italics].23

And, in his classic paper, economist Robert C. Allen shows how collective invention was the dominant form of innovation in the British steel industry of the time:

If a firm constructed a new plant of novel design and that plant proved to have lower costs than other plants, these facts were made available to other firms.24

Allen, moreover, found that there was nothing unusual about the British steel industry of the day, and he concluded that collective invention remains “rampant today” among all fields of technology. A host of other scholars have echoed Allen’s findings. Thus, Thomas J. Allen and others found in an international survey of 102 firms that no fewer than 23 percent of their important innovations came from swapping information with their rivals:

Managers approached apparently competing firms in other countries directly and were provided with surprisingly free access to their technology.25

Meanwhile, economist Eric von Hippel, in a survey of 11 steel companies in the United States, showed that 10 of them regularly exchanged proprietary information.26 And economist John T. Scott found the following:

Cooperative R&D ventures do not appear to be a way for companies to avoid Schumpeterian competitive pressures that stimulate R&D investment.27

Meanwhile, the Financial Times reports how precompetitive research remains cooperative to this day. Dr. Mark Mintun, a vice president for R&D at Eli Lilly and Company, explains why drug companies, for example, have congregated in roundtables to research Alzheimer’s disease:

It is hard to find an example of such a major disease in which pharmaceutical companies and academic scientists collaborate more than we do in Alzheimer’s… . Companies had clinical data that individually they might struggle to make sense of, but when they shared this, we could quickly understand what it meant and make sure we rapidly gained understanding.28

The Homebrew Computer Club, out of which Apple and many other Silicon Valley startups came during the 1970s, speaks—as does the free-software movement—to the continued vigor of collective invention today. Collective invention, in short, is privately profitable. How?

If we follow economist W. Brian Arthur in modeling growth as the consequence of a rearrangement of ideas, the sharing of ideas becomes propitious.29 If 10 individuals each produce one idea and if those 10 individuals each share their ideas, then—for the cost of generating only one idea—each individual gets access to many potential combinations of new ideas. For example, there would be 10!/[3!(10 − 3)!], or 120, possible combinations of 3 out of the 10 contributions, and 252 ways of combining 5 together. These different potential combinations could supply each of the 10 individuals with much personalized scope for private profit.
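A quick check of that arithmetic, using only the binomial coefficients cited in the text (computed here with Python’s standard library):

```python
from math import comb

ideas = 10  # ten individuals, each contributing one idea to the pool

# Number of distinct ways to combine k of the 10 shared ideas.
for k in (3, 5):
    print(f"combinations of {k} out of {ideas} ideas: {comb(ideas, k)}")
# combinations of 3 out of 10 ideas: 120
# combinations of 5 out of 10 ideas: 252
```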

But how does a player access the knowledge of others? Only by doing his own research. Much scientific knowledge is tacit (not capable of being understood simply by reading but only by continual experience or social interaction); therefore, only active researchers possess the tacit knowledge by which to assess the research of others.30 This degree of excludability, coupled with the access to others’ knowledge, provides a private incentive to do research.

We can therefore remodel the research game from a prisoner’s dilemma into one in which players can access the research of others only if they make research contributions of their own. We call research in this new model a “contribution good.”31 Participants no longer fear that their ideas will be picked up and used by others. Rather, they fear that other potential contributors will not be sufficiently numerous to create a vibrant and potentially profitable field. Participants will welcome newcomers to a growing technical field rich in spillovers.
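The flip in incentives can be illustrated with a minimal sketch in which access to the pooled ideas is conditional on contributing. All parameter values below are illustrative assumptions; the point is only the structure: a non-contributor is excluded and earns nothing, a contributor pays a research cost but can profit from recombining the pooled ideas, and contribution pays once enough others have joined, which is why participants welcome newcomers rather than fearing imitators.

```python
from math import comb

def payoff(contributes: bool, n_other_contributors: int,
           research_cost: float = 1.0, value_per_combination: float = 0.05) -> float:
    """Toy contribution good: only contributors may recombine the pooled ideas.

    The cost and value parameters are illustrative assumptions, not calibrated values.
    """
    if not contributes:
        return 0.0                         # excluded from the pool: no access, no cost
    pool = n_other_contributors + 1        # others' ideas plus my own
    pairwise_combinations = comb(pool, 2)  # two-idea recombinations available to me
    return value_per_combination * pairwise_combinations - research_cost

for others in (3, 6, 9):
    print(f"{others} other contributors: "
          f"contribute -> {payoff(True, others):+.2f}, "
          f"free ride -> {payoff(False, others):+.2f}")
# 3 other contributors: contribute -> -0.70, free ride -> +0.00
# 6 other contributors: contribute -> +0.05, free ride -> +0.00
# 9 other contributors: contribute -> +1.25, free ride -> +0.00
```

On these assumed numbers, the fear is no longer that one’s ideas will be copied; it is that the field will be too thinly populated to repay one’s own contribution.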

This mathematical formalization of research as a contribution good reflects researchers’ behavior in real life. The sociologist Robert Merton, for example, characterized research by four imperatives: communism, universalism, disinterestedness, and organized skepticism (CUDOS), where communism represents the mutual contribution of knowledge and disinterestedness represents Merton’s belief that researchers apparently act selflessly.32 The Merton paradox (for researchers are as self-interested as anyone else) is resolved by understanding research as a contribution good: researchers do their own work to acquire the tacit knowledge by which to access the research of others, and the hope of copying thus incentivizes research rather than disincentivizing it.

Conclusion

The elaboration of the contribution good helps to explain how the spontaneous development of research activity during and after the Industrial Revolution was possible without governments funding research. This does not mean that governments should not fund research. There will always be areas of science (such as the study of orphan diseases) where the market will fail individuals, and there will always be areas of science (such as the study of tobacco or nutrition) where market players should be confronted. Moreover, the contribution-good model predicts that less-talented individuals will be deterred from entering research even though more-talented researchers might—to society’s benefit—have profited from their contributions.

Further, although groups of collaborators might be tempted to deny information to those outside in order to buttress monopoly power in the short run, a cost is attached to such a policy—the inability to access technical advances made by others. The contribution-good framework thus helps to explain the transitory nature of monopolies in classically competitive markets where secrecy and isolation result in waning competitive advantage while openness holds the prospect of continual renewal.

The characterization of research as a contribution good does not, therefore, prove that markets will always generate the optimal amount of economic growth. But the contribution good is more compatible with historical experience than are other models.