The United States has a long history of trying to improve the achievement and skills of its students, particularly disadvantaged students. Beginning with the War on Poverty in the 1960s, the country has significantly expanded funding of schools, an effort led by states and localities. But does increasing school funding improve student outcomes? We assembled and compared historical and modern research in an attempt to answer this question.

Historical research showed limited relationships between standard measures of school resources and student outcomes. That work was, however, rightfully questioned because of concerns about the quality of many studies and thus the accuracy of the results. Nevertheless, some research introduced credible evidence about the varied effects of increased funding and raised questions about the overall efficiency of resource decisions.

Modern, improved research techniques now pervade recent analyses of educational resources and outcomes. These studies exploit variation in resources from a variety of sources to consider how school funding affects student outcomes as measured by test scores, test passing rates, or continuation in schooling. We have attempted to compile the results of all high-quality analyses that provide direct evidence of the impact of added resources. This search included both published and unpublished studies and analyses from around the world, although our main emphasis was studies of U.S. schools.

It is difficult to make direct comparisons of the results across all the studies, but the analyses of test scores (arguably the most important of the measures) can be most readily compared. The impact on test scores is usually measured as the average change in student-level standard deviations (SD) following a 10 percent increase in spending. The 16 studies connecting spending and test scores in the United States have a median effect size of 0.07 SD per 10 percent increase in funding (where a positive effect means improved test scores), but the individual estimates range from −0.244 SD to +0.543 SD. Seven of these studies suggest an effect indistinguishable from zero. Part of the variation in estimated impact simply reflects imprecision in estimating the role of funding, but half the variation comes from fundamental differences in impact across studies.
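To make these units concrete, the short Python sketch below (our illustration, not part of the underlying study) converts the reported figures into implied test-score changes for other spending increases. The linear scaling it assumes is purely illustrative; the studies themselves estimate effects only for the spending changes they observe.

    # Illustrative only: converts an effect size quoted "per 10 percent spending
    # increase" into implied test-score changes, assuming linear scaling.
    # The 0.07 SD median and the -0.244/+0.543 SD range come from the brief;
    # linearity is an assumption made here for illustration, not a finding.

    MEDIAN_EFFECT_PER_10PCT = 0.07       # median across the 16 U.S. studies (SD units)
    RANGE_PER_10PCT = (-0.244, 0.543)    # lowest and highest study estimates (SD units)

    def implied_effect(spending_increase_pct: float,
                       effect_per_10pct: float = MEDIAN_EFFECT_PER_10PCT) -> float:
        """Implied test-score change (in student-level SDs) for a given
        percentage increase in spending, under simple linear scaling."""
        return effect_per_10pct * (spending_increase_pct / 10.0)

    if __name__ == "__main__":
        for pct in (5, 10, 20):
            print(f"A {pct}% spending increase implies roughly "
                  f"{implied_effect(pct):+.3f} SD at the median estimate")
        low, high = (implied_effect(10, e) for e in RANGE_PER_10PCT)
        print(f"At the study extremes, a 10% increase implies between "
              f"{low:+.3f} and {high:+.3f} SD")

At the median estimate, for example, a 20 percent spending increase would imply roughly 0.14 SD under this (assumed) linear scaling, while the spread of the underlying estimates shows how much the implied change depends on which study one consults.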

The wide variation in underlying effects can largely be attributed to the very different settings in which spending takes place. These settings range from dramatic changes in a state's funding formula, to recession-induced spending reductions, to legislative responses to legal judgments in multiple states, to differences in federal compensatory aid for disadvantaged students. Thus, each estimated spending impact applies to specific circumstances. For example, knowing how added funds under the federal Title I program affect the achievement of disadvantaged students does not necessarily tell us what would happen if schools received unrestricted funding. The median estimated impact of increased funding on test scores also masks substantial differences across age cohorts and subjects. For example, the median test-score impact from the 16 spending studies is smaller than the estimated impact of unrestricted funding increases on math in the early grades but significantly exceeds the estimated impacts on all reading performance measures and on math at age 17.

The spending impact on school attainment is harder to interpret because attainment ignores quality differences between schools and is dramatically affected by individual responses to differences in the costs and benefits of further schooling. This research more consistently finds positive impacts of spending, but again, these estimated impacts vary widely across studies and are difficult to reconcile with the historical data. Almost 80 percent of the variation in estimated impacts comes from fundamental differences in the effectiveness of spending across study circumstances. The learning losses during the pandemic make these attainment effects particularly challenging to interpret because they reveal the significant differences in achievement that can be associated with differences in school attainment.

Modern studies of the impact of capital investments, class size reduction, and teacher incentives on student outcomes similarly reveal substantial differences across settings. As with the overall spending studies, these analyses produce a range of estimates, many of them quite imprecise, and offer no clear answer as to when (or whether) instituting such policies will improve student outcomes.

This new evidence on spending impacts, like the historical evidence, does not indicate that spending does not matter. Nor does it indicate that spending cannot matter. It does indicate that simply adding more resources, without addressing how and where those resources will be used, provides little assurance that student achievement will improve. Little progress has been made in probing the results to uncover when more spending will have significant impacts and when it will not.

NOTE
This research brief is based on Danielle V. Handel and Eric A. Hanushek, “U.S. School Finance: Resources and Outcomes,” National Bureau of Economic Research Working Paper no. 30769, February 2023.