Over the years, countless reporters and even policy analysts have attempted to draw conclusions from changes in state SAT scores over time. That’s a mistake. Fluctuations in the SAT participation rate (the percentage of students actually taking the test), and in other state and student factors, are known to affect the scores.


But what if we could control for those confounding factors? As it happens, a pair of very sharp education statisticians (Mark Dynarski and Philip Gleason) revealed a way of doing just this—and of validating their results—back in 1993. In a new technical paper I’ve released this week, I extend and improve on their methods and apply them to a much larger range of years. The result is a set of adjusted SAT scores for every state reaching back to 1972. Vetted against scores from NAEP tests that are representative of the entire student populations of each state (but that only reach back to the 1990s), these adjusted SAT scores offer reasonable estimates of actual changes in states’ average level of SAT performance.
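To make the participation problem concrete, here is a minimal sketch of one common way of adjusting for it, under the assumption that a state's SAT takers come from the top fraction of a roughly normal ability distribution. The function name, the assumed standard deviation, and the example numbers are all hypothetical illustrations, not the paper's actual method, which controls for more than participation alone.

```python
import numpy as np
from scipy.stats import norm

def participation_adjusted_score(observed_mean, participation_rate, pop_sd=200.0):
    """
    Illustrative participation adjustment (not the paper's exact method).

    Assumes a state's test takers are drawn from the top `participation_rate`
    fraction of a normal ability distribution with standard deviation `pop_sd`
    (200 is an assumed value for the combined 1600-point scale). Under that
    assumption the observed, truncated mean exceeds the full-population mean
    by pop_sd * phi(z) / p, where z is the ability cutoff; this function backs
    out the implied full-population mean.
    """
    p = participation_rate
    z = norm.ppf(1.0 - p)        # cutoff that leaves the top p fraction
    mills = norm.pdf(z) / p      # inverse Mills ratio for upper truncation
    return observed_mean - pop_sd * mills

# Two states with the same observed average but different participation rates
# imply very different underlying population means (all figures hypothetical).
print(participation_adjusted_score(1050, 0.10))  # low participation: large downward adjustment
print(participation_adjusted_score(1050, 0.70))  # high participation: small adjustment
```

The point of the sketch is simply that raw state averages are not comparable when participation differs, which is why the unadjusted year-to-year comparisons criticized above are misleading.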


The paper linked above presents only the methods by which these adjusted SAT scores can be computed, but next week Cato will publish a new policy paper and Web page presenting 100 charts (two for each state) illustrating the results. How has your state’s academic performance changed over the past two generations? Stay tuned to find out…


Update: Here are the new paper and charts!