I’d missed something about the new Head Start Impact Study until this morning. It reports 44 cognitive test results, only two of which were statistically significant at the end of 1st grade. The thing is, a certain number of apparently significant results are to be expected merely by chance, and the expected number of these false positives grows in proportion to the number of tests you report.
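To see the scale of the problem, here is a quick back-of-the-envelope calculation. This is a sketch that assumes 44 independent tests at the conventional .05 threshold; the study's outcomes are surely correlated, which changes the exact numbers but not the point:

```python
# How many false positives should pure chance produce among 44 tests?
m = 44        # number of cognitive test results reported
alpha = 0.05  # conventional significance threshold

# Expected number of "significant" results if every true effect were zero.
expected_false_positives = m * alpha     # 2.2

# Probability of at least one false positive somewhere among the 44 tests.
p_at_least_one = 1 - (1 - alpha) ** m    # about 0.895

print(expected_false_positives, round(p_at_least_one, 3))
```

On these assumptions, two "significant" results out of 44 is almost exactly what chance alone predicts.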


Statisticians use a variety of formulas to control for the expected proliferation of false positives when multiple results are reported, and even if we apply a very forgiving control (the Dubey and Armitage-Parmar procedure with an assumed average correlation among results of .8), the two marginally “significant” Head Start results become, you guessed it, insignificant. (If we were to apply the very conservative Bonferroni correction, these marginal results would be savagely beaten, buried in concrete, and dropped into the Mariana Trench.)
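For the curious, both corrections fit in a few lines. This is a sketch using a hypothetical marginal p-value of .04 (the report's actual p-values aren't quoted here); the Dubey/Armitage-Parmar adjustment is the Šidák formula applied to an effective number of tests shrunk by the assumed average correlation:

```python
m = 44       # number of reported tests
r_bar = 0.8  # assumed average correlation among outcomes
p = 0.04     # hypothetical marginally "significant" p-value

# Bonferroni: multiply the p-value by the number of tests (capped at 1).
p_bonferroni = min(1.0, p * m)      # 1.0 -- utterly dead

# Dubey/Armitage-Parmar: Sidak correction with an effective number of
# tests m ** (1 - r_bar), crediting the correlation among outcomes.
k = m ** (1 - r_bar)                # about 2.13 effective tests
p_dap = 1 - (1 - p) ** k            # about 0.083 -- still above .05

print(p_bonferroni, round(p_dap, 3))
```

Even under the forgiving correction, the adjusted p-value lands above .05, so the result loses significance.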


In short, this very high-quality study of a very large national sample of students reveals absolutely no evidence of statistically significant cognitive benefits from Head Start at the end of first grade. None.


To their considerable credit, the authors acknowledge this issue in footnote 99 on page 6–2 of the full report, linked above, and in the notes to their results tables in section 4 of the report, on cognitive outcomes. Indeed, when they apply their own choice of control for false positives due to multiple tests (Benjamini-Hochberg), they, too, find that none of the cognitive effects holds up. Kudos.
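The Benjamini-Hochberg procedure the authors use is simple to sketch: sort the p-values, then reject hypotheses up through the largest rank k whose p-value falls at or below k/m times the chosen false-discovery rate. A minimal sketch with made-up p-values (two marginal ones among 44 tests, mirroring the study's situation but not its actual numbers):

```python
def benjamini_hochberg(pvals, q=0.05):
    """Return the indices of hypotheses rejected at false-discovery rate q."""
    m = len(pvals)
    # Sort p-values, remembering their original positions.
    order = sorted(range(m), key=lambda i: pvals[i])
    # Find the largest rank k (1-based) with p_(k) <= (k / m) * q.
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * q:
            k_max = rank
    # Reject every hypothesis at or below that rank.
    return {order[r] for r in range(k_max)}

# Two marginal p-values among 44 tests: nothing survives.
print(benjamini_hochberg([0.04, 0.045] + [0.5] * 42))  # -> set()
```

With 44 tests, the most lenient threshold a marginal p-value faces is roughly .05/44 ≈ .001, so p-values near .04 stand no chance, which matches the authors' own finding.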