One of the most frequent and least serious criticisms that comes across my desk is that immigration is bad because non-citizens vote illegally in numbers large enough to sway elections. A new report by James D. Agresti, promoted by some news outlets, argues that the number of non-citizens who illegally voted in 2020 substantially increased Biden’s vote share but did not affect the outcome of the election. It has been illegal for non-citizens to vote for federal elected officials since 1996, so these non-citizen voters would all be breaking federal law. Is the Agresti paper reliable? Are non-citizens voting in federal elections in such large numbers that several states voted for Biden as a result?
No, but to understand why you have to follow how the Agresti paper arrived at its conclusion. The Agresti report relies on a peer-reviewed academic paper by political scientists Jesse T. Richman, Gulshan A. Chattha, and David C. Earnest, published in 2014, that estimates the rate at which non-citizens voted for president in 2008. Their paper relies on responses to the 2008 Cooperative Congressional Election Study (CCES), which found that a substantial proportion of non-citizens voted that year. The Agresti report combines two figures from the Richman, Chattha, and Earnest paper to get its primary estimate that 15.8 percent of non-citizens voted in 2008. Agresti then applies that 15.8 percent rate to the non-citizen population in swing states in 2020 to reach his conclusion.
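To make that extrapolation concrete, here is a minimal sketch of the arithmetic the Agresti report performs. The state population figure below is hypothetical, chosen purely for illustration; only the 15.8 percent rate comes from the report itself, which applies it to Census figures for each swing state.

```python
# Sketch of the Agresti-style extrapolation. The population figure is
# HYPOTHETICAL; only the 15.8 percent rate is taken from the report.
assumed_vote_rate = 0.158       # rate Agresti derives from Richman et al.
non_citizen_pop = 500_000       # hypothetical state non-citizen population

alleged_illegal_votes = non_citizen_pop * assumed_vote_rate
print(round(alleged_illegal_votes))  # 79000 alleged illegal ballots
```

The entire conclusion hinges on that single 15.8 percent rate, which is why the reliability of the underlying survey estimate matters so much.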
The big problem, as explained in two succinct pieces, is that non-citizens who vote illegally are a small subset of a small population of Americans measured in the CCES survey. In the CCES, as in any survey, a certain number of respondents click the wrong box. Thus, some respondents will accidentally report that they are non-citizens and that they voted, or they will make any number of other errors. This general problem is called measurement error, and it afflicts every survey. Such errors are common, but if the survey samples enough people and there is no fatal flaw in its design that causes large numbers of respondents to make the same error, measurement error doesn’t matter much for the final result.
The problem is that the authors focused on a small number of respondents in a very large survey who likely said by accident that they were non-citizens who voted when they were really citizens who voted. The CCES asked about 20,000 people how they voted, and about 19,500 of them said that they were U.S. citizens. Since the CCES is about federal elections, it oversamples citizens who can vote and undersamples non-citizens who can’t. In fact, the number of reported non-citizens in the CCES who said they voted in a federal election is just about exactly the number of citizens who should have misidentified themselves as non-citizens in such a large survey:
This problem arises because the survey was not designed to sample non-citizens, and the non-citizen category in the citizenship question is included for completeness and to identify those respondents who might be non-citizens. We expect that most of that group are in fact non-citizens (85 of 105), but the very low level of misclassification of citizens, who comprise 97.4 percent of the sample, means that we expect that 19 “non-citizen” respondents (16.5 percent of all reported non-citizens) are citizens who are misclassified. And, those misclassified people can readily account for the observed vote among those who reported that they are non-citizens [emphasis added].
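The misclassification arithmetic in the passage above can be sketched in a few lines. The per-respondent error rate here is an assumption chosen purely for illustration; the actual rejoinder derives its misclassification figure empirically rather than from an assumed rate.

```python
def expected_false_non_citizens(n_citizens: int, error_rate: float) -> float:
    """Expected number of citizens who tick the 'non-citizen' box by mistake."""
    return n_citizens * error_rate

n_citizens = 19_500   # approximate citizen respondents in the 2008 CCES (from the text)
error_rate = 0.001    # ASSUMED 0.1 percent slip rate, for illustration only

misclassified = expected_false_non_citizens(n_citizens, error_rate)
print(misclassified)  # 19.5, on the order of the ~19 misclassified respondents quoted above
```

Even a slip rate of a tenth of one percent among 19,500 citizen respondents produces roughly as many false “non-citizen” voters as the survey could plausibly contain real ones, which is the core of the critique.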
Survey misuse, misdesign, and misinterpretation are serious problems that we all witnessed right after the 2020 election. This strain of research appears to be another instance of that. There are likely many problems with America’s voting system, and there is no doubt that a non-zero number of non-citizens have voted illegally, but there is no good evidence that non-citizens voted illegally in large enough numbers to shift the outcome of elections or even change the number of electoral votes.