2024 polls were accurate but still underestimated Trump
Here at 538, we think a big part of our jobs during election season is to explore and explain how much trust you should put in all the people telling you who's going to win. More than anyone else, given the amount of data they produce and the press and public's voracious appetite for it, that includes the pollsters. That's why we do things like publish ratings of pollster accuracy and transparency and make complex election forecasting models to explore what would happen if the polls are off by as much as they have been historically.
Polls are also important for the reporting we do here at 538, which is rooted in empiricism and data. Furthermore, the quality of the data we're getting about public opinion is important not just for predicting election outcomes and doing political journalism, but also for many other parts of our democratic process.
Suffice it to say, if polls are getting more or less accurate, the public needs to know. And now that the 2024 election is in the rearview mirror, we can take a rough first look at how accurate polling was.
Just one note on scope before we get started: In this article, I will be taking just a broad look at how polls did in states where the results are final or nearly final. That means we won't yet assess the accuracy of national polls, given how many votes are still left to count in California and other slow-counting states, and won't assess the accuracy of individual pollsters, which we'll do when we update our pollster ratings next spring.
Polls in 2024: Low error, medium bias
Despite the early narrative swirling around in the media, 2024 was a pretty good year to be a pollster. According to 538's analysis of polls conducted in competitive states* in which over 95 percent of the expected vote was counted as of Nov. 8 at 6 a.m. Eastern, the average poll conducted over the last three weeks of the campaign missed the margin of the election by just 2.94 percentage points. In the seven main swing states (minus Arizona, which is not yet at 95 percent reporting), pollsters did even better: They missed the margin by just 2.2 points.
This measure, which we call "statistical error," captures how far off the polls were in each state without regard for whether they systematically overestimated support for one candidate. And by this metric, state-level polling error in 2024 was actually the lowest it has been in at least 25 years. By comparison, state-level polls in 2016 and 2020 had an average error close to 4.7 percentage points. Even in 2012, which stands out as a good year for both polling and election forecasting, the polls missed election outcomes by 3.2 percentage points.
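To make that metric concrete, here is a minimal sketch in Python of how an average statistical error like the 2.94-point figure above is computed. The polls and results are invented for illustration; this is not 538's actual data or code.

```python
# A minimal sketch of computing average "statistical error" on the margin.
# The polls and results below are invented, not 538's actual data.

# Each entry: (poll margin, actual margin) in percentage points, where
# margin = Democratic share minus Republican share.
polls = [
    (+1.0, -2.1),  # poll said Dem +1.0; result was Rep +2.1 (miss of 3.1)
    (-3.0, -1.8),  # poll said Rep +3.0; result was Rep +1.8 (miss of 1.2)
    (+2.3, +0.9),  # poll said Dem +2.3; result was Dem +0.9 (miss of 1.4)
]

# Statistical error ignores direction: average the absolute misses.
avg_error = sum(abs(poll - result) for poll, result in polls) / len(polls)
print(f"Average statistical error: {avg_error:.2f} points")  # 1.90
```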
At this early juncture, we can only speculate as to why error was so low this year. One reason could be that pollsters have mostly moved away from conducting polls using random-digit dialing, a type of polling that has recently tended to generate results that oscillate more wildly from poll to poll than those of other methods. One notable pollster that does still use RDD is Selzer & Co., which had Vice President Kamala Harris leading President-elect Donald Trump by 3 points in its final poll of Iowa this year. Trump ended up winning the state by about 13 points, making for a 16-point error. It looks possible that Selzer's poll had too many Democrats and college-educated voters in it, factors the firm generally does not attempt to correct for due to Selzer's philosophy of "keeping [her] dirty hands off the data." (To be fair, this approach had worked excellently until this year; Selzer is one of the top-rated pollsters in 538's pollster ratings.)
Quinnipiac University, which also uses RDD, likewise generated polls that didn't seem consistent across states, though they ended up being closer to the outcome than Selzer's. Meanwhile, other prominent pollsters that previously used RDD have now stopped using the method. That includes ABC News, which, after publishing an RDD poll that found now-President Joe Biden ahead of Trump by 17 points in Wisconsin in 2020 (something the pollsters behind the survey rightly identified as an outlier result when it was published), now sources its polls from Ipsos, which conducts polls online among respondents who are randomly recruited by mail and telephone.
Another factor is that pollsters are increasingly balancing their samples on both demographic and political variables, such as individuals' recalled vote in the last election. While this can cause some strange results, it generally stabilizes the polls and produces fewer outliers than one would expect by random chance alone.
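To illustrate the mechanics, here is a deliberately simplified sketch in Python, with invented numbers and a single weighting variable rather than any real pollster's procedure: weighting on recalled vote means scaling respondents' weights so that the share who say they voted for each candidate last time matches that candidate's actual share of the previous vote.

```python
from collections import Counter

# A simplified, hypothetical sketch of weighting a sample on recalled vote.
# The numbers and single-variable adjustment are invented for illustration;
# real pollsters typically balance many demographic and political variables
# at once (often via raking).

# Each respondent reports whom they recall voting for in 2020.
sample = ["biden"] * 60 + ["trump"] * 40  # raw sample: 60% Biden, 40% Trump

# Target: the actual 2020 two-party result, roughly 52% to 48%.
targets = {"biden": 0.52, "trump": 0.48}

counts = Counter(sample)
n = len(sample)

# Scale each group's weight by target share / sample share so the weighted
# sample's recalled-vote distribution matches the last election's result.
weights = {group: targets[group] / (counts[group] / n) for group in counts}
print(weights)  # Biden voters get ~0.87, Trump voters get 1.20
```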
Based on our preliminary findings, pollsters that took this more aggressive approach to modeling had lower error than others. Survey mode is only a loose proxy for weighting practices until we can conduct a more thorough analysis, but we found that pollsters who conducted their surveys with online probability panels, interviewed people with robocalls, or included text messages or phone calls as part of a bigger mixed-mode sampling design tended to use more complex weighting schemes (and were especially reliant on recalled vote), and also had lower error than pollsters using more hands-off modes:
But the news is not all good. While polls had a historically good year in terms of error, they had a medium-to-bad one in terms of statistical bias, which measures whether polls are missing the outcome in the same direction. By our math, state polls overestimated support for Harris by an average of 2.7 points on margin in competitive states.
That's lower than the statistical bias of the polls in 2016 and 2020, which underestimated Trump by 3.2 and 4.1 points, respectively. But it's higher than the bias in the 2000, 2004, 2008 and 2012 elections.
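To make the error-versus-bias distinction concrete, here is the invented example from above extended to bias, again just an illustrative sketch: instead of averaging absolute misses, keep the sign of each miss so that misses in opposite directions cancel.

```python
# Same invented polls as above; bias keeps the sign of each miss.
# Positive misses mean the poll overstated the Democrat's margin.
polls = [(+1.0, -2.1), (-3.0, -1.8), (+2.3, +0.9)]

signed_misses = [poll - result for poll, result in polls]  # +3.1, -1.2, +1.4
avg_bias = sum(signed_misses) / len(signed_misses)
print(f"Average statistical bias: {avg_bias:+.2f} points")  # +1.10
```

Because misses in opposite directions cancel, a set of polls' bias can never exceed its average error; 2024's combination of low error and sizable bias means most polls missed in the same direction.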
This is not great news for pollsters. It means they did not fully solve their problem from 2016 and 2020 of getting enough Trump supporters to take their polls. While that problem may have been mitigated by pollsters weighting their data more aggressively or improving their sampling designs, it is obviously still present. You can really see this if you look at the pattern of polling bias in the competitive states from 2016 to 2024:
While pollsters managed to reduce their bias in some states, especially Wisconsin, from 2020 to 2024, the pattern in the industry is still the same: Pollsters are having a hard time reaching the types of people who support Trump.
You should expect errors in polling
While bias in the polling industry is troubling, it is not necessarily unexpected, especially after the last few elections. And it's worth repeating that a 3-point error on the margin is indeed very small historically. Political pollsters have designed a tool that, on average, can measure public opinion among hundreds of millions of people to within 1.5 percentage points of its "true" value. (That converts vote margin to vote share: a poll that puts one candidate a point too high and the other a point too low misses each candidate's share by 1 point but the margin by 2, so a 3-point margin error works out to about 1.5 points per candidate.) When you think about it that way, it's actually remarkable that polls are as accurate as they are.
After the 2020 election, a year in which America's pollsters faced their worst performance since 1980, the American Association of Public Opinion Research (the professional society for pollsters and survey researchers) issued a warning to people trying to predict election outcomes in 2022 and 2024. "Polls are often misinterpreted as precise predictions," it said. "It is important in pre-election polling to emphasize the uncertainty by contextualizing poll results relative to their precision.... Most pre-election polls lack the precision necessary to predict the outcome of semi-close contests."
In other words, polls are simply not up to the task of definitively calling the result of a close race before it happens. In a race that close, the margin between the candidates is too small for observers to conclude that either one is reliably ahead, given the inherent uncertainty in polling.
Let's put this note of caution in the context of the 2024 election. At first glance, it may seem like polls had a bad year because they pointed to a close election and Trump looks like he will cruise to a 312-226 win in the Electoral College. But as I wrote last week, because he led in the Sun Belt swing states and was tied in Pennsylvania, the polls didn't need to underestimate Trump at all for him to win the election. And, I warned, if they underestimated him by 2 points, a small miss by historical standards, he could sweep all seven swing states.
Well, it looks like that is exactly what happened. In fact, it was the modal outcome in our final forecast. AAPOR's warning is more relevant today than ever.