How to read political polls in 2024
It's the fall of an election year, which means you're probably seeing a lot of news stories about polls. But they can be confusing for the uninitiated: Which pollsters can you trust? How can two polls say two different things? And can you even trust polls, anyway?
Here at 538, we cover polls year-round, so we'd like to share with you some tips we've learned over the years on how to consume polls responsibly. Anytime you see a poll out in the wild, we recommend following these 10 steps to help put it into context and understand what it means for the election.*
1. Check who conducted the poll. Some pollsters have, over time, developed reputations for accuracy; others have poorer (or nonexistent) track records. You can check which is which by looking them up on 538's pollster ratings page. We rate most pollsters on a scale from 0.5 to 3.0 stars based on how accurate they've been in past elections and how transparent they are about how they conduct their polls. If a pollster has a 3.0 rating, you should take it pretty seriously; if it has a 0.5 rating, it's not very reliable.
If a pollster isn't on our pollster ratings page, it could be for a few reasons: Maybe it hasn't conducted enough polls for us to rate it, or maybe it's brand new on the scene. In this case, we can't be certain how reliable the pollster is because it has little to no track record for us to judge it on. We recommend being cautious with these polls.**
2. Check who sponsored the poll. On 538's polls page, we note not only the polling firm that conducted the poll, but also (if applicable) the sponsor who paid for it. Oftentimes, these will be partisan groups that are rooting for a specific candidate to win — or even campaigns themselves. These so-called "internal polls" are indicated on our polls page by colored diamonds.
In the past, we've found that internal polls tend to be about 4 or 5 percentage points too good for the candidate their sponsor is rooting for. You don't need to ignore these polls completely, but it's good to be aware that they're probably overestimating one side.
You might also be reading news coverage of the election and encounter a sentence like, "Republicans who have seen polling of the race say the election is tied." These types of polls you probably should ignore completely. At 538, we require certain basic information about a poll — such as the dates it was conducted and the name of the pollster — before we put it on our page, and without this information, the poll tells us basically nothing. For instance, what if the poll referenced is actually three months old? What if it had a teeny sample size? Or what if the pollster is someone disreputable? Remember, these partisan groups have an agenda, and if they're withholding information about a poll, they might be trying to hide something.
Finally, try not to rely on polling data from partisan media. They may selectively cover polls that are good for their candidate while ignoring others — leading to a nasty shock for their viewers and readers on election night. 538's polls page includes all polls, regardless of their partisanship, so you can get the fullest possible picture of the race.
3. Pay attention to who was polled. Some polls survey all adults; others survey just registered voters; still others survey just voters the pollster deems likely to vote in an upcoming election. If you're interested in who's going to win the election, you want a likely-voter poll. A registered-voter poll isn't ideal because we know not every registered voter will vote, although early in an election cycle, registered-voter polls are often all we have because it's too early to know who is likely to turn out.
For similar reasons, polls of all adults are definitely not good for assessing who will win an election — but they are good for answering questions like, "What do Americans think about X issue?" (Voters aren't the only people whose opinions matter!)
Occasionally, you'll also see polls of even more specific groups, such as older voters or Latinos. While these polls are certainly useful for better understanding these demographics, they are obviously not representative of the country as a whole.
4. Don't forget about the margin of error! Polling isn't an exact science — and in acknowledgment of that, reputable polls will always include a margin of error. This reflects the fact that, unless a pollster talks to literally every American or every person who is going to vote in an election, there is a chance that the people they do talk to aren't representative of the views of the whole population.
For example, if a poll says that 76 percent of Americans like puppies and has a margin of error of ±4 points, that means that the true percentage of Americans who like puppies could be as low as 72 percent or as high as 80 percent. (Of course, the actual number should be 100 percent, but I digress.)
Importantly, in election polls, that margin of error applies to each candidate's vote share. So if a poll with a margin of error of ±4 points shows former President Donald Trump at 50 percent and Vice President Kamala Harris at 45 percent, Trump's true level of support could be as low as 46 or as high as 54 percent, and Harris's could be anywhere between 41 and 49 percent. In other words, Trump could be leading by as much as 13 points, or Harris could be leading by as much as 3. It's still more likely that Trump is ahead in that case, but his lead is within the margin of error. This is why any poll showing a candidate "leading" by only 1 or 2 points is best thought of as showing a tied race — because 1 or 2 points is almost always within the margin of error.
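If you'd like to see that arithmetic spelled out, here's a minimal sketch in Python. The margin-of-error formula is the standard simple random-sample approximation (real polls use more complicated designs, so their reported margins can differ), and the sample size of 600 is just an illustrative number chosen to produce roughly a ±4-point margin, not a figure from any actual poll.

```python
import math

def margin_of_error(sample_size, share=0.5, z=1.96):
    """Approximate 95 percent margin of error (in points) for one candidate's
    share, using the simple random-sample formula."""
    return 100 * z * math.sqrt(share * (1 - share) / sample_size)

# A sample of about 600 respondents gives roughly the +/-4-point margin
# used in the example above.
print(round(margin_of_error(600), 1))  # ~4.0

# Apply +/-4 points to each candidate's share:
trump, harris, moe = 50, 45, 4
print(trump - moe, "to", trump + moe)    # Trump: 46 to 54
print(harris - moe, "to", harris + moe)  # Harris: 41 to 49

# The gap between the two candidates can therefore swing roughly twice as far:
low = (trump - moe) - (harris + moe)   # Harris ahead by 3 (i.e., -3)
high = (trump + moe) - (harris - moe)  # Trump ahead by 13
print(low, "to", high)                 # -3 to 13
```

The takeaway is the last line: a 5-point "lead" in a single poll with a ±4-point margin of error is consistent with anything from a 3-point deficit to a 13-point advantage.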
5. Check how the poll was worded. The exact question wording of a poll can be very important. A poll asking whether voters support "Trump's border wall" will probably turn up different results than a poll asking whether they support "stronger border security." Campaigns will also sometimes release polls showing what respondents said after they were read a positive statement about a candidate and a negative statement about their opponent; these are obviously misleading and biased. (Another reason to be suspicious of internal polls that don't share all the relevant information!)
Even the order in which questions are asked can matter. If a poll starts with 10 questions about abortion and then concludes by asking respondents which candidate they prefer, they may be "primed" to pick the candidate they see as better on abortion because reproductive rights are already on their minds.
6. Compare polls only if they were conducted by the same pollster. Different pollsters use different methodologies, different question wordings, etc. As a result, polls from one pollster aren't directly comparable to polls from another. For an example of why this is important, imagine if Quinnipiac University put out a poll showing Harris up 1 point one week after YouGov put out a poll showing Trump up 2 points. That wouldn't necessarily mean the race is moving in Harris's direction. If Quinnipiac's previous poll had shown Harris up 3 points, it might mean the race is moving in Trump's direction.
It is OK, however, to average polls together. In fact, we encourage it!
7. Don't pay attention to outliers. Instead, look at the average of the polls. Outlier polls — those that say something very different from most other polls — tend to get a disproportionate amount of attention because they're so surprising. But fixating on them is a really bad habit: If a poll disagrees with every other poll, it's probably because that poll is off the mark.
That said, outliers aren't necessarily wrong (especially when they come after an event that plausibly could have caused a big shift in the race, such as a debate). But have patience: If they are signaling a shift, you'll find out soon enough when more polls get released confirming that the first one wasn't a fluke.
More broadly, don't put too much stock in any single poll, even one from a highly rated pollster. Individual polls can miss the mark for any number of reasons, including a flawed methodology or just a bad sample. Instead, an average of all the polls is more reliable. At 538, we publish rolling averages for a variety of elections and questions, such as the 2024 presidential race nationally and in swing states, the president's approval rating, the favorability of various politicians and more. If a poll comes out with a surprising finding, it can be helpful to check it against the averages to see where it fits in.
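To make the idea of averaging concrete, here is a bare-bones sketch of a polling average in Python. To be clear, this is not how 538's averages actually work (ours account for things like pollster quality and house effects); it's a toy illustration, and the polls, dates and weighting choices below are all made up for the example.

```python
from datetime import date

# Hypothetical polls: (last day in the field, margin in points, sample size).
# Positive margins favor one candidate, negative margins the other.
polls = [
    (date(2024, 9, 20), +2.0, 800),
    (date(2024, 9, 22), -1.0, 1200),
    (date(2024, 9, 25), +4.0, 600),   # the apparent "outlier"
    (date(2024, 9, 26), +1.0, 1000),
]

TODAY = date(2024, 9, 27)

def weight(poll_date, sample_size, half_life_days=7):
    """Give more weight to recent polls (exponential decay) and, modestly,
    to polls with bigger samples."""
    age_in_days = (TODAY - poll_date).days
    recency = 0.5 ** (age_in_days / half_life_days)
    return recency * sample_size ** 0.5

weights = [weight(d, n) for d, _, n in polls]
average = sum(w * margin for w, (_, margin, _) in zip(weights, polls)) / sum(weights)
print(f"Weighted average margin: {average:+.1f} points")
```

Notice how the outlier nudges the average a little but doesn't dominate it; that's the whole point of averaging.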
8. Polls are generally accurate — but not perfect. In 2022, polls conducted in the final three weeks before Election Day missed the final margin of Senate, House and governor elections by a weighted*** average of 4.8 points. That was the lowest error since at least 1998. It shows that, even though the polling industry faces many challenges, such as people not picking up their phones and declining trust in the institutions that conduct polls, polls are still pretty good at getting close to election results. But of course, a weighted average error of 4.8 points is not 0. It's unrealistic to expect polls to nail the result of every race (see "don't forget about the margin of error" above).
Since 1998, polls of presidential general elections have had a weighted average error of 4.3 points; of Senate general elections, 5.4 points; of House general elections, 6.1 points. When a candidate leads in the polls by a smaller margin than that, be aware that it would be totally normal by historical standards for the trailing candidate to win.
9. Don't try to "unskew" the polls. Although individual polls can have their problems, it's unwise to try to outguess them — e.g., by saying, "well, this poll doesn't have enough Republicans, so it must be overstating the Democrat's margin." Any pollster worth their salt has already tried their hardest to make their poll as representative as possible, which they do by "weighting" their sample to match the target population's breakdown by gender, race, age, education and so on. Very simply: If a poll is trying to measure the views of a population that is 50 percent women and 50 percent men, but its sample turns out to be 40 percent women and 60 percent men, the pollster would weight the women's responses a little higher and the men's a little lower such that the views of women and men are represented equally in the final results.
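For the curious, here's roughly what that gender-weighting adjustment looks like in Python, using made-up responses. Real pollsters weight on many variables at once (often with a technique called raking), so treat this one-variable version as an illustration of the idea rather than anyone's actual procedure.

```python
# Made-up raw responses: each respondent has a gender and a candidate choice.
sample = ([("woman", "A")] * 24 + [("woman", "B")] * 16
          + [("man", "A")] * 24 + [("man", "B")] * 36)

# The sample is 40 percent women and 60 percent men, but the target
# population is 50/50, so weight each group up or down to compensate.
target_share = {"woman": 0.5, "man": 0.5}
sample_share = {g: sum(1 for gender, _ in sample if gender == g) / len(sample)
                for g in target_share}
weights = {g: target_share[g] / sample_share[g] for g in target_share}
print(weights)  # women weighted up (1.25), men weighted down (~0.83)

def weighted_share(candidate):
    total_weight = sum(weights[gender] for gender, _ in sample)
    candidate_weight = sum(weights[gender] for gender, choice in sample
                           if choice == candidate)
    return 100 * candidate_weight / total_weight

# Unweighted, candidate A gets 48 percent of this sample; after weighting,
# women's and men's views count equally and A lands at 50 percent.
print(round(weighted_share("A"), 1))  # 50.0
```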
The same goes for trying to find fault with a poll by identifying an unrealistic result in its crosstabs. (Crosstabs are the detailed breakdown of a poll's results by demographic subgroups, such as race, age or political leaning.) Subgroups have smaller samples than the poll as a whole, and, accordingly, their margins of error are larger. Even if a poll has an unlikely finding at the subgroup level, that doesn't mean its overall topline finding will be wrong.
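Here's a quick illustration of why those subgroup numbers are so noisy: under the same simple random-sample approximation used in the earlier sketch, the margin of error grows as the sample shrinks. The sample sizes below are hypothetical.

```python
import math

def moe(sample_size, share=0.5, z=1.96):
    # Approximate 95 percent margin of error, in points, near a 50 percent share.
    return round(100 * z * math.sqrt(share * (1 - share) / sample_size), 1)

print(moe(1000))  # full sample of 1,000: about +/-3.1 points
print(moe(250))   # a 250-person subgroup: about +/-6.2 points
print(moe(100))   # a 100-person subgroup: about +/-9.8 points
```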
Basically, as soon as you start to subjectively judge polls in this way, you're unmooring yourself from any kind of consistent, rigorous analysis of them. It becomes very hard to avoid your personal biases creeping in — for instance, can you be sure you're "unskewing" bad polls that are good for your candidate with as much zeal as bad polls that are bad for your candidate? Indeed, history is littered with examples of people claiming the polls were systematically biased against one side who woke up very surprised on the day after the election.
In reality, although polls of a given election do often end up being too good for one party or the other (see "polls are generally accurate — but not perfect" above), it's virtually impossible to guess in advance which direction that bias will run in. Historically, polling bias has jumped around at random from cycle to cycle.
10. Polls are snapshots, not predictions. Even if a poll is perfectly accurate at the time it is conducted, that doesn't mean it will match the final result on Election Day. Public opinion can always change, and campaigns are always subject to twists and turns. In general, the further away from Election Day a poll is conducted, the less accurate it is. And you might want to be extra skeptical of polls conducted at a time when one candidate was getting a lot of good press, such as in the wake of the assassination attempt against Trump or Harris's ascension as the Democratic nominee.
For this reason, 538's polling averages are a really good tool for learning where an election stands right now (and where it has stood in days and weeks past). But if you are interested in a prediction of who will win, you're better served looking at our forecast.
Footnotes
*Credit to former FiveThirtyEight senior political writer Harry Enten and the American Association for Public Opinion Research for inspiring this list.
**Rarely, polls from an unknown pollster might even be fake. If you suspect that a "pollster" is simply making up data, feel free to email us at polls@fivethirtyeight.com and we'll look into it (if we haven't already).
***Polls from particularly prolific pollsters are downweighted to avoid giving them too much influence over the average.