Maryland Today

Produced by the Office of Marketing and Communications

Poll (Dis)Position

Faculty Polling Expert Explains What Went Wrong in Predicting the Election … Again

By Chris Carroll


Photo by Seth Wenig/AP

A man stops to watch election returns on electronic billboards in Times Square on Election Night 2020, when pollsters repeated their 2016 performance with predictions that were far off the mark.

While the outcome of the 2020 presidential race remained unsettled Thursday night, just about everyone agreed one thing was certain: Polling led us astray for the second election in a row.

Not only did a range of pollsters show former Vice President Joe Biden with a double-digit nationwide lead over President Donald Trump and comfortably ahead in battleground states like Florida and Ohio—both of which Trump won by solid margins—but the polls also seemed to indicate a “blue wave” would flip the Senate to Democrats and give the party the majority in statehouses nationwide.

Make that more of a blue ripple, at best.

After promising to take to heart the lessons of 2016, when Trump’s upset of Hillary Clinton took much of the nation by surprise, how did pollsters get it just as wrong this time around? Maryland Today spoke to government and politics Professor Shibley Telhami, Anwar Sadat Professor for Peace and Development and a researcher with 30 years of polling experience focused on both domestic and foreign policy issues.

As director of UMD’s Critical Issues Poll, Telhami specializes in asking finer-grained questions than would a rushed political pollster who catches people on the phone around dinnertime before an election. To understand public attitudes, he said, one has to understand what lies behind them—and that requires going well beyond the obvious questions.

Why did national and state polls show big leads for Biden and other Democratic candidates when these races were actually nail-biters?
No question, there was a failure, and it’s going to prompt a reassessment like the one after the 2016 election. It’s important to point out that, two days after the election, the verdict does not look nearly as problematic as it did late on election night, when the results reflected same-day voters; people should have expected those results to favor Trump, since most of Biden’s supporters voted early. As the mail-in ballots were totaled, the presidential electoral picture wasn’t far off from the one that pollsters drew. What we see now is that Biden will end up winning most of the states he was predicted to win.

Where I think the polls erred more was in measuring the extent of support for each candidate. For example, a very respected poll, the ABC News/Washington Post poll, had Biden up by 17 points in Wisconsin just a few days before the election, and in fact he was declared the winner there by only about 20,000 votes, a margin of less than one point. Those gaps require probing and explanation.

What caused this overestimation of support for Democrats?
As in 2016, the first and biggest failure is state polling, although in 2020 the red/blue outcomes of the states were predicted better than in 2016. One of the things we knew was taking place in 2016 was that the polls were not capturing enough people who tend to support Donald Trump, particularly whites without college degrees in rural areas, who were one of the hardest groups to get to respond to polls. Pollsters adjusted their models by adding a variable for education to account for those without college degrees, but it has remained particularly hard to get people in rural areas to respond. Pollsters compensate for that by weighting the data: say that you expect, based on census data, that the number of people in rural areas should be 100 in your sample, but only 30 answer. You would then give this group more weight in your sample. That’s certainly helpful, but those 30 may not be perfectly representative of the larger group.
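To make the weighting step concrete, here is a minimal sketch in Python of the kind of post-stratification adjustment Telhami describes. The group names, census shares and sample sizes are hypothetical, chosen to match his 100-expected-versus-30-responding example; real pollsters weight across many variables at once.

    # Minimal post-stratification sketch (hypothetical numbers).
    def poststratify(sample_counts, census_shares, total_n):
        """Return a weight per group so the weighted sample matches census shares."""
        weights = {}
        for group, observed in sample_counts.items():
            expected = census_shares[group] * total_n  # count the census implies
            weights[group] = expected / observed       # >1 up-weights scarce groups
        return weights

    sample_counts = {"rural": 30, "urban": 970}      # who actually responded
    census_shares = {"rural": 0.10, "urban": 0.90}   # population benchmarks
    print(poststratify(sample_counts, census_shares, total_n=1000))
    # {'rural': 3.33..., 'urban': 0.927...}: each rural respondent now counts
    # about 3.3 times, but the 30 who answered may still differ from the 70
    # who did not -- exactly the residual error Telhami warns about.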

Overall, there have been broader challenges facing polling over the past couple of decades, as the industry focused first on landlines, then on cell phones, then started moving into online polling. Many pollsters still prefer phone polling, but it has become far more difficult to get people to respond, especially young people, who use phones less; we have gone from a response rate of over 30% to one of about 6%, and much lower than that among certain segments. In academia, we have had a lot of success using online polls with "probabilistic" panels that are selected to be representative of the U.S. population. These polls give us a greater opportunity to ask more in-depth questions, allowing respondents time to read the questions and carefully consider them before responding.

Are these problems Trump-specific? Is there a special challenge polling his supporters?
You hear about the so-called “shy Trump supporter” who won’t tell you they support him but later votes for Trump, but I don’t really buy it. Research finds little evidence to support this thesis. Also, I think if that effect applies to Trump, it can apply to non-Trump candidates—like someone who supports Democrats while living in a red area. You see people out there championing Trump, wearing MAGA hats—he’s the president. Perhaps this was a factor in 2016, but I doubted it then and doubt it even more now.

Where I do think Trump could affect polling is through the president’s own discourse. We have never seen this kind of assault on the media, on truth, on science, and on polling—he attacks public opinion polls as “fake polls”—so we don’t know whether that has had a real impact on how his supporters view polling, or whether they interact with polls in a way that might distort the results. This is something else we need to probe.

Can presidential polls ever produce an accurate result?
I think it’s not enough to ask whether people support this candidate or that one. In an article my colleague (government and politics Associate Professor) Stella Rouse and I wrote for Reuters a week before the 2016 election, we said that, like everyone else’s polls, ours showed Clinton leading by about 10 points. But our article was about why we thought Trump nonetheless had a real chance despite that seeming lead. That’s because when we asked how people felt about each candidate in relation to questions like “Do you want to see revolutionary change to our system?” or “Is the system rigged?,” many people agreed, including some Democrats and independents. And most of those who agreed said they saw Trump as an agent of change more than Clinton. The point is that one needs to really probe to analyze where the public is, not just ask the bottom-line questions.
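The kind of probing Telhami describes amounts to cross-tabulating the horse-race question against attitude questions. Here is a minimal sketch with entirely made-up responses (not real Critical Issues Poll data):

    from collections import Counter

    # Hypothetical respondents: (candidate preference, "is the system rigged?")
    responses = [
        ("Trump", "agree"), ("Clinton", "agree"), ("Trump", "agree"),
        ("Clinton", "disagree"), ("Trump", "agree"), ("Clinton", "agree"),
    ]

    # Among those who say the system is rigged, which candidate do they back?
    rigged = Counter(cand for cand, view in responses if view == "agree")
    print(rigged)  # Counter({'Trump': 3, 'Clinton': 2}) -- the kind of
    # below-the-topline signal that suggested Trump's 2016 chances were
    # better than the headline numbers implied.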

Should people keep paying attention to political polling?
There's no alternative. Guessing doesn't work. So you are going to have to just refine the polls and make them better. Most of the time, they end up being pretty good, within the margin of error. But I think the worst thing is to assume that they're going to be perfectly accurate. They never are. Don't look at them as the final word—look at them as a guide. Spend the time to look beyond the headlines to understand the prisms through which people look when they render opinions.
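As a rough guide to what “within the margin of error” means, here is a back-of-the-envelope sketch assuming a simple random sample of hypothetical size; weighting of the kind described above makes real-world margins wider than this textbook figure.

    import math

    def margin_of_error(p, n, z=1.96):
        """95% margin of error for a proportion p in a simple random sample of n."""
        return z * math.sqrt(p * (1 - p) / n)

    # Hypothetical poll: a candidate at 50% among 1,000 respondents.
    print(f"+/- {margin_of_error(0.50, 1000):.1%}")  # about +/- 3.1 points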

 
