The 2020 presidential election defied forecasters’ expectations in many ways. Though Joe Biden defeated Donald Trump, the outcome was closer than anticipated in key states—with large swings toward Trump among Latino voters in Texas and Florida.
Biden did manage to win back the three “blue wall” states of Wisconsin, Michigan and Pennsylvania, and flipped Arizona and Georgia—two states long held by Republicans. But those victories were also narrow, and Democrats underperformed polls in down-ballot races.
“I think it’s fair to say that public polling as we know it seems to be broken,” said Nicholas Marchio, a lead data scientist at the University of Chicago’s Mansueto Institute for Urban Innovation, a hub for urban science.
Marchio previously applied modeling techniques to target voters for Civis Analytics and Sen. Bernie Sanders’ presidential campaign. Before the election, he explained how good polling works and why it’s useful, while cautioning that data is not “all-powerful.”
In the following Q&A, Marchio reconciles the election’s outcome with pre-election forecasts, and offers suggestions for how polling could be improved before the next election.
Were you surprised by the election’s close outcome?
Yes. Biden’s victory was much narrower than anticipated by most forecasters.
The Economist, for example, has very rigorous standards for which polls they include in their forecasts—their forecasting is really among the best of the best—but they had Biden at +8 in Wisconsin and +6 in Pennsylvania. In both states, the margins turned out to be in the tens of thousands of votes—within a percentage point or so. That means the race could have easily swung in a different direction.
As in 2016, this means we have to be somewhat skeptical of polling data and forecasting models going forward. Recently, models have tended to overestimate Democratic chances in a pretty significant way. Some statisticians might say, “It’s the data’s fault; the pollsters didn’t do a good job collecting it.” We’ll get to the polls—but ultimately, responsibility has to fall somewhere.
What are some ways forecasting could be improved?
Some might argue that we should change the way we think about forecasting elections by incorporating more kinds of data into models.
Most models today incorporate only voters’ intentions as gleaned from polls, historical state election data and economic trends. The assumption is that these sources capture all the information we need to forecast the state of the race. But many additional kinds of available data also predict what happens on Election Day: voter enthusiasm measures, earned media tracking, Google search trends, fundraising, uncertainty from court orders or state voting legislation, ad spending and—this year—COVID-19.
None of these data sources have been sufficiently factored into forecasting models, because doing so is challenging and would increase noise and uncertainty. But ultimately, that uncertainty may be better than having high confidence in an outcome based on a model that’s actually biased.
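As a rough sketch of what that could look like, the Python snippet below fits a toy forecast on entirely synthetic data, once with a polling average alone and once with two extra, hypothetical signals added (an enthusiasm index and a search-trend index). None of this reflects any real model or dataset Marchio describes; it is only meant to show how additional covariates enter a model.

```python
# Illustrative sketch only: all data below is synthetic, and the extra
# "signals" (enthusiasm, search trends) are hypothetical stand-ins.
import numpy as np

rng = np.random.default_rng(0)
n = 200  # pretend historical state-level races

poll_margin = rng.normal(0, 5, n)      # final polling average (D minus R, in points)
enthusiasm_gap = rng.normal(0, 1, n)   # hypothetical enthusiasm index
search_trend = rng.normal(0, 1, n)     # hypothetical search-interest index

# Assume the true result tracks the polls but is also nudged by the extra signals.
actual_margin = (poll_margin + 1.5 * enthusiasm_gap + 0.8 * search_trend
                 + rng.normal(0, 2, n))

# Least-squares fits: polls alone vs. polls plus the extra signals.
X_polls = np.column_stack([np.ones(n), poll_margin])
X_full = np.column_stack([np.ones(n), poll_margin, enthusiasm_gap, search_trend])
beta_polls, *_ = np.linalg.lstsq(X_polls, actual_margin, rcond=None)
beta_full, *_ = np.linalg.lstsq(X_full, actual_margin, rcond=None)


def rmse(X, beta):
    """Root-mean-square error of a fitted model against the actual margins."""
    return np.sqrt(np.mean((actual_margin - X @ beta) ** 2))


print(f"In-sample error, polls only:            {rmse(X_polls, beta_polls):.2f} points")
print(f"In-sample error, polls + extra signals: {rmse(X_full, beta_full):.2f} points")
```

The trade-off Marchio points to shows up even in a toy like this: every added signal has to be measured and modeled, which is where the extra noise and uncertainty come from.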
What changes to polling were made after 2016, and how could polling be further improved?
After 2016, pollsters started weighting results by education, which correlates heavily with trust: People with more education tend to have more social trust and more faith in institutions, and they also tend to vote Democratic. The hope was that weighting for education would indirectly correct for trust and thereby solve the problem. But trust also drives whether someone picks up the phone and answers a stranger’s questions about their political beliefs in the first place, independent of education, so an education weight can’t fully account for it. Trust remains a tricky variable.
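For readers unfamiliar with the technique, here is a minimal sketch of post-stratification weighting on education. The population shares and survey responses are invented; real pollsters weight on many variables at once against Census-derived targets.

```python
# Minimal post-stratification sketch: weight each respondent by
# (population share of their education group) / (sample share of that group).
# All shares and responses below are made up for illustration.
import pandas as pd

# Hypothetical education distribution of the target population.
population_share = {"no_college": 0.62, "college": 0.38}

# Hypothetical raw survey responses: college graduates are overrepresented.
sample = pd.DataFrame({
    "education": ["college"] * 550 + ["no_college"] * 450,
    "supports_dem": [1] * 330 + [0] * 220 + [1] * 200 + [0] * 250,
})

sample_share = sample["education"].value_counts(normalize=True)

sample["weight"] = sample["education"].map(
    lambda group: population_share[group] / sample_share[group]
)

unweighted = sample["supports_dem"].mean()
weighted = (sample["supports_dem"] * sample["weight"]).sum() / sample["weight"].sum()
print(f"Unweighted Democratic support:  {unweighted:.1%}")
print(f"Education-weighted support:     {weighted:.1%}")
```

In this made-up sample, the weighting pulls the Democratic share down by a couple of points, which is the direction of correction the post-2016 changes were meant to produce.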
We know now that there’s something systematic going on in these panels that overestimates support for Democrats. Basically, people who respond to surveys have different views than the people who show up and vote. A potential explanation for this has to do with diverging perspectives about polling itself between Republicans and Democrats.
Trump may have politicized the act of taking a survey by telling his supporters that polls showing him behind were “fake news.” On the flip side, Democrats have become more politically engaged since Trump was elected. That level of civic engagement could have made Democrats more likely to want to participate in polling. New methodologies will have to find a way to address this potential gap in nonresponse behavior.
Why might state polls have been less accurate in some states than others? Georgia, for example, was projected to be close and was, while Wisconsin was projected as a comfortable Biden win but ended up within about a percentage point.
One reason is that the Rust Belt—which includes Wisconsin, Michigan and Pennsylvania—is notoriously difficult to survey, in large part because the population contains a disproportionate number of non-college-educated white voters. It’s a group that famously swung from Obama to Trump in a way that most forecasters didn’t anticipate. Trump has a strong base of support in this demographic, but the group is harder to survey for a variety of reasons, including systematic nonresponse bias and lower social trust.
But pollsters and modelers also don’t talk enough about the way polls are conducted and how it might bias results toward particular groups. Since 2016, a lot of surveys have been conducted via incentivized web panels. With this methodology, people are paid to take a survey online, but the incentive is usually quite low—maybe $1 for a 20-minute questionnaire. People who take surveys in this manner tend to be more online and may be less likely to work in sectors like manufacturing. That can bias the results against populations that tend to support Trump. So, a possible solution might involve recruiting these populations with higher incentives, reaching out via other modes like phone or text, or including industry-specific weighting features.
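Extending the earlier weighting sketch, the toy example below rakes weights across two margins at once, education and industry, which is one way an industry-specific weighting feature could be layered in. The targets and respondent mix are again invented for illustration, not drawn from any real survey.

```python
# Raking (iterative proportional fitting) across two margins: education and
# industry. All targets and respondents below are invented for illustration.
import pandas as pd

respondents = pd.DataFrame({
    "education": ["college", "college", "no_college", "no_college"] * 250,
    "industry": (["office"] * 3 + ["manufacturing"]) * 250,
})
respondents["weight"] = 1.0

# Hypothetical population targets for each margin.
targets = {
    "education": {"college": 0.38, "no_college": 0.62},
    "industry": {"office": 0.90, "manufacturing": 0.10},
}

# Alternate between margins, rescaling weights until both match their targets.
for _ in range(50):
    for var, target in targets.items():
        shares = respondents.groupby(var)["weight"].sum() / respondents["weight"].sum()
        respondents["weight"] *= respondents[var].map(
            lambda level: target[level] / shares[level]
        )

# Check that the weighted sample now matches both sets of targets.
for var in targets:
    print(respondents.groupby(var)["weight"].sum() / respondents["weight"].sum())
```

The design choice is the same one Marchio raises: each additional margin you rake on (industry, mode of contact, and so on) requires a trustworthy population benchmark for that variable.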
What do you make of swings toward Trump among Latino voters?
An important caveat is that we need to be careful when analyzing voting patterns within such a broad demographic category. The Latino community is incredibly diverse—Cuban immigrants in Florida, for example, may have supported Trump in part because they associated Democrats with socialism, but that issue may not be as salient for other groups.
Among Mexican and Central American communities in South Texas, the reasons seem to have been quite different. Those communities also swung toward Trump compared to 2016, but those shifts may be tied to the fact that employment in those areas is heavily concentrated in oil and gas and law enforcement. For instance, in Zapata, Texas—which is on the U.S.-Mexico border—oil and gas employment is 26 times higher than the national average. In Starr County, it’s 10 times higher. It’s not a stretch to think that Trump would support these industries to a greater extent than Biden.
On top of that, Maverick County, Texas, which is another border community, has five times the national average employment in law enforcement. For me, this is a plausible explanation of why those communities shifted toward Trump, and it’s an important reminder that we shouldn’t always fixate on race and ethnicity: Analyses need to be community-specific, and sources of employment can be very important.
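The employment figures cited here are concentration ratios, sometimes called location quotients: the local share of jobs in an industry divided by the national share. The short sketch below shows the arithmetic with invented numbers, not the actual figures for Zapata or Maverick County.

```python
# Location quotient: local industry share of employment divided by the
# national share. All numbers here are invented purely for illustration.
local_oil_gas_jobs = 1_300
local_total_jobs = 5_000
national_oil_gas_jobs = 1_500_000
national_total_jobs = 150_000_000

local_share = local_oil_gas_jobs / local_total_jobs            # 26% of local jobs
national_share = national_oil_gas_jobs / national_total_jobs   # 1% of national jobs
location_quotient = local_share / national_share

print(f"Local oil and gas employment is {location_quotient:.0f}x the national average")
```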
Will you pay attention to polls and forecasts in 2024?
Yes—we shouldn’t automatically disregard data that’s potentially useful going forward. But I think it’s fair to say that public polling as we know it seems to be broken. We’ll need to be more discerning about which polling information we look at, and the polling industry will need to come up with better data collection techniques to address systematic nonresponse bias.
It’s not especially useful when dozens of outlets have their own polling shops that release continuously updated results, almost to the point that it becomes infotainment. That’s not sustainable if you want to do good survey research. Instead, I think we need coordination and cooperation across different outlets, investment in polling operations with dedicated staff, bigger sample sizes, more sophisticated weighting features and better survey incentives. Traditional methods aren’t going to cut it anymore.