In 2016, almost the whole world was shocked when Donald Trump won the election, as the polls had declared Hillary Clinton the candidate most likely to win. Pollsters faced a public backlash after this major failure and promised to find and fix the errors in their methods. However, they missed the mark again during the 2020 election. This year, the polls predicted strong leads for Biden in certain states, but in some of those states the margin between the two candidates turned out to be very small. The polls also predicted that Biden would win Florida; instead, Trump won the state by a sizeable margin (NBC News, 2020). How could it happen again that the polls made incorrect predictions?
Let’s have a closer look at what went wrong this year with the polls.
Three explanations
Pollsters don’t have a clear explanation yet for what went wrong, and it will likely take some time before we know exactly what happened. However, there are already many possible explanations for the missteps. Some are based on the mistakes that were also made in 2016, and others on inside information from experts and scientists. In the following paragraphs, three possible explanations are given for what went wrong with this year’s polls.
Small sample size of minority groups
The first explanation is that minority groups were not well represented in the poll samples. According to Minority Rights (2020), 38.1% of the population of the United States belongs to a minority group. These minority groups include, for example, Hispanic Americans and African Americans.
Camille Burge, professor of political science at Villanova University, argues that the sample sizes of racial and ethnic minorities in the 2020 polls were small, just as they were in 2016 (Pew Research Center, 2016). One explanation is that these groups are harder to reach with surveys. Moreover, the minority respondents who do fill in the surveys tend to be individuals with high educational attainment. This leads to a sample that is not only incomplete but also biased, and therefore not a good representation of the population.
Combining this information with the checklist of Wicherts et al. (2016), you could conclude that sampling is part of the design phase. In their paper, they argue that you should not exclude participants from your research in the design phase. In the sampling for the election polls, however, that is exactly what happened. Not being explicit about your data and excluding specific groups leads to an incomplete, unrepresentative sample, which makes the sample, and therefore the polls, misleading data.
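The effect of under-representing a group can be sketched with a small simulation. All numbers below (the group’s share of the population, its support rate, and the response rates) are hypothetical and chosen purely for illustration: when a subgroup with distinct preferences is reached less often, the poll’s estimate drifts away from the true population value.

```python
import random

random.seed(42)

# Hypothetical electorate: a minority group makes up 38% of the
# population and supports candidate A at 60%; the rest supports
# candidate A at 45%. These rates are illustrative assumptions.
population = (
    [("minority", random.random() < 0.60) for _ in range(38_000)]
    + [("majority", random.random() < 0.45) for _ in range(62_000)]
)

true_support = sum(vote for _, vote in population) / len(population)

# Assume the poll reaches minority respondents only a third as often
# (10% vs 30% response rate), so they are under-represented.
sample = [
    (group, vote)
    for group, vote in population
    if random.random() < (0.10 if group == "minority" else 0.30)
]
polled_support = sum(vote for _, vote in sample) / len(sample)

print(f"true support:   {true_support:.3f}")
print(f"polled support: {polled_support:.3f}")  # biased low
```

Note that the bias here comes entirely from *who* responds, not from how many: even a very large sample stays skewed as long as the response rates differ between groups.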
Donald Trump effect
The second explanation is the so-called ‘Donald Trump effect’. Since becoming president, Donald Trump has systematically attacked the media and pollsters, claiming they spread fake and incorrect news. These incessant attacks led his supporters to believe Trump’s claims that the media are fake. Pollsters accordingly noticed that many Trump supporters did not want to respond to surveys (AAPOR, 2020), simply because they are convinced they should not trust the media and pollsters. According to Patrick Murray of the Monmouth University Polling Institute, these Trump supporters are called ‘shy Trumpers’. They are not actually shy, but they are simply less likely to respond to pollsters and their surveys. Murray calls this phenomenon as a whole the ‘Donald Trump effect’.
This ‘Donald Trump effect’ can be seen as a form of confirmation bias, which Chambers (2017) describes as follows: “We seek out and favor evidence that agrees with our existing beliefs, while at the same time ignoring or devaluing evidence that doesn’t”. The shy Trumpers align their actions (not responding) with their beliefs (pollsters are fake). They do not critically examine whether those actions are justified, because they blindly believe their leader (Kouzes & Posner, 1993).
You could also conclude that because these ‘shy Trumpers’ responded less often than others, this specific group was not well represented in the poll samples either, again making the samples unrepresentative and therefore misleading.
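A minimal simulation of this differential non-response, with purely hypothetical response rates, shows how large the resulting error can be even in a perfectly split electorate:

```python
import random

random.seed(0)

# Hypothetical electorate split exactly 50/50 between two candidates.
# True = supports candidate T.
voters = [i % 2 == 0 for i in range(100_000)]

# Assume supporters of candidate T answer the survey only half as
# often (15% vs 30% response rate) because they distrust pollsters.
responses = [
    v for v in voters
    if random.random() < (0.15 if v else 0.30)
]

true_share = sum(voters) / len(voters)
polled_share = sum(responses) / len(responses)
print(f"true share:   {true_share:.3f}")  # 0.500
print(f"polled share: {polled_share:.3f}")  # roughly 0.33, a large miss
```

Under these assumed rates, a real 50/50 race looks like a comfortable lead for the other candidate, which is exactly the kind of miss described above.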
Polling firms
The third explanation concerns the polling firms that conducted the polls this year. Polling firms help politicians gain insight into the opinions of a population, and politicians like to spend a large part of their campaign money on pollsters. However, these firms have been criticised for being profit-driven and therefore possibly biased (NBC, 2020).
Wason’s (1968) selection task illustrates confirmation bias: in his paper, he argues that we often tend to look for confirmation instead of falsification. Umphress, Bingham and Mitchell (2010) further indicate that when companies are rewarded by a client, they can feel a sense of reciprocity. That sense of reciprocity could mean a firm feels no urge to critically review its own polls and surveys once they have produced a positive result for the client. In short, the polling firms themselves could also be biased.
Conclusion
In the previous paragraphs, three explanations were given for why the polls missed the mark during the 2020 elections. I would argue that the polls mainly went wrong during sampling: it is not the sample size that makes a poll representative, but the sampling procedure. If sampling is not done properly and specific groups are left out, the sample becomes incomplete and unrepresentative. The poll then becomes a form of misleading data that is presented and broadcast via newspapers and news stations, leaving many people to base their opinions on wrong and misleading information.
After reading this blog, how do you feel about the polls? Do you think they are trustworthy? Let me know in the comments!
References
American Association for Public Opinion Research. (n.d.). 2020 Pre-Election Polls: Performance of the Polls in the Democratic Primaries – AAPOR. Retrieved November 11, 2020, from https://www.aapor.org/Education-Resources/Reports/2020-Pre-Election-Polls-Performance-of-the-Polls-i.aspx
Chambers, C. (2017). The 7 deadly sins of psychology. A manifesto for reforming the culture of scientific practice (Chapter 1: The sin of bias). Princeton, NJ: Princeton University Press.
Feiner, L. & CNBC. (2020, November 7). Pollsters face another reckoning this year, but the reasons could differ from 2016. Retrieved November 12, 2020, from https://www.cnbc.com/2020/11/07/election-pollsters-2020-reckoning.html#close
Kouzes, J. M., & Posner, B. Z. (1993). Credibility (Vol. 11). San Francisco: Jossey-Bass.
Linge, M. K., Lewak, D., & New York Post. (2020, November 10). Why election polls were so wrong again in 2020. Retrieved November 10, 2020, from https://nypost.com/article/the-real-reason-election-polls-were-so-wrong-again-in-2020/
McBride, K. (2020, November 4). What went wrong with the 2020 election polls and what’s next for political polling? Retrieved November 10, 2020, from https://www.poynter.org/reporting-editing/2020/what-went-wrong-with-the-2020-election-polls-and-whats-next-for-political-polling/
Mercer, A., Deane, C., McGeeney, K., & Pew Research Center. (2016, November 9). Why 2016 election polls missed their mark. Retrieved November 12, 2020, from https://www.pewresearch.org/fact-tank/2016/11/09/why-2016-election-polls-missed-their-mark/
NBC News. (2020, November 12). Live 2020 election polls: Who is leading the presidential race? Retrieved November 12, 2020, from https://www.nbcnews.com/politics/2020-elections/presidential-polls
Umphress, E. E., Bingham, J. B., & Mitchell, M. S. (2010). Unethical behavior in the name of the company: The moderating effect of organizational identification and positive reciprocity beliefs on unethical pro-organizational behavior. Journal of Applied Psychology, 95(4), 769.
Wason, P. C. (1968). Reasoning about a rule. Quarterly Journal of Experimental Psychology, 20, 273–281.
Wicherts, J. M., Veldkamp, C. L., Augusteijn, H. E., Bakker, M., Van Aert, R., & Van Assen, M. A. (2016). Degrees of freedom in planning, running, analyzing, and reporting psychological studies: A checklist to avoid p-hacking. Frontiers in psychology, 7, 1832.