Voting is a timely topic. Britain just voted out of the EU, and the US is electing a new president in a few weeks. Next year, there are elections in France and Germany. Perhaps even Scotland will be voting, again, on which union to leave. That's plenty of scope for predictions, providing experts and non-experts alike with material for polls, forecasts, and dinner table discussions. But how accurate can such predictions be? It turns out that all we might need for an accurate election forecast is to ask enough people what they think.

I don’t care who you vote for

In 1906, Sir Francis Galton visited a country fair. One of the entertainments on offer was a guessing game, in which visitors tried to estimate the weight of an ox. It wasn't an easy task, so, unsurprisingly, most individual estimates were far off the correct weight (543kg, to be precise). However, Galton noted an interesting result when he averaged all the responses: the collective guess came remarkably close to the true weight.

Over the years, it has been shown that averaging the predictions of many people significantly improves forecasts of future or uncertain events and unknown quantities – an effect now known as the wisdom of crowds, after James Surowiecki’s 2004 book. Why is that? While a single individual might be anywhere between entirely wrong and entirely correct, each person brings a slightly different piece of information, a slightly different insight, and a slightly different focus (Tetlock & Gardner, 2015). Put together, the individual errors tend to cancel out, and the average lands close to the truth.
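To see why averaging helps, here is a minimal, illustrative simulation (not from the article itself, and with made-up numbers): each "visitor" guesses the ox's weight with independent error, and the crowd average ends up far closer to the truth than the typical individual guess.

```python
import random

TRUE_WEIGHT = 543  # kg, the ox from Galton's account


def simulate_crowd(n_guessers=800, noise_sd=40, seed=1):
    """Simulate independent, noisy guesses and compare individual vs. crowd error."""
    rng = random.Random(seed)
    guesses = [rng.gauss(TRUE_WEIGHT, noise_sd) for _ in range(n_guessers)]

    crowd_estimate = sum(guesses) / len(guesses)
    avg_individual_error = sum(abs(g - TRUE_WEIGHT) for g in guesses) / len(guesses)
    crowd_error = abs(crowd_estimate - TRUE_WEIGHT)
    return avg_individual_error, crowd_error


individual_err, crowd_err = simulate_crowd()
print(f"Typical individual error: ~{individual_err:.1f} kg")
print(f"Crowd-average error:      ~{crowd_err:.1f} kg")
```

The sketch assumes errors are independent and roughly symmetric around the truth; when everyone shares the same bias, the averaging advantage shrinks.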

So how does this play into forecasting election results? Typical polls rely on intention-based questions – simply asking who or what people intend to vote for. In practice, such polls can be inaccurate. They have no way of controlling for a number of fluid factors, such as respondents’ momentary political preferences or moods, and they ignore the power of mere recognition, the gap between explicit and implicit attitudes, and the weight these carry among undecided voters. In contrast, asking a sample of people what they think the result will be has been shown to produce more accurate forecasts in many domains, including elections (Gaissmaier & Marewski, 2011; Sjoberg, 2009).

Do you recognise this man?

Another take on why such judgments tend to converge is collective recognition. More than just recognising a candidate from his or her looks, collective recognition is an aggregate measure of how familiar people are with a candidate, even if only instinctively, for example by name. A sense of recognition, simple as it is, has been found to affect a range of judgments and decisions, from product preference to which of two foreign cities we judge to be larger. Because we recognise something (a product from advertising, a city that keeps turning up in films), we infer that it must score well on the criterion in question (product quality, city size). A politician with heavy media exposure will likewise be recognised more readily, and from there people often reason backwards and infer that the candidate must be a strong contender.
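As a toy sketch of how such a measure could be put to work (hypothetical candidates and made-up responses, not data from the cited studies): collective recognition is simply the share of respondents who recognise each candidate, and the forecast orders candidates by that share.

```python
# 1 = respondent recognised the candidate by name, 0 = did not (hypothetical data)
recognition_responses = {
    "Candidate A": [1, 1, 0, 1, 1, 1, 0, 1],
    "Candidate B": [0, 1, 0, 0, 1, 0, 0, 1],
    "Candidate C": [1, 0, 0, 0, 0, 1, 0, 0],
}

# Collective recognition: the proportion of the sample recognising each candidate
recognition_rates = {
    name: sum(answers) / len(answers)
    for name, answers in recognition_responses.items()
}

# Forecast: more widely recognised candidates are predicted to finish higher
forecast_order = sorted(recognition_rates, key=recognition_rates.get, reverse=True)
print(recognition_rates)
print("Predicted finishing order:", forecast_order)
```

Even a small, unrepresentative sample can yield a usable ranking this way, which is part of what makes recognition-based forecasts so cheap to run.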

When crowds vote

Ultimately, the same crowds go to vote and bring their judgments with them. Does it matter? In terms of predictive ability, yes. As in many areas of life, voters use heuristics – mental shortcuts – to aid their decisions. Who they vote for is therefore shaped by recognition, likeability, and cues of perceived ‘electability’ such as age, demeanour, voice, and facial features – factors that standard polls barely account for. The wisdom of those crowds and collective recognition may well have more predictive power than expert forecasts because they draw on something we are all guilty of, whether we like it or not: mental simplification.

For the interested:
Gaissmaier, W., & Marewski, J.N. (2011). Forecasting elections with mere recognition from small, lousy samples: A comparison of collective recognition, wisdom of crowds, and representative polls. Judgment and Decision Making, 6(1), 73-88.
Sjoberg, L. (2009). Are all crowds equally wise? A comparison of political election forecasts by experts and the public. Journal of Forecasting, 28, 1-18.
Tetlock, P., & Gardner, D. (2015). Superforecasting: The art and science of prediction. London: Random House.


For more on this, speak with us, or have a look at our capabilities

Also, as co-founders and supporters of the London Behavioural Economics Network, we invite you to join the Meetup group and Facebook group for more details and events