The Science of Predicting Elections

James Vines

[Image: Donald Trump and Hillary Clinton. Image Credit: Wikimedia Commons]

The study of elections is known as psephology: the study of ballots. Psephology is not a science of experimentation but of observation. Its purpose is to predict the outcome of an election, and those predictions can in turn influence the outcome itself. When carried out correctly, psephology abides by strict rules of sample size and random selection to reliably assess an election's likely result.

In its most basic sense, psephology informs both the media and the electorate about an election's likely outcome. Ideally, this would be achieved by asking every member of the electorate how they intend to vote. However, in a country the size of America, where around 218 million people are eligible to vote, this just isn't feasible. Pollsters, the people who conduct the polls to be analysed, therefore need to use a sample.

Pollsters need to select a scientifically representative sample: if 50% of the population is female, then 50% of the sample needs to be female. Representativeness needs to be considered across as many demographics (sections of the population) as possible: race, religion, age, location, income, and so on.

However, directly selecting individuals who match your demographics would not be random. Randomness is very important in psephology, and pollsters draw on a branch of mathematics known as probability theory. The key requirement for a random sample is that everyone in the population has an equal chance of being included. In practice, this means using random digit dialling machines, which dial phone numbers completely at random. Pollsters then apply two mathematical laws, the Law of Large Numbers and the Central Limit Theorem, which allow for generally accurate predictions of electoral results.
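The equal-chance idea can be sketched in a few lines of Python. This is a toy simulation, not any pollster's actual system: the population size, the 52% support figure and the sample size are all invented for illustration, and each "dialled" respondent is modelled as a single random draw.

```python
import random

random.seed(42)

# Assumed true level of support for a candidate in the population.
population_support = 0.52

def poll(sample_size):
    """Simulate a simple random sample: every voter has an equal
    chance of being reached, as with random digit dialling."""
    responses = [random.random() < population_support for _ in range(sample_size)]
    return sum(responses) / sample_size

estimate = poll(1000)
print(f"True support: {population_support:.1%}, poll estimate: {estimate:.1%}")
```

Even with only 1,000 respondents out of millions, the estimate lands close to the true figure, because every voter had the same chance of being sampled.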


[Image. Image Credit: OpenClipart-Vectors/Pixabay]

The Law of Large Numbers expresses the idea that as the number of trials of a random process increases, the difference between the expected and observed proportions shrinks towards zero. In the context of polling, this means that the more randomly selected people we ask, the more closely the results will follow the actual trend, rather than a trend we have observed purely by chance.

An example of this would be rolling a fair six-sided die. If the die were rolled six times and landed on 5 every time, we might conclude that the die always rolls a 5. Of course, this would be wrong: the probability of any result on a fair die is one sixth, and we got a 5 every time through pure chance. If we rolled the die 6,000 times, this kind of chance would even out, and we would observe each number coming up around 1,000 times. From this we would correctly conclude that the die has a probability of one sixth of rolling any result. Thus, the more repeats we do, the closer the observed result comes to the true probability.
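The die example above is easy to verify with a quick simulation. The sketch below (roll counts chosen arbitrarily) measures how far apart the most and least common faces are; as the number of rolls grows, that spread collapses towards zero, which is the Law of Large Numbers in action.

```python
import random
from collections import Counter

random.seed(1)

def roll_frequencies(n_rolls):
    """Roll a fair six-sided die n_rolls times and return the
    observed frequency of each face."""
    counts = Counter(random.randint(1, 6) for _ in range(n_rolls))
    return {face: counts[face] / n_rolls for face in range(1, 7)}

for n in (6, 6_000, 600_000):
    freqs = roll_frequencies(n)
    spread = max(freqs.values()) - min(freqs.values())
    print(f"{n:>7} rolls: spread between most and least common face = {spread:.4f}")
```

With 6 rolls the frequencies can be wildly uneven; with 600,000 every face sits very close to one sixth.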

The Central Limit Theorem is similar. It states that if we took many samples and computed each sample's average, those averages would follow an approximately normal distribution (a bell curve) centred on the population average, with a variance approximately equal to the population's variance divided by the sample size. In the context of polling, this means that the estimates from large enough random samples cluster predictably around the true figure for the whole population, so a result measured in a sample can be scaled up to the larger population. And that's exactly what we're trying to achieve!
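The "variance of the population divided by the sample size" claim can be checked directly. In this hedged sketch (support level, poll size and number of repeated polls are all invented), each poll's estimate is an average of yes/no responses, and the Central Limit Theorem predicts those estimates scatter around the truth with standard deviation sqrt(p(1 - p)/n):

```python
import random
import statistics

random.seed(7)

p = 0.52          # assumed true support in the population
n = 1_000         # respondents per poll
n_polls = 2_000   # number of repeated polls

# Each poll's estimate is the mean of n Bernoulli(p) responses.
estimates = [sum(random.random() < p for _ in range(n)) / n for _ in range(n_polls)]

# CLT prediction: estimates cluster around p with SD sqrt(p * (1 - p) / n).
predicted_sd = (p * (1 - p) / n) ** 0.5
observed_sd = statistics.stdev(estimates)
print(f"Predicted SD: {predicted_sd:.4f}, observed SD: {observed_sd:.4f}")
```

That predicted standard deviation is also where a poll's quoted "margin of error" comes from: roughly two of these standard deviations either side of the headline figure.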

Using the Law of Large Numbers and the Central Limit Theorem together is what allows pollsters to transfer their findings from a sample to the larger population. The Law of Large Numbers means that the chance of 'rogue' results is evened out, and the Central Limit Theorem means the trend measured in a small sample can be applied to the bigger population.

Pollsters also have to weight each response to reflect its representativeness in the whole population, as mentioned earlier. This is really important because not everyone who is eligible to vote will actually vote. If older people make up most of a country's voting electorate, their responses will be weighted more heavily than the responses of younger people. This ensures the sample is representative of the actual voting population.
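A minimal version of that weighting step might look like the following. All the numbers here are invented: the sample over-represents young voters, so each response is reweighted by the ratio of its group's expected share of the electorate to its share of the sample.

```python
# Toy sample of (age_group, supports_candidate_A) responses.
sample = [
    ("young", True), ("young", True), ("young", False), ("young", True),
    ("older", False), ("older", True),
]

# Share of each group in the sample vs. the assumed voting electorate.
sample_share = {"young": 4 / 6, "older": 2 / 6}
electorate_share = {"young": 0.4, "older": 0.6}  # older voters turn out more

# Weight for each group: electorate share divided by sample share.
weights = {g: electorate_share[g] / sample_share[g] for g in sample_share}

raw_support = sum(s for _, s in sample) / len(sample)
weighted_support = (sum(weights[g] for g, s in sample if s)
                    / sum(weights[g] for g, _ in sample))
print(f"Raw support: {raw_support:.1%}, weighted support: {weighted_support:.1%}")
```

The raw sample says roughly 67% support, but once older voters (who turn out more) are weighted up, the estimate falls to 60%.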

Despite pollsters' best efforts, polls are never perfect. Primarily, this is because a truly random sample can never be achieved: some demographics may not own phones, or may be less likely to agree to take part in a poll. There is also the issue of 'shy' voters, who may not admit which candidate they will really vote for, or who may not actually vote at all.

These kinds of problems can throw pollsters off if they don't account for them in their models. One example is the final polls of the 1992 UK General Election, which put the Conservatives at 38% of the vote, indicating a hung parliament. The actual result saw the Conservatives win 42% of the vote and a 21-seat majority. The incorrect prediction was put down to Conservative supporters not disclosing their voting intentions. Today, pollsters in the UK account for the 'shy Tory' vote by asking their sample how they previously voted, and then assuming that some would vote that way again, but at a reduced rate.
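One simplified way to picture that adjustment is below. This is only an illustrative sketch, not any polling house's real model: the vote shares, the past-vote split of the undecideds and the 60% reallocation rate are all invented. A portion of the undecided respondents is handed back to the party they say they voted for last time, at a reduced rate.

```python
# Invented headline figures, including a large undecided bloc.
stated = {"Party A": 0.36, "Party B": 0.34, "Undecided": 0.30}

# How the undecideds say they voted last time (invented).
past_vote_of_undecided = {"Party A": 0.5, "Party B": 0.5}

# Assume 60% of undecideds revert to their past vote, 40% stay undecided.
reallocation_rate = 0.6

adjusted = dict(stated)
pool = adjusted.pop("Undecided")
for party, share in past_vote_of_undecided.items():
    adjusted[party] += pool * share * reallocation_rate
adjusted["Undecided"] = pool * (1 - reallocation_rate)
print(adjusted)
```

With these made-up figures, Party A rises from 36% to 45% and Party B from 34% to 43%, which is how a 'shy' vote can be partially recovered before publication.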

Polling is a difficult process, but it is essential in enabling an electorate to assess how a political party is faring. It is also important for the candidates themselves, who use polls to inform much of their behaviour, appearance and policies. To win, a candidate must satisfy the largest demographics of voters, which means making decisions based on how pollsters predict those groups will respond to certain policies. It therefore makes sense for a candidate to focus their efforts on gaining the votes of the largest voting groups. As a result, policy often revolves around the largest voting demographics, while groups who are unlikely to vote often end up being overlooked.

#AmericanElection #Election2016 #JamesVines
