The Problem with Opinion Polls

May 18, 2015 2:41 pm

There is to be an independent inquiry into why the polls for the 2015 UK General Election got it so wrong, following the shock majority win for the Conservatives.

The reasoning and discussion in the media around why the polls were so far off really resonated with our UX practitioners here at SimpleUsability, who had been following the election closely after reviewing the party websites.

Here are our thoughts on where the polls went wrong, and why this is an extremely pertinent example of the importance of research methodology – something we’re incredibly passionate about here at SimpleUsability.

1. Experiment design
BBC Panorama aired a programme called ‘Who will win the election?’, in which US data journalist Nate Silver explained the dilemma of people being polled one way and acting another:
“Opinion polls usually only ask which party you’re going to support. But on Election Day you’re asked to vote for a candidate not a party… a popular MP gets more votes than the polls are able to predict.”

YouGov explains a few reasons that might have contributed to the polls being so wrong, rooted in the fact that a poll captures opinion rather than a direct connection to the action.

With user research you need to decide whether you want to understand choices and interactions that are set in context, or whether you are interested in opinion about a service, topic or product.

This is where experiment design, and understanding the data you are going to gather, is really important, with a keen eye on the research goal. We often guide the research methodology for our clients and have to challenge the preconception that eye tracking is just about heat maps. Heat maps can be useful when used carefully and in context, where you really understand the data behind the visualisations: the task users performed to create the visualisation, and why they did what they did.

“Voting is a different exercise from answering a poll. It is a choice with consequences, not just an expression of view.”

2. Better information
We quite agree with Mr Silver, and feel the same about the research we do: our clients should be getting better research outputs.

“I believe the consumer is smart… and the consumer deserves, you know, better information.”

Rather than a standalone poll, a better understanding can often come from a combination of research. With election prediction this can mean combining demographics, economic data and historical data. This is difficult in a changing political landscape where past polling results are less useful, and alternative research methods are required.
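
As a loose illustration of what combining data sources can look like, here is a minimal sketch in Python. All of the polls, weights and ‘fundamentals’ figures are invented for illustration, not taken from any real forecasting model:

```python
# Minimal sketch of blending a recency-weighted poll average with a
# "fundamentals" prior. All numbers are invented for illustration.

polls = [
    # (days_before_election, conservative_share, labour_share)
    (10, 0.34, 0.35),
    (5, 0.33, 0.34),
    (1, 0.35, 0.34),
]

def poll_average(polls):
    """Weight recent polls more heavily: weight decays with poll age."""
    weights = [1.0 / (1 + days) for days, _, _ in polls]
    total = sum(weights)
    con = sum(w * c for w, (_, c, _) in zip(weights, polls)) / total
    lab = sum(w * l for w, (_, _, l) in zip(weights, polls)) / total
    return con, lab

# A "fundamentals" estimate from economic and historical data
# (here simply asserted, rather than modelled).
fundamentals_con, fundamentals_lab = 0.36, 0.32

poll_con, poll_lab = poll_average(polls)

# Blend the two sources; the 70/30 split is an arbitrary assumption.
BLEND = 0.7
con = BLEND * poll_con + (1 - BLEND) * fundamentals_con
lab = BLEND * poll_lab + (1 - BLEND) * fundamentals_lab

print(f"Blended estimate: CON {con:.1%}, LAB {lab:.1%}")
```

The design point is simply that no single poll is trusted on its own: recent polls count for more, and an independent estimate pulls the blend away from any one source’s bias.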

Blended research methods can check behaviour and combine data to inform different research questions. We often conduct one-to-one user research in combination with analytics data to understand the ‘why’ behind the journeys or requirements identified.

Private pollsters talk about asking people about policies that are important to them before asking them their party choice – which sets the context of the question. This is also important in UX research to ensure the users have the scene set in their own mind, and can use their own language and requirements when exploring the interface.

The frequency of the research is very important as users’ attitudes and the behaviour of an audience segment can change over time. There was much speculation about the timing of polls and whether they were capturing late swings, with a third of voters deciding how to vote the day before the election.

3. Geographical differences
Our electoral system of first past the post means that national polls don’t necessarily translate down into the constituencies, making it hard to predict the outcome of individual seats.
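
To make that concrete, here is a small sketch with invented constituency figures showing how the same national vote share can produce very different seat counts under first past the post, depending on how support is distributed geographically:

```python
# Invented example: identical national vote share, very different
# seat outcomes under first past the post.

def seats_won(constituencies):
    """Count seats where party A beats party B on the two-party share."""
    return sum(1 for a, b in constituencies if a > b)

# Scenario 1: party A's 52% national support is spread evenly.
even_spread = [(0.52, 0.48)] * 5

# Scenario 2: the same 52% average is piled up in two safe seats.
concentrated = [(0.90, 0.10), (0.90, 0.10),
                (0.27, 0.73), (0.27, 0.73), (0.26, 0.74)]

for name, results in [("even", even_spread), ("concentrated", concentrated)]:
    national = sum(a for a, _ in results) / len(results)
    print(f"{name}: national share {national:.0%}, "
          f"seats {seats_won(results)}/{len(results)}")
```

A party whose vote is spread evenly wins everywhere; the same national share piled into safe seats wins far fewer. This is why a national poll alone says little about seats.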

Geographical representation within UX research is interesting, as there is a tendency to make sample representation all about demographics and location.

But what will this allow you to find out? We prefer to profile on behaviour and experience, but go out to different locations for specific reasons, which are explained in more detail in our guide to where to conduct user testing.

4. Social desirability
People could have been deliberately dishonest due to the social taboo around right-wing voting – as opposed to the perceived ‘valour’ of liberal support. People may have felt uncomfortable expressing their honest allegiance to the interviewer.
People may also vote in a particular way to fit in with their social group, but behave differently in private.

People do like to project a different persona, and the skill is in the recruitment and the facilitation technique to ensure that you are observing actual behaviour. There is a difference between what people say they do and what they actually do.

This is a huge reason for our methodology, as users may not want to admit they struggled with something. Evidence is hard to argue with when it is based purely on observing users conducting a task. With retrospective analysis the pressure can be taken away from the user, and the journey can be broken down and played back to the user, carefully focussing them on what actually happened.

5. Sample bias and misrepresentation
There may have been sample bias in the population chosen to be polled (e.g. contacting people via landline – thus excluding those without landline numbers).

The polls use data from 100% of the people who were sampled. However, in the UK turnout at the actual vote was around two thirds of the voting population – meaning the poll figures were skewed by the opinions of non-voters.
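
As a worked illustration of that point (all figures invented), a raw poll of everyone sampled can show one party ahead while the turnout-weighted result among actual voters shows the other party winning:

```python
# Invented worked example of differential turnout skewing a poll.
# The raw poll has Labour ahead, but Conservative supporters are
# assumed more likely to actually turn out and vote.

respondents = {
    # party: (share_of_poll_respondents, assumed_turnout_rate)
    "LAB": (0.36, 0.60),
    "CON": (0.34, 0.75),
    "OTH": (0.30, 0.65),
}

# Headline poll figures: everyone sampled, voters and non-voters alike.
print("Raw poll:     ", {p: f"{s:.0%}" for p, (s, _) in respondents.items()})

# Turnout-weighted result: shares among those who actually vote.
voted = {p: s * t for p, (s, t) in respondents.items()}
turnout = sum(voted.values())
result = {p: v / turnout for p, v in voted.items()}
print("Among voters: ", {p: f"{s:.0%}" for p, s in result.items()})
print(f"Implied turnout: {turnout:.0%}")
```

With these invented turnout rates the overall turnout comes out at roughly two thirds, and a two-point Labour lead in the raw poll becomes a clear Conservative lead among those who actually vote.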

Both of these points highlight the difference between a generic poll and a research project that actively recruits against behaviours rather than just demographics. It’s important to ensure all audience segments are represented, or, if specific segments are targeted, that the results take this into consideration.

6. Influences
Publication of poll results may in fact have affected voters’ decisions. People may use the projections to guide their vote, creating a feedback loop.

For example, a person hesitant about voting Conservative and tempted to vote Labour might see that Labour were projected strong support and decide to cast their vote for the Conservatives instead.

Due to media coverage, people may have been voting tactically rather than voting for who they actually wanted. There may have been a fear of a ‘wasted vote’ and a desire to make a difference. Getting to the heart of the matter and understanding these concerns is really interesting, especially in a UX environment where methodologies such as user journey mapping can really help craft a solution.

We’re careful to recruit in a way where users are not fully aware of the topic for exploration before the research. We want to observe natural behaviour – for example, from a genuinely new user – rather than watch someone who has done their homework and learned how to use the website, or an expert user with coping strategies.

Here are some parting thoughts on the lessons that can be learnt from the election polling, and important factors to take into account in shaping your own research:

  • Really analyse the research goals and consider the types of tasks and questions that you will be using.
  • Keep questions directly related to the actions of your users, with a strong connection to what users actually do when gathering your insight.
  • Consider combining research methods rather than relying on one method.
  • Really question where your users are located and whether there are actual behavioural differences between locations.
  • Be aware that people will tell you one thing and do another.
  • It’s all in the planning and the recruitment – carefully designed experiments with the right people.

References:
YouGov – We got it wrong. Why?: https://yougov.co.uk/news/2015/05/11/we-got-it-wrong-why/
The Telegraph – Independent inquiry announced into what went wrong with election polls: http://www.telegraph.co.uk/news/general-election-2015/11592840/Independent-inquiry-announced-into-what-went-wrong-with-election-polls.html
Eddie Mair, BBC Radio 4, 12th May 2015: http://www.bbc.co.uk/programmes/b05tl3jr
Richard Bacon and Nate Silver, BBC Panorama, ‘Who will win the election?’: http://www.bbc.co.uk/iplayer/episode/b05t3flh/panorama-who-will-win-the-election
Wikipedia – Spiral of silence: http://en.wikipedia.org/wiki/Spiral_of_silence