
What can media research learn from the Scottish referendum?

As Research the Media's Richard Marks surveys the fallout from the Scottish referendum, he finds that political polling holds important lessons for the challenges of measuring media behaviour.

As the dust settles on the Scottish independence campaign, it becomes clear that its defining moment occurred on Sunday 7 September, when a poll indicated for the first time that the ‘Yes’ side was ahead. This sent shockwaves through Westminster, kicked the whole contest up a level and led to a nail-biting final few weeks. It was the very definition of the observer effect so often attributed to Heisenberg: it is impossible to observe something without influencing it.

What opinion polls and media research share, almost uniquely within wider market research, is the public profile that our figures achieve. The polls directly influence public opinion and government policy, whilst media currency ratings determine the destination of billions of pounds of investment and advertising.

Whilst opinion polling has always been a parallel science to media research – primarily serving as a source of media content – we can learn a lot from the polling activity of the last few weeks. Many of the issues behind the volatility and variability of the polls apply equally to the media research data we use to trade, make decisions and evaluate strategies – and to the challenges we face in our use of, and the trust we place in, media data.


Firstly, the polls showed the importance of how the data is collected. Differences in results can occur between online self-completion, telephone and face-to-face interviewing. These approaches can have a significant effect on the type of sample gathered (regardless of how it is weighted) and on the way in which questions are communicated and answered.

This certainly accounted for some of the variability between the Scottish polls in the last few weeks. It also helps to explain why the JIC industry bodies sometimes seem to move at a speed that can infuriate non-researchers – see for example the recent frustration of the NPA with the NRS’s perceived tardiness in measuring mobile devices.

Those close to the currencies understand the catastrophic effect that an innocent, well intentioned tweak or ‘improvement’ in methodology can have on the figures. It’s no surprise that Rajar is proceeding very carefully in evaluating its tests of mobile diaries, for example. It will remember the false start when sticker diaries were first introduced in the 1990s, the figures from which were initially rejected by the BBC.

From a statistical standpoint, Yes/No decisions can also be a nightmare to measure because, as Ben Page of Ipsos MORI reminded us on Twitter last week, results around 50% carry a wider margin of error than results under 30%. That is manageable in a three party system; it is more of a challenge with a binary referendum.
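To see why, recall that the margin of error for an estimated proportion p from a simple random sample of n respondents is roughly z·√(p(1−p)/n), which peaks when p is 50%. A quick sketch with an illustrative 1,000-person poll (the figures are for demonstration, not from any actual Scottish survey):

```python
import math

def margin_of_error(p, n, z=1.96):
    """Half-width of the 95% confidence interval for a proportion p
    estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Uncertainty is widest when opinion splits 50/50
for p in (0.5, 0.3):
    print(f"p = {p:.0%}: +/- {margin_of_error(p, 1000):.1%}")
```

At 50/50 the interval is about ±3.1 points – enough on its own to flip a neck-and-neck referendum poll from ‘Yes’ ahead to ‘No’ ahead.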

Human nature and psychology also play a vital role. Opinion pollsters know only too well that what people say they will do is not necessarily what they will do. People can use opinion polls to ‘send a message’. Saying you will vote ‘Yes’ is a risk-free way of causing alarm amongst those annoying English without having to live with the economic risks of actually voting ‘Yes’.

This was noticeable during the disruption caused by the fuel protests at the start of the last decade, when the polls lurched violently against the incumbent Blair government, which nonetheless romped home at the next election.

A related point is that people also can feel ashamed about what they really think or intend to do, so in a research survey they can answer what they think they should say. Again, this is where the data collection method can kick in. Do you really want to seem unpatriotic to this nice Scottish interviewer on the phone by admitting you are actually going to vote ‘No’?

People can use research interviews to posture, play a role or say what they think is acceptable. That is why questionnaire design is so vital. As this clip from Yes Minister demonstrates brilliantly, it is surprisingly easy to get the result you want if you really try.


I can recall being at many a focus group discussing TV, watching a wary group of viewers politely claim that all they watch is the ‘news and a few documentaries’ and agree that most TV is rubbish. After a few relaxing glasses of wine, many turn out to have an encyclopaedic knowledge of Coronation Street plot lines or X Factor contestants.

The political equivalent is that people tend to answer opinion polls with their heart but vote with their head (or more cynically, their wallet). It’s all about the gap between our super-ego and ego: the socially acceptable façade versus what people actually think and do.

That is the barrier media researchers are also attempting to break through. It is why electronic measurement is the default for TV, and why big data holds such allure – measuring ‘real’ as opposed to claimed activity. However, as my ‘Big Opportunity’ report for the IPA last year pointed out, big data brings its own challenges.

Above all, there was one particular area where the huge focus on polls in the last few weeks made me very nervous as a media researcher: the credibility of the research industry as a whole. Most of the time, to the general population, opinion polling and media research are like being in a 747 – you don’t really notice the flight or particularly understand how it works, and you don’t need to.

However, when you hit turbulence the emergency instructions are anxiously studied and you start to wonder how the metal beast ever got in the air in the first place. So when an election result shows that the polls were wrong, it creates a ripple effect throughout the whole industry.

So Thursday night was a relief for media researchers; I can remember the fallout in 1992, when nearly every pollster got the General Election result disastrously wrong. Many people did not want to confess that they were going to vote for that grey-faced man in his underpants from Spitting Image, so they did not admit it to the nice working class interviewers.

The polling industry learnt a lot from that catastrophe, with many now weighting to past voting behaviour. The problem for the Scottish Referendum is that there is no ‘last time’ to weight to.
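The mechanics of weighting to past vote are simple enough to sketch: each group of respondents is weighted by the ratio of the actual past result to that group’s share of the sample. The shares below are invented purely for illustration:

```python
# Respondents whose recalled past vote is under-represented in the
# sample get weights above 1, over-represented groups get weights
# below 1. All figures here are hypothetical.
sample_recall = {"Con": 0.30, "Lab": 0.40, "Other": 0.30}  # share of sample
actual_result = {"Con": 0.37, "Lab": 0.30, "Other": 0.33}  # real past vote

weights = {party: actual_result[party] / sample_recall[party]
           for party in sample_recall}

for party, w in weights.items():
    print(f"{party}: weight {w:.2f}")
```

With no previous independence referendum to supply the `actual_result` column, Scottish pollsters had nothing comparable to anchor their samples to.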

As a young media researcher in ’92 I spent a long time reassuring clients who argued that ‘you got the election wrong, so how do I know my readership/listening figures are right?’ For a while the trust that kept media research ‘in the air’ was evaporating.

Closer to home, in January 2002 the TV ratings were thrown into crisis after a switch of contractor. For much of the general public it was news that the TV ratings were even based on a survey, let alone how that survey worked. Things settled down eventually, but BARB board members of the time still develop an eye twitch when recalling those dark days.

So let’s raise a glass of Glenfiddich to the opinion pollsters. Thankfully, whilst most of the later Scottish polls underestimated the Yes/No gap, they did at least call it the right way, which will make all researchers’ lives a lot easier.

Richard Marks is the director of Research the Media. Find out more here.

Twitter: @RichardMlive
