
Five things media research must learn from the election

Media Native’s David Brennan explains what the media industry – with its reliance on robust research – needs to understand.

I know, I know – this is somewhat belated and I’m sure you want to forget all about this crazy election stuff (as do I) but after the double whammy of the Scottish referendum followed by the UK General Election, there are some important insights which we need to digest. After all, how often does the behaviour of tens of millions of people come under such intense scrutiny?

Now that the dust has settled, I think there are five key headlines that we can take away and learn from. At least until the next election in five years’ time (or possibly the European referendum sometime next year).

Market research takes a hit

Elections always give the polling companies sleepless nights. Will they get it broadly correct (e.g. 2010) or embarrassingly wrong (e.g. 1992)? Well, I think it’s fair to say 2015 was a stinker…which reflects very badly on the market research industry as a whole.

After all, if we can’t get a simple question like “who will you vote for?” right, how on earth can we be expected to measure consumer sentiment, marketing impact or media usage accurately? When the final vote falls well outside acceptable margins of error, the result is often used to denigrate research as a whole.
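To put “margins of error” in perspective, here is a back-of-the-envelope sketch (my own illustration, not from the article, using the textbook simple-random-sample formula that real quota-based polls only approximate):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case 95% margin of error for a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A typical published voting-intention poll samples around 1,000 people:
print(f"n=1,000: +/- {margin_of_error(1000) * 100:.1f} points")  # ~3.1
print(f"n=2,000: +/- {margin_of_error(2000) * 100:.1f} points")  # ~2.2
```

On that arithmetic, a single 1,000-person poll should land within about three points per party. Yet the final 2015 polls typically had the Conservatives and Labour roughly level, while the actual gap was around six and a half points – a miss that sampling noise alone can’t explain, which is why the post-mortems focused on systematic causes instead.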

Of course, this is totally unfair. Or is it?

The pollsters ask a simple question, based on a highly salient subject within a limited choice hierarchy…and get it hopelessly wrong. Various reasons have been given – from difficulties in finding representative samples to the different methodologies used right the way through to the context in which the questions are asked. I identified 10 different potential reasons from reading just three articles.

One particularly compelling argument was that the polls ask lots of questions about policy before asking how respondents are likely to vote; such contextualisation recalls Sir Humphrey Appleby’s cynical demonstration of leading questions in ‘Yes Minister’, and will often drive respondents to state a preference they don’t really ‘feel’ (see point five).

Ultimately it is about the polling organisations asking a simple question and assuming there is a simple answer. Something I think our own industry does far too often. We ask people to say what they think, without digging deeply enough into what that response really means.

As Sky News concluded in its analysis of the polling fiasco: “It has never been cheaper to run a poll over the internet or phone. But qualitative data – that rather old fashioned practice of going out and speaking to people, rather than filling in a checklist – was forgotten. It’s due a comeback.”

In defence of our industry, I think when we apply research to marketing or media issues we are often far more sophisticated and nuanced in our methods. We often employ a number of innovative methodologies not only to get an accurate reflection of what consumers think and how they behave, but also to explore the reasons why, and the drivers behind them, in depth.

That said, our increasing reliance on the numbers – and the headlines they can generate – means we are often just as guilty as the pollsters in assuming simple answers to decontextualised questions can somehow lead to greater knowledge…or more accurate predictions.

Big data let us down

OK, if market research can’t do the job, surely we have a ready-made alternative in big data. After all, huge amounts of data were collected independently of the polling companies, and yet they don’t appear to have provided any more illumination.

We heard a great deal about the data specialists employed by the parties to mine insights from the billions of interactions, posts, likes and shares readily available to them. If big data was ever to prove its worth as a predictive tool, surely this was the ideal opportunity.

To quote Sky News again: “This was an election obsessed with data, whether it was professional polling companies, the millions that Lord Ashcroft spent on his own polling (also a long way out), or big data analytics tools like NationBuilder (used by the Lib Dems and Labour, so maybe avoid that one).”

If there was a more accurate prediction of the election outcome emanating from the petabytes of data at the analysts’ disposal, or proof that it made a significant difference to anybody’s voting intentions, please show it to me. I must have missed it.

The rational delusion

At the heart of this debate is the age-old belief in the rational mind: we weigh up all the alternatives, then work out the best selection through rational analysis. Choosing who to vote for doesn’t work that way – a fact well supported by academic evidence. The polls and the analytics failed to tap into the undercurrents really driving this election.

Emotion is the key, and all the rational analysis in the world won’t stop fear, joy, hope or anger – or even ‘just liking’ a particular party leader – driving the decision. It accounts for the ‘shy Tories’, the ‘socialist UKIP-pers’ and the upsurge in nationalist sentiment.

Facts don’t come into it; a good narrative can make all the difference in the world. The world of advertising has known that for a decade or two.

It was telly wot won it!

Even though newspapers engaged in ever more desperate attempts to persuade their readers to vote one way or another and Twitter went into the occasional meltdown, it appears to have been television that had the most impact on voters’ perceptions of the main parties.

A study released by the research company Other Lines just before election day reported almost two thirds of voters claiming TV influenced their decisions – with the TV debates recorded as particularly important (if you’re reading, Mr. Cameron, please take note!); the party political broadcasts far less so.

Still, it is interesting that the medium obliged to provide a more balanced and objective framing of the issues is the one voters choose, rather than those more in keeping with their natural political preferences.

One in four claimed newspapers were influential, which may be reflected in the finding that around half of readers did not vote the way their preferred newsbrand advised them to, despite that advice being delivered more stridently than at any previous election.

Even the winners of the election are seeing the way the tide is turning, with a prominent Tory blogger proclaiming “as the print media’s influence wanes, the degree to which they seek to use it increases.”

The problem with social media – preaching to the converted?

The same study highlighted the even lower percentage of voters claiming any influence from social media (only 7% claimed Facebook was influential; 4% for Twitter). This finding reflects my own experience; social media provided me with plenty of confirmation for my own beliefs, and some occasionally effective ammunition to pour scorn on those I don’t agree with.

A news stream consisting of “I totally agree with that” or “what are those bastards saying now?” does not make for a meaningful debate.

But wasn’t this supposed to have been the ‘social media election’? Perhaps, like the ‘year of the mobile’, we just need to wait a little while longer.

I read that Facebook claimed to have been able to use its data to predict the result more accurately than the pollsters, although I haven’t seen the outputs yet – certainly not before the election.

But, then again, that might take them into the privacy debate if they went too public with such findings.

Just before the election, academics claimed to have developed an algorithm to predict voting via Twitter analytics, but given they seem to have forecast a Labour surge in the days before the election, I’m not holding my breath…

Roger Gane, CEO, OMG!, on 26 May 2015
“The clue is in the title of the type of research used - these were opinion polls. Opinions are not behaviour - and they're not even attitudes - an individual's attitudes are relatively fixed over the short-term, whereas opinions can change very quickly. We'll have to wait for the Market Research Society's review in order to (perhaps) find out what went wrong. In the meantime here is a hypothesis: people looked at the hung-parliament predictions (not always made by the polling companies) and wondered how a viable government could emerge given the apparent neck-and-neck standing of Conservative and Labour, the collapse of the Lib Dems, the surge in SNP, and the UKIP wildcard and decided it would be safer to have a strong, single party government. Be careful what you wish for...”
