Counting coins

The burning desire for cross-platform measurement of AV advertising is clouding media planners’ judgement when it comes to understanding the concept of value, writes Thinkbox’s Matt Hill

The other week my son Charlie (age five) chanced upon a pound coin which had fallen down the back of the sofa. Eagerly saving up for another Nexo Knights Lego set, he was keen to add this £1 to his money box.

You can imagine his dismay when I told him that, while I understood the concept of finders keepers, the money was mine and he had to give it back. Cue showdown.

As I am well versed in the destructive fallout of a full-blown confrontation, I searched for a way to sidestep the disagreement. Reaching into my pocket, I found a couple of 10 pence coins and offered a trade. With a big smile, Charlie readily made the exchange. Silly boy.

Now, it’s understandable that a five-year-old hasn’t grasped the concept of value yet, but it’s also something we’re grappling with in media planning.

Specifically, I believe the burning desire for cross-platform measurement of AV advertising is clouding our judgement when it comes to what data can and can’t be combined meaningfully.

We risk accepting a wrong answer because there is no other answer (yet). I’ll explain.

Techedge, one of the key software providers for TV planning, has created a new tool that allows agencies to combine data sets from video platforms such as YouTube and Facebook with BARB data for TV.

In principle this enables the optimisation of AV ad campaigns across all platforms. It is an interesting initiative and promises the current holy grail of all the planning teams I’ve spoken to, but I don’t think it’s possible to fairly and meaningfully do this with the available data.

Apples and pears

Techedge uses reach curves provided by Google or Facebook as the basis for how they estimate total reach across different platforms.

Ignoring the serious issues around media owners providing their own (unaudited) data to combine with JIC data – not least Facebook’s apparent claim to reach more people than actually exist – we’re starting with apples and pears.


To qualify as a viewer who has been exposed to a TV ad, BARB requires a full ad view. For Facebook or YouTube, the basis might be an ad start or a two-second view of 50% of the pixels.

The reach curve gradient generated by Facebook or YouTube for full ad views will obviously be a lot shallower than the curve for ad starts because full views are rare for them.

But you can’t add five people who have watched a full TV ad to five people who have watched a few seconds at the start of an online video ad and claim to have reached 10 people. All reach is not equal.

It counts numbers of coins, not their value

The system tries to account for differences in impact/impression ‘quality’ (i.e. view length) by using an ‘impression quality factor’ (IQF). This allows the agency to decide how to factor a Facebook ad view/start vs. a full TV ad view.

In theory, you could take the average duration of an ad view on Facebook (let’s guess three seconds) and give Facebook an IQF of 10%, since three seconds is 10% of a full 30 second spot.

But this is meaningless maths as we’re counting coins not their value. Do 10 x 3 second views of the beginning of an ad equate to one full 30 second view? Nope.
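To make the point concrete, here is a minimal sketch of how a linear IQF weighting of the kind described above works in practice. This is an illustration only, not Techedge's actual model; the function name, the three-second average and the simple linear weighting are all assumptions for the sake of the example.

```python
# Illustrative sketch of a linear IQF weighting, as described in the text.
# Not Techedge's actual model; the figures here are assumptions.

def iqf_weighted_impressions(views: int, avg_view_seconds: float,
                             full_ad_seconds: float = 30.0) -> float:
    """Weight a platform's views by average view length as a share of a full spot."""
    iqf = avg_view_seconds / full_ad_seconds  # e.g. 3s / 30s = an IQF of 0.1
    return views * iqf

tv_full_views = 1    # one full 30-second TV ad view
fb_short_views = 10  # ten 3-second views of the opening of the ad

# Under a linear IQF, ten 3-second starts come out "equal" to one full view...
assert iqf_weighted_impressions(fb_short_views, 3.0) == \
       iqf_weighted_impressions(tv_full_views, 30.0)

# ...but the maths only counts seconds delivered. It says nothing about
# whether anyone saw the branding, the message or the call to action.
```

The arithmetic is internally consistent, which is exactly the trap: it counts coins, not their value.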

It ignores crucial environmental factors

Should we try to put online video ad views on the same page as TV ad views in the first place, even if we could measure them on a like for like basis and used the same creative assets?

There is increasing evidence that many online video opportunities would benefit from a very different creative treatment from the TV ad (e.g. shorter, silent etc.).

There are so many environmental quality factors to grapple with: screen size; sound; passive vs. active mind state; content context; content quality; shared vs. solus viewing; in home vs. out of home; and let’s not forget about brand safety.

Of course, we should be integrating the entire range of AV advertising in our planning but not in their currencies. Where would we stop? Fancy adding in silent ‘video’ escalator panels? Thought not.

Campaigns work best when TV and online video are used together. But don’t put them on the same page as interchangeable options in a cost-driven optimiser. They are not substitutable.

I understand the challenges and sympathise with planners who are being asked to demonstrate who they’ve reached with their multi-layered campaign, or what the most cost-effective means is of reaching their target market through AV.

But we mustn’t think that, because we can combine different data sources into one system, it’s telling us something meaningful.

I’m off to try and convince Charlie that 100 random pieces of Lego are the same as General Magmar’s Siege Machine of Doom and I might get away with it. But then he is only five.


Matt Hill is research and planning director at Thinkbox

AdrianEdwards, UK Managing Director, TechEdge, on 03 May 2017
“Hi Matt,

Apologies for the tardiness of our response. You raise a number of concerns here regarding cross media optimization, and specifically the KXM solution that we are offering to the market. Fair enough - it's your job as the representative of the marketing body for commercial TV in the UK.

Everybody in the industry can agree that working on single source data that captures all media consumption in a fair and equal way would be ideal. We can also agree that we don’t live in an ideal world, and so we have to make compromises. So far we have waited 20 years for that single source data to arrive. In the meantime TechEdge is offering its clients an interim solution based on the data that is available.

All media agencies have access to TV, Facebook and YouTube data, and use a multitude of mathematical models to combine this data to calculate total cross-platform and unique reach by platform. As a consequence an advertiser will get as many different answers to the same question as there are agencies in the market.

As a software provider TechEdge does not have an opinion on whether one media or platform is better or stronger than the other. We leave those decisions in the hands of the experts, which are the media agencies and the advertisers. Our job is to provide a software solution that is unbiased and transparent, encompassing the different characteristics of each dataset. We also handle the entire logistic workflow of managing multiple data sources, enabling the end users to address cross media across a vast and complex media landscape.

The end users of this data are ultimately the media agencies and the advertisers. They decide if a dataset is of a quality which they are prepared to pay for and use. Nobody is forcing the media agencies or advertisers to use the Facebook or YouTube data. By clearly illustrating in our tool the differences in each platform’s ability to build reach alongside the associated cost we believe we are opening up a debate which is much needed in the industry. By adding features that allows the media agency to factor the data or change the probability scores, we enable agencies to differentiate themselves based on the research that they have available to them.

All data that is available to the industry is based on probability. A person’s TV viewing across a minute is attributed to one channel which had the majority of seconds viewed. There was a probability that the person saw the commercial. A digital impact has a probability that it was viewed and a probability of who that person was. The same goes for radio, cinema and print, which for many years traded on distribution numbers and not readership. The debate is of course how strong and valid those probabilities are and what value it has. It is purely a mathematical exercise.

TechEdge has used its respondent level platform and “Gold Standard” TV calculation engine to define a model and a logistic framework. We have then engaged with media agencies to discuss how to define a realistic and comprehensive – yet simplistic and understandable – model for cross-media budget allocation. Our only requirement has been that any data added to the system by default should be empiric and preferably audited, and that any model parameterization should be transparent and based on studies rather than subjective assumptions.

We welcome Thinkbox and any other organization in the media industry to participate in building a cross media budget allocation for the future based on such principles.”
MattHill, Director of Research & Planning, Thinkbox, on 19 Apr 2017
“Hi Oliver,

Apologies for the delayed response, I’ve selfishly been away for Easter hols. You rightly point out that I don’t represent BARB. But I do have faith that as a JIC they will be exploring all options on the table for how they can provide the fairest and most accurate currency for TV advertising.

Also, to be clear, I wasn’t calling for a higher degree of accuracy for other media. I was pointing out that you shouldn’t put a BARB TV set ad view, of which the vast majority will be full ad views, on the same reach curve as other types of video which are measured to a lower standard and viewed in a completely different environment. They are not the same.

I imagine by now that these comments are essentially a private correspondence between us! If you’d like to carry on the discussion, drop me a line at matt.hill@thinkbox.tv.”
OliverTobias, Head of research, Aol, on 11 Apr 2017
“Hi Matt,

Interesting response - you avoided answering my primary point and instead sought to respond to issues I didn't raise any objections to. I definitely didn't say the BARB panel was unfair, or seek to question it on the basis of its costs. My question related to its accuracy and whether its methodology is still fit for purpose.

I'll choose to ignore the ridiculously lengthy amount of time Project Dovetail is taking to deliver the long-promised cross-device data (though worth noting that Nielsen has already begun releasing the first tranches of equivalent data in the US), and return to the point in hand which you chose to ignore. Whilst working at Sky 5 years ago, I was part of the team involved in setting up and analysing data on their panel, which was a hundred times the size of the BARB panel (500k homes), yet the data was still able to be processed on a second-by-second level. Given the improvements in technology in the 30-odd years since the BARB panel started, and knowing it is perfectly feasible, it strikes me as inadequate that BARB has not moved with the times and adopted a more accurate measurement methodology. Why is this? I know you don't represent BARB, but you are a pretty important stakeholder and user of the data and, as a cheerleader for the TV industry, in my opinion you and other people within the industry should be holding them to account to deliver the same degree of accuracy which Thinkbox is calling for from other media, in particular digital.”
MattHill, Director of Research & Planning, Thinkbox, on 10 Apr 2017
“Hi Tim – thanks for the comment. I certainly wasn’t trying to disparage or dismiss OOH! My bad. I was simply pointing out the flaws in lumping all sorts of video together.

Hi Robert – the anecdote about my son was just an attempt at a lighthearted framing device, not a literal attempt at equivalence. And BARB is not old fashioned or flawed – it is high quality, rigorous statistical science.”
RobertCatesby, Marketer, Independent, on 10 Apr 2017
“Matt's summary of this issue by likening it to dealing with a 5 year old is the epitome of the patriarchal bullying the TV industry thinks will get it out of its current problems. The comparison is definitely not apples with apples. He is comparing a system that measures in an old fashioned way an OPPORTUNITY to see a TV execution (based on what a small sample of homes do) and compares it to what every single viewer does on a platform like YouTube. The claim that every 30 second ad on TV is viewed in totality is as weak to rely on these days as the Lego structure a 5 year old builds. All opportunities to see are definitely not equal: the power of how creative engages on TV or posters or mobile is vital to figure out, and this we learn from MMM models or from the work of independent research companies such as Kantar or GfK. It's not good enough to hide behind the wall of flawed methodology in measurement for TV whilst throwing stones at everyone else. That game is up and savvy marketers see through it.”
TimLumb, Insight Director, Outsmart, on 07 Apr 2017
“Hi Matt

I was happily reading your article and nodding with agreement, particularly on the need for JIC-approved vs self-audited data. But your comment about digital escalator panels was a little dismissive in tone. Why exclude Digital Out of Home (DOOH) from the AV mix for planning and currency?

Granted there is rarely audio in DOOH, for good reasons. Nonetheless it comes with a range of flexible capabilities for planning, targeting, and deploying ads that reach people when they are out and about (which is about 3 hours a day on average).

Anyway, I could be slightly dismissive too, and point to those TV ads that are so loud I have to keep ear-defenders handy. But that would be hypocritical and I’m bigger than that. Oh wait.

Of course we all want a cross-platform AV measure but, as your article notes, there are so many factors at play it will inevitably be piecemeal and imperfect – that doesn’t mean it is valueless. Planners have been grappling for years with the ‘environmental quality factors’ you mentioned. Figuring out what reach and value really mean is part of making informed decisions. How do you know their judgement is clouded?

As I’m on a roll, data from OOH’s very own IPA-backed JIC, Route, shows that a 2-week campaign on the escalators in just Oxford Circus tube station will reach 138,000 people an average of 12 times. These ads are also more likely to reach 16-34 ABC1s and more likely to reach those light TV viewers, who (horror!) may not have seen the TV ad yet. Or maybe they have seen it but need a little reminder… seeing as they are so close to the point of purchase!

EscalatorGate aside, a single cross-platform AV measure may be too much to ask. Transparent, robust and regulated datafeeds are not.”
MattHill, Director of Research & Planning, Thinkbox, on 07 Apr 2017
“Oliver,

ISBA and the IPA sit on the BARB board to ensure the interests of marketers and their agencies are being looked after, that's the beauty of a JIC.

BARB’s measurement of TV advertising isn't perfect, but it's open and as fair as it can be – and agreed on by all parties in the JIC. TV ads that are fast forwarded aren't charged for, although they clearly have value. Ads watched at normal speed in recorded programmes that are over 7 days old are free. Out of home TV viewing isn't measured and so is free. The basis for BARB’s position on what constitutes an ad exposure is what they can reliably measure, as accurately and as fairly as possible. I can’t think of any media measurement currency that holds itself to higher standards.

To suggest that ‘a minute attribution methodology is as inadequate for TV as a 2 second view on digital’ is just plain wrong. There are different degrees of inaccuracy. The average view through rate for a TV set ad would trounce the view through rate for a Facebook video ad view. As such you shouldn’t try and put them together on the same reach curve, or use them as interchangeable options in a planning tool. This is my point.

As for RSMB’s impartiality, your views are unfounded. Auditing the statistical accuracy of BARB is what they’re paid by BARB to do. Would you expect them to do it for free? Of course not.”
OliverTobias, Head of research, Aol, on 06 Apr 2017
“You can say it's not hidden, but I bet if we were to ask the CMOs of the 20 biggest TV advertisers or even their agency directors if they are aware of it, I would wager that fewer than 10% would say they were. It gets a tiny reference in the 'glossary' section on the BARB website. To call it an imperfection is a vast simplification - regardless of the sheer volume of those exposed to the full ad which you say 'evens itself out' over the course of the campaign, this doesn't account for the different profile of those viewers nor does it account for partial views of an ad where viewers might have missed the branding or call to action, which is where your rationale for comparing to digital falls down. A minute attribution methodology is as inadequate for TV as a 2 second view on digital when the media industry's call for accuracy across all platforms is so necessary.

As an aside, given that RSMB are a BARB contractor, I'm not sure one could accept any audit into the reliability of the data conducted by them as wholly independent.”
MattHill, Director of Research and Planning, Thinkbox, on 03 Apr 2017
“Hi Oliver,
As a JIC BARB don’t ‘hide’ anything. Their measurement methodology is fully transparent and ratified by the IPA and ISBA. There is indeed a minute attribution effect that comes into play when viewers switch channels during adverts, but it’s a measurement imperfection that works both ways. Some adverts gain viewers who have switched into them, but these viewers aren’t included in the minute audience. RSMB have conducted an independent audit that shows across the average campaign these factors cancel each other out.

My point is that a TV ad exposure is based on the average number of people watching a whole ad, whilst online ad views are based on starts or 2 second views. As such they shouldn’t be put on the same reach curve.”
OliverTobias, Head of research, AOL, on 31 Mar 2017
“Matt, I think you're being slightly disingenuous when you talk about BARB's basis being a 'full ad view'. The fact is that their minute attribution methodology means that some viewers are attributed to having watched an ad when they were in fact watching a completely different channel. This fact is very well hidden by BARB and the TV industry, so when calling for better transparency from the big digital players (which is definitely required), you'd have more credibility coming clean about this serious methodological flaw which undermines BARB's oft-quoted 'gold-standard' measurement.”
