Let’s get clear on attention metrics

Brands are rightly asking what the best approach is when it comes to attention, so it’s time to get specific about terms and about how to decide which vendors to work with.


There’s been a divide brewing for some time between vendors that deliver “attention metrics”, which use passive observations of human behaviour (predominantly captured via eye-tracking technology) to train predictive models, and those that measure device metrics and “outcomes” data to correlate effect.

This divide is getting confusing. Brands are rightly asking which approach is right or best, and they’re hearing a lot of PR noise from the vendors themselves, as opposed to third-party validation.

For the purpose of clarity (in an industry that loves to overcomplicate things!), I think it’s time to draw some lines and get specific about terms:

> Attention metrics: Metrics informed primarily by biometric ground-truth data (predominantly eye-tracking), which is used to train predictive models and calculate the metric.

> Media quality scores: Metrics that aim to define the quality and (claimed potential) effectiveness of a media placement through the aggregation of measurable device signals (size of ad, placement on page, viewability, time on screen). A simplified sketch of this kind of aggregation follows below.

There is also a third category forming: vendors that use a static set of eye-tracking data to augment or support a media quality score. It’s an input as opposed to an output. Personally, I’d categorise these under “media quality scores”, as visual attention is not the goal of the metric.
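
To make the “media quality score” definition concrete, here is a deliberately simplified Python sketch of how such a score might aggregate device signals. Every signal name and weight below is an invented assumption for illustration; real vendors use their own, often proprietary, inputs and weightings.

```python
# Hypothetical illustration only: a weighted aggregation of normalised
# (0-1) device signals into a single "media quality score". The signal
# names and weights are invented assumptions, not any vendor's model.

def media_quality_score(signals: dict[str, float],
                        weights: dict[str, float]) -> float:
    """Return the weight-normalised average of the device signals."""
    total_weight = sum(weights.values())
    return sum(signals[name] * weight
               for name, weight in weights.items()) / total_weight

# The four signals named in the definition above, each scaled to 0-1.
signals = {
    "viewability": 0.85,        # share of the ad's pixels in view
    "time_on_screen": 0.60,     # dwell time relative to a benchmark
    "ad_size": 0.70,            # placement size relative to the viewport
    "placement_on_page": 0.90,  # proximity to the top of the page
}
weights = {
    "viewability": 0.4,
    "time_on_screen": 0.3,
    "ad_size": 0.2,
    "placement_on_page": 0.1,
}

print(f"Media quality score: {media_quality_score(signals, weights):.2f}")
# -> Media quality score: 0.75
```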

Making the right choice

The two approaches are fundamentally different and the vendors that support them are going to war on social media over who is “right”.

In reality, when you test with both methods, as we’ve been doing extensively at Havas for the past few years, you find very similar things. Both raise the priority of better-quality media.

For example, if you optimise to Lumen’s measure of percentage viewed or seconds of attention, or Adelaide’s AU, what you see is that your placements are for the most part more viewable, with bigger ads/more screen real estate, less clutter and better domains with better content. In short, better “quality”.

As for which is better — quite simply, neither is at this stage. They both perform a very similar function and both provide a good proxy for “media quality” to an extent.

Choosing a vendor

So how do you decide which vendors or approaches to work with? A set of considerations is guiding Havas in making this selection:

> Transparency: Will the vendor openly discuss with you how its metrics are calculated and how it weights the inputs? If someone is hiding behind “AI” as an answer, be cautious and test more rigorously.

> Validation: Has the vendor been validated by a third party as opposed to its own case studies? If it won’t share its method in depth with you, will it at least submit to a third-party audit of its approach? If not, again, do more of your own rigorous testing.

> Commercial: The cost of using these measurement providers can add up and it all equates to more “non-working media” cost. Do you understand enough about the value? Does it negate the need for other data or technology partners in the chain?

> Service: People are important. Will the vendor work with you to help you further your own understanding of the value of its metrics? Will it be on hand to resolve issues quickly? More tagging in digital media always leads to operational issues, so does it have a good support system in place?

> Integrations: How well is the vendor integrated across the adtech ecosystem? Most providers of these new metrics are relative “new kids on the block” and it takes time to integrate with the big players — supply-side platforms, demand-side platforms, media owners etc. If it is not widely accepted, it can often mean that only a small proportion of what you’d like to measure is achievable.

> Testing: The only way you’ll find out is by testing. Test with multiple providers and decide for yourself, but ensure you have a sensible and rigorous testing framework in place that’s designed to measure brand and business outcomes, not just media deliverables (cost per mille, click-through rate, view-through rate etc). The testing should be designed to answer whether investing in better-attention or higher-quality media pays back more for brands and businesses than buying cheap, low-quality media. A simple illustration of that comparison follows this list.
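
As a purely illustrative example of that kind of outcome-based test design (all numbers here are invented, not Havas results), the comparison boils down to cost per incremental outcome rather than cost per impression or click:

```python
# Invented illustration of an outcome-based test readout: two equal-budget
# cells (high-quality media vs cheap media), each judged against a matched
# control group on cost per incremental outcome, not on CPM/CTR/VTR.

def cost_per_incremental_outcome(spend: float,
                                 exposed_conversions: int,
                                 control_conversions: int) -> float:
    """Spend divided by conversions above the matched-control baseline."""
    incremental = exposed_conversions - control_conversions
    if incremental <= 0:
        return float("inf")  # the cell drove no measurable lift
    return spend / incremental

# Hypothetical results: equal spend, different media quality.
high_quality = cost_per_incremental_outcome(
    spend=50_000, exposed_conversions=1_300, control_conversions=1_000)
cheap_media = cost_per_incremental_outcome(
    spend=50_000, exposed_conversions=1_080, control_conversions=1_000)

print(f"High-quality media: {high_quality:,.0f} per incremental outcome")
print(f"Cheap media:        {cheap_media:,.0f} per incremental outcome")
# In this invented example the cheaper buy delivers cheaper impressions
# but a far worse cost per incremental outcome (625 vs ~167).
```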

Better experiences

Whether you’re using or looking to use attention metrics or media quality scores, both should be commended. Time and time again, I’m seeing cases and research that show these metrics are pushing brands to invest in better media experiences for consumers and supporting publishers that provide those experiences.

The body of research supporting their usage is growing by the day and, with more testing, brands will be able to decide for themselves which metrics tell them more about their campaigns’ probability of success.


Jon Waite is head of media experience development at Havas Media Network
