Slow emotion replay

From Spotify analysing your musical mood, to in-car facial recognition, is emotional targeting of advertising logical – or even ethical? Research the Media’s Richard Marks investigates

In the early 21st century the human race faces a growing number of ethical questions about what technology can potentially do. When it comes to GM food, human cloning, surveillance and many other issues, we are at something of a crossroads.

We are also at a similar point in the development of the media industry and specifically the targeting of advertising. Is it OK, for example, for your voice assistant to listen to your conversations and recommend products as a result? Is that a ‘frictionless consumer benefit to facilitate busy lives’ or just creepy and intrusive?

One topic that is both fascinating and concerning at the moment is the area of emotional targeting: establishing someone’s mood and then targeting advertising accordingly. Specifically, three questions: Is it logical? Does it actually work? And at what point does it overstep the mark?

Be warned, dear reader: you may well experience a growing sense of paranoia as this article reviews what's out there, but let's start on some fairly safe ground. On the face of it, it does seem natural and sensible that media planners should take into account the emotions likely to be provoked by a piece of content or an environment.

Some prominent publishers have recently developed ad products that evaluate the mood that a piece of content is likely to provoke and therefore can guide the placement of emotionally appropriate advertising. The New York Times’s initiative is called, fittingly, ‘Project Feels’.

Meanwhile Digiday reports that, since 2016, USA Today Network has been categorising its content by topic and tone, and scoring it based on the likely emotional response. To me this seems logical and not a million miles removed from Channel 4's highly successful (and Mediatel Award-winning) 'Contextual Moments' initiative.

However, this is working at a content – as opposed to consumer – level. Attending the recent BAFTA launch of this year's IPA Touchpoints Daily Life data, I was reminded that respondents record their claimed mood across the day, measured by moving a slider in the online diary from negative to positive. At the event, MG OMD showed how they have been using that mood data to 'align brands with moments that matter', using what they jokingly referred to as a 'Grump-o-meter'.

At this broad level, emotional targeting still appears to be fairly benign and just leaves the question of whether you think it works or not. Will advertising be more effective if it seeks to reflect people's mood or attempts to alter it? Most breakfast DJs work on the principle that being irritatingly upbeat and perky is what people want to shake themselves out of their morning fugue. However, the longest-running national breakfast DJ is actually Shaun Keaveny on 6 Music, legendary for his morning grumpiness, so clearly one size doesn't fit all. He is about to be succeeded by the sparkier Lauren Laverne, which will prove a difficult mood shift for some listeners.

OK, so if we accept that not everyone is in the same mood at the same time, how about targeting people individually?

Spotify think they have this cracked: by examining your listening habits to determine your mood, they can serve you targeted advertising accordingly. One advertising application classifies you as a listener to a particular genre, then notices when you switch genres and assumes something must be going on with you emotionally (presumably if a Steps fan suddenly starts listening to Joy Division they will also call the Samaritans).

This follows the same creepy ethos as Spotify’s own ad campaigns last year which highlighted strange listening patterns with public messages to their listeners: “Dear person who listened to the ‘Forever Alone’ playlist for 4 hours on Valentine’s Day, you OK?” Some saw this campaign as innovative and funny, others as cruel and disturbing. Possibly the first brand campaign to attempt to shame its own users.

By using individual level data, this is indeed walking a fine line on data privacy but, more importantly, is targeting individuals based on their (assumed) mental state morally right? As a recent Guardian article points out, targeting people when they are depressed in order to cheer them up might seem a nice thing to do, but is this offering benign retail therapy or exploiting the emotionally vulnerable?

As a side note, I'd also contest the notion that there is necessarily a correlation between music mood and personal mood. Deezer is using AI to categorise music by mood and intensity, based on both lyrics and music. Fascinating stuff, but does it really have an advertising application as well?

From my perspective I am more likely to play ‘There Is A Light That Never Goes Out‘ or ‘Creep’ and think ‘Thank heavens I am not as depressed as that guy.’ I actually tend to avoid depressing music when I am feeling low. Is music a mood-amplifying or a mood-reflecting drug?

So, the danger with emotional targeting is not just ethical, it is also that we end up with incredibly sophisticated targeting systems based on very simplistic assumptions.

But wait! Rather than just infer someone's mood from what they are reading, watching or listening to, what if we could actually read their facial expression? Wouldn't that be great? Advances in facial recognition have moved beyond simple identification to mood evaluation. Is that a smile I see on your face? Frown lines? Why not buy a Cornetto?

Most of the facial applications I have encountered so far are focused on survey applications like ad testing, but how long before our webcams and CCTV are appraising our mood? Affectiva – a US tech company – has successfully embedded facial recognition into the dashboard of cars. Your car can monitor you for signs of drowsiness, sell you coffee and, presumably, if you look too drunk or angry to drive, the car could simply refuse to be driven.

For the last few years the most common reference for anyone talking about targeted advertising at a conference has been the scene in Minority Report where the personalised ads keep changing as Tom Cruise moves past digital billboards. This reference is so ingrained now that at an event I attended last week someone simply said that the future needs to be 'more like Tom Cruise'. I assume this wasn't a call for more Scientology and multiple divorces.

If we are moving to mood detection for targeting then perhaps a scene from the original Blade Runner may become the new go-to adtech conference video: Deckard administers the 'Voight-Kampff test' to look for a 'blush response' from Rachael that will determine whether she is a replicant.

Meanwhile, in the 2017 Doctor Who episode 'Smile', emoji robots built to serve human colonists conclude that their human masters do not look happy enough, and so put them out of their misery by euthanising them. Somehow I doubt that one will be used to support emotional targeting.

Human beings reduced to living emojis? We already live in a world where we have to be careful what we write online or say out loud, lest it trigger an advertising onslaught or inadvertently alert a security services algorithm.

Will the same now apply to being careful about our own faces betraying us?

Some people block out the webcam lens on their laptops with Elastoplast for fear of surveillance – and in a widely shared photo of Mark Zuckerberg's desk, his laptop camera is clearly taped over in exactly the same way.

Perhaps the next level of (justified?) paranoia will be to print out a high-resolution picture of our best poker face and make it into a mask to protect us from emotional scrutiny.

But let's shiver and then step away from this paranoid line of thought. To return to the aforementioned Touchpoints event, it began with an engaging thought piece from Rory Sutherland of O&M. He argued that mass advertising was more believable to consumers than targeted advertising – and therefore more effective – because people felt less likely to be lied to by public, open forms of advertising than by messages aimed at them individually. Advertisers would be more honest in public.

Heaven knows how the average consumer would feel about emotion-triggered targeting. Perhaps before we disappear even further down the rabbit hole of micro-targeting with all its related ethical issues, the question has to be: is it actually worth the effort?

Just because we can do something, should we do it?

Richard Marks is Director of Research the Media. @Richardmlive
