
Won’t somebody think of the humans?

AI interfaces and chatbots could be revolutionary for brands – but only if they strike the right balance between human and machine interaction, writes Mindshare’s Jeremy Pounder

‘Fxxx my robot pxxxy daddy I’m such a bad naughty robot’. So said Microsoft’s AI chatbot Tay after 24 hours learning from humanity’s finest in the Twittersphere.

And then this month the self-driving car claimed its first victim, when a Tesla enthusiast was killed after his car crashed while in self-driving mode.

As we begin to interact directly with AI in more and more scenarios, we can expect more of these mishaps, albeit rarely with such tragic or comical consequences.

More likely in the short term is the risk that through conversations with AI we’re left feeling somewhat dissatisfied, empty or even dehumanised. A conversation with a chatbot, for instance, that underwhelms, irritates or feels odd.

And yet chatbots are being held up as a transformative way of interacting with businesses and brands. For customers they promise a better, faster experience – less time spent waiting for an operator to become available or navigating through call centre menus; problems being resolved more quickly.

For brands the promise is greater operational efficiencies on top of a better user experience – if bots can handle basic queries about billing, payments or deliveries, call centre operators can be freed up to handle the more complex queries adding greater value to the business overall.

So how can brands build bots that leave people feeling more, not less, human as a result of the experience? How can brands build bots that live up to their promise?

These are the questions we’ve set out to answer through our recently published research collaboration with Goldsmiths, University of London and IBM Watson – Humanity in the Machine.


Firstly, brands need to focus on building trust. Through a series of biometric experiments measuring stress levels, we found that users are less forgiving of machines making mistakes than humans.

That means brands need to be conservative in their ambitions: early iterations must get very little wrong if they are to build up trust. That may mean asking more questions than is technically required to provide an answer, in order to build confidence in the results, as AI medical service your.md does.

But intriguingly we found that consumers are often more trusting of bots around sensitive information than they are of human customer service operators.

25% say they are happier to give sensitive information to a chatbot. And for ’embarrassing medical complaints’, twice as many people prefer talking to a chatbot rather than a human as for ‘standard medical complaints’.

People are prepared to trust chatbots, and so brands now need to make sure their development decisions build on this rather than undermine it.

Secondly, brands need to align the bot’s tone of voice with their values without coming across as trying to be too ‘chatty’.

Working with IBM Watson we set out to explore the tone of voice issue, by testing two alternative banking bots with very different personalities: one was chatty, informal and conversational; the other was more straightforward, with a serious and functional tone of voice.

Many found the chattier version off-putting, patronising or even weird. As one respondent put it: ‘The chatty one is like my dad when he uses emoticons, it’s creepy.’


Brands need to give the bot a tone of voice which expresses their personality in a way that is flexible, contextual and personalised to different users and different situations. This will mean using copywriters alongside programmers to create consistent style and tone.

Finally, brands need to avoid making the bot ‘human’ an end in itself. What defines a ‘human’ AI is not how human the AI appears to be, or how life-like its interactions are. What defines a human experience is the experience itself: it is measured by how a person feels when dealing with the AI, not by some intrinsic humanity in the technology.

A ‘human’ experience is defined by how the user feels, not how ‘life-like’ the bot is.

Bots should aim to use context and emotional understanding to deliver a ‘human’ experience by meeting the user need. In doing this the style of the bot should ideally go unnoticed.

If it feels too ‘robotic’, then interacting with it leaves the user feeling dehumanised. If it’s too ‘life-like’ then the user can be left feeling patronised or even disturbed.

The challenge is to get the balance right and leave the user feeling as though they have had a human experience. And, crucially, to avoid falling into the ‘uncanny valley’ through creating a bot that feels creepy by attempting to emulate humanity.

We’re all going to be getting used to communicating with AI in one form or another over the coming years. For brands, there is much at stake here.

Get it right and everyone benefits: customers enjoy a better service experience that enhances their humanity, and brands cut their operational costs. Get it wrong and you can end up with a racist, sex-obsessed bot spewing obscenities on behalf of your brand.

Jeremy Pounder is futures director at Mindshare

