The rise of ‘eception’ and the ethical issues arising from humanising AI in customer experience

Howard Williams works in customer experience at Parker Software. He leads the marketing activities of Parker Software’s global customer team, with a focus on the consumer, their experience, and how it can be continually improved. Parker Software specialises in the development of live chat and business process automation software. The company specifically focuses on businesses operating in the e-business, marketing and retail sectors.

You land on a website one day and “Jay” sends you a welcome message and an invitation to chat. Jay has a smiley headshot displayed and seems well-spoken enough. But is Jay a human customer service agent, or a bot?

Even a few exchanges might not be enough to identify Jay’s true nature. You may have to say something completely unexpected to get any indication of whether Jay is really a person or powered by AI.

The truth is that humans are easily capable of sounding robotic, and bots are increasingly capable of sounding human. It is against this uncertain backdrop that consumer doubt and the issue of eception arise. But what is eception, exactly?

The rise of ‘eception’

Coined by Byron Reese, ‘eception’ is a term for AI programs designed to deceive users into thinking that they’re interacting with a human.

Eception is a growing phenomenon in modern AI use. There’s an (often deliberate) lack of clarity for consumers when they interact with brands. Tools such as chatbots, automated responses, and new technology like Google Duplex mean that discerning the human from the machine is only growing in complexity.

Some regard this AI advancement as an impressive achievement. Others see the blurring lines between bot and human as inevitable. But is eception ethical?

It’s ambiguous at best. Eception involves the deception of consumers, but it doesn’t cause any direct harm. Indeed, you could argue that a lack of eception – i.e. failing to provide best-in-class AI support – causes more harm to the consumer relationship and brand experience. Then again, gaining from tricking or consciously confusing customers is ill-advised at best, unscrupulous at worst.

One thing is clear: the growth of eception also breeds a glut of questions and concerns.

Creating consumer doubt

Eception is particularly prevalent in marketing and ecommerce. As AI tools improve, consumer relationships are increasingly shaped by AI-powered experiences. So, does the phenomenon of eception help or hinder our marketing practices?

Consumers are increasingly aware of the AI tools that exist. As a result, they’re more sceptical of the correspondence they receive from businesses. The mere hint of eception stands to create doubt in the consumer. We doubt that we are being answered by a human. Or that a human has even noticed that we are there.

And, if left to wonder, this consumer doubt translates into distrust. The lack of clarity that accompanies eception, then, could be doing more harm than good.

The backlash against Google Duplex serves as a stark example. Consumers don’t like the idea of being tricked. Falling prey to deception feels bad. It makes AI seem creepy and can even lead to feelings of repulsion and disquiet. This culminates in the creation of a poor experience, and a damning view of the offending brand.

Another way

Eception is the product of a lack of clarity. So, clarity is the cure. Transparency worked for Google Duplex, which now includes a disclaimer before the AI begins talking.

Transparent AI use means making it clear when a message is from a bot. It means being more open about the role AI tools play in your marketing strategies. It means making it easy to reach a human team member if a consumer wants to interact further or find out more.

That isn’t to say you must scream your AI use from the rooftops. Nor need you apologise or rationalise. Many consumers, after all, are happy to chat with a bot for quick queries or smoother self-help.

A simple ‘heads up’ message built into the bot’s content is enough. Transparent AI use in marketing also doesn’t mean a robotic tone of voice. Chatbots and other AI tools can use a friendly, human voice that fits your brand, without pretending they’re human.
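As a rough illustration of that kind of ‘heads up’ message, the sketch below builds a bot greeting that stays friendly and on-brand while disclosing up front that it’s automated. The function and names here are purely hypothetical, not drawn from any particular chat platform:

```python
# A minimal sketch of a transparent chatbot greeting: warm and on-brand,
# but clear about the bot's nature and how to reach a human.
# The function name, bot name, and wording are illustrative assumptions.

def build_greeting(bot_name: str, brand: str) -> str:
    """Return a welcome message that is friendly but clearly bot-authored."""
    return (
        f"Hi, I'm {bot_name}, {brand}'s virtual assistant. "
        "I'm a bot, but I'm happy to help with quick questions. "
        "Type 'agent' at any time to reach a human team member."
    )

print(build_greeting("Jay", "Acme"))
```

The point is simply that the disclosure and the human-handoff route sit in the very first message, so the consumer is never left wondering.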

A clearer view

As far as ethics go, it’s undeniable that transparent AI use is far less ethically ambiguous than eception. But what would a transparent approach to AI mean for our marketing practices?

On the one hand, people don’t tend to have much faith in AI and bots. Until recent advancements, automated responses signalled an absent and frustrating service. Telling consumers that they’re interacting with AI could, in theory, taint the perception of the brand experience.

But transparent AI use removes the doubt — and affiliated risks — from using AI as a marketing tool. Consumers are more likely to accept AI if they feel comfortable and confident about how it’s used. If AI isn’t (seemingly) trying to trick us by masquerading as a real person, we’re less likely to feel aversion when engaging with it.

Ultimately, clarity breeds trust. And that trust can only boost your brand reputation. So, drop the eception and acknowledge use of AI. Doing so could drive better consumer relationships, alongside wider AI acceptance.

