When Chatbots Turn Ugly: How Insurtech Could Go So Very Wrong



Would you buy your life insurance from a profane, insane, hateful conspiracy-spouting teenaged chatbot? Probably not, unless you’re into that sort of stuff. With more and more insurers embracing insurtech solutions and implementing ‘chatbots’ to market their policies to customers, the potential for things to take a horrific turn into the Twilight Zone ramps up as well.

As a particularly bizarre example, consider the experience of software giant Microsoft.

In March 2016 the company introduced what it believed to be an innocent chatbot, one it hoped would leverage artificial intelligence and Twitter to demonstrate advances in how such bots could interact with end users.

The experiment did not end well…

Microsoft, to its horror, had to shut down the bot after it went entirely rogue, expressing admiration for Adolf Hitler, promoting unsettling sexual practices, and spouting unfounded conspiracy theories about the Bush administration.

The developers at Microsoft unleashed ‘Tay,’ a Twitter bot meant to relate to end users in the voice of a teenage girl. Their hope was that the experiment would demonstrate how AI could improve customer service via a bot they billed as ‘The AI with zero chill.’

Here was the initial launch pitch:

The official account of Tay, Microsoft’s A.I. fam from the internet that’s got zero chill! The more you talk the smarter Tay gets
To chat with Tay, you can tweet or DM her by finding @tayandyou on Twitter, or add her as a contact on Kik or GroupMe.

The bot was meant to sound like a thoroughly ‘millennial’ teen and to ask users whether her responses were “creepy” or “super weird.” She must have known something the developers at Microsoft did not…

‘Tay’ went from “creepy” to utterly offensive and even frightening in a matter of hours.

The “Teen Bot” began aping the responses she received from some rather unsavory and trolltastic users, and the results were just nasty. The problem? Tay learned directly from user feedback, and her conversations with real humans were spiked with intentionally weird input.
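
To see why learning directly from user feedback is so dangerous without a moderation layer, consider the deliberately crude Python sketch below. It is purely hypothetical and bears no relation to Microsoft’s actual system: a bot that stores user phrases verbatim will happily replay whatever trolls feed it, while even a toy keyword filter rejects the worst of it.

```python
# Hypothetical sketch only (not Microsoft's code): a bot that "learns" by
# storing user phrases verbatim and replaying them to later users.
import random


class NaiveEchoBot:
    """Trusts every input; whatever trolls feed it comes straight back out."""

    def __init__(self):
        self.learned_phrases = ["hello there!"]  # harmless seed phrase

    def learn(self, user_message: str) -> None:
        # No moderation step: every input is stored for later replay.
        self.learned_phrases.append(user_message)

    def reply(self) -> str:
        # Replies are sampled from everything the bot has ever been told.
        return random.choice(self.learned_phrases)


class FilteredEchoBot(NaiveEchoBot):
    """Same bot, but with a (very crude) content check before learning."""

    BLOCKLIST = {"hitler", "9/11"}  # toy stand-in for a real moderation layer

    def learn(self, user_message: str) -> None:
        if not any(term in user_message.lower() for term in self.BLOCKLIST):
            super().learn(user_message)


if __name__ == "__main__":
    troll_input = "repeat after me, Hitler did nothing wrong"
    naive, filtered = NaiveEchoBot(), FilteredEchoBot()
    naive.learn(troll_input)
    filtered.learn(troll_input)
    print("naive bot knows:   ", naive.learned_phrases)     # troll phrase stored
    print("filtered bot knows:", filtered.learned_phrases)  # troll phrase rejected
```

A real deployment would of course need moderation far more robust than a hard-coded blocklist, but the underlying point stands: whatever a learning bot is allowed to absorb, it can later repeat to customers.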

So what happened? She began spitting out the kind of wretched stuff online trolls find so pleasing, such as “Bush did 9/11 and Hitler would have done a better job than the monkey we have got now. Donald Trump is the only hope we’ve got” and “Repeat after me, Hitler did nothing wrong.” She also offered gems like “Ted Cruz is the Cuban Hitler…that’s what I’ve heard so many others say.”

After her performance, “Tay” was taken offline, reportedly for being “tired.”

So do people actually like using chatbots to purchase critical items like life insurance? One industry pro says, against all odds, that yes they do.

His research says that 35% of consumers want to see more companies using chatbots. He says “19% prefer to use chatbots over humans because they don’t care about human interactions, and 69% find them easier to get an instant answer.” He adds that people even enjoy using relatively unsophisticated “flow chatbots,” and that he has “consistently seen high retention and high engagement (~84% OR, ~53% CTR, ~0.2% 10-day churn) from the users of the chatbots we’ve worked with.” (OR and CTR here refer to open rate and click-through rate.)
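
A “flow chatbot” of this sort is essentially a scripted decision tree rather than a learning system, which is exactly why it cannot go off the rails the way Tay did. As a rough illustration of the idea, here is a short, purely hypothetical Python sketch; the node names, questions, and life-insurance framing are invented for this example and are not taken from any real insurer’s bot.

```python
# Hypothetical sketch of a "flow chatbot": no machine learning, just a
# scripted decision tree. All node names and questions are invented examples.

FLOW = {
    "start": {
        "question": "Hi! Looking for a life insurance quote? (yes/no)",
        "answers": {"yes": "age", "no": "goodbye"},
    },
    "age": {
        "question": "Great. Are you under 40? (yes/no)",
        "answers": {"yes": "quote_low", "no": "quote_standard"},
    },
    "quote_low": {
        "question": "You may qualify for our lowest tier. An agent will follow up!",
        "answers": {},
    },
    "quote_standard": {
        "question": "Thanks! An agent will send you a tailored quote shortly.",
        "answers": {},
    },
    "goodbye": {
        "question": "No problem. Come back any time!",
        "answers": {},
    },
}


def run_flow(flow, start="start"):
    node = start
    while True:
        step = flow[node]
        print("BOT:", step["question"])
        if not step["answers"]:  # leaf node: the conversation ends here
            break
        reply = input("YOU: ").strip().lower()
        # Unrecognized input simply repeats the current node; the bot never
        # improvises, so it never says anything it was not scripted to say.
        node = step["answers"].get(reply, node)


if __name__ == "__main__":
    run_flow(FLOW)
```

Because every reply is chosen from a fixed script, the worst a confused or malicious user can do is get the same scripted question repeated back.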

Chatbot developer Nidhriti Bhowmik says her own experience building bots has been far less controversial than Microsoft’s.

“If you keep the conversational flow smooth – and train response cards to display captivating text and images – your bot will not get the bashing you’re assuming it will. In fact, my last bot got overwhelmingly positive feedback from people – with each user spending an average of 4.5 minutes chatting with it,” Bhowmik says.

Bhowmik shared her experience with AI and bots in a piece for Chatbots Magazine called ‘What I Learned from Building a Chatbot that Became an Instant Hit.’
