Confessions of a CxD killjoy: Why I think your AI assistant shouldn’t be funny

“You seem like a person, but you’re just a voice in a computer,” says Theodore to his AI assistant Samantha in the movie Her. Now, a decade after the film’s release, human-like AI is no longer science fiction. AI tools, assistants—even, worryingly enough, romantic companions—have become a part of our everyday lives.

But AI tools are not people. And as AI becomes more entwined in our lives, there’s a temptation to imbue it with human characteristics, including humor.

I get it: we all want the tools we build to be enjoyable to use. This might be a controversial take, but when it comes to giving your AI assistant a personality, I think CxDs (conversation designers) should be cautious.

Before you call me a killjoy, allow me to explain. 

What’s the harm in giving your AI assistant a fun personality? 

Outspoken brand personalities have taken social media by storm—just think of Wendy’s on Twitter (sorry, I mean X) or Duolingo on TikTok. Followers love the offbeat humor and blunt self-awareness of these accounts. Seeing these successes, it can be tempting to consider building your AI assistant’s voice in a similar way, using humor, charisma, and even shock value to build a relationship with your customers. After all, isn’t that what buyers want? 

Maybe…but is it what they need?

A customer’s relationship with an AI assistant is much more complicated than just laughing at a sassy brand mascot’s hot takes and memes. While branded accounts are there primarily to entertain, people rely on AI assistants for support—whether that’s processing returns, answering FAQs, or surfacing customers’ information. At the end of the day, your AI assistant is not a social media personality, and the risks of slipping up are far higher.

I would go so far as to say that anthropomorphizing AI tools can come with the risk of real harm. As humans, we’re susceptible to manipulation, more so than we might think. Research has shown that people tend to trust their friends more than news sources. If you anthropomorphize an AI tool, users may be more likely to believe what it says because they’re building a relationship with it. And we know that AI may mess up and "hallucinate," making it even more important for users to maintain some healthy emotional distance so they can critically evaluate what the AI is telling them. 

AI is something that does stuff for us—but it’s not our friend. It’s a tool. It’s not infallible. It’s not a person, and it’s important that we keep those distinctions in mind. 

Chatbot humor falls flat

I’m just going to say it—chatbot humor is cringe. While the ethical concerns I’ve highlighted are important, I also have a bone to pick with the way many brands try to add humor to their assistants. For instance, an unconscionable number of businesses have programmed their assistant to answer the question “What is the meaning of life?” with “42,” a nod to The Hitchhiker’s Guide to the Galaxy. That joke has been done to death. If you must add humor, at least do it in a way that’s original and makes sense for your brand. Make it interesting and actually worth doing.

My colleague, Ayesha Saleem, a conversation design leader at Instacart, agrees with me. “I'm pretty anti-humor in most cases,” she says. “There's use cases where it works, but most bots are really gimmicky or based on fads—which means you're going to have to update it every few months, or it will fall flat. It’s something I try to avoid as a best practice.” 

She brings up another excellent point: Humor doesn’t always translate. “I've seen people release bots in English and they'll put puns in there,” she says. “The second you translate this to a different language, it's not going to work. Your chatbot should be as inclusive as possible, so when and if you do translate it, you’re not leaving anyone out.” If you serve multiple markets or plan to one day, translatability is an important factor to keep in mind. 

At the end of the day, you know your brand best. If it’s super whimsical, maybe it makes sense to have a chatbot with some personality. But it’s important to be cautious and thoughtful in how you do it, so you’re serving your audience and not irritating or excluding them. 

What are CxDs’ responsibilities in all this? 

If there’s anything we’ve learned over the past few years, it’s that people are very bad at telling truth from falsehood. And now we’ve created tools that let us endlessly replicate human-like interaction, in ways that are helpful for us but can also be detrimental.

It’s conversation designers’ responsibility to pepper in cues to remind users that their AI assistant is, in fact, a tool. I like the way ChatGPT says, “As an AI language model, I can’t…” I recommend using language like “As an AI,” or “I’m just a bot,” to nudge users not to put all their faith in the tool. Particularly if you’re going to give your assistant a personality, you need to have those moments to bring users back to earth.
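If it helps to picture how those cues could be wired in, here’s a minimal sketch in Python of a wrapper around whatever generates your assistant’s replies. The function names, the disclosure copy, and the “every ten turns” cadence are all hypothetical choices for illustration, not a prescription or any particular platform’s API:

```python
# Minimal sketch: wrap the assistant's outgoing replies so that refusals
# always carry an explicit "As an AI" framing, and long conversations get a
# periodic reminder that the user is talking to a tool, not a person.
# All names and copy here are placeholders—tune them to your own brand voice.

AI_DISCLOSURE = "Just a reminder: I'm an AI assistant, not a person."

def with_ai_cue(reply: str, is_refusal: bool, turn_count: int) -> str:
    """Add lightweight AI-disclosure cues to an assistant reply."""
    if is_refusal:
        # Frame capability limits explicitly as AI limits.
        return f"As an AI assistant, I can't help with that. {reply}"
    if turn_count > 0 and turn_count % 10 == 0:
        # Every so often, gently re-anchor the user.
        return f"{reply}\n\n{AI_DISCLOSURE}"
    return reply
```

The exact cadence matters less than the principle: the cue should show up often enough that users never forget what they’re talking to, but not so often that it becomes noise.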

Let robots be robots

The movie Her revealed that we have an appetite for making AI assistants that are like us. Now that we have the power to do that, it’s not a question of whether we can, but whether we should. And if we should, how do we do so responsibly?

We’re drawn to brands that are representations of ourselves—there’s a reason why human-like brand personas are so compelling on social media. But the relationship you have with a brand mascot is very different from the one you have with an AI assistant, and the potential for harm with the latter is much greater.

Lest you think I hate fun, I do think that pre-written NLU bots can use humor—that’s a use case that’s much closer to web copy and social media. But I have much more hesitation when it comes to LLM-based assistants, because you have less control, and the potential for manipulation is far greater.

In Her (spoiler alert), Samantha the AI eventually goes away. But we live in a time where we’ll continue to coexist with AI, and as such, it’s critical to draw clear boundaries so we don’t end up giving it too much power over us.

Disagree? Send me a note on how you see humor in CxD → peter.isaacs [at] voiceflow.com