Chatbot glow-up: 3 reasons everyone hates your AI agent (and what you can do about it)

It’s not a controversial opinion that most chatbots are bad. Every time I tell someone what I do for a living, I receive a cacophony of veiled insults—“I hate bots,” “OMG, you do that?” “I never use chatbots, they’re so annoying.”

We get it, people hate the bots we make. I only cry about it quarterly now. 

But customers have been having mediocre conversations with chatbots for years, and those poor experiences have left a mark. Traditional bots were technical projects by overworked developers. Those bots weren’t designed so much as thrown together from scant customer data and a graveyard of FAQs. As a result, they didn’t see much return on investment (ROI), brought back abysmal customer satisfaction (CSAT) scores, and were eventually left to deteriorate. Sound familiar? 

The thing is, it doesn’t have to be this way anymore. So, let’s talk candidly. Here are the three reasons why your chatbot might suck and what you can do about it. 

Your agent might suffer from poor user experience (UX) design (or you’re stuck in a hard-coded, single-interface nightmare) 

Bots have traditionally been seen as a technical challenge rather than a design one. The developers who build them don’t often have an overall design structure or purpose—just a mandate from their higher-ups, some requests from stakeholders, and a list of things that other teams think the bot should do. 

That method is a recipe for disaster. You're not collaborating across teams to understand your customers’ needs, collecting user data to personalize agent responses, or designing chat interfaces across channels that reflect your visual branding. 

Instead, most companies settle for a cookie-cutter AI chatbot that’s simple to deploy, but can’t handle much more than the simplest of requests (and still sometimes fumbles them) before directing users to a live agent. As you can probably guess, I have a lot of feelings about the massive missteps conversational AI (CAI) teams make when it comes to UX design.

For instance, Sephora’s agent sits across their mobile app and website, but its design is so clunky across both that it's consistently a pain to use. 

The fix: Solve your agent’s massive UX problem 

Start by addressing the underlying problem—the lack of UX support for teams designing AI agents. Most CAI teams are small and scrappy, so you’re unlikely to be able to afford a full-fledged UX designer (though I wish we all could). Instead, make the case to borrow a designer’s time from another team. 

Involve a UX designer at pivotal points in the process—from planning and strategy on what the agent will do, to how the bot should be launched across interfaces, and while you’re collecting user feedback. Folks in UX can help answer questions like: Why do we assume users want a form/FAQ/live agent right here? How are users actually interacting with us today? Is this a problem that needs solving? How can we make this experience smoother? 

Further, don’t trap your agent on the bottom corner of your homepage. You can embed AI agents across the business and its custom interfaces, including in-app, IVR, webchat, Discord, and your help center. Boxing your AI agent into a webchat wrapper for every interaction might not be the best idea. Customers expect support across their journey, and they don’t want to have to leave their current flow to get it. Instead, bring it to them.

Your agent doesn’t use existing internal and user data to make the conversation personalized (or your bot is where your FAQs go to die) 

Companies are often more worried about the risk of their AI agent going rogue than they are about its potential to perform well. Now, there are privacy concerns worth considering when integrating your customer data with an LLM. That’s why at Voiceflow we ensure that the data that passes through the LLMs is never used to retrain the AI model. 

The problem is that this fear keeps your AI agents frozen in time and unable to evolve as they should. You end up using bloated, dev-only platforms that aren’t accessible to your full team and are difficult to integrate across your data stack. As a result, tailored responses for each unique user are nearly impossible unless they are meticulously planned by the conversation designer. When you build traditional turn-by-turn agents without personalizing the experience for each user, it’s no wonder bots get a bad rap. 

Not to mention, who has the time to write every single potential flow from start to finish anymore? Gone are those days, and I’m glad to be rid of them. 

Even Fortune 500 companies can make these mistakes. For Wells Fargo, providing customers with correct information is paramount—so hallucinations are a big concern. However, by pairing a knowledge base with some clever prompting, we can get an LLM to act as both teacher and student, self-checking its own work before it goes out to the customer.
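
The teacher/student pattern can be sketched in a few lines. This is a minimal illustration, not Wells Fargo’s actual implementation: `call_llm` is a hypothetical stand-in for whichever model API you use, and the prompt wording is ours.

```python
# A minimal sketch of the "teacher/student" self-check pattern: one LLM pass
# drafts an answer from knowledge-base context, a second pass reviews it.
# `call_llm` is a hypothetical stand-in for your model API of choice.

DRAFT_PROMPT = (
    "Answer the customer's question using ONLY the context below.\n"
    "Context:\n{context}\n\nQuestion: {question}\nAnswer:"
)

REVIEW_PROMPT = (
    "You are a strict reviewer. Given the context and a draft answer, reply "
    "APPROVED if every claim is supported by the context; otherwise reply "
    "REVISE followed by a corrected answer.\n"
    "Context:\n{context}\n\nDraft answer:\n{draft}"
)

def answer_with_self_check(question, context, call_llm):
    """Draft an answer, then have the model grade its own work."""
    draft = call_llm(DRAFT_PROMPT.format(context=context, question=question))
    verdict = call_llm(REVIEW_PROMPT.format(context=context, draft=draft))
    if verdict.startswith("APPROVED"):
        return draft
    # Fall back to the reviewer's corrected answer (strip the REVISE tag).
    return verdict.removeprefix("REVISE").strip()
```

The second pass only ever sees the knowledge-base context and the draft, which is what keeps unsupported claims from slipping through.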

The fix: Connect your AI agent to your knowledge base and an LLM to ensure flexible, contextual conversations

This degree of personalization does more than elevate customer satisfaction—it nurtures loyalty and transforms your customer support. Imagine how impactful your AI agent could be if you could create a deeply personal shopping experience, for example. Your AI agent might suggest products to your user based on past purchases, browsing history, and even current viewing trends. It could even offer care tips or upsell with recommended products.

Similarly, you could use your AI assistant to pull up a customer's product usage or to provide personalized troubleshooting instructions based on their past issues logged in your CRM or product database. It can route the query to specialized support paths based on the product type or issue severity, using an internal knowledge base to guide the customer through a resolution.

This type of AI-powered conversational magic isn't out of reach for you, no matter what stage of AI maturity you're in. You can even load your entire website into a knowledge base, creating a central repository of help articles, PDF documents, and .txt files—we even have a guide that walks you through just that. With an AI-powered agent—trained on your brand’s content—you’ll be able to generate responses that feel like an extension of your human customer support team.
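
The retrieve-then-generate shape behind a knowledge base is simpler than it sounds. Here’s a toy sketch that scores help articles by word overlap with the user’s question; production agents swap this scoring step for embeddings and a vector store, but the flow is the same. The articles are made up for illustration.

```python
import re

# Toy knowledge-base retrieval: score each help article by word overlap
# with the user's question and return the best match. Real agents use
# embeddings, but the retrieve-then-generate shape is identical.

def tokenize(text):
    """Lowercase and split into a set of words, ignoring punctuation."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(question, articles):
    """Return the article whose text shares the most words with the question."""
    q = tokenize(question)
    return max(articles, key=lambda a: len(q & tokenize(a["text"])))

kb = [
    {"title": "Returns policy", "text": "You can return any item within 30 days"},
    {"title": "Shipping times", "text": "Orders ship within 2 business days"},
]

best = retrieve("How do I return an item?", kb)
```

The retrieved article’s text then becomes the context you hand the LLM, so responses stay grounded in your brand’s content rather than the model’s imagination.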

Your agent can only complete limited actions (and it foists complex tasks onto a live agent)

Most users have zero patience for a mediocre agent. A live agent could make a mistake or there could be a typo in a help article, and customers are more forgiving of those snafus than they are of a chatbot that promptly misunderstands a request before responding with something entirely unrelated. It’s borderline unforgivable. 

Take Nike’s AI agent, for example. It’s not entirely useless—it can complete tasks like processing a refund or tracking an order. But it often fails on really easy things, like understanding what the user is saying. Because it fails at the easy stuff, it makes users lose confidence that it can do anything.

The fix: Build an AI agent to handle complex customer interactions and complete helpful tasks

Traditional agents once required you to deterministically define everything that you wanted your bot to say and do. This made complex tasks hard to design and agents generally inflexible to intents that weren’t accounted for by the CAI team.

Today, all it takes is an LLM to search for and generate a semantic match to a user’s natural language intent. With the foundational technology of an LLM at your disposal—and API integrations with your ecommerce site, customer support channel, or customer data source—your AI agent can handle multi-faceted requests such as merging accounts, adjusting service levels, or applying nuanced billing adjustments. You can even equip your agent to make informed decisions on behalf of the business, such as approving discounts or customizing offers based on predefined criteria and customer data.
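
At its core, this is intent-to-action dispatch: a classifier (the LLM) maps free-form text to a named intent, and each intent is wired to a real backend call. A hedged sketch—the handler names, the discount rule, and the `classify_intent` callable are all illustrative, not a real API:

```python
# A minimal sketch of intent-to-action dispatch. A (hypothetical) LLM
# classifier maps free-form text to a named intent; each intent is wired
# to a backend action. Anything unrecognized escalates to a human.

def approve_discount(customer):
    # Illustrative predefined business rule: loyal customers get 10% off.
    return 10 if customer["orders"] >= 5 else 0

HANDLERS = {
    "request_discount": lambda c: f"Applied a {approve_discount(c)}% discount",
    "track_order": lambda c: f"Order {c['last_order_id']} is in transit",
}

def handle(message, customer, classify_intent):
    """classify_intent stands in for an LLM that returns an intent name."""
    intent = classify_intent(message)
    handler = HANDLERS.get(intent)
    if handler is None:
        return "Escalating to a human agent"
    return handler(customer)
```

The point of the registry is that the LLM never takes actions directly—it only picks from a list of vetted handlers, which is what makes “informed decisions on behalf of the business” safe to automate.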

For example, the team at Trilogy managed to automate support tasks across 90 support lines for 24/7 support coverage. Impressively, it took a dedicated team of just two people—yes, only two—to design a core AI agent integrated with Trilogy’s help center interface, LLMs, Knowledge Base API, and a set of user support flows and functions. In under 12 weeks, the team had a staggering 60% of central support tickets completely resolved by AI. If they can do it, so can you. 

These complex tasks can help boost your resolution rate and CSAT scores while slashing your resolution time. The key is to deploy LLM and NLU models thoughtfully at specific points in the interaction to facilitate specific tasks—not just to drop in one generalized LLM. 

My advice? Explore how you can continue to use the power of LLMs to handle complex flows from end to end. Working with LLMs is just more fun. Learning how to tame them and build frameworks for your prompt chains to work within is so much more satisfying than designing turn-by-turn conversations. Sometimes you look at the results and just think wow, that's magic. You can learn more about building with LLMs from the Making Bots series or how to prompt LLMs from Learn Prompting. Most importantly, you can learn by playing.

Your AI agent doesn't have to suck

Remember, if your agent is struggling to win over hearts, it's time for a revamp.

First, ditch the UX design that's as appealing as a dial-up tone and get some cross-team collaboration going. Spruce up that bot with a bit of personalization by hooking it up to your user data and knowledge base because nobody likes one-size-fits-all conversations. And finally, use LLMs as your partner in automating complex tasks—trust me, the reward is worth the risk. 

With these improvements, your AI agent can evolve from a source of frustration to a valuable asset that impresses users and streamlines support. Your AI agent has the potential to be something special—or at the very least, not the thing people love to hate.

