4 reasons why your chatbot isn’t performing, and quick fixes to improve it right away
You were excited. You were going to knock it out of the park by building and deploying a chatbot. You saw the KPIs and thought to yourself: this is going to be easy! And it was: building and deploying a chatbot isn’t hard.
There are lots of technologies out there that make it simple. Many conversational platforms list simplicity as their unique selling point, claiming that even non-engineers can build a chatbot or voice assistant with their platform.
And it’s true. But then why is your chatbot or voice assistant not delivering the way you want it to? Why was it simple to build, yet fails to engage people and deliver value? Why does your AI Assistant often simply not understand what you’re saying, and come across as rude in its replies? Why does it look like you have over-invested in this technology and are now afraid to tell your manager that you are in over your head?
Well, there are a few reasons why your chatbot isn’t performing. Let’s go over them and see what we can do about it. Trust me, there are lots of quick fixes that can help you unlock the value of AI Assistants.
Time to buckle up.
1. You are not guiding your user
You are not explaining to your users how they can get value from you. Instead of saying:
Hi, I’m the insurance bot. You can ask me questions about coverage.
you just say:
Hi, ask me anything.
That doesn’t work. You are giving your users too much freedom, and now they will ask whatever they want. This makes it much harder for you to recognize the intent and answer the question properly. By not guiding the user from the start, you are creating frustration and drop-offs.
Quick fix: When your AI Assistant introduces itself, make sure it tells the user how to get value from it. Explain what it can and cannot do.
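To make this concrete, here is a minimal Python sketch of a greeting built from a capability list. The bot name and capabilities are invented for illustration; the point is that the intro is generated from what the bot actually supports, so it never drifts out of sync with what it can do.

```python
# A minimal sketch of a capability-driven greeting.
# The bot name and capability list are placeholders; adapt them to your assistant.

CAPABILITIES = [
    "ask questions about your coverage",
    "file a claim",
    "update your contact details",
]

def welcome_message(bot_name: str = "the insurance bot") -> str:
    """Build a greeting that tells the user exactly how to get value from the bot."""
    options = ", ".join(CAPABILITIES[:-1]) + f", or {CAPABILITIES[-1]}"
    return f"Hi, I'm {bot_name}. You can {options}. What can I help you with?"

print(welcome_message())
# Hi, I'm the insurance bot. You can ask questions about your coverage,
# file a claim, or update your contact details. What can I help you with?
```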
2. You are not training your model properly
To understand what a user is saying, you will have to train your language model. That means feeding it utterances: real example phrases the model uses to learn to recognize each intent. Just coming up with phrases yourself often doesn’t cut it, because you simply cannot guess what your users are going to say.
The best way to collect unique utterances is a Wizard of Oz test, in which a human plays the part of the bot. During the test you read out the prompts exactly as you have written them, but your test users can speak freely. Whatever your users say during the test can be used as training phrases. That way you are collecting utterances that will truly improve your intent recognition rate.
Quick fix: Do a Wizard of Oz test to collect real utterances you can use to train the model. Once the AI Assistant is live, analyze the interactions to learn how people talk to it. Then use those phrases to train the model as well.
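As an illustration, here is how collected utterances might be organized per intent before you feed them into your platform. The intent names and phrases below are invented; the exact training format depends on the platform you use, but most accept a comparable intent-to-examples structure.

```python
# A sketch of collected utterances grouped per intent.
# Intent names and phrases are illustrative, not from a real dataset.

training_data = {
    "check_coverage": [
        "does my policy cover water damage",      # from a Wizard of Oz test
        "am I insured if my phone gets stolen",   # from a Wizard of Oz test
        "is hail damage included",                # from live transcripts
    ],
    "file_claim": [
        "I want to report an accident",
        "how do I submit a claim for my car",
        "my basement flooded, what now",
    ],
}

for intent, utterances in training_data.items():
    print(f"{intent}: {len(utterances)} training phrases")
```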
3. There is a lack of empathy
There is one simple truth in conversation design: empathy leads to higher completion rates. By giving users the feeling that you truly understand their situation, you create empathy. That gives them confidence that you are going to help them and get them to the right solution. When people feel understood, they follow along.
When people don’t feel understood, however, they become frustrated. And frustrated people communicate more: they talk and type as much as they can until they feel like somebody gets what they are after. But those long, rambling messages are exactly what intent models struggle to classify, so the chatbot fails just when the user starts to overshare. This can be a big reason why your chatbot is not delivering.
Quick fix: Writing sample dialogue helps you design for empathy. If you are already live, there are a few tweaks you can make. Make sure each bot reply follows this pattern: acknowledgment, confirmation, prompt. The more explicit the confirmation, the more empathetic the reply feels.
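Here is a small Python sketch of that pattern. The helper and the wording are illustrative, not a platform API; the point is that every reply is assembled from the same three parts.

```python
# A sketch of the acknowledgment-confirmation-prompt pattern.
# The helper name and example wording are invented for illustration.

def empathetic_reply(acknowledgment: str, confirmation: str, prompt: str) -> str:
    """Compose one bot reply from the three parts of the pattern."""
    return f"{acknowledgment} {confirmation} {prompt}"

reply = empathetic_reply(
    acknowledgment="Oh no, a cracked screen is really annoying.",            # acknowledge the feeling
    confirmation="So you want to know if screen repairs are covered.",       # confirm understanding
    prompt="Could you tell me which phone model you have?",                  # move the dialogue forward
)
print(reply)
```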
4. Your error handling is too simple
Many AI Assistants use very simple error handling techniques. They say:
Sorry I didn’t get it. Can you try again?
And when it doesn’t understand the second time, it simply hands the conversation over to an agent. Instead, you want to use the principle of escalating detail when you design your repair messages. There should be three different error messages for a no-match. For example, when collecting a phone number:
- First try: ‘Hmm I didn’t get that. What’s your phone number?’
- Second try: ‘I still didn’t catch your phone number. It contains 10 digits. What is it?’
- Third try: ‘Something seems off. Let’s try one more time. I’m going to need your phone number to lock in the order. It contains 10 digits. What’s your phone number?’
Escalating detail means adding more information after each try to help the user. You want to guide them through the process and help them succeed. By investing a bit more time in error handling, you increase the odds of your user completing the dialogue. That means a higher completion rate and far fewer handovers to live channels.
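The three messages from the list above can be wired up with a simple attempt counter, as in this minimal Python sketch. It is not a platform feature; the hand-over after the third failure is an assumption you would replace with your own escalation logic.

```python
# A minimal sketch of escalating-detail repair messages for one prompt.
# The messages and the hand-over step are illustrative.

PHONE_REPAIR_MESSAGES = [
    "Hmm, I didn't get that. What's your phone number?",
    "I still didn't catch your phone number. It contains 10 digits. What is it?",
    "Something seems off. Let's try one more time. I'm going to need your "
    "phone number to lock in the order. It contains 10 digits. What's your phone number?",
]

def repair_message(attempt: int) -> str | None:
    """Return the repair message for this attempt, or None to hand over to an agent."""
    if attempt < len(PHONE_REPAIR_MESSAGES):
        return PHONE_REPAIR_MESSAGES[attempt]
    return None  # escalate to a live agent after the third failed attempt

for attempt in range(4):
    message = repair_message(attempt)
    print(message or "Handing you over to a colleague who can help.")
```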
Quick fix: Rewrite your generic repair messages and make them custom for each prompt. This makes your AI Assistant come to life immediately and provides more natural interactions. It increases your completion rate from the start.
Final thoughts on why your chatbot isn't performing
There you have it. You just learned 4 different reasons why your chatbot probably isn’t performing well. The good news is that these things are fairly simple to fix.
That probably doesn’t solve the whole puzzle, but it’s enough to give you some quick results. If you want to step it up, consider implementing a workflow that has proven itself across both chatbots and voice assistants. You can download our whitepaper with 10 important steps to build a great chatbot here for free.