Oh no! Is your bot not working the way you expected it to? Follow this step-by-step guide to debug your bot. The debug window is broken into two segments:
- Basic Info - The Basic Info section provides a high-level overview of the bot logs in an easy-to-read UI.
- Detailed Info - The Detailed Info section of the debug window provides the full JSON of all the log details.
Bot failures can be bucketed into five categories:
Wrong Bot getting detected
If an undesired bot (domain) is detected, open the business manager and make sure you have selected the right bot.
Example scenarios:
- A bot was detected, but it wasn't the active bot, so the pipeline used the default bot configured on the business manager.
- No active bot was detected and no default bot was configured on the business manager. In that case, the pipeline falls back to the fallback bot configured on the business manager.
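The resolution order described above (active bot, then default bot, then fallback bot) can be sketched roughly like this. The function and data shapes are assumptions for illustration only, not the platform's actual API:

```python
# Hypothetical sketch of the bot-resolution order: active bot -> default
# bot -> fallback bot. Names and structures are illustrative assumptions.

def resolve_bot(detected_bot, business_manager):
    """Return the bot the pipeline will use for a message."""
    if detected_bot is not None and detected_bot.get("active"):
        return detected_bot                     # the active bot wins
    if business_manager.get("default_bot"):
        return business_manager["default_bot"]  # no active bot: use default
    return business_manager["fallback_bot"]     # last resort: fallback bot

bm = {"default_bot": None, "fallback_bot": {"name": "fallback"}}
print(resolve_bot(None, bm)["name"])  # no active or default bot -> fallback
```

If the wrong bot shows up in the logs, checking each step of this chain against the business manager configuration usually reveals which level was misconfigured.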
Wrong Node or No Node detected
Several things could be going wrong here:
- User-says error: You should be able to solve all intent detection issues by following the given flowchart.
- Connections error: The user transitioned from Node A to a start node instead of moving to the connected Node B. This happens because B was not a start node.
No Entity detected
When entities are not detected on a node:
- For local entities with entity values - Check if the right values are populated in the entity dictionary on the detected node.
- For system entities - Check if the entity is present on the detected node.
  - If it is not present, add it to the node.
  - If it is present and the entity was still not detected, reach out to ML support.
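The troubleshooting steps above can be sketched as a small decision helper. The node structure and field names here are assumptions for illustration, not the platform's real schema:

```python
# Hedged sketch of the entity troubleshooting flow. The node dict shape
# ("entity_dict", "system_entities") is an assumption for illustration.

def diagnose_missing_entity(node, entity_name, entity_type, user_text):
    """Suggest the next debugging step for an undetected entity."""
    if entity_type == "local":
        values = node.get("entity_dict", {}).get(entity_name, [])
        if not any(v.lower() in user_text.lower() for v in values):
            return "Populate the right values in the entity dictionary"
        return "Values present but not detected: reach out to ML support"
    # system entity: first check it is configured on the node at all
    if entity_name not in node.get("system_entities", []):
        return "Add the system entity to the node"
    return "Entity present but not detected: reach out to ML support"

node = {"entity_dict": {"fund_name": ["SIP", "ELSS"]},
        "system_entities": ["date"]}
print(diagnose_missing_entity(node, "amount", "system", "invest 500"))
```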
Wrong Entity detected
When the wrong entity is detected, check whether the right entity is present on the detected node.
Note: Not all entity types are available in the Basic Info section. You can use the Detailed Info section to view the data for such entities; the Basic Info section will show a warning message when it detects an entity of such a type.
In any of the cases mentioned above, the following information and concepts may be useful while debugging.
Let's say you have a node with the following user-says sentences:
- Benefits of SIP
- SIP's benefits
- tell me about benefits of SIP
While testing your bot (following the testing guidelines), you find that the bot gives false positives for the following sentences:
- Benefits of bvdfbv fjdb
- cricket's benefits
then you should do the following:
Add variations of your sentences with different sentence structures and different words while the meaning stays the same, e.g.:
- Tell me about the advantages of SIP
- how will i benefit with SIP
- How is SIP beneficial
- Good things about SIP
- I’ve heard SIP is good. Can you tell me how?
Add negative variations to the negative response, e.g.:
- Bhdcbdhvb SIP
- Benefits njvjdfvnj jnvjdfvjfv SIP SIP
We strongly advise you to prefer solving the problem by adding variations to user-says rather than by adding variations to negative-response. Regardless, the number of sentences in your user-says should be at least two to three times the number of sentences in negative-response.
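A quick sanity check for that ratio could look like this sketch, assuming user-says and negative-response are simple lists of sentences:

```python
# Sketch of the recommended 2-3x ratio check between user-says and
# negative-response variant counts. List contents are from the examples
# above; the helper itself is an illustrative assumption.

def variant_ratio_ok(user_says, negative_response, factor=2):
    """True if user-says has at least `factor` times as many sentences."""
    if not negative_response:
        return True
    return len(user_says) >= factor * len(negative_response)

user_says = ["Benefits of SIP", "SIP's benefits",
             "tell me about benefits of SIP",
             "Tell me about the advantages of SIP",
             "How is SIP beneficial"]
negative = ["Bhdcbdhvb SIP", "Benefits njvjdfvnj jnvjdfvjfv SIP SIP"]
print(variant_ratio_ok(user_says, negative))  # 5 >= 2 * 2 -> True
```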
When you see the Node list for a user's message (by clicking the "Log" icon in the image above), the bot builder can see all nodes considered for the disambiguation message, along with their specific metadata, in the "Log" view.
Example Disambiguation logs:
When there is more than one node in the node_list, we send a disambiguation message. In this scenario, you can check:
- That more than one node was detected for disambiguation
- The user-says variant matched for each detected node
- The individual scores for each detected node
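A minimal sketch of reading such a log might look like the following. The node_list field names are assumptions, so check the Detailed Info JSON of your own logs for the real keys:

```python
# Hypothetical shape of a disambiguation log entry, assumed for
# illustration; the real field names may differ in your logs.
log = {
    "node_list": [
        {"node": "SIP Benefits", "matched_variant": "Benefits of SIP",
         "score": 0.82},
        {"node": "SIP Charges", "matched_variant": "SIP charges",
         "score": 0.79},
    ]
}

def summarize_disambiguation(log):
    """Return (node, matched variant, score) for each candidate node."""
    nodes = log.get("node_list", [])
    if len(nodes) <= 1:
        return None  # only one candidate: no disambiguation message sent
    return [(n["node"], n["matched_variant"], n["score"]) for n in nodes]

for name, variant, score in summarize_disambiguation(log):
    print(f"{name}: matched '{variant}' with score {score}")
```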
For cases where we sent a disambiguation message but should not have, add the user's message as a user-says variant on the relevant node, i.e. the one that ideally should have been detected.
Scenarios where we did not disambiguate but should have cannot be found directly. However, as a bot builder you can look for the following to filter cases where disambiguation was needed:
- User messages on which a bot break happened
- Bot responses with negative user feedback
- Conversations rated 1 or 2, i.e. low conversation feedback
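These three signals could be combined into a simple filter, sketched here with assumed record fields (the real conversation export format may name them differently):

```python
# Sketch of filtering conversations that may have needed disambiguation,
# using the three signals listed above. Record fields are assumptions.

def needs_disambiguation_review(conv):
    """Flag a conversation matching any of the three review signals."""
    return (
        conv.get("bot_break", False)             # a bot break happened
        or conv.get("negative_feedback", False)  # thumbs-down on a response
        or conv.get("rating", 5) <= 2            # low conversation rating
    )

conversations = [
    {"id": 1, "bot_break": True, "rating": 4},
    {"id": 2, "rating": 5},
    {"id": 3, "rating": 2},
]
flagged = [c["id"] for c in conversations if needs_disambiguation_review(c)]
print(flagged)  # [1, 3]
```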
Pro Tip -
Spell correction and normalisation
The training email will contain details of the spelling corrections made by our systems. To learn more, check out the spell correction section.
Real-time logs will contain a dictionary mapping the original words from the user query to the corresponding spell corrections performed by the system.
If a corrected spelling is wrong, please reach out to ML support with screenshots and details of the expected behavior.
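The spell-correction dictionary from the real-time logs might be inspected with a sketch like this; the dictionary shape (original word mapped to corrected word) is an assumption based on the description above:

```python
# Hypothetical shape of the spell-correction dictionary in real-time
# logs: original word -> corrected word. Values here are made up.
spell_corrections = {"benifits": "benefits", "SPI": "SIP"}

def report_corrections(corrections):
    """List corrections so unexpected ones can be flagged to ML support."""
    return [f"'{orig}' -> '{fixed}'" for orig, fixed in corrections.items()]

for line in report_corrections(spell_corrections):
    print(line)
```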