A/B Testing

Written by Hitesh Singla

Updated on January 5th, 2023


Table of Contents

  • Overview
  • Traffic Split between Variants
  • Creating a New A/B Test
  • Modifying the A/B Test
  • Measuring the A/B Test
  • Stopping the A/B Test
  • Info

Overview

“A/B Testing” or “Split Testing” is a technique that lets you split a bot’s Conversational Journey into two variants, `A` and `B`, so that some of the users coming to your bot see Journey `A` while the rest see Journey `B`. The goal is to help you determine which of the two conversational journey variants performs better.

Traffic Split between Variants

By default, the traffic split between the two variants is 50%-50%: 50% of the users coming to your bot are taken through the Variant `A` journey, while the remaining 50% see the Variant `B` journey. You can also customize this split, for example to 20%-80% or 60%-40%, by modifying the A/B test.
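
The platform handles the assignment for you, but conceptually a traffic split is just a weighted random assignment of each incoming user to one of the two variants. A minimal Python sketch of the idea (purely illustrative; the `assign_variant` helper and the weights shown are assumptions, not Haptik’s implementation):

```python
import random

def assign_variant(weight_a: float = 0.5) -> str:
    """Assign an incoming user to Variant A or B based on the configured split."""
    # weight_a is the fraction of traffic routed to Variant A,
    # e.g. 0.5 for a 50%-50% split or 0.2 for a 20%-80% split.
    return "A" if random.random() < weight_a else "B"

# With a 20%-80% split, roughly 1 in 5 users sees Variant A.
counts = {"A": 0, "B": 0}
for _ in range(10_000):
    counts[assign_variant(weight_a=0.2)] += 1
print(counts)  # approximately {'A': 2000, 'B': 8000}
```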

Creating a New A/B Test

  1. Log in to your bot and navigate to the Conversation Studio panel.
  2. Click on the A/B Testing button available on the screen. A slide-in window appears from the right displaying instructions to create a new A/B test.
  3. Select any one static node in your bot for A/B testing and click Start A/B Test Experiment.

A variant of the selected static node is now created, and you can build out its journey from here.

Modifying the A/B Test

After the variant of the selected static node is created, define the traffic split by dragging the bar left or right, depending on the percentage of users you want to direct to each variant. Click Save to save the traffic split.

Measuring the A/B Test

Since A/B testing splits a Conversational Journey into two variants, the best way to measure which journey is performing better is the `User Journey` feature in Intelligent Analytics: create two separate User Journeys, one for Variant `A` and one for Variant `B`.
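
Once both User Journeys are set up, the comparison itself is simple: look at how many users enter each journey and how many complete it. A small illustrative sketch of that comparison (the counts and the `completion_rate` helper are made up for this example; they are not an Intelligent Analytics API):

```python
def completion_rate(completed: int, entered: int) -> float:
    """Share of users who finished a journey out of those who entered it."""
    return completed / entered if entered else 0.0

# Hypothetical counts read off the two User Journey reports, one per variant.
rate_a = completion_rate(completed=420, entered=1_000)  # User Journey for Variant A
rate_b = completion_rate(completed=510, entered=1_000)  # User Journey for Variant B

winner = "A" if rate_a > rate_b else "B"
print(f"Variant A: {rate_a:.1%}, Variant B: {rate_b:.1%} -> Variant {winner} performs better")
```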

Stopping the A/B Test

Once the A/B Test has been live for some time, you can measure and determine which variant, `A` or `B`, is performing better, and then stop the test.

To stop an ongoing A/B test, follow the below steps:

  1. Click the A/B Testing icon. The slide-in window appears.
  2. In the Stop A/B Test Experiment panel, choose the winning variant. Once the A/B test is stopped, the losing variant will be deactivated.
  3. Click Stop A/B Test Experiment.

Info

Any changes made to an A/B test (creation, modification, or stopping the test) happen only on Staging. These changes will be reflected in Production only once the bot is published.
