The Haptik iOS SDK also supports voice as a medium to interact with the bot or agent. If the user grants the appropriate permissions, they can interact with the bot using natural speech.
Add the Speech subspec along with HaptikLib in your Podfile to make use of voice capabilities:

use_frameworks!

target 'YourTargetName' do
  pod 'HaptikLib'
  pod 'HaptikLib/Speech'
end
Make sure you add the following permission attributes to your app's Info.plist:

Privacy - Microphone Usage Description, to let the user record their speech for processing.
<key>NSMicrophoneUsageDescription</key>
<string>Your microphone will be used to record your speech when you press the "Mic" button.</string>
Privacy - Speech Recognition Usage Description, to determine the words spoken by the user.
<key>NSSpeechRecognitionUsageDescription</key>
<string>Speech recognition will be used to determine which words you speak into this device's microphone.</string>
After adding the required Speech subspec, you just need to turn on a BOOL flag for the Haptik conversation to make use of voice capabilities. The conversation screen will then show a mic button in the composer bar.
See the example below:
// Objective-C
[HPConfiguration shared].useVoice = YES;

// Swift
HPConfiguration.shared().useVoice = true
- If the required pod subspec is not added, tapping on the mic button will fail silently.
- If the required permissions are not added, tapping on the mic button will result in an application assertion (a crash in debug builds).
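Given the assertion behavior above, you may want to request the microphone and speech-recognition permissions up front and only enable voice once both are granted. The sketch below uses Apple's AVFoundation and Speech frameworks; the function name is illustrative and not part of HaptikLib:

```swift
import AVFoundation
import Speech

// Request both permissions required by the voice feature, then enable
// voice on the Haptik configuration only if both were granted.
// Note: requesting authorization still requires the Info.plist keys above.
func enableVoiceIfPermitted() {
    AVAudioSession.sharedInstance().requestRecordPermission { micGranted in
        SFSpeechRecognizer.requestAuthorization { speechStatus in
            DispatchQueue.main.async {
                let allowed = micGranted && speechStatus == .authorized
                HPConfiguration.shared().useVoice = allowed
            }
        }
    }
}
```

Call this before presenting the conversation screen so the mic button works on the first tap instead of prompting (or asserting) mid-conversation.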