Voice SDK’s dictation feature enables your app to transcribe speech to text in real time. Unlike voice commands, dictation does not process the resulting text with natural-language understanding (NLU); it is designed as a text input modality rather than a command interface. Although you could parse the output with regular expressions, the text is formatted for human readability rather than programmatic processing. For voice command recognition, use Voice SDK’s AppVoiceExperience actor, which provides more accurate results through NLU.
Getting Started
To use dictation in your app, add an AppVoiceExperience actor to your map that connects to Wit.ai or platform services. This is similar to adding a voice command, except that you also need a WitDictationExperience actor.
Adding Dictation to your Map
In Wit.ai, create an app to use for dictation. This can be the same one you used for voice commands or a dedicated app specific to dictation. You do not need to train any utterances for this app.
Go to Blueprint > New Empty Blueprint Class... > All Classes, search for AppVoiceExperience, select AppVoiceExperience, and name it BP_AppVoiceExperience.
Drag BP_AppVoiceExperience onto the map.
In World Outliner, select BP_AppVoiceExperience and go to Details > Voice > Configuration. Set it to the Wit configuration file you created earlier.
Go to Blueprint > New Empty Blueprint Class... > All Classes, search for WitDictationExperience, select WitDictationExperience, and name it BP_DictationExperience.
Drag BP_DictationExperience onto the map.
Add event handling to BP_DictationExperience as needed.
Starting Dictation
To start dictation, call the ActivateDictation method from BP_DictationExperience.
Stopping Dictation
To stop dictation, call the DeactivateDictation method from BP_DictationExperience.