Today’s new beta update to iOS 8 features a change to the way the built-in dictation system works. In previous versions of iOS, dictating text into an app would send your voice to Apple’s servers only after you finished speaking, where it was analyzed and returned as converted text. Siri used to function the same way, but with iOS 8 Apple made changes that allow voice input to be streamed to the server for conversion while the user is still speaking.
As of iOS 8 beta 4, the system keyboard’s dictation feature now works the same way. Just like in Siri, each word appears almost immediately as you speak, letting you catch errors as they happen and bringing the various voice-powered features of iOS in line with one another.
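For developers curious what streaming recognition looks like in practice, here is a minimal Swift sketch using Apple’s public Speech framework (which shipped later, in iOS 10). The keyboard’s dictation itself relies on private APIs, so the classes below (SFSpeechRecognizer, SFSpeechAudioBufferRecognitionRequest) are only illustrative of the same partial-results behavior, not how iOS 8 implements it.

```swift
import AVFoundation
import Speech

// Illustrative sketch: streaming speech recognition that reports partial
// results while the user is still speaking. Assumes microphone and speech
// recognition permissions have already been granted.
final class StreamingDictation {
    private let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US"))
    private let audioEngine = AVAudioEngine()
    private var request: SFSpeechAudioBufferRecognitionRequest?
    private var task: SFSpeechRecognitionTask?

    func start() throws {
        let request = SFSpeechAudioBufferRecognitionRequest()
        request.shouldReportPartialResults = true   // stream words as they are recognized
        self.request = request

        // Feed microphone audio into the recognition request as it arrives.
        let inputNode = audioEngine.inputNode
        let format = inputNode.outputFormat(forBus: 0)
        inputNode.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
            request.append(buffer)
        }

        task = recognizer?.recognitionTask(with: request) { [weak self] result, error in
            if let result = result {
                // Each partial result arrives almost immediately as you speak.
                print(result.bestTranscription.formattedString)
            }
            if error != nil || (result?.isFinal ?? false) {
                self?.stop()
            }
        }

        audioEngine.prepare()
        try audioEngine.start()
    }

    func stop() {
        audioEngine.stop()
        audioEngine.inputNode.removeTap(onBus: 0)
        request?.endAudio()
        task = nil
        request = nil
    }
}
```

Setting `shouldReportPartialResults` to `false` would mimic the old batch behavior, returning a single transcription only after the audio ends.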
Check out the video below to see it in action:
[youtube https://www.youtube.com/watch?v=UYgvqsfoovg&feature=youtu.be]
OS X gained the same capability when Mavericks launched last year, in a mode called “Enhanced Dictation” that also allows offline speech-to-text conversion. Unfortunately, offline dictation doesn’t appear to be available on iOS just yet, though Apple is reportedly testing that feature now.
I always liked Google Search’s streaming voice recognition. Seems more accurate and intuitive. Glad to see Apple playing catch-up with this.
It’s not catch-up, it’s Apple making sure it works properly for users before just throwing it out with issues in a final build, unlike Android, which is like running beta software daily.
I would love this feature to be offline like on the Mac. It’d be great for Siri to work offline & handle the regular stuff easily & quickly.
I think they’ll do it someday in the future.
This works surprisingly well!
I’ve noticed a huge improvement in accuracy of voice recognition with iOS 8. Is there a way to capitalize on this with transcribing audio files?