iOS 16 brings with it a much-improved dictation experience – one that lets you seamlessly switch between typing and speaking. I’ve already found it to be a massive improvement over iOS 15, but how does it compare to Google’s Pixel 6, which was unveiled last year with a new ‘assistant voice typing’ feature that brought similar hands-free typing and editing?
By default, dictation on OS X is initiated with a double-press of the function (fn) key on your Mac’s keyboard. But did you know that you can also start dictation hands-free using only your voice? In this brief tutorial, we’ll show you how.
Nuance, the voice recognition and productivity software company behind the iOS keyboard’s Dictation feature, today revealed a series of updates to its applications, with a new cloud-based synchronization service at their core. Nuance gave us a demonstration of the new iOS and Mac apps last week, and we came away impressed with the accuracy, speed, and capabilities of the upgraded platform.
Today’s new beta update to iOS 8 features a change to the way the built-in dictation system works. In previous versions of iOS, dictating text into an app would send your voice to Apple’s servers once you finished speaking to be analyzed, returning the converted text. Siri used to function the same way, but with iOS 8 Apple made changes that allowed voice input to be streamed to the server for conversion while the user was still speaking.
As of iOS 8 beta 4, the system keyboard’s dictation feature works the same way. Just like in Siri, you can now see each word appear almost immediately as you speak. This lets you catch errors as they happen and brings the various voice-powered features of iOS in line with one another.
Back in 2012 we noted that Apple was hiring engineers to help localize Siri into a number of languages the feature does not yet support. Those included Arabic, Norwegian, Dutch, Swedish, Finnish, and Danish, and recently Apple has added job listings for three more languages: Russian, Brazilian Portuguese and Thai. Apple also posted more recent job listings for the languages it first started hiring for back in 2012.
While Apple didn’t announce any new languages for Siri coming in iOS 8 when it previewed the new operating system earlier this month, it’s always a possibility languages could be added in time for its release this fall.
Apple has yet to add support for the languages mentioned above that it started hiring for a couple of years back. Currently, Apple lists the following languages and localizations as supported by Siri:
During its unveiling of iOS 8 and OS X 10.10 Yosemite yesterday, Apple mentioned that it’s adding 24 new dictation languages, but it didn’t specify what those languages would be. Dictation, a feature available on both iOS and OS X, uses speech-to-text technology powered by Nuance to let users input text using only their voice rather than a keyboard or touchscreen.
Apple has gone from just 8 languages (with a few variations for some) to over 30 in Yosemite. In case you’re curious if your language will make the cut by the time the new operating systems are released this fall, below we’ve included a full list of new supported languages and variations by country:
Using our voice to control computers has never really taken off. For many of us, voice recognition technology wasn’t even a consideration until features like dictation and Siri arrived on our iPhones and iPads. There’s good reason, too: the voice recognition features built into our devices have always had a reputation for being half-baked. They simply aren’t accurate and consistent enough to replace our tried and trusted mouse and keyboard or touchscreen. While half-decent dictation features come with every Mac (and are powered by Nuance’s technology), the voice recognition features in the latest version of Nuance’s Dragon Dictate for Mac go well beyond simply converting speech to text.
Apple is testing a local, offline version of Dictation voice input for iOS devices, according to strings of code found inside of the iOS 7 beta. The code, which was discovered by Hamza Sood, is located inside of both iOS 7 betas, but it is not present in iOS 6. Currently, when an iOS user uses their voice to input text using Dictation, the iOS device will use software that uploads your speech to the cloud to be converted into text. Because this relies on an internet connection and a cloud backend, this could sometimes mean errors and long-loading times, as well as some unwanted data usage…
The iOS 5.1 beta 3 apparently lacks new features or exciting hints at the future of iOS devices, but we have discovered something potentially major: references to Siri Dictation. Our own tipster Sonny Dickson was looking through the iOS 5.1 beta 3 Settings application on the iPad and discovered a new section in the keyboard menu called “About Privacy and Dictation.” When opened, as shown above, the iPad presents the user with the standard legal literature and feature information for Siri Dictation.
Dictation is not actually functional on the iPad 2 running iOS 5.1 beta 3, so perhaps this will be an iOS 5.1 launch feature for the iPad, or it may be an iPad 3-exclusive feature – similar to how the iPhone 4S exclusively gained Siri and Siri Dictation support in iOS 5.0. We’re also hearing that this link/document is appearing on Retina iPod touches.
On the iPhone 4S, Apple does not have a specific menu related to “Dictation and Privacy” in the keyboard settings panel. That literature is reserved exclusively for the Siri preferences under General settings and covers both Dictation and Siri. This may counter the suggestion that the new iPad Dictation menu is simply carried-over code from the iPhone 4S. It may also mean that the iPad’s Siri support could be limited to Siri Dictation, but that is pure speculation. Separately, we heard months ago that Apple was internally prototyping a version of the full Siri experience for the iPad, but we have not heard of any new developments since.