Apple is testing a local, offline version of its Dictation voice-input feature for iOS devices, according to strings of code found inside the iOS 7 beta. The code, discovered by Hamza Sood, is present in both iOS 7 betas but absent from iOS 6. Currently, when an iOS user dictates text, the device uploads the speech to the cloud, where it is converted into text. Because this relies on an internet connection and a cloud backend, it can mean transcription errors and long loading times, as well as some unwanted data usage…
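For a sense of what the cloud-vs-local split described above looks like to a developer, here is a hedged sketch using Apple's public Speech framework. This is purely an illustration, not the private code Sood found: `SFSpeechRecognizer` and its `requiresOnDeviceRecognition` flag are real APIs that Apple shipped much later (iOS 10 and iOS 13, respectively), and the `transcribe` helper name is our own.

```swift
import Speech

// Illustrative helper (our own naming) showing how Apple's public Speech
// framework eventually exposed the offline/online choice described above.
// A real app must first call SFSpeechRecognizer.requestAuthorization(_:).
func transcribe(audioFile url: URL, preferOffline: Bool) {
    guard let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US")) else {
        print("Speech recognition is unavailable for this locale")
        return
    }

    let request = SFSpeechURLRecognitionRequest(url: url)

    // Only force local recognition when the device supports it; otherwise
    // fall back to the server-backed path the article describes.
    if preferOffline && recognizer.supportsOnDeviceRecognition {
        request.requiresOnDeviceRecognition = true  // iOS 13+
    }

    recognizer.recognitionTask(with: request) { result, error in
        if let result = result, result.isFinal {
            print(result.bestTranscription.formattedString)
        } else if let error = error {
            print("Recognition failed: \(error.localizedDescription)")
        }
    }
}
```

The on-device path trades some accuracy for privacy, zero data usage, and no network latency — exactly the tradeoffs the beta strings hint Apple was weighing.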
Popular third-party Android browser Dolphin made its way to the App Store in August of last year, bringing highly customizable gestures, built-in translations, and a dock-style sidebar for quick access to tabs, bookmarks, and speed dial. Today, developer MoboTap Inc. pushed out an update to the iPhone app that, among other new features, introduces voice search functionality called “Dolphin Sonar.”
Dolphin Sonar is super easy to use and can do almost anything. Instead of typing, tap the microphone at the bottom left or just shake your phone (because who doesn’t want an excuse to do that!). Then say what you want to say and Dolphin will do the rest… use your voice to search the Web, find exactly what you’re looking for on sites like Facebook or eBay, bookmark your favorite website, and (like a real Dolphin!) use Sonar to navigate. Ask Dolphin to search on Facebook or create a new tab…all without having to type a single letter.
Other additions in the update include the return of the URL keyboard “by popular demand,” new search engine options that let you switch between four defaults, three font-size options for browsing, and a “Night Mode” that dims the screen with a single tap. Like the stock Safari browser, Dolphin now also saves images directly to your iPhone’s photo album. You will also get the usual stability and performance enhancements when you grab version 4.0 of the Dolphin browser on the App Store (iTunes link).
Nuance, the speech recognition company currently powering Apple’s Siri on the iPhone 4S, announced (via TechCrunch) that it is launching a new voice-controlled TV platform known as “Dragon TV.” Apple is, of course, expected to include Siri-like voice capabilities in the rumored Apple-branded HDTV, but Dragon TV has beaten it to the punch with a platform that will enable users to find “content by speaking channel numbers, station names, show and movie names.”
Nuance Communications Inc. (NASDAQ: NUAN) today unveiled Dragon TV, a unique voice and natural language understanding platform for TV, device and set-top box OEMs and service operators. Dragon TV makes finding and accessing shows, movies and content in today’s digital living room easy and fun for consumers.
Nuance provided a few examples of the voice commands that might work on the platform, such as “Go to PBS” or “Find comedies with Vince Vaughn,” but said a user’s commands could include “virtually anything.” The company also announced the platform will include social and messaging features, such as email, Twitter, messaging, Skype, and Facebook. Those features will also be voice-controlled, allowing commands such as “Send message to Julie: ‘Old School is on TBS again this weekend – super excited’”.
According to the press release, the Dragon TV platform is already available to television and device OEMs, with support for “all major TV, set-top box, remote control and application platforms.” As for specific platforms, the press release mentions Linux, Android, and iOS. There is, of course, a possibility that the technology behind the Dragon TV platform will land in a version of Siri for an Apple TV device.
Mike Thompson, Senior Vice President and General Manager of Nuance Mobile, said this regarding the announcement:
Update Sep 27 – Apple has sent “Let’s Talk, iPhone” ;) invites to the event.
It’s time to show our cards.
If you crack open the casing of the new iPhone, you will find significant upgrades over the iPhone 4. The new iPhone features Apple’s dual-core A5 processor, like the iPad 2, for even faster performance, better gaming, and drastically improved graphics. Apple didn’t stop there, though. Unlike the iPad 2, the new iPhone packs 1GB of RAM, according to a source familiar with the SoC’s manufacturing. That not only means better web browsing; more importantly, the new background tasks Apple will introduce in the new iPhone’s software will perform much better.
The new iPhone will also feature an upgraded camera system. In terms of hardware, the new camera has an 8-megapixel sensor that takes incredibly sharp, high-resolution shots, even in low-light conditions, thanks to its backside-illuminated sensor. Panorama photography references have also been found in the iOS SDK on multiple occasions, which means we’ll likely see that feature. Other than that, the camera front-end system is reportedly mostly the same.
The new iPhone also contains a Qualcomm Gobi baseband chip that allows it to operate on both GSM and CDMA networks. We can’t yet confirm or deny rumors that Apple is building a virtual SIM card system, however, or whether the device includes an NFC chip.
Although some may be happy with the new iPhone’s substantial internal hardware boosts, the new device’s biggest selling point is actually a software feature called Assistant. As we first revealed, Assistant is Apple’s Siri-inspired, system-wide voice navigation system. It so far appears that iPhone 4 and iPhone 3GS users will be left out of the fun, unfortunately, because the feature requires the A5 CPU and the additional RAM.
Everything you could possibly want to know about Assistant is after the break…
Apple’s purchase of Siri in early 2010 and its partnership with Nuance in 2011 have many hoping that Apple has something like speech-to-text or voice navigation up its sleeve for iOS 5. One of Android’s remaining advantages over iOS is its system-wide Voice Actions technology.
Unfortunately, WWDC and the iOS 5 announcement came and went, and nothing related to voice navigation was announced. Even so, the Apple–Nuance partnership has been confirmed by way of Nuance voices in Apple’s OS X Lion and Nuance speech-to-text functionality referenced in Apple’s internal settings modules.
But that doesn’t mean Apple isn’t hard at work at this very moment trying to cram some native OS level voice recognition technology into iOS 5 before launch.
Coupled with Nuance speech-to-text, Apple appears to be planning to take the fruit of its Siri purchase and fully integrate it into this fall’s release of iOS 5. Because these new features have yet to appear in iOS 5 on the iPhone 4 or iPhone 3GS, Apple might be saving them as an iPhone 5 exclusive. That would be akin to Apple’s decision to make Voice Control and video recording exclusive to the iPhone 3GS, even though both could technically function on earlier iPhone models. As you can see in Siri’s promotional video above, the company advertises the service as “your virtual personal assistant.”
As you can see in the screenshot above, from an Apple iPhone test unit, Apple is currently developing and testing a new iOS feature called “Assistant.” This screenshot, from a reliable source, is corroborated by our own SDK findings (below). The source did warn, though, that development is not yet complete – the feature just went into testing – and may not be finished by the time the next iPhone ships.
More info after the break…