Nuance created the speech recognition backend for the original Siri mobile app, which later became the built-in virtual assistant on the iPhone 4S, and has powered the service ever since it launched. However, a new report suggests Apple may be looking to replace the company’s technology with a newer, faster system that could deliver more accurate results.

A new Wired report cites several recent Siri-related Apple hires as evidence that the company is working on something big for the system’s next update. This isn’t really a new idea: rumors have been swirling since 2011 that Apple was investigating its own speech-to-text solution. That same year, Siri co-founder Norman Winarsky (not to be confused with current Siri Speech Manager David Winarsky) told 9to5Mac:

Theoretically, if a better speech recognition comes along (or Apple buys one), they could likely replace Nuance without too much trouble.

The following year, the company recruited the co-founder of Amazon’s A9 search engine to take over direction of the Siri project. Siri gained some new search functions a little later, but no notable speech recognition enhancements.

Two years after the first rumor surfaced, it was once again suggested that Apple was looking into the possibility of replacing Nuance—this time as part of a secret project in the Cupertino company’s Boston office. A few months later word got out that Apple had apparently acquired another personal assistant app called Cue. The company also made another similar acquisition in 2013: a speech-recognition firm called Novauris.

The Wired article points to a few specific 2013 hires that are key to understanding how Apple is improving Siri. Alex Acero once worked at Microsoft (presumably on the Cortana project or something similar), but was picked up by Apple last year and now serves on the Siri team. Former Nuance Siri project manager Gunnar Evermann is now working at the aforementioned Boston office, where a new voice recognition engine is said to be in the works.

One hire that Wired doesn’t mention is that of Larry Gillick in March of 2013. Gillick’s current title is Chief Speech Scientist on the Siri team. He operates out of Cambridge, Massachusetts, where Apple has a very secretive office in the Cambridge Innovation Center. Between 2007 and 2009, however, Gillick was VP of Research at—where else?—Nuance.

Of course, these 2013 hires aren’t the only ones Apple has made. The company also brought on a host of speech engineers, researchers, and experts throughout 2012, though none appear to have been as high-profile as the 2013 hires, and few (if any) seem to have come from Nuance.

When you put all of the rumors dating back to 2011 together with the more recent hiring spree, all signs seem to point to a long-running, Boston-based project focused on replacing Siri’s original Nuance backend with an improved version built in-house by former Nuance employees poached by Apple.

While it doesn’t appear we’ll be seeing this massive change in iOS 8, it’s possible that it could appear in the next iteration of the mobile operating system, or perhaps even be rolled out on the speech recognition servers without requiring an OS update at all (one possible advantage of processing all voice data remotely rather than directly on the handset).

Of course, we’ll still be getting some upgrades to Siri in 2014. With iOS 8, the assistant will gain new song-recognition features and an “always-on” mode while plugged into a charger. It will also gain the ability to stream audio data to the server for faster processing, rather than the old method of sending the entire voice recording at once. Apple is also constantly adding support for new languages to the system. And now it seems the company is finally putting some elbow grease into improving the software’s speech recognition capabilities.
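The streaming change described above can be sketched roughly like this (a minimal illustration only; the function names and chunk size are invented for this sketch and are not Apple’s actual protocol):

```python
# Illustrative sketch of batch vs. streaming audio upload.
# Nothing here reflects Apple's real Siri protocol; it only shows
# why streaming lets the server start recognition sooner.

def batch_upload(recording: bytes) -> list[bytes]:
    """Old approach: wait for the full recording, then send one payload."""
    return [recording]

def stream_upload(recording: bytes, chunk_size: int = 4096) -> list[bytes]:
    """Streaming approach: send fixed-size chunks as audio is captured,
    so the server can begin recognizing speech before the user finishes."""
    return [recording[i:i + chunk_size]
            for i in range(0, len(recording), chunk_size)]

audio = bytes(10_000)  # stand-in for a short voice recording

assert batch_upload(audio) == [audio]          # one big request
chunks = stream_upload(audio)
assert b"".join(chunks) == audio               # chunking loses no data
assert len(chunks) == 3                        # 4096 + 4096 + 1808 bytes
```

The practical upside is latency: with streaming, recognition of the first chunk can overlap with capture of the last, instead of the server sitting idle until the whole recording arrives.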


10 Responses to “Why is Apple hiring Nuance engineers? Apparently to replace Siri’s Nuance-powered backend”

  1. Mark Dowling says:

    Would be nice to hear a natural voice for a change.


  2. confluxnz says:

    FYI, the speech recognition technology that powers Siri was not invented nor developed by Nuance. It is the life work of Jim and Janet Baker, who lost their creation through the alleged negligence and ineptness of Goldman Sachs bankers, whom they trusted to manage the acquisition of their company.

    ScanSoft acquired the Dragon Dictate technology through the ensuing bankruptcy auction and subsequently acquired – and assumed the name of – Nuance.


    • Jassi Sikand says:

Yes, but subsequent development, including Siri in iOS 7 and 8, was done by the current Nuance. It is customary to use the most current name rather than the original name for credit purposes.


The craziest thing about what you said is that Apple acquired Novauris, a speech recognition company created by the Bakers, and with that purchase Dr. James Baker now works for Apple.


      • observer1959 says:

That is awesome, because I had read the Baker story over the years and it made me sick. I had wished a crew like the one in the TV show Leverage had been around to help them. I hope their dreams finally come to fruition with Apple and they come up with something game-changing.

Also, with CarPlay coming, it makes sense that Apple wants to really improve the Siri product.


  3. alanaudio says:

My guess is that Apple does not want the searches and data from Siri users appearing on third-party servers. Apple can better guard users’ privacy if that data initially passes only through Apple’s servers, and then onward to other companies’ servers only when necessary.

I would also mention that Apple tends to quietly put its ducks in a row before most people work out what’s happening. Maybe processing speech queries in-house would be a precursor to Apple launching its own search engine?


  4. hmurchison says:

I know it’s semantics (pun intended), but technically Nuance only forms the front end. Tom Gruber of Siri explains this modularity a bit in this quote:

    “We use third party speech systems, and are architected so we can swap them out and experiment. The one we are currently using has millions of users and continuously updates its models based on usage”


  5. Loren Sims says:

    Better speech recognition for Siri? Things just keep getting better and better!!