When the original developers of Siri jumped ship to build a competing intelligent assistant called Viv, they dismissed Apple’s implementation of Siri as just ‘a clever AI chatbot.’ We’ve since heard that Apple plans to offer a Siri SDK that will allow Siri to call on the capabilities of third-party apps (something I called for last year) – but it seems the company is also aiming to go beyond even Viv’s capabilities.
Apple last year acquired British intelligent assistant developer VocalIQ – a tool specifically geared to truly conversational queries – and a source who spoke to Business Insider gave some insight into just how intelligent Siri could become when infused with this tech …
The company tested VocalIQ against Siri, Google Now and Cortana on complex natural language queries.
For example, imagine asking a computer to “Find a nearby Chinese restaurant with open parking and WiFi that’s kid-friendly.” That’d trip up most assistants, but VocalIQ could handle it. The result? VocalIQ’s success rate was over 90%, while Google Now, Siri, and Cortana were only successful about 20% of the time.
While other companies like Viv and Hound have similar capabilities, those are all ‘session-based.’ That is, while you may be able to ask a series of linked questions like ‘Who designed Waterloo Bridge?’ followed by ‘What else did he design?’, that context is forgotten as soon as you ask something new.
Where VocalIQ differs is that it remembers context permanently.
Let’s go back to the Chinese restaurant example. What if you change your mind an hour later? Simply saying something like “Find me a Mexican restaurant instead,” will bring you new results, while still taking into account the other parameters like parking and WiFi you mentioned before. Hound, Siri, and any other assistant would make you start the search session over again. But VocalIQ remembers. That’s more human-like than anything available today.
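To make the distinction concrete, the behavior described above amounts to slot-filling with parameters that persist across turns instead of being discarded when a session ends. Here’s a minimal sketch of that idea – all names and structure are illustrative assumptions, not anything from VocalIQ’s actual system:

```python
# Illustrative sketch of "persistent context" slot-filling.
# Class and slot names are hypothetical, not VocalIQ's real API.

class AssistantContext:
    """Remembers query parameters (slots) across turns until explicitly changed."""

    def __init__(self):
        self.slots = {}

    def update(self, **new_slots):
        # Merge the newly mentioned parameters over the remembered ones;
        # slots the user didn't mention this turn (parking, WiFi, ...) persist.
        self.slots.update(new_slots)
        return dict(self.slots)


ctx = AssistantContext()

# "Find a nearby Chinese restaurant with parking and WiFi that's kid-friendly."
first = ctx.update(cuisine="Chinese", parking=True, wifi=True, kid_friendly=True)

# An hour later: "Find me a Mexican restaurant instead."
# Only the cuisine slot changes; the earlier constraints still apply.
second = ctx.update(cuisine="Mexican")
```

A session-based assistant, by contrast, would throw away `ctx` entirely between queries and force the user to restate every constraint.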
VocalIQ also claims to be far better at parsing queries in the first place.
VocalIQ can also filter out extraneous noise to figure out exactly what you’re saying, thus making it more accurate than Siri is today. It’s able to take in all the noise in an environment — the TV, kids shouting, whatever — and determine with a high probability which sound is actually the user’s query. It can even learn to adapt to different accents over time to improve accuracy.
Apple may have had one other reason for the acquisition: VocalIQ had a particular focus on intelligent assistant use in cars, saying that it considered it a failure if the user had to look at the screen at any point during a query. This contrasts strongly with Siri, which all too often responds with ‘Here’s what I found on the web.’
Apple already seems determined to shed that screen dependence, with rumors that it is working on an Amazon Echo-style speaker to act as a Siri interface in the home.
BI’s source had no knowledge of Apple’s plans to integrate the tech into Siri, but speculated that we should perhaps expect a gradual transition rather than a Siri 2.0 launch. Either way, it can’t come fast enough for me.