Poor old Siri doesn’t seem to get much love from many users. When I recently mentioned it as the reason I upgraded from the iPhone 4 to the 4S, quite a few disparaging comments were made about the service. Our discovery this weekend that Apple now considers Siri good enough to lose the beta tag prompted Gizmodo to wonder who actually uses it.

I thought its reclassification as a fully-fledged iOS feature made this a good time to persuade those who’ve abandoned the assistant to give Siri another chance … 

Let’s start with the obvious: when Siri first launched, it wasn’t very good. It certainly didn’t get anywhere close to living up to the promises made in the ads. It would fairly often fail to understand us. Sometimes it would just sit there displaying that annoying ‘three purple dots’ animation. When it did understand us, it would often fail to answer our question.


I was a big Dragon Dictate user at the time, and my experience with Dragon probably gave me more patience with Siri than most people had. You see, Dragon also wasn’t very good out of the box. But I persisted. I did more training exercises with it, and I painstakingly used voice commands to correct its mistakes (the only way it learns). The first couple of weeks were very hard work, and I came close to giving up.

But I didn’t, and after a month, my Dragon experience had been transformed. Accuracy was about 98 percent. I was easily dictating all my emails, chat messages and much of my work, with just the odd manual correction here and there. I was saving masses of time.

So when my early experiences with Siri were also a little frustrating, I stuck with it. And just like Dragon, the more I used it, the better it got. Within a month, its accuracy was, if not quite at Dragon levels, extremely high. I now rarely use the keyboard on my iPhone, dictating all my texts, emails, calendar appointments, reminders and notes.


Many people didn’t get that far with it. If you’re one of them, I think it’s helpful to view Siri as two separate components. First, dictation. To allow you to dictate things like texts and emails, all Siri has to do is convert the speech into text. It doesn’t actually need to understand the content; it only has to recognise the words.

Second, understanding the content of those words. That’s much tougher. Couple transcription mistakes with the limited artificial intelligence built into Siri, and it’s not surprising that many started tearing out their hair and writing it off as a lost cause.

My suggestion is to view Siri primarily as a dictation device. Instead of asking it to do stuff, ask it to type stuff. Stick with it for a couple of weeks (Apple’s Siri servers keep individual voice files for each user to improve recognition over time), and my experience is that you’ll then have a very capable transcription device.

Image: technomag.com

Which brings us to the question-and-command side of things. The number of phrases Siri understood at launch was extremely limited, partly because it had limited functionality, and partly because the Siri engineers didn’t know all of the different wordings people might use to make the same request. Siri might have been expecting “What are my appointments this afternoon?” while you ask “What have I got on this afternoon?” and I ask “What am I up to later today?”.

But that’s been another massive part of Siri’s learning experience. As of next month, Siri will have had two years’ experience of what tens of millions of people ask and how they ask it. All of that experience has been used to teach Siri to understand a much broader range of phrasings, so something it didn’t understand two years ago, it may well understand now. Once you’ve given Siri a fortnight to learn your voice, give it a try with some commands: you might be pleasantly surprised.

Photo: londonminiguide.com

None of which is to say that Siri is perfect. One obvious weakness is that all the processing happens online: Siri has to digitise your voice, send that data to an Apple server, and then wait for the decoded text to be sent back. That adds delay, and it means you can’t use Siri at all when you don’t have a net connection (like on London’s tube network).

That may be changing soon. The A5 chip in the iPhone 4S likely didn’t have the processing power needed for local voice recognition. It’s not certain that the latest A7 chip in the iPhone 5s does either, but the code needed for offline Siri use is sitting, currently inactive, in iOS 7. It may well be a feature that Apple plans to switch on soon. Once that happens, the days of waiting for a server response should be at an end.

Image: n3rdabl3.co.uk

That still leaves one thing I find enormously frustrating about Siri, and that’s the lack of support for third-party apps. My iPhone knows where I am, and knows where I live. I have a trains app on it that knows the time of my next train home. But two years on, I still can’t ask Siri “What time is my next train home, and what platform does it go from?” because Siri can’t interrogate the app. That, to me, is pretty ridiculous.

Apple does need to open up Siri to third-party apps, and I’m sure it will: it’s a question of when rather than if. Once it does, and you can ask your iPhone any question that can be answered by any of your apps, I’m sure that even the most dismissive will decide to give Siri that second chance.