News that Apple has developed its own generative AI model – dubbed Apple GPT – sparked a little flurry of excitement yesterday, with John Gruber noting that it briefly sent AAPL’s stock price spiking by 2.7%.
Even investors seemed to quickly realise that the news doesn’t mean much, however, as most of that gain was lost by the end of the day …
Apple GPT isn’t really news
Apple has long seen artificial intelligence as a hugely important tech. Indeed, every iPhone has contained a Neural Engine – a dedicated AI chip – since the iPhone X all the way back in 2017.
So the revelation that Apple has developed its own Large Language Model (LLM) isn’t remotely surprising. Indeed, I agree with Gruber that it would be more surprising were this not the case.
It won’t be a customer-facing tool anytime soon
Earlier this year, when ChatGPT was shiny and new, I predicted that we could expect Apple to take advantage of this type of AI tech – but not anytime soon.
On the question of when and how Apple will use this type of tech to make Siri smarter, it’s clear that it must do so at some point, else it really will get left behind. It is, however, equally clear that it is unlikely to do so quickly, for the reasons I’ve given.
I offered two reasons for this. One, Apple simply never rushes to market with any new tech – it watches and waits while it figures out how to do things better rather than faster.
Two, generative AI is way, way dumber than it appears. That’s a problem for any public-facing LLM, but a far bigger one if your goal is to have that model talk to you.
If there’s one thing more dangerous than not knowing something important, it’s asking for the information and being given the wrong answer with great confidence […]
Siri is designed to provide spoken answers to verbal questions. Even more annoying than Siri ‘answering’ a question with “Here’s what I found on the web,” would be “Here’s a lengthy answer which you first have to listen to, then I’ll note that it may not be correct, and recommend that you search the web.”
Yep, Siri is still disappointingly dumb
Siri’s comparative lack of intelligence is a complaint dating back many years. Way back in 2015, I discussed where I’d like Siri to get to, suggesting this type of interaction:
Hey Siri, arrange lunch with Sam next week
Working – I’ll get back to you shortly …
Ok, I arranged lunch with Sam for 1pm next Wednesday at Bistro Union at Clapham Park
I outlined the way in which this could work:
- I pre-approve contacts who can see my iCloud calendar purely at a busy/free level
- I also choose favorite contacts who are authorized to make appointments with me
- Siri notes the restaurants I like, and cross-references them with ones Sam likes
- It finds a suitable match, and makes the appointment for us
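
None of that final matching step requires exotic AI. Purely to illustrate the kind of logic involved, here’s a minimal sketch in Swift – the types, names, and data are all hypothetical, invented for this example, and don’t reflect any real Siri or Apple API:

```swift
import Foundation

// Hypothetical model types – purely illustrative, not real Apple APIs.
struct TimeSlot: Hashable {
    let day: String   // e.g. "Wednesday"
    let hour: Int     // 24-hour clock, e.g. 13 for 1pm
}

struct Person {
    let name: String
    let freeLunchSlots: Set<TimeSlot>   // shared at a busy/free level only
    let likedRestaurants: Set<String>   // restaurant preferences Siri has noted
}

/// Finds a lunch slot both people are free for, at a restaurant both like.
func arrangeLunch(between me: Person, and friend: Person) -> (TimeSlot, String)? {
    let commonSlots = me.freeLunchSlots.intersection(friend.freeLunchSlots)
    let commonRestaurants = me.likedRestaurants.intersection(friend.likedRestaurants)

    // Alphabetical day ordering is good enough for a sketch.
    guard let slot = commonSlots.sorted(by: { ($0.day, $0.hour) < ($1.day, $1.hour) }).first,
          let restaurant = commonRestaurants.first else {
        return nil
    }
    return (slot, restaurant)
}

// Example usage with made-up data:
let me = Person(
    name: "Ben",
    freeLunchSlots: [TimeSlot(day: "Wednesday", hour: 13), TimeSlot(day: "Friday", hour: 12)],
    likedRestaurants: ["Bistro Union", "The Dairy"]
)
let sam = Person(
    name: "Sam",
    freeLunchSlots: [TimeSlot(day: "Wednesday", hour: 13)],
    likedRestaurants: ["Bistro Union"]
)

if let (slot, venue) = arrangeLunch(between: me, and: sam) {
    print("Lunch with \(sam.name): \(slot.day) at \(slot.hour):00, \(venue)")
}
```

The hard part, of course, isn’t the matching – it’s the permissions, the calendar access, and the natural-language front end needed to trigger it reliably.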
Eight years later, we’re still nowhere near that kind of capability.
I still have a couple of Amazon Echo speakers precisely because they can do things which Siri still can’t, even if they rely on a horribly clunky workaround to do it.
But this stuff is all about trade-offs
We’ve long noted one trade-off with AI, and that’s capability versus privacy.
Apple could choose to go down the same route as Google. It could use all of the data it has about me, tie Siri queries to my Apple ID, and deliver the same level of intelligence and proactive suggestions as Google Home. If it did so, nobody would be saying that Siri lags significantly behind Google’s AI. But Apple makes a deliberate choice not to do so.
Over the years since then, you’ve consistently told us you think Apple makes the right call here.
There’s a similar trade-off today with LLMs, this time between the illusion of intelligence and the appropriateness of response. We’ve again used examples of this problem to show why Apple is unlikely to be jumping on this bandwagon anytime soon.
Bing told one user they were “wrong, confused, and rude” and demanded an apology, and in another chat offered a Nazi salute phrase as a suggested response.
Kevin Roose of the New York Times published an incredible chat session he had with Bing, in which the chatbot declared that it was in love with him.
When The Telegraph asked Bing to translate some text, the chatbot demanded to be paid for the work, and supplied a (fictitious) PayPal address for the payment.
When computer scientist Marvin von Hagen told Bing that he might have the hacking skills to shut down the chatbot, and asked a follow-up question, Bing said it would prioritize its own survival over that of humans.
Bing also had a few choice words for engineer and physicist Pidud, calling him a psychopath and a monster, among other things.
We can expect Apple’s usual approach here
Apple isn’t going to risk exposing its customers to a tech which clearly isn’t anywhere near ready for prime-time – any more than it was willing to sacrifice user privacy to make Siri smarter.
Right now, LLMs are very much at the “move fast and break things” stage. Google, Meta, and Microsoft have all demonstrated a willingness to do this; Apple has not, and that isn’t going to change.
Photo: Alex Knight/Unsplash