Apple’s AI plans uncertain, as pioneers argue about the risks of the technology

There’s been much speculation about Apple’s AI plans, as the Cupertino company has so far kept a low profile while both Google and Microsoft dive into generative artificial intelligence in a big way. CEO Tim Cook didn’t shed much light when asked about it last week, saying only that AI is “certainly interesting” but there are “a number of issues with the tech.”

The seriousness of those issues is very much in debate. Apple cofounder Steve Wozniak last month joined leading AI academics in calling for a pause in advanced AI development – but two AI pioneers have markedly different views about the risks …

A brief history of chatbots, in four paragraphs

There are many technologies where progress seems very incremental until – bang! – a huge change seems to happen overnight. Generative AI tech like ChatGPT has been one of those.

For more than half a century, people have been working on trying to make computers capable of passing the Turing test: generating interactions which cannot be distinguished from those of a human being.

ELIZA was one of the earliest chatbots to fool people, way back in 1964. All it did was reflect back what someone said and ask counsellor-style questions, but some of those testing it were convinced it was a person.

ChatGPT took this ability to provide human-like responses to a whole new level. In essence, all the tech actually does is study text relevant to the question asked, or instruction given, and use statistics to predict which word is most likely to come next. It keeps doing this until it has formed a number of sentences or paragraphs. But it has grown increasingly sophisticated, to the point where the output can be incredibly convincing.
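To make that concrete, here’s a toy Python sketch of the underlying statistical idea: a simple bigram model that always picks the word it has most often seen following the previous one. The corpus and function names here are invented for illustration, and real systems like ChatGPT use enormous neural networks trained on vast amounts of text rather than raw word counts – but the “predict the next word, then repeat” loop is the same.

```python
# Toy illustration of next-word prediction: a tiny bigram model.
# This is only a sketch of the statistical idea; real systems like ChatGPT
# use large neural networks trained on huge corpora, not simple word counts.
from collections import Counter, defaultdict

# A deliberately tiny, made-up training corpus.
corpus = "the cat sat on the mat and the dog slept on the rug".split()

# Count how often each word follows each other word.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def generate(start, length=6):
    """Repeatedly append the statistically most likely next word."""
    words = [start]
    for _ in range(length):
        candidates = next_word_counts.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the cat sat" -- fluent-looking, but not grounded in meaning
```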

Questions asked about Apple’s AI plans

As soon as ChatGPT and Bard made the headlines, people started asking pointed questions about Apple’s AI plans, and suggesting that the company was being left behind.

I voiced my own view that Apple was likely to be cautious in using this tech to make Siri smarter, for two reasons.

First, Apple’s trademark approach to almost all new tech: wait and see. The company rarely tries to be early to market with any new technology. Instead, it watches what others do, then tries to figure out how to do it better.

But there’s a more specific second reason. Systems like ChatGPT are good at appearing to be smart. They write very convincingly, because they have been trained on millions of documents written by human beings, and they essentially borrow liberally from all they’ve seen to replicate everything from specific phrases to document structures.

But they also get all their apparent knowledge from those same millions of sources. They have no way of telling truth from fiction; a reasoned stance from prejudice; statistical evidence from bias; reputable data from junk.

The wisdom of that caution was quickly demonstrated by some Bing chatbot disasters, and more.

Cook was keen to point out last week that Apple already uses AI, and not just for Siri.

The executive listed some of Apple’s achievements in the AI segment, such as using the technology to create features like Fall Detection, Crash Detection, and the Apple Watch’s ECG. “These things are not only great features, they’re saving people’s lives out there,” Cook said.

But he also referenced “issues” with the tech.

Not even the top AI experts can agree on the risks

Not even the most famous AI pioneers in the world can reach agreement on the nature or seriousness of those issues.

A more urgent threat than climate change

Geoffrey Hinton is often referred to as “one of the godfathers of AI.” He is a key figure in the development of neural networks, has written a great many papers, and won numerous awards for his work in the field.

He was so concerned about the dangers of current AI work that he left his role at Google so that he could speak freely about the risks.

He told Reuters just how concerned he was.

Artificial intelligence could pose a “more urgent” threat to humanity than climate change, AI pioneer Geoffrey Hinton told Reuters in an interview on Friday […]

He added: “With climate change, it’s very easy to recommend what you should do: you just stop burning carbon. If you do that, eventually things will be okay. For this it’s not at all clear what you should do.”

However, he doesn’t think AI research should be paused: on the contrary, he thinks more work is vital to figuring out how to respond to the risks.

I’m in the camp that thinks this is an existential risk, and it’s close enough that we ought to be working very hard right now, and putting a lot of resources into figuring out what we can do about it.

AI cannot be stopped, but should not be feared

Jürgen Schmidhuber has been called “the father of AI” for his work in natural language processing within neural networks – the technology behind Siri and Google Translate. He has likewise written a huge number of papers and won awards for his work.

While he and Hinton don’t see eye-to-eye on many things, they do both agree that AI development can’t be stopped. Schmidhuber, however, told The Guardian that he believes the dangers are exaggerated.

One country may have really different goals from another country. So, of course, they are not going to participate in some sort of moratorium. But then I think you also shouldn’t stop it. Because in 95% of all cases, AI research is really about our old motto, which is make human lives longer and healthier and easier.

Many of the fears have been about the risk of AI systems deciding to start ignoring the constraints placed on them by humans. Schmidhuber says this is true, but that it doesn’t mean they will start harming us.

Schmidhuber believes AI will advance to the point where it surpasses human intelligence and has no interest in humans – while humans will continue to benefit and use the tools developed by AI.

9to5Mac’s Take

It would require a braver man than I to take a position on a fight between two of the leading experts in the field! But the debate does seem somewhat academic, given that both agree that pausing AI research is impossible.

On the more mundane matter of when and how Apple will use this type of tech to make Siri smarter, it’s clear that it must do so at some point, or it really will get left behind. It is, however, equally clear that it is unlikely to do so quickly, for the reasons I’ve given previously. In particular, a spoken answer is more dangerous than one displayed on a screen, because people are less likely to take the trouble to fact-check it.

Siri is designed to provide spoken answers to verbal questions. If there’s one thing more annoying than Siri ‘answering’ a question with “Here’s what I found on the web,” it would be “Here’s a lengthy answer which you first have to listen to, then I’ll note that it may not be correct, and recommend that you search the web.”

I very much hope that HomeKit is one of Apple’s priority areas for generative chat.

Please share your own thoughts in the comments.

Photo: Maximalfocus/Unsplash
