
Gurman: iOS 18 AI features to be powered by ‘entirely on-device’ LLM, offering privacy and speed benefits

As indicated by much of the research material Apple has been publishing in recent months, the company is investing heavily in all sorts of artificial intelligence technologies. Apple will announce its AI strategy in June at WWDC, as part of iOS 18 and its other new OS versions.

In the latest Power On newsletter, Mark Gurman says to expect the new iPhone AI features to be powered entirely by an offline, on-device large language model developed by Apple. Expect Apple to tout the privacy and speed benefits of this approach.

9to5Mac previously found code references in iOS 17.4 to an on-device model called “Ajax”; Apple is also working on server-hosted versions of Ajax.

The downside to on-device LLMs is that they can’t be as powerful as models running on huge server farms, with tens of billions of parameters and continually updated data behind them.

However, Apple engineers can probably take advantage of the full-stack vertical integration of Apple’s platforms, with software tuned to the Apple silicon chips inside its devices, to make the most of an on-device approach. On-device models usually respond much faster than requests routed through a cloud service, and they have the added advantage of working offline in places with no or limited connectivity.


While on-device LLMs may not have the same rich embedded knowledge base as something like ChatGPT for answering questions about all sorts of random trivia, they can be tuned to be very capable at many tasks. You can imagine an on-device LLM generating sophisticated auto-replies to Messages, or improving the interpretation of many common Siri requests, for instance.
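To make that concrete, here is a minimal Swift sketch of what an on-device auto-reply flow could look like. Apple has not published an API for its model, so the protocol and the StubAjaxModel stand-in below are purely hypothetical illustrations of the idea, not a real interface.

    import Foundation

    // Hypothetical interface for an on-device text model; the names here are
    // illustrative only, since Apple has not documented an API for "Ajax".
    protocol OnDeviceLanguageModel {
        /// Generates a completion for the given prompt, entirely on-device.
        func complete(prompt: String, maxTokens: Int) -> String
    }

    /// Stand-in implementation so the sketch compiles; a real model would run
    /// a network tuned for Apple silicon here.
    struct StubAjaxModel: OnDeviceLanguageModel {
        func complete(prompt: String, maxTokens: Int) -> String {
            "Sounds good, see you at 7!"
        }
    }

    /// Drafts a short auto-reply for an incoming message without any network call.
    func suggestReply(to incomingMessage: String, using model: OnDeviceLanguageModel) -> String {
        let prompt = """
        Write a brief, friendly reply to this message:
        "\(incomingMessage)"
        """
        return model.complete(prompt: prompt, maxTokens: 48)
    }

    let reply = suggestReply(to: "Dinner at 7 tonight?", using: StubAjaxModel())
    print(reply) // Runs offline; the message never leaves the device.

The point of the sketch is simply that the prompt, the model, and the generated text all stay on the device, which is what gives this approach its privacy and latency advantages.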

It also dovetails neatly with Apple’s stringent adherence to privacy. There’s no harm in churning all your downloaded emails and text messages through an on-device model, as the data stays local.

On-device models may also be able to handle generative AI tasks like document or image creation from prompts with decent results. Apple still has the flexibility to partner with a company like Google and fall back to something like Gemini on the server for certain tasks, too.
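As a rough illustration of that hybrid approach, the following Swift sketch routes simple prompts to an on-device model and falls back to a server-hosted one for heavier work. The routing heuristic and names are assumptions made for the example, not anything Apple has described.

    // Illustrative hybrid routing, not Apple's actual design: simple prompts run
    // on-device; heavier generative tasks fall back to a hypothetical server model.
    enum ModelRoute {
        case onDevice
        case server   // e.g. a partner-hosted model such as Gemini
    }

    func route(for prompt: String, requiresImageGeneration: Bool) -> ModelRoute {
        // Crude heuristic purely for illustration: image generation or very long
        // prompts go to the server; everything else stays local.
        if requiresImageGeneration || prompt.count > 2_000 {
            return .server
        }
        return .onDevice
    }

    print(route(for: "Summarize this note", requiresImageGeneration: false)) // onDevice
    print(route(for: "Create an illustration of a sunset", requiresImageGeneration: true)) // server

In practice, whatever criteria Apple uses to decide what runs locally versus on a partner’s servers would be far more sophisticated, but the split itself is the design choice the reporting points to.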

We’ll know for sure what Apple plans to do when it officially announces its AI strategy at WWDC. The keynote kicks off on June 10, when the company will unveil all the new software features coming to iPhone, iPad, Mac, Apple Watch, Apple TV, Vision Pro, and more.




Author

Benjamin Mayo

Benjamin develops iOS apps professionally and covers Apple news and rumors for 9to5Mac. Listen to Benjamin, every week, on the Happy Hour podcast. Check out his personal blog. Message Benjamin over email or Twitter.
