Apple’s artificial intelligence (AI) chief says that Apple is using machine learning in almost every aspect of how we interact with our devices, but there is much more to come.
John Giannandrea says he moved from Google to Apple because the potential of machine learning (ML) to impact people’s lives is so much greater at the Cupertino company …
Apple using machine learning today
Giannandrea spoke with Ars Technica’s Samuel Axon, outlining how Apple uses ML now.
There’s a whole bunch of new experiences that are powered by machine learning. And these are things like language translation, or on-device dictation, or our new features around health, like sleep and hand washing, and stuff we’ve released in the past around heart health and things like this. I think there are increasingly fewer and fewer places in iOS where we’re not using machine learning.
It’s hard to find a part of the experience where you’re not doing some predictive [work]. Like, app predictions, or keyboard predictions, or modern smartphone cameras do a ton of machine learning behind the scenes to figure out what they call “saliency,” which is like, what’s the most important part of the picture? Or, if you imagine doing blurring of the background, you’re doing portrait mode […]
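For developers, the saliency analysis Giannandrea describes is surfaced in Apple’s public Vision framework. Here’s a minimal sketch of that developer-facing API – not Apple’s internal camera pipeline:

```swift
import Vision

// Ask Vision for an attention-based saliency map -- the "most important
// part of the picture" Giannandrea describes.
func saliencyHeatmap(for image: CGImage) throws -> CVPixelBuffer? {
    let request = VNGenerateAttentionBasedSaliencyImageRequest()
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])
    // The result is a low-resolution heatmap; bright regions mark the
    // areas a camera pipeline might keep sharp in portrait mode.
    guard let observation = request.results?.first as? VNSaliencyImageObservation else {
        return nil
    }
    return observation.pixelBuffer
}
```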
Savvy iPhone owners might also notice that machine learning is behind the Photos app’s ability to automatically sort pictures into pre-made galleries, or to accurately give you photos of a friend named Jane when her name is entered into the app’s search field […]
Most [augmented reality] features are made possible thanks to machine learning […]
Borchers also pointed out accessibility features as important examples. “They are fundamentally made available and possible because of this,” he said. “Things like the sound detection capability, which is game-changing for that particular community, is possible because of the investments over time and the capabilities that are built in” […]
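The sound detection feature Borchers mentions has a developer-facing counterpart in Apple’s SoundAnalysis framework, which ships with a built-in classifier for everyday sounds. A minimal sketch – the SoundListener class is illustrative, not the accessibility feature’s actual implementation:

```swift
import AVFoundation
import SoundAnalysis

// Classify ambient sounds using Apple's built-in sound classifier.
final class SoundListener: NSObject, SNResultsObserving {
    private var analyzer: SNAudioStreamAnalyzer?

    func start(with format: AVAudioFormat) throws {
        let analyzer = SNAudioStreamAnalyzer(format: format)
        // .version1 is Apple's built-in classifier of everyday sounds
        // (alarms, dog barks, running water, and so on).
        let request = try SNClassifySoundRequest(classifierIdentifier: .version1)
        // The analyzer holds its observer weakly, so retain the analyzer here.
        try analyzer.add(request, withObserver: self)
        self.analyzer = analyzer
        // Feed microphone buffers via analyzer.analyze(_:atAudioFramePosition:),
        // e.g. from an AVAudioEngine input-node tap.
    }

    func request(_ request: SNRequest, didProduce result: SNResult) {
        guard let result = result as? SNClassificationResult,
              let top = result.classifications.first else { return }
        print("Detected \(top.identifier) (confidence \(top.confidence))")
    }
}
```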
All of these things benefit from the core machine learning features that are built into the core Apple platform. So, it’s almost like, “Find me something where we’re not using machine learning.”
He was, though, surprised at areas where Apple had not been using ML before he joined the company.
“When I joined Apple, I was already an iPad user, and I loved the Pencil,” Giannandrea (who goes by “J.G.” to colleagues) told me. “So, I would track down the software teams and I would say, ‘Okay, where’s the machine learning team that’s working on handwriting?’ And I couldn’t find it.” It turned out the team he was looking for didn’t exist—a surprise, he said, given that machine learning is one of the best tools available for the feature today.
“I knew that there was so much machine learning that Apple should do that it was surprising that not everything was actually being done.”
That has changed, however, and will continue to do so.
“That has changed dramatically in the last two to three years,” he said. “I really honestly think there’s not a corner of iOS or Apple experiences that will not be transformed by machine learning over the coming few years.”
Privacy-first approach actually better
It’s long been thought that Apple’s privacy focus – wanting to do everything on the device, and not analyzing huge volumes of personal data – means that it can’t compete with Google, because it can’t benefit from masses of data pulled from millions of users. Giannandrea says this is absolutely not the case.
I understand this perception of bigger models in data centers somehow are more accurate, but it’s actually wrong. It’s actually technically wrong. It’s better to run the model close to the data, rather than moving the data around.
In other words, you get better results when an ML model learns from your own use of your device than when it relies on data aggregated from millions of users. Local processing also covers situations where sending data to a server simply wouldn’t be realistic, like picking the exact frame to capture at the instant you press the Camera app’s shutter button.
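Apple’s public Core ML stack reflects the same philosophy: MLUpdateTask can retrain an updatable model on a user’s own examples entirely on-device. A hedged sketch, where the model URL and training examples are hypothetical placeholders:

```swift
import CoreML

// Personalize an updatable Core ML model locally; no examples are uploaded.
func personalizeModel(at modelURL: URL, with examples: MLBatchProvider) throws {
    let task = try MLUpdateTask(
        forModelAt: modelURL,      // compiled, updatable .mlmodelc on local storage
        trainingData: examples,    // e.g. the user's own usage samples
        configuration: MLModelConfiguration()
    ) { context in
        // Write the retrained model back to local storage only, matching
        // the "run the model close to the data" principle.
        try? context.model.write(to: modelURL)
    }
    task.resume()
}
```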
What of the future?
Understandably, Giannandrea wouldn’t be drawn on what Apple is working on now, but did give one example of what might be possible when you combine the power of Apple Silicon Macs with machine learning.
Imagine a video editor where you had a search box and you could say, “Find me the pizza on the table.” And it would just scrub to that frame.
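There’s no indication Apple has built this, but as a thought experiment, such a search could pair a joint image–text embedding model (in the spirit of CLIP) with a nearest-frame lookup. A purely illustrative sketch, where embedText and the precomputed per-frame embeddings stand in for such a model:

```swift
import Foundation

struct FrameMatch {
    let timestamp: TimeInterval
    let score: Double
}

// Score every sampled frame against the query and return the best match,
// which the editor would then scrub to.
func bestFrame(for query: String,
               frames: [(timestamp: TimeInterval, embedding: [Double])],
               embedText: (String) -> [Double]) -> FrameMatch? {
    let queryVector = embedText(query)  // e.g. "the pizza on the table"

    // Cosine similarity: how closely a frame's embedding points in the
    // same direction as the query's embedding.
    func cosine(_ a: [Double], _ b: [Double]) -> Double {
        var dot = 0.0, normA = 0.0, normB = 0.0
        for (x, y) in zip(a, b) {
            dot += x * y
            normA += x * x
            normB += y * y
        }
        return dot / (normA.squareRoot() * normB.squareRoot() + 1e-12)
    }

    return frames
        .map { FrameMatch(timestamp: $0.timestamp, score: cosine(queryVector, $0.embedding)) }
        .max { $0.score < $1.score }
}
```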
The whole piece is very much worth reading.
Photo: WFMJ