
Apple Intelligence likely the safest way to use ChatGPT; ‘a data hoover on steroids’

The latest version of ChatGPT has been described as “a data hoover on steroids” as a result of its new capabilities (like seeing everything happening on your screen) and extremely loose privacy policy.

While Apple Intelligence will use ChatGPT as a fallback option for queries which cannot be answered by the new Siri, Apple has put in place additional safeguards which will likely make it the safest way to use the chatbot …

ChatGPT is ‘a data hoover on steroids’

Wired reports that a number of AI experts have expressed concern about the privacy of personal data when using OpenAI’s latest model, ChatGPT-4o. The company’s casual attitude to privacy was highlighted when it was revealed that the Mac app stored chat logs in plain text.

The current model lets you ask questions by voice and grant access to your device’s camera so it can see what you’re seeing, and the company’s privacy policy appears to make both your voice and your images fair game for training.

AI consultant Angus Allan says the privacy policy gives the company permission to use all of the personal data exposed to it.

“Their privacy policy explicitly states they collect all user input and reserve the right to train their models on this.”

The catch-all “user content” clause likely covers images and voice data too, says Allan. “It’s a data hoover on steroids, and it’s all there in black and white. The policy hasn’t changed significantly with ChatGPT-4o, but given its expanded capabilities, the scope of what constitutes ‘user content’ has broadened dramatically.”

Another consultant, Jules Love, agrees.

“It uses everything from prompts and responses to email addresses, phone numbers, geolocation data, network activity, and what device you’re using.”

Apple Intelligence’s use of ChatGPT is more private

Apple’s own AI offers an “extraordinary” level of privacy, and even when it falls back to ChatGPT, the company’s deal with OpenAI means that privacy protections are still strong.

Apple anonymizes all ChatGPT handoffs, so OpenAI’s servers have no idea who has made a particular request, or who is getting the response. Apple’s agreement with OpenAI also ensures that data from these sessions will not be used as training material for ChatGPT models.

9to5Mac’s Take

There are still potential privacy risks, but it seems pretty clear that once Apple Intelligence is fully live, it will be by far the safest way to use ChatGPT.

Image: 9to5Mac collage of Apple icons



Author

Ben Lovejoy

Ben Lovejoy is a British technology writer and EU Editor for 9to5Mac. He’s known for his op-eds and diary pieces, exploring his experience of Apple products over time, for a more rounded review. He also writes fiction, with two technothriller novels, a couple of SF shorts and a rom-com!


