
Can we ever trust photos again, in an AI age? Apple and others working the problem

At a time when you can ask AI to modify an existing photo in almost any way you please, or even ask it to generate a completely artificial image, can we ever trust photos again?

Apple is working to address the issue in two ways, and many of us are hoping it will also join an emerging standard for content authenticity …

This is not a new problem

Whether we can trust the veracity of a photo is, of course, not a new problem. A photographer has always been able to influence our view of reality by simply choosing what to include or exclude in the framing.

Their choice of lens can distort the same view in different ways, for example making things look closer or further away. Adjusting the exposure can make things look darker or lighter. Choice of film or filter can affect color perception. A long exposure can be used to remove any signs of people walking through the scene. And so on.

Then there was the darkroom. Things like masking, burning, and dodging could dramatically change the appearance of a scene. Conventional photo editing software simply extended the range of adjustments that could be made.

AI photo editing and creation is just the latest in a long line of ways photos can be used to influence us, or even outright deceive us.

But AI does make it a bigger issue

Previously, photo editing at least required some degree of skill, so creating a convincingly deceptive image wasn’t something just anyone could do.

Now, anyone can do it, and in seconds. The fake image above is of course deliberately uncontroversial, but it could just as easily have been something appearing to show someone or something in a very unflattering light. The entire process of uploading a reference photo into Photoshop, creating a text prompt and waiting for the result took about 30 seconds.

When anyone can do that in seconds, and the results are getting ever more realistic, that potentially creates huge problems for the future.

Apple is tackling this in two ways

With the (gradual) launch of Apple Intelligence, the company could be making the problem worse by putting AI image creation tools into the hands of millions more people. But Apple is very conscious of the dangers, and is taking two precautions.

First, Image Playground – which allows iPhone owners to create completely imaginary images from text descriptions – deliberately excludes photo-realistic results.

Image Playground supports three styles: Animation, Illustration, Sketch […] It’s noteworthy that all of the images demoed for Image Playground are animated or otherwise unrealistic in style. Clearly, Apple doesn’t want to provide tools for creating realistic-looking fake photos.

Second, when you use AI tools to manipulate real photos, Apple tags this fact. This was something Craig Federighi talked about in his WSJ interview.

There’s a great history to photography, and how people view photographic content as something they can rely on, as indicative of reality. Our products, our phones, are used a lot and it’s important to us that we help purvey accurate information and not fantasy.

We make sure even if you do remove a little detail of a photo [like a water bottle] we update the metadata on the photo so someone can go back and check that this was an altered photo.
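Apple hasn’t published exactly which metadata fields it writes when an AI edit is made, but on any platform you can inspect an image’s metadata yourself. Below is a minimal sketch, assuming the widely used exiftool CLI is installed and on your PATH; the tag-name hints it scans for are illustrative guesses, not Apple’s actual field names.

```python
# A minimal sketch of scanning an image's metadata for signs of AI editing.
# Assumptions: exiftool is installed and on PATH; the hint strings below are
# hypothetical, since Apple hasn't documented the exact fields it writes.
import json
import subprocess

def find_edit_markers(path: str) -> list[str]:
    """Return metadata keys whose names or values hint at AI editing."""
    # exiftool -j emits all metadata as a JSON array (one object per file)
    raw = subprocess.run(
        ["exiftool", "-j", path],
        capture_output=True, check=True, text=True,
    ).stdout
    tags = json.loads(raw)[0]
    key_hints = ("generativeai", "aigenerated", "digitalsourcetype", "history")
    value_hints = ("generative", "composite")
    return [
        key for key, value in tags.items()
        if any(h in key.lower() for h in key_hints)
        or any(h in str(value).lower() for h in value_hints)
    ]

if __name__ == "__main__":
    print(find_edit_markers("edited_photo.jpg"))
```

If Apple’s tags turn out to follow an existing convention (such as IPTC’s DigitalSourceType), a scan like this would surface them without knowing the names in advance.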

The Content Authenticity Initiative

So far, Apple is going its own way with its metadata updates, but the underlying approach is one supported by many companies and news organizations through something known as the Content Authenticity Initiative (CAI).

The idea is to ensure that any AI creations and edits are reflected in the image metadata in a standard way which can be detected and flagged.

Content Credentials help you record and display the most important details about a piece of content at every step of its lifecycle.

Creation: Show supporting AI tools, software, digital cameras, and other devices used in content creation.

Editing and generative AI: Any change can be recorded — cropping, adding and deleting, AI modifications, and more.

Publishing: Consumers can easily access Content Credentials creation and edit history.

CAI supporters include Adobe, ABC News, Associated Press, BBC, Canon, Leica, Microsoft, Nikon, The New York Times, Nvidia, Qualcomm, Reuters, and the Wall Street Journal.

The standard is open source, meaning anyone can adopt it, so it has a good chance of industry-wide adoption. It would obviously be helpful if Apple ensured its own metadata edits were compliant with it.
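Because the standard is open, anyone can check an image’s Content Credentials themselves. As a rough sketch, assuming the CAI’s open-source c2patool CLI (github.com/contentauth/c2patool) is installed, you could test whether a file carries a C2PA manifest and dump its recorded edit history:

```python
# A sketch of reading C2PA Content Credentials via the CAI's open-source
# c2patool CLI. Assumptions: c2patool is installed and on PATH, and the
# JSON output shape may vary between tool versions.
import json
import subprocess

def read_content_credentials(path: str) -> dict | None:
    """Return the image's C2PA manifest store as a dict, or None if absent."""
    result = subprocess.run(
        ["c2patool", path],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        # c2patool reports an error for files without a manifest
        return None
    return json.loads(result.stdout)

manifest = read_content_credentials("photo.jpg")
if manifest is None:
    print("No Content Credentials found")
else:
    # A manifest records assertions such as the tools used and edit actions
    print(json.dumps(manifest, indent=2))
```

The same check could be built into photo library apps or news CMSes, which is the point of a shared standard: verification doesn’t depend on any one vendor’s tools.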

Image: AI creation in Photoshop from a Vision Pro photo and text prompt

