A forthcoming developer beta of iOS 13 will add Apple’s new Deep Fusion photography system, according to The Verge. Deep Fusion is the new camera processing system that Apple introduced last month for the iPhone 11 and iPhone 11 Pro. [Update: iOS 13.2 beta 1 is coming tomorrow, according to Daring Fireball.]
Deep Fusion was billed as an iPhone 11 and iPhone 11 Pro feature, though it was not available when the devices launched last month. Apple says that Deep Fusion processing will be most important for “medium-to-low light images.”
What differentiates Deep Fusion from Night mode, however, is that Deep Fusion is completely invisible to the user. That means you won’t be able to tell when it’s running in the background, but you should notice considerably better image quality. Apple told The Verge that this invisibility is part of its strategy.
The Verge offers a quick rundown of how Deep Fusion works; a rough code sketch follows the list:
- By the time you press the shutter button, the camera has already grabbed three frames at a fast shutter speed to freeze motion in the shot. When you press the shutter, it takes three additional shots, and then one longer exposure to capture detail.
- Those three regular shots and the long-exposure shot are merged into what Apple calls a “synthetic long” — this is a major difference from Smart HDR.
- Deep Fusion picks the short exposure image with the most detail and merges it with the synthetic long exposure — unlike Smart HDR, Deep Fusion only merges these two frames, not more. These two images are also processed for noise differently than Smart HDR, in a way that’s better for Deep Fusion.
- The images are run through four detail processing steps, pixel by pixel, each tailored to increasing amounts of detail — the sky and walls are in the lowest band, while skin, hair, fabrics, and so on are in the highest. This generates a series of weightings for how to blend the two images — taking detail from one and tone, color, and luminance from the other.
- The final image is generated.
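To make that flow concrete, here is a toy Swift sketch of the pipeline. Apple has not published how Deep Fusion is actually implemented, so every type, function name, and bit of per-pixel math below is hypothetical; it only illustrates the steps The Verge describes: build a synthetic long, pick the sharpest short exposure, then blend the two with per-pixel detail weights.

```swift
import Foundation

// Purely illustrative stand-in for a camera frame: a flat array of luminance
// values plus an exposure time. None of these names are Apple's.
struct Frame {
    var pixels: [Double]   // per-pixel luminance, 0.0...1.0
    var exposure: Double   // shutter time in seconds
}

// Merge the regular shots with the long exposure into a "synthetic long".
// Simple averaging stands in for whatever weighting Apple actually uses.
func syntheticLong(regularShots: [Frame], longExposure: Frame) -> Frame {
    let sources = regularShots + [longExposure]
    let count = Double(sources.count)
    let merged = (0..<longExposure.pixels.count).map { i in
        sources.reduce(0.0) { $0 + $1.pixels[i] } / count
    }
    return Frame(pixels: merged, exposure: longExposure.exposure)
}

// Pick the short exposure with the most detail, approximated here by local
// contrast (mean absolute difference between neighbouring pixels).
func sharpestShort(_ shots: [Frame]) -> Frame {
    func detailScore(_ f: Frame) -> Double {
        guard f.pixels.count > 1 else { return 0 }
        let diffs = zip(f.pixels, f.pixels.dropFirst()).map { pair in abs(pair.0 - pair.1) }
        return diffs.reduce(0, +) / Double(diffs.count)
    }
    return shots.max { detailScore($0) < detailScore($1) }!
}

// Blend the two frames pixel by pixel. A per-pixel weight stands in for the
// four detail bands: high-detail pixels (skin, hair, fabric) lean on the
// sharp short exposure, low-detail pixels (sky, walls) on the synthetic long.
func fuse(short: Frame, long: Frame, detailWeights: [Double]) -> Frame {
    let fused = (0..<short.pixels.count).map { i in
        detailWeights[i] * short.pixels[i] + (1 - detailWeights[i]) * long.pixels[i]
    }
    return Frame(pixels: fused, exposure: short.exposure)
}

// Toy usage with four-pixel "images": three pre-shutter shorts, three
// regular shots, and one long exposure.
let shorts   = (0..<3).map { _ in Frame(pixels: [0.2, 0.8, 0.4, 0.6], exposure: 1/500) }
let regulars = (0..<3).map { _ in Frame(pixels: [0.3, 0.7, 0.5, 0.5], exposure: 1/125) }
let longShot = Frame(pixels: [0.4, 0.6, 0.5, 0.5], exposure: 1/15)

let synthLong = syntheticLong(regularShots: regulars, longExposure: longShot)
let bestShort = sharpestShort(shorts)
let output    = fuse(short: bestShort, long: synthLong, detailWeights: [0.9, 0.9, 0.2, 0.2])
print(output.pixels)
```

In the real pipeline, the detail weights would come from the four-band, pixel-by-pixel analysis described above rather than being hard-coded.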
When you first go to your Camera Roll after taking an image, you’ll see a “proxy image” while Deep Fusion runs in the background. This process should take only about a second longer than Smart HDR. It does mean, however, that Deep Fusion won’t work in burst mode.
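That proxy-image behavior is essentially deferred processing: show something fast, finish the expensive work asynchronously, then swap the result in. Here is a minimal sketch of the idea in Swift, using made-up function names rather than any real Camera or Photos API:

```swift
import Foundation

// Hypothetical sketch (not Apple's API): hand back a fast preview right away,
// then replace it once the slower Deep Fusion-style processing finishes
// off the main thread.
func capturePhoto(showPreview: @escaping (String) -> Void,
                  showFinal: @escaping (String) -> Void) {
    showPreview("proxy image")                     // what the Camera Roll shows first

    DispatchQueue.global(qos: .userInitiated).async {
        Thread.sleep(forTimeInterval: 1.0)         // stand-in for the ~1s of extra processing
        DispatchQueue.main.async { showFinal("fully fused image") }
    }
}

capturePhoto(showPreview: { print("Showing \($0)") },
             showFinal:   { print("Swapping in \($0)") })
RunLoop.main.run(until: Date().addingTimeInterval(2))   // keep a script alive long enough to see both
```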
Phil Schiller described Deep Fusion as “computational photography mad science” when it was first introduced last month. At that time, Apple only said that it was coming sometime later this fall. While the early results from the iPhone 11 and iPhone 11 Pro cameras are impressive, Deep Fusion has the potential to take things to the next level.
The Verge has a couple of sample Deep Fusion images, courtesy of Apple.
While the iOS 13 beta with Deep Fusion isn’t available just yet, it should arrive soon, likely in the form of iOS 13.2.