While all four iPhone 15 models are expected to get the 48MP stacked sensor tech used in the iPhone 14 Pro and Pro Max, next year’s iPhone 16 Pro cameras may again pull ahead of the base and Plus models.
A new supply chain report today isn’t specific on the details, but reading between the lines does seem to point to a new Sony tech that roughly doubles the low-light sensitivity …
Stacked sensors in the iPhone 14 and iPhone 15
While Samsung and others have long chased impressive-sounding megapixel numbers for their cameras, Apple resisted the temptation to join them. That’s because squeezing a lot of pixels into a small sensor has one major downside: the high pixel density results in poorer low-light performance.
That problem was finally solved by Apple’s adoption of a stacked sensor, which uses multiple layers to preserve low-light sensitivity despite the higher pixel count. The tech first arrived in the two iPhone 14 Pro models, and is expected to come to the full iPhone 15 lineup this year.
iPhone 16 Pro cameras to use new sensors
Ming-Chi Kuo’s latest report is frustratingly light on detail, but does indicate that the two iPhone 16 Pro models will use a different sensor to the base models – and confirms that the sensors will again be Sony ones.
Two 2H24 iPhone 16 Pro models will also adopt stacked-designed CIS, so Sony’s high-end CIS capacity will continue to be tight in 2024.
The report itself focuses on a Sony competitor, noting that because Apple will be using most of Sony’s smartphone sensor output, that competitor is expected to receive more orders from Android brands.
May point to larger photo diodes
The report gives no clue on which Sony stacked sensor tech will be used in the iPhone 16 Pro cameras, but we can make an educated guess.
Sony’s latest stacked sensor tech separates out the photo diodes and pixel transistors, which are normally combined into a single layer. This allows the photo diodes themselves – the bit which actually captures the light – to be significantly larger for the same overall pixel size.
A recent Sony promo video (below) explains that instead of having to locate the diodes alongside the transistors, limiting their size, moving them to a separate layer means that both can be larger. That means the diodes capture more light, and the transistors remove more noise.
Sony says that roughly twice as much light is captured.
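As a back-of-the-envelope illustration (our arithmetic, not a figure from Sony or Kuo), doubling the light captured works out to roughly one extra stop of sensitivity, since photographic stops are a base-2 scale:

```latex
% Illustrative only: extra stops gained from capturing k times more light
\Delta\mathrm{EV} = \log_2 k
% Sony's claimed doubling (k = 2) gives \Delta\mathrm{EV} = \log_2 2 = 1 stop
```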
While new camera sensor tech typically makes it into high-end cameras first, and smartphones later, the opposite is true here. The first known use of the tech is in Sony’s own Xperia 1 V smartphone. Digital Camera World speculates that this is because of scaling challenges.
You might also wonder why Sony is debuting this potentially game-changing sensor tech in a camera phone and not a potential new flagship camera like an a9 III or a1 II. If we were to speculate, given the stated technical challenges producing a dual-layer sensor, it’s possible current manufacturing processes are simply only able to produce physically small sensor chips. Scaling the technology up to full-frame sensor size could well require more time.
What does this mean for iPhone photos?
Capturing more light, and removing more noise, means better photos in two common situations.
First, low-light conditions. Taking a photo inside a restaurant at night is a classic example, where establishments deliberately keep light levels low to create a romantic or stylish atmosphere. Many photos of babies and toddlers are also taken indoors, where light levels at home can be low, especially in winter.
Second, high-contrast lighting. The classic example here is outdoors in bright sunlight, where the dynamic range – the difference in light levels between sunlit and shaded areas – is huge. Shooting into the light is another common case, such as a photo of a person against a sunset.
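For a rough sense of what “huge” means here (our illustration, not a figure from the report), dynamic range can be expressed in stops as the base-2 log of the ratio between the brightest and darkest light levels in a scene:

```latex
% Illustrative only: dynamic range expressed in photographic stops
\mathrm{DR}_{\text{stops}} = \log_2\!\left(\frac{L_{\max}}{L_{\min}}\right)
% e.g. a 1000:1 sunlit-to-shadow ratio is roughly 10 stops
```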
Apple and Sony have long prioritized these two situations, and if this is indeed the tech which makes it into the iPhone 16 Pro cameras, we can expect another fairly dramatic improvement next year.