
Apple camera patent would allow high-resolution photos without sacrificing image quality


If you were wondering why Apple has ignored the megapixel race and stuck to a modest 8MP camera in its latest iPhones when almost every other manufacturer is cramming in as many pixels as physically possible, it’s all about image quality. While more pixels allow you to blow up photos to larger sizes, that comes at a cost. Squeezing more pixels into a tiny sensor means more noise, reducing quality, especially in low-light situations like bars and parties.
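
To put rough numbers on that trade-off (a back-of-the-envelope sketch, not Apple’s published specs: it assumes a roughly 1/3-inch sensor with a ~4.8 × 3.6 mm active area, which is in the ballpark of recent iPhone sensors), here is how the photosite pitch shrinks as more pixels are packed onto the same area:

# Back-of-the-envelope: photosite pitch on a fixed-size sensor.
# Assumes a ~1/3-inch sensor with a ~4.8 mm-wide active area (approximate, not an official spec).

SENSOR_WIDTH_MM = 4.8  # assumed active-area width

def pixel_pitch_um(megapixels, aspect=(4, 3)):
    """Approximate side length of one square photosite, in micrometres."""
    w_ratio, h_ratio = aspect
    pixels_wide = (megapixels * 1e6 * w_ratio / h_ratio) ** 0.5
    return SENSOR_WIDTH_MM * 1000 / pixels_wide

for mp in (8, 16, 20):
    pitch = pixel_pitch_um(mp)
    print(f"{mp} MP -> ~{pitch:.2f} um photosites, ~{pitch ** 2:.2f} um^2 of light-gathering area each")

# 8 MP works out to roughly the 1.5 µm pixels Apple quotes; at 16 MP each
# photosite has barely half the light-gathering area, which is where the
# extra low-light noise comes from.

The point is simply that, for a fixed sensor size, every extra megapixel comes out of the light-gathering area of each individual photosite.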

A clever patent granted today could give future iPhones the best of both worlds, delivering higher-resolution photos without squeezing more pixels into the sensor … 

The secret is effectively to use burst-mode to shoot a series of photos, using an optical image stabilization system – like that built into the iPhone 6 Plus – to shift each photo slightly. Combine those images, and you have a single, very high-resolution photo with none of the usual quality degradation. Or, in patent language:

A system and method for creating a super-resolution image using an image capturing device. In one embodiment, an electronic image sensor captures a reference optical sample through an optical path. Thereafter, an optical image stabilization (OIS) processor adjusts the optical path to the electronic image sensor by a known amount. A second optical sample is then captured along the adjusted optical path, such that the second optical sample is offset from the first optical sample by no more than a sub-pixel offset. The OIS processor may reiterate this process to capture a plurality of optical samples at a plurality of offsets. The optical samples may be combined to create a super-resolution image.
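
To make the mechanism concrete, here is a minimal sketch of the shift-and-combine step (an illustration only, not Apple’s implementation; the function name and layout are made up for this example. It assumes four already-registered grayscale frames, each offset by exactly half a pixel via OIS, and simply interleaves them onto a grid with twice the linear resolution; a real pipeline would also handle registration errors, demosaicing and blending):

import numpy as np

def combine_half_pixel_frames(frames):
    """Interleave four half-pixel-offset frames into one image with
    twice the resolution in each direction.

    frames: dict mapping (row_offset, col_offset), each 0 or 0.5 pixels,
            to 2-D arrays of identical shape (grayscale for simplicity).
    """
    h, w = frames[(0.0, 0.0)].shape
    out = np.zeros((2 * h, 2 * w), dtype=np.float32)
    for (dy, dx), frame in frames.items():
        # Each sub-pixel offset fills one of the four interleaved
        # positions on the finer output grid.
        out[int(dy * 2)::2, int(dx * 2)::2] = frame
    return out

# Usage with synthetic data standing in for four OIS-shifted captures:
rng = np.random.default_rng(0)
base = rng.random((4, 4)).astype(np.float32)
frames = {(dy, dx): base for dy in (0.0, 0.5) for dx in (0.0, 0.5)}
print(combine_half_pixel_frames(frames).shape)  # (8, 8): four 4x4 samples become one 8x8 image

Because each capture samples the scene at a genuinely different sub-pixel position, the combined result can out-resolve any single frame rather than just being an upscaled copy of one.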

The principle of combining multiple photos into one isn’t new: it’s how the HDR function works. What’s new here is shifting the image on the sensor between shots. It does mean, though, that the technique will only work for static scenes, not for moving subjects or video.

As ever with Apple patents, there’s no telling if or when this one might make it into an iPhone, but with 4K and 5K monitors fast becoming mainstream, the ability to shoot high-resolution photos would certainly be handy.


Comments

  1. Bob (@CocoaBob) - 9 years ago

    Poor Hydra and CortexCam…

  2. vandiced - 9 years ago

    I read an article on CNET comparing pictures between the iPhone 6, Galaxy S6, and HTC M9 or whatever it is called. They all look very similar to me, only in, say, the Galaxy S6 you can zoom in to view more detail. Am I the only one who doesn’t see a vast improvement of one versus the other? Not sure what this means: “Squeezing more pixels into a tiny sensor means more noise, reducing quality, especially in low-light situations like bars and parties.”

    • vandiced - 9 years ago

      *Other than iPhone images being smaller in size to save space on the device, which might be the real reason Apple doesn’t up the megapixels.

    • David Call (@DavidCall) - 9 years ago

      It means that you can have varying amounts of pixels on a sensor that’s the same size. Cramming more in means you have a higher resolution, but having fewer photo receptors (or "pixels") means that each one is larger, which means they let in more light, which means you don’t have to crank more power to each to gather light, which means less noise. Noise is the term for what grain was in film. It takes away sharpness, and in digital sensors it’s the cause of those weirdly colored one-off pixels found in low-light photos (bars and parties, as Ben mentioned). Larger pixels help combat that by not having to crank up the power to the photo receptors, which makes for a better quality image, though you can’t zoom in as far without starting to see each pixel. Until the release of the Canon 5Ds, that was Canon’s shining star, I thought. Nikon had that huge-megapixel D800 (36MP) and Canon’s direct competitor (the 5D Mk III) had 24MP. You couldn’t zoom in as far, but the file size was more manageable and the image quality didn’t suffer in low light.

    • Ben Lovejoy - 9 years ago

      The more pixels in any given sensor size, the more noise (distortion) you’ll see in higher ISO photos.

      • George Pollen - 9 years ago

        Then slow down the shutter (like Apple is effectively doing by capturing multiple images over time) and use a lower ISO.
        Even with a stationary subject, I already see image artifacts (straight lines that appear wiggly) with the 6+.

    • Scott (@ScooterComputer) - 9 years ago

      The point about “Squeezing more pixels into a tiny sensor means more noise” is true…as far as it goes. Meaning, when the initial mega-megapixel sensors came out, yes, that was true. However, the sensor manufacturers have also IMPROVED on them, negating many of those problems. Which pretty much makes Apple’s messaging now nothing more than marketing bullshit. They’ve once again flogged old tech beyond its marketing lifespan. That the trope is still being propagated in light of the evidence (the available data for the S6 and Note 4) is rather disheartening; which is not to say I’m calling out “fanboyism”, just that such messaging is sometimes hard to kill after it’s been said.
      Now, before some nitwit jumps on here and says, “Yeah, well the iPhone 6 camera compares favorably/as good as/better than the Note 4/S6…” blah blah blah. Yes, Apple’s camera is good. The sensor is good. The image quality is good. You are right. Happy? I’m simply saying that the marketing-speak is no longer justifiable, since the competing higher-megapixel sensors have caught up on the noise issue, and that Apple CHOSE to continue using their lower-pixel-count sensor on the iPhone 6 (expanding their margins) rather than invest in a bigger sensor that would have been unquestionably EVEN BETTER than the sensor in the Note 4/S6/etc. Either way, the message about “noisy big sensors” is no longer accurate. Obviously Apple held such refinements in reserve for the iPhone 6s.

      • vandiced - 9 years ago

        I agree with Scott. I will say that to me the S6 photos are just as good as the iPhone’s on the surface. However, zooming in there is definitely more detail, so I still don’t understand this whole “noise” thing with higher megapixels.

      • thebums66 - 9 years ago

        Do me a favor. Hold your S6 next to an IP6 and make sure both have the image filling the screen about the same amount. You’ll first notice the over saturation of the image on the S6. Now take a picture. Look at both images. Still over saturated on both but both very good quality. Now double tap somewhere on the IP6 image, which zooms in about 90-95%. Now zoom in the S6 to match the size of the image on the IP6. Still about the same. Now start zooming in on the S6 image and the image gets grainy to the point you can’t recognize what you’re looking at. Sure you can zoom twice as much, but does it matter when the image suffers so much?

      • thebums66 - 9 years ago

        Edit
        …look at both images. Still over saturated on S6 but both very good…

      • George Pollen - 9 years ago

        @thebums66: the S6 likely appears oversaturated because of the factory default display configuration on the S6. It’s purely for eye candy meant to sell, not to look realistic.

      • Mike Knopp (@mknopp) - 9 years ago

        Wow! I didn’t realize that Samsung and Nokia and everybody else had managed to break the laws of physics.

        “Squeezing more pixels into a tiny sensor means more noise” is pretty much mandated by the laws of physics.

        First, a sensor doesn’t have pixels. It has a grid of photosensors. A combination of the output from these photosensors, when coupled with a Bayer filter, is what is converted into a pixel of a photograph. To grossly simplify this: the more photosensors on a given size of image sensor, the smaller the collection surface of each photosensor has to be, which decreases its sensitivity, and/or the closer the edges of the photosensors have to be to each other (the photosensors have to have a neutral insulative region between them), which increases the amount of cross-talk between photosensors. What this all means is that the more photosensors one places on a given size of image sensor, the smaller each photosensor has to be, which, as an earlier person said, means that the "power" to the photosensor has to be increased. The closer the photosensor edges are to each other and the more "power" is applied to them, the more likely an anomalous reading from any given photosensor will occur. These anomalies are one cause of noise.

        (The crosstalk from applied "power" is the reason that black shots were combined with real shots to remove noise. An image captured while the shutter was closed would still be noisy from the crosstalk. If the black image and the real image are taken close enough together in time, there is a high probability that the noise in the black image will also appear in the real photo, and therefore the noise can be compensated for, or at least an attempt can be made.)

        The last big physical development that I read about to combat the actual creation of noise at the sensor level was when Canon created a grid of lenses that lined up a single lens with each photosensor. This allowed the photosensor to decrease its size, thus increasing the space between the edges, and still capture the light from an area as if it were still large, because the lens focused the larger area down to a smaller area. Beyond this you are going to have to start getting into some very interesting games, like playing with the material or eliminating the need for a filter (which is what Foveon did).

        So, how are the big players in cameras reducing noise on their densely packed image sensors? They incorporate software noise reduction algorithms. Again a gross simplification, but it is essentially a really advanced blurring algorithm. Of course, as with everything, there are tradeoffs. Every noise reduction algorithm, no matter how good, removes clarity from the image.

        That is why, when you read any in-depth review of digital cameras on sites like dpreview or the-digital-picture, you will find analysis or reviews of the noise-reduction algorithms, along with full-sized comparisons of high-detail objects. Every consumer digital camera that I know of uses a noise reduction algorithm today. I am sure that both the iPhone and all of its competitors use them. The key is that the more noise generated by the sensor, the more aggressive the noise reduction has to be. The more aggressive the noise reduction, the more detail is lost in the final photo.

        These algorithms are what the S6 and the Note 4 incorporate to reduce noise in their photos: noise which is introduced because they are packing more pixels into a tighter space, which makes the noise reduction algorithm they have to use more aggressive. So, despite having more pixels and the ability to zoom in on an area better, the clarity of the image is reduced.

        So, I disagree. The message about “big noisy sensors” is still very much true. The laws of physics still work. It is just how much software trickery one is comfortable with.

        I for one am very glad that Apple is sticking with 8MP cameras. I may only ever print 1 out of every 1000 photos. And of those 0.1% I may only print 1 out of every 100 larger than a 4×6. So, I think I would rather have a clearer image that takes less space on my phone.

    • Jurgis Ŝalna - 9 years ago

      Having smaller ‘buckets’ in CMOS sensors would potentially allow cramming in more pixels while still keeping them as sensitive (i.e., there is more to light sensitivity than pixel size). Neither Apple nor Samsung designs their own CMOS sensors, so both are dependent on the innovations and miniaturisation processes of third parties.

  3. fezvrasta - 9 years ago

    Be prepared to see your friends’ faces split up in blocks thanks to this awesome technology :P

  4. Greg Kaplan (@kaplag) - 9 years ago

    When iOS 7 came out with the parallax feature, I thought it would be cool if the camera did something like this but to generate a photo you could tilt to look around in a little. Add some depth when used as a wallpaper. But that probably wouldn’t have the effect I was thinking of, moving around from the same pivot point.

    • The Lytro Camera lets you do that.

      • Greg Kaplan (@kaplag) - 9 years ago

        I knew about changing focus after the shot, but didn’t know it gave a little 3d tilt to them. The focus thing is awesome. I feel like the 3d part has potential to be gimmicky but then again, the whole parallax on the home screen thing is a gimmick, might as well make it more interesting.

        I wonder if they could have done something where the extra info is used for parallax on a phone, but then on a computer it makes a larger res image.

  5. chrisl84 - 9 years ago

    Apple just patented “photo stitching”? Thought this was a thing for a long ass time already.

  6. sstoy - 9 years ago

    It’s odd that Apple would be granted a patent for sensor-shift high-res imaging considering Hasselblad and Olympus are already doing this. I hope Apple has to pay a licensing fee to use this technology they did nothing to actually develop.
    http://www.wired.com/2015/02/olympus-omd-em5-markii/

    • kevicosuave - 9 years ago

      That’s the problem with articles about patents. They describe the patent in a way that the audience can understand, which is nice, but it often avoids the details that justify the patent. You can have multiple patents that end up with the same result, but do things differently to get there.

    • Mike Knopp (@mknopp) - 9 years ago

      They only have to pay if they actually end up using it. Oddly enough it isn’t infringement to get a patent on something that has already been patented.

      That being said, kevicosuave is right. I wouldn’t be surprised if, were you to look at the Apple patent, it even mentions the patent used by Olympus and Hasselblad as relevant prior art. My guess is that they found some way to improve on it and patented that.

      These patent articles are basically taking very convoluted legal speak and translating it into 6th-grade-level English (that is about the level most news is written at, the last time I checked). So, some details are bound to get lost.

    • Wyatt - 9 years ago

      You can’t get a patent for someone else’s technology. Apple is accomplishing the same or similar results but using different techniques to get there so that is patentable.

Author

Ben Lovejoy

Ben Lovejoy is a British technology writer and EU Editor for 9to5Mac. He’s known for his op-eds and diary pieces, exploring his experience of Apple products over time, for a more rounded review. He also writes fiction, with two technothriller novels, a couple of SF shorts and a rom-com!

