Adobe's Maria Yap sharing a brief history of photography at Adobe MAX 2017.

This year’s Adobe MAX conference was dominated by chatter about machine learning, artificial intelligence, and specifically, Adobe Sensei. The importance of these emerging technologies was repeatedly reinforced not only in the conference’s opening keynote, but also on the show floor and in sneak peeks of upcoming products.

9to5Mac sat down at MAX with Tom Hogarty, Adobe’s Director of Photography Product Management, to talk about the rise of computational photography and how products like the iPhone and Mac have played a role in redefining how we think about photos.


Sensei is described by Adobe as a creative assistant, technology that learns from each user and is trained by hundreds of millions of assets in the Creative Cloud library. The company’s vision for the future of Sensei is wide, but today the capabilities are being used to help creatives become better photographers.

“I always think of my photos as the intersection of who, what, where, or when. And we’ve got the what and the where and the when covered. We’re still working on the Adobe Sensei technology for face recognition and detection, but the management side of it is powerful,” Hogarty said, speaking about “Best Photos”, a new AI-powered feature in Lightroom CC that automatically curates what it believes are your best shots in a group based on aesthetic ranking.

“The end goal is that we help photographers become better photographers. Everyone wants their images they share on Instagram or social networks to be persuasive and powerful, and being a better photographer helps them do that. That’s where I feel like Sensei can come in and help people interested in photography become better.”

Adobe believes that Sensei will succeed where others cannot because of the company’s user base of professional photographers, who provide deep insight into the photography process as Sensei learns. “Because we’re working with some of the world’s best creatives, we can employ machine learning to learn from their creativity and their skills,” Hogarty said. “So one of the things that I’m excited about is looking at ways we can learn from a customer’s own editing capabilities and give that back to them. What if Creative Cloud could start to learn from that work, and start to shortcut the effort I have to put into my images by creating something like an auto that’s tailored? It’s not something we’re shipping right now, but if you look into the future you can imagine a place where we can use machine learning like Sensei to take on some of the heavy lifting of editing.”

While Adobe has excelled in terms of editing images in post-production, Apple has made strides in improving the capture experience, with features like the TrueDepth camera system in the upcoming iPhone X. Hogarty claims that Adobe isn’t standing still on this front, either. “It’s funny, the depth map that the new iPhone is using is something we had actually demoed many years ago, I think back in 2006, because we had done a lot of research on depth maps and plenoptic cameras and imaging,” he said. “One of the benefits of us being on the capture device in terms of the smartphone is that we can start to leverage some of that. We’ve already started doing that using the DNG file format on iOS and Android to get the highest quality image, but then taking it a step further, we can do HDR capture that’s a burst of three DNG files that are then merged into a single, truly high dynamic range file. At its heart, that is computational photography. Now as we’re sitting on the smartphone capture device, we can become part of the capture experience, just getting closer and closer to the metal of the beginning of the photography process.”

Hogarty was quick to mention that Adobe has no plans to enter the world of novelty face filters and Animoji, adding, “Maybe I should rethink this, but we tend to keep a fairly traditional perspective on photography. Lightroom isn’t going into the world of Snapchat with fuzzy faces and unicorn stuff. If that’s a direction customers want us to take, we can explore that. But right now, I think our core photography customer is more traditional, non-manipulated AR-style.”

Adobe’s CEO Shantanu Narayen addressed concerns surrounding artificial intelligence onstage at MAX, saying that “AI is not a substitute for creativity.” Everyone I spoke to from Adobe appeared to firmly agree, but Hogarty in particular seemed deeply committed to maintaining a balance between creativity and technology.

“There will be a push-pull. How much do modern light meters in cameras take some of the creative process out of the capture experience? The flipside of that is you’re now enabling a creative person to think less about the technology or the technical aspects of the profession and more about the craft and representing what they see through a photograph. I think we’re always going to have a healthy push-pull,” he said. “That’s the thing I love about photography – it’s never stood still, because photography is inextricably linked to technology. From plates, to rolled film, to color positive film, to color negative film. Each one of those transitions gave people pause and thought about what photography really means. I think it’s awesome, because what we’ve seen is more democratization, lower barriers to entry for people with a passion about photos.”

As photography continues to change, so do the tools that creatives use. Many professionals have expressed concern in recent years about a shifting focus away from the desktop as customers flock to mobile devices. Adobe, too, has recognized this shift. “It’s a trend I’m watching,” Hogarty said. “I do think because this [the smartphone] is the capture device, and this is also a great platform for social and sharing communication, that the role of the desktop could diminish, but I think we need to keep investing in all platforms because at the end of the day, I think about my content consumption. I definitely watch less TV on TV, but it hasn’t changed my attitude towards content in general.”

The upcoming iMac Pro and promised new Mac Pro have kept the hope of a desktop renaissance alive for now, but it’s up to companies like Adobe to optimize their software to take full advantage of powerful new hardware.

When asked about enhanced support for Apple’s upcoming products, Hogarty said, “I can’t comment on anything specifically, but you know one of the things that’s always been – and this goes back in Photoshop’s history – is to squeeze every last drop of power and capability out of current hardware. We’re not shipping a web browser, we’re not loading web pages. We’re working with high-resolution image files, so we are always looking at what Apple is doing, what Microsoft is doing, other Windows hardware, iOS hardware, Android hardware, to eke out every last drop. I think you can see that when we added DNG HDR capture, we had to limit the number of devices we could support, because it really was squeezing the most out of the latest hardware.”

Powerful iOS hardware has paved the way for a new market of mobile creative applications, like Affinity Photo, Pixelmator, and Enlight, all of which compete directly with Adobe’s own products. Hogarty reiterated that Adobe’s unique customer relationship is what will allow them to remain competitive in this growing market. “There is no doubt it is a competitive and fragmented market on mobile, and so the onus is on us to use our knowledge of the photography industry, our work with photographers over the decades to build a solution that meets the needs of people interested in photography. The first step was extending that workflow from the desktop to mobile for traditional desktop photographers, but we’re seeing a whole influx of new photographers who are starting more on mobile, and we’re winning the hearts and minds there.”

You can catch up on all of Adobe’s announcements at MAX 2017 in our guide.



About the Author

Michael Steeber