Multi-cam support in iOS 13 allows simultaneous video, photo, and audio capture

In iOS 13, Apple is introducing multi-cam support, allowing apps to simultaneously capture photos, video, audio, metadata, and depth from multiple microphones and cameras on a single iPhone or iPad.

Apple has supported multi-camera capture on macOS since OS X Lion, but until now, hardware limitations prevented it from rolling out the same APIs on iPhone and iPad.

The new feature and APIs in iOS 13 will allow developers to offer apps that, for example, stream video, photos, or audio from the front-facing and rear cameras at the same time.

iOS 13 Multi-cam support w/ AVCapture 

At its presentation of the new feature during WWDC, Apple demoed a picture-in-picture video recording app that captured the user with the front-facing camera while simultaneously recording through the main rear camera.

The demo app also recorded the video and let the user swap between the two cameras on the fly, with the result playable in the Photos app. The feature will also give developers control over the individual cameras, including separate streams from the Back Wide and Back Telephoto cameras of the dual rear camera, as well as the TrueDepth camera, if they choose.
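For developers curious what that looks like in code, here is a minimal sketch of our own (not Apple's demo code) that wires the front and back wide-angle cameras into a single session. The class and method names come from iOS 13's AVFoundation; the structure and error handling are purely illustrative:

```swift
import AVFoundation

// Minimal sketch: stream the front and back cameras at once with the
// new AVCaptureMultiCamSession (iOS 13). A real app would also request
// camera permission, set sample-buffer delegates on the outputs, and
// watch the session's cost properties.
func makeMultiCamSession() -> AVCaptureMultiCamSession? {
    // Multi-cam only runs on supported hardware.
    guard AVCaptureMultiCamSession.isMultiCamSupported else { return nil }

    let session = AVCaptureMultiCamSession()
    session.beginConfiguration()
    defer { session.commitConfiguration() }

    for position in [AVCaptureDevice.Position.back, .front] {
        guard let camera = AVCaptureDevice.default(.builtInWideAngleCamera,
                                                   for: .video,
                                                   position: position),
              let input = try? AVCaptureDeviceInput(device: camera),
              session.canAddInput(input) else { return nil }
        // Add inputs and outputs without implicit connections so each
        // video output is tied to exactly one camera.
        session.addInputWithNoConnections(input)

        let output = AVCaptureVideoDataOutput()
        guard session.canAddOutput(output) else { return nil }
        session.addOutputWithNoConnections(output)

        guard let videoPort = input.ports(for: .video,
                                          sourceDeviceType: camera.deviceType,
                                          sourceDevicePosition: camera.position).first
        else { return nil }
        let connection = AVCaptureConnection(inputPorts: [videoPort], output: output)
        guard session.canAddConnection(connection) else { return nil }
        session.addConnection(connection)
    }
    return session
}
```

Wiring the connections explicitly matters once more than one camera is live; it keeps each output unambiguously bound to a single camera.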

The new multi-cam feature will be supported in iOS 13 on newer hardware only, including the iPhone XS, XS Max, XR, and iPad Pro.

Apple listed a number of supported formats for multi-cam capture (pictured above); as developers will notice, these impose some artificial limitations compared with the cameras' normal capabilities.
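Apps can check those limits per camera: each AVCaptureDevice.Format carries a new isMultiCamSupported flag in iOS 13. A short sketch (the filtering and printout are our own):

```swift
import AVFoundation

// Narrow a camera's formats to those allowed in a multi-cam session.
if let camera = AVCaptureDevice.default(.builtInWideAngleCamera,
                                        for: .video,
                                        position: .back) {
    let multiCamFormats = camera.formats.filter { $0.isMultiCamSupported }
    print("\(multiCamFormats.count) of \(camera.formats.count) formats support multi-cam")
}
```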

Due to power constraints on mobile devices, and unlike on the Mac, iPhones and iPads will be limited to a single multi-cam session: an app can't run multiple sessions across multiple cameras, and multiple apps can't capture from the cameras simultaneously. There will also be a list of supported device combinations dictating which cameras can capture together on a given device.
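Rather than hard-coding a device list, apps can query those combinations at runtime. A minimal sketch using the iOS 13 discovery-session API (supportedMultiCamDeviceSets); the device types queried and the printout are our own illustration:

```swift
import AVFoundation

// Ask the system which camera combinations this particular device can
// drive simultaneously in one AVCaptureMultiCamSession.
let discovery = AVCaptureDevice.DiscoverySession(
    deviceTypes: [.builtInWideAngleCamera, .builtInTelephotoCamera, .builtInTrueDepthCamera],
    mediaType: .video,
    position: .unspecified)

for deviceSet in discovery.supportedMultiCamDeviceSets {
    let names = deviceSet.map { $0.localizedName }.sorted().joined(separator: " + ")
    print("Supported combination: \(names)")
}
```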

It doesn’t appear that Apple itself is using any of the new multi-cam features in the iOS 13 Camera app, but we’d imagine it’s something on the horizon now that support is officially rolling out in AVCapture.

Semantic Segmentation Mattes

Also new for camera capture in iOS 13 are Semantic Segmentation Mattes. In iOS 12, Apple used what it internally calls the Portrait Effects Matte to separate the subject from the background in Portrait Mode photos. In iOS 13, Apple is introducing Semantic Segmentation Mattes to identify skin, hair, and teeth, refining these maps further and exposing them through an API for developers to tap into.
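On the capture side, opting in looks roughly like this. The sketch below uses the iOS 13 AVCapturePhotoOutput matte APIs; treating depth delivery as a prerequisite is our assumption, mirroring how the earlier Portrait Effects Matte behaves:

```swift
import AVFoundation

// Sketch: request Semantic Segmentation Mattes for a photo capture.
func matteSettings(for photoOutput: AVCapturePhotoOutput) -> AVCapturePhotoSettings {
    // Assumption: matte generation rides on top of depth capture.
    photoOutput.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliverySupported

    // Ask for every matte type the hardware can produce
    // (hair, skin, and teeth on supported devices).
    photoOutput.enabledSemanticSegmentationMatteTypes =
        photoOutput.availableSemanticSegmentationMatteTypes

    let settings = AVCapturePhotoSettings()
    settings.enabledSemanticSegmentationMatteTypes =
        photoOutput.enabledSemanticSegmentationMatteTypes
    return settings
}
```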

In its WWDC session, Apple showed the new tech with a demo app that separated the subject in a photo from the background and isolated the hair, skin, and teeth, making it easy to add effects such as face paint and hair color changes (pictured above).
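Reading the mattes back out happens in the photo-capture delegate. A sketch under the same assumptions, with the effect pipeline itself left out:

```swift
import AVFoundation
import CoreImage

// Sketch: pull each matte from the captured photo and wrap it as a
// CIImage mask. Effects like hair recoloring or face paint would be
// built on top of masks like these.
final class MatteReceiver: NSObject, AVCapturePhotoCaptureDelegate {
    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishProcessingPhoto photo: AVCapturePhoto,
                     error: Error?) {
        for type in [AVSemanticSegmentationMatte.MatteType.hair, .skin, .teeth] {
            guard let matte = photo.semanticSegmentationMatte(for: type) else { continue }
            // Each matte is a single-channel mask stored in a pixel
            // buffer; use it to confine a filter to that region.
            let mask = CIImage(cvPixelBuffer: matte.mattingImage)
            print("Got \(type.rawValue) matte: \(mask.extent)")
        }
    }
}
```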

Developers can learn more about multi-cam support and Semantic Segmentation Mattes on Apple's developer website, where it also offers sample code for the demo apps.
