More from Apple’s AI talk: LiDAR research & beating Google’s image recognition algorithm

Earlier this week, Apple's director of AI research, Russ Salakhutdinov, appeared at a machine learning conference and announced that Apple will allow its researchers to freely publish their findings, a rare move for the normally secretive company. Today, a new report from Quartz offers more details from Apple's talk at the NIPS 2016 conference, including slides detailing AI research related to "volumetric detection of LiDAR" and "prediction of structured outputs" alongside visuals of vehicles and more.

The talk also highlighted other AI research and problems the company is exploring:

Apple, unsurprisingly, is working on a lot of the same problems as other companies that are exploring machine learning: recognizing and processing images, predicting user behavior and events in the physical world, modeling language for use in personal assistants, and trying to understand how to deal with uncertainty when an algorithm can’t make a high-confidence decision… One presentation slide that summarized the company’s research featured two pictures of cars, to illustrate “volumetric detection of LiDAR” and “prediction of structured outputs.”

The mention of LiDAR and other technologies, often key components of autonomous and self-driving vehicle systems, is notable, as Apple earlier this month confirmed its ambitions to work on autonomous systems to transform 'the future of transportation'. The confirmation came in the form of a letter to the U.S. National Highway Traffic Safety Administration (NHTSA) in which the company said it was "excited about the potential of automated systems in many areas, including transportation."

It's thought Apple's research in the space is related to its secretive car project, which as of October had reportedly paused work on its own vehicle hardware to focus on an autonomous self-driving platform 'for now'. A separate report back in September said the company had been testing fully autonomous vehicles on closed routes as it looked to reboot the project.

In another part of the talk, Salakhutdinov reportedly detailed Apple's superiority in image recognition, noting that it was able to "process twice as many photos per second as Google's (pdf), or 3,000 images per second versus Google's 1,500 per second, using roughly one third of the GPUs."

The comparison was made against algorithms running on Amazon Web Services, a standard in cloud computing… Another slide highlighted Apple's ability to build neural networks that are 4.5 times smaller than the originals, with no loss in accuracy and twice the speed…
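Taken at face value, those two figures compound: twice the throughput on roughly a third of the hardware works out to about a six-fold per-GPU advantage. Here is a quick sanity check of that arithmetic in Python; note that only the ratios were reported, so the baseline GPU count below is a hypothetical number chosen purely for illustration.

    # Sanity-check the throughput claim from Apple's slide.
    # Reported figures: 3,000 vs. 1,500 images/sec, on "roughly one third" the GPUs.
    # The absolute GPU counts are hypothetical; only the ratios were reported.
    apple_imgs_per_sec = 3_000
    google_imgs_per_sec = 1_500

    google_gpus = 90               # hypothetical baseline fleet size
    apple_gpus = google_gpus / 3   # "roughly one third of the GPUs"

    apple_per_gpu = apple_imgs_per_sec / apple_gpus      # 100.0 images/sec/GPU
    google_per_gpu = google_imgs_per_sec / google_gpus   # ~16.7 images/sec/GPU

    print(f"Implied per-GPU advantage: {apple_per_gpu / google_per_gpu:.1f}x")
    # -> Implied per-GPU advantage: 6.0x

The choice of baseline doesn't change the result: the per-GPU ratio reduces to 2x the throughput times 3x fewer GPUs, or 6x, regardless of the actual fleet sizes.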

More from Apple’s talk at Quartz.


