Earlier this week Apple’s director of AI research Russ Salakhutdinov made an appearance at a machine-learning conference to announce that Apple will allow its researchers to freely publish their findings, a rare move for the normally secretive company. Today a new report from Quartz offers more details from Apple’s talk at the NIPS 2016 conference, including slides that detail AI research related to “volumetric detection of LiDAR” and “prediction of structured outputs” next to visuals of vehicles and more.
The talk also highlighted other AI research and problems the company is exploring:
Apple, unsurprisingly, is working on a lot of the same problems as other companies that are exploring machine learning: recognizing and processing images, predicting user behavior and events in the physical world, modeling language for use in personal assistants, and trying to understand how to deal with uncertainty when an algorithm can’t make a high-confidence decision… One presentation slide that summarized the company’s research featured two pictures of cars, to illustrate “volumetric detection of LiDAR” and “prediction of structured outputs.”
The mention of LiDAR and other technologies, often key components of autonomous and self-driving vehicle systems, is notable, as Apple earlier this month confirmed its ambitions to work on autonomous systems to transform ‘the future of transportation’. The confirmation came in the form of a letter to the U.S. National Highway Traffic Safety Administration (NHTSA) in which the company said it was “excited about the potential of automated systems in many areas, including transportation.”
It’s thought Apple’s research in the space is related to its secretive car project, which as of October had reportedly shelved its hardware ambitions to focus on an autonomous self-driving platform ‘for now’. A separate report back in September said the company had been testing fully autonomous vehicles on closed routes as it looked to reboot the project.
In another part of the talk, Salakhutdinov reportedly touted Apple’s image-recognition performance, noting that it was able to “process twice as many photos per second as Google’s, or 3,000 images per second versus Google’s 1,500 per second, using roughly one third of the GPUs.”
The comparison was made against algorithms running on Amazon Web Services, a standard in cloud computing… Another slide focused on Apple’s ability to build neural networks that are 4.5 times smaller than the originals with no loss in accuracy, and twice the speed…
More from Apple’s talk at Quartz.