Designers teach AI models to generate better UI in new Apple study
Apple continues to explore how generative AI can improve app development pipelines. Here's what the company is looking at.
A group of Apple and Tel Aviv University researchers figured out a way to speed up AI-based text-to-speech generation without sacrificing intelligibility. Here's how they did it.
Apple researchers have published a study about Manzano, a multimodal model that combines visual understanding and text-to-image generation while significantly reducing the performance and quality trade-offs of current implementations. Here are the details.
Apple hasn’t updated all of its apps for Liquid Glass just yet, but there’s one more joining the list today.
Apple researchers have developed an AI model that dramatically improves extremely dark photos by integrating a diffusion-based image model directly into the camera’s image processing pipeline, allowing it to recover detail from raw sensor data that would normally be lost. Here’s how they did it.
Building on a previous model called UniGen, a team of Apple researchers is showcasing UniGen 1.5, a system that can handle image understanding, generation, and editing within a single model. Here are the details.
Apple's new model, called SHARP, can reconstruct a photorealistic 3D scene from a single image in under a second. Here are some examples.
A few days ago, we looked into how Apple could one day use brain wave sensors in AirPods to measure sleep quality and even detect seizures.
Now, a new paper shows how the company is exploring deeper cardiac health insights with the help of AI. Here are the details.
A new study by Apple researchers presents a method that lets an AI model learn one aspect of the structure of brain electrical activity without any annotated data. Here’s how.
Today, Apple published the list of studies it will present at the 39th annual Conference on Neural Information Processing Systems (NeurIPS) in San Diego. Here are the details.
Apple researchers have published a study that looks into how LLMs can analyze audio and motion data to get a better overview of the user’s activities. Here are the details.
Apple has released Pico-Banana-400K, a highly curated 400,000-image research dataset which, interestingly, was built using Google’s Gemini-2.5 models. Here are the details.
Apple has published three interesting studies that offer some insight into how AI-based development could improve workflows, quality, and productivity. Here are the details.
In a new study, Apple researchers present a diffusion model that can generate text up to 128 times faster than its counterparts. Here's how it works.
Today, Apple confirmed its participation in the 2025 International Conference on Computer Vision (ICCV), which will take place from October 19 to 23 in Honolulu. Here are the studies the company will present.
Google DeepMind's work with AlphaFold has been nothing short of a miracle, but it is computationally expensive. With that in mind, Apple researchers set out to develop an alternative AI method for predicting the 3D structure of proteins, and it shows promise. Here are the details.
A few months ago, Apple hosted a two-day event featuring talks and publications on the latest advancements in natural language processing (NLP). Today, the company published a post with multiple highlights and all the studies presented. Here's the roundup.
In a new study co-authored by Apple researchers, an open-source large language model (LLM) saw big performance improvements after being told to check its own work by using one simple productivity trick. Here are the details.
Apple researchers have developed an adapted version of the SlowFast-LLaVA model that beats larger models at long-form video analysis and understanding. Here’s what that means.
A few months ago, Apple hosted the Workshop on Privacy-Preserving Machine Learning, which featured presentations and discussions on privacy, security, and other key areas in responsible machine learning development. Now, it has made the presentations public. Here are three highlights.
In a new study, a group of Apple researchers describe an interesting approach to getting an open-source model to teach itself to write good SwiftUI user interface code. Here's how they did it.
A new research paper from Apple details a technique that speeds up large language model responses while preserving output quality. Here are the details.
Today, Apple published select recordings from its 2024 Workshop on Human-Centered Machine Learning (HCML) on its Machine Learning Research blog, highlighting its work on responsible AI development.
A new Apple-backed study, in collaboration with Aalto University in Finland, introduces ILuvUI: a vision-language model trained to understand mobile app interfaces from screenshots and natural language conversations. Here's what that means, and how they did it.