Apple has acquired Q.ai in a deal estimated at around $2 billion. Though there’s still a lot we don’t know, some of the details surrounding Q.ai’s work have me very excited for the future of Apple’s AI and Siri offerings.
Apple’s latest acquisition, Q.ai, specializes in understanding ‘silent’ voice input

Apple acquires companies all the time, but it’s very rare for the company to make a splashy, expensive purchase like Q.ai.
The reported $2 billion purchase price makes Q.ai the second-biggest acquisition in Apple’s history, trailing only the $3 billion Beats deal from over a decade ago.
Yet despite the big price tag, there’s a lot about Q.ai’s work that’s shrouded in mystery.
As 9to5Mac Editor-in-Chief Chance Miller wrote, the company has “built machine-learning tech for audio and ‘silent’ voice input.”
Its website has the tagline: “In a world full of noise we craft a new kind of quiet.”
Israeli technology site Geektime dug into patent filings to uncover Q.ai’s work. Here’s a translated excerpt from its report:
According to its patent applications, the company appears to be working on reading what is being said not from the voice, but with optical sensors that detect muscle and skin movements in the face and translate them into words or commands. Some of the patents describe a headset that also tracks the user’s cheek and jaw, and would apparently let you talk to Siri, Apple’s voice assistant, using only lip movements.
This technology is expected to be integrated with smart glasses and/or earbuds such as AirPods.
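To make the idea concrete, here’s a rough sketch of what a pipeline like the one the patents describe might look like in code: sample facial movement from optical sensors, summarize a short window of samples, and map that pattern to a word or command. Everything here is hypothetical; the types, thresholds, and vocabulary are invented purely for illustration and don’t reflect anything Apple or Q.ai has actually built.

```swift
import Foundation

// Hypothetical silent-speech sketch, loosely following the patent description:
// optical sensors watch skin and muscle movement around the cheek and jaw,
// and a classifier maps those movement patterns to words or commands.
// None of these types correspond to real Apple or Q.ai APIs.

/// One reading from an optical sensor watching a patch of skin.
struct MovementSample {
    let timestamp: TimeInterval
    let cheekDisplacement: Double   // millimeters relative to a resting baseline
    let jawDisplacement: Double
}

/// A tiny feature vector summarizing a short window of samples.
struct MovementFeatures {
    let meanCheek: Double
    let meanJaw: Double
    let peakJaw: Double
}

func extractFeatures(from window: [MovementSample]) -> MovementFeatures {
    let cheeks = window.map(\.cheekDisplacement)
    let jaws = window.map(\.jawDisplacement)
    let count = Double(max(window.count, 1))
    return MovementFeatures(
        meanCheek: cheeks.reduce(0, +) / count,
        meanJaw: jaws.reduce(0, +) / count,
        peakJaw: jaws.max() ?? 0
    )
}

/// Stand-in for the real model: a trained network would map features to a
/// rich vocabulary. Here, simple thresholds pick between two canned commands.
func classify(_ features: MovementFeatures) -> String? {
    if features.peakJaw > 4.0 && features.meanCheek > 1.0 {
        return "What's the weather today?"
    } else if features.peakJaw > 2.0 {
        return "Play music"
    }
    return nil // no utterance detected
}

// Example: a fabricated window of sensor samples producing one command.
let window = (0..<20).map { i in
    MovementSample(timestamp: Double(i) * 0.01,
                   cheekDisplacement: 1.2,
                   jawDisplacement: Double(i % 5))
}
if let command = classify(extractFeatures(from: window)) {
    print("Recognized silent command: \(command)")
}
```

The interesting design question is the last step: instead of thresholds, the real system would presumably feed those movement features into a machine-learning model trained on paired facial-movement and speech data, which is exactly the kind of audio and “silent” voice work Q.ai is reported to have done.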
And if this early reporting is accurate, Apple may be well on its way to solving my biggest AI problem.
Why the Q.ai acquisition could be huge for AI and Siri

Like millions of others around the world, I’ve found my use of AI chatbots increasing significantly over the last couple of years.
When I have a question about something, I’ll quickly go to ChatGPT and/or Google Gemini, among others.
LLM-based chatbots all have their issues, including giving faulty information at times. But for the most part, I’ve found them tremendously useful time-savers.
When I use these AI chatbots, however, I almost never interact using my voice.
I’m often around other people, whether that’s my family at home or random strangers out on the street or at a coffee shop where I’m working.
As a result, I always type in my AI requests. But using the iOS system keyboard can feel a bit clunky at times and slow me down. It would be so much quicker and easier to just speak my queries, were it not for the social hangup.
But if a future version of Siri can understand facial movements and barely audible whispers, that will unlock a whole new world of AI possibilities.

Whether I’m at home or out and about, I’ll be able to speak near-silently to Siri and have it understand me. No need to pull out my iPhone first. No need to type in a question.
I can use AI assistance whenever I need it, all while staying engaged with the world around me and without becoming “that guy” who’s talking out loud to himself.
There are lots of question marks concerning how this might actually work. I assume AirPods with cameras and/or Apple Glasses will be involved.
If the future of computing involves ever-present AI chatbots, then Q.ai’s tech could be a key part of that.
How do you primarily interact with AI chatbots today? Does Q.ai’s tech sound appealing to you? Let us know in the comments.