
Apple’s new ‘Personal Voice’ feature can create a voice that sounds like you or a loved one in just 15 minutes

As part of its preview of iOS 17 accessibility updates coming this year, Apple announced a pair of new features called Live Speech and Personal Voice. Live Speech allows users to type what they want to say and have it spoken out loud.

Personal Voice, on the other hand, lets people who are at risk of losing their ability to speak, such as those with a recent diagnosis of ALS, create and save a voice that sounds like them.

With the first beta of iOS 17 now available, you can try out Personal Voice for yourself.

Here’s how Apple describes the new Live Speech feature coming later this year:

With Live Speech on iPhone, iPad, and Mac, users can type what they want to say to have it be spoken out loud during phone and FaceTime calls as well as in-person conversations. Users can also save commonly used phrases to chime in quickly during lively conversation with family, friends, and colleagues. Live Speech has been designed to support millions of people globally who are unable to speak or who have lost their speech over time.

Building on Live Speech is something Apple calls Personal Voice, an incredibly powerful feature that Apple says is designed for users at risk of losing their ability to speak. This includes people with a recent diagnosis of ALS (amyotrophic lateral sclerosis), a disease that progressively impacts speaking ability over time.

Using Personal Voice, users will be prompted to read along with a randomized set of text prompts to record 15 minutes of audio on iPhone or iPad. Using on-device machine learning, the iPhone or iPad can then create a voice that sounds like them.

This voice feature then integrates with Live Speech, so users can speak with their Personal Voice in FaceTime calls and during in-person conversations.

Apple’s announcement:

For users at risk of losing their ability to speak — such as those with a recent diagnosis of ALS (amyotrophic lateral sclerosis) or other conditions that can progressively impact speaking ability — Personal Voice is a simple and secure way to create a voice that sounds like them. 

Users can create a Personal Voice by reading along with a randomized set of text prompts to record 15 minutes of audio on iPhone or iPad. This speech accessibility feature uses on-device machine learning to keep users’ information private and secure, and integrates seamlessly with Live Speech so users can speak with their Personal Voice when connecting with loved ones.
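For developers, Apple has also said that third-party apps can speak with a user's Personal Voice through the existing AVSpeechSynthesizer framework, gated behind an explicit permission prompt. Here's a minimal Swift sketch of how that opt-in flow might look on iOS 17; the spoken phrase is just a placeholder:

```swift
import AVFoundation

// Keep a reference so the synthesizer isn't deallocated mid-speech.
let synthesizer = AVSpeechSynthesizer()

// Ask for permission to use the user's Personal Voice (iOS 17+).
// The user must have already created one in Settings > Accessibility.
AVSpeechSynthesizer.requestPersonalVoiceAuthorization { status in
    guard status == .authorized else { return }

    // Personal Voices appear alongside the system voices,
    // tagged with the .isPersonalVoice trait.
    guard let personalVoice = AVSpeechSynthesisVoice.speechVoices()
        .first(where: { $0.voiceTraits.contains(.isPersonalVoice) })
    else { return }

    // Speak a phrase in the user's own synthesized voice.
    let utterance = AVSpeechUtterance(string: "I love you.")
    utterance.voice = personalVoice
    synthesizer.speak(utterance)
}
```

Because synthesis happens on-device, the same privacy posture Apple describes applies to third-party use as well; the voice never leaves the user's devices unless they opt in to encrypted iCloud sync.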

Essentially, this feature lets people create a synthetic voice on their iPhone just by reading through Apple’s pre-crafted prompts. Philip Green, who was diagnosed with ALS in 2018 and is a board member and advocate at the Team Gleason nonprofit, praised Apple’s efforts in a statement on Tuesday:

“At the end of the day, the most important thing is being able to communicate with friends and family,” said Philip Green, board member and ALS advocate at the Team Gleason nonprofit, who has experienced significant changes to his voice since receiving his ALS diagnosis in 2018. “If you can tell them you love them, in a voice that sounds like you, it makes all the difference in the world — and being able to create your synthetic voice on your iPhone in just 15 minutes is extraordinary.”

Apple says that these new accessibility features will start rolling out later this year. In addition to Live Speech and Personal Voice, Apple has announced a number of other new accessibility features as well.

9to5Mac’s Take

Apple has always been a leader in accessibility features, and today’s announcements are just the latest example of that. But more so than ever before, these features resonate with me.

My mom passed away in December after a short seven-month battle with ALS. Her voice was one of the first things she lost. In fact, by the time she was formally diagnosed with ALS, her voice was already mostly gone.

Just reading this press release moved me to tears. The Personal Voice feature gives me hope that people with ALS and other speech-impacting conditions might suffer ever-so-slightly less. I wish this had been a feature when our mom was here, but I’m thrilled it’s something on the horizon for others.

Top comment by Blorft


My deepest sympathies to you and your family, Chance.

My grandpa was diagnosed with Parkinson's several years ago, and while the symptoms have developed much more slowly than ALS, he has gradually lost the ability to speak clearly or coherently even when it's evident that he has coherent thoughts. It's caused him a lot of frustration and even led to bouts of depression, as he feels he's losing the basic ability to express himself to those he cares about.

It's easy for many of us to think that accessibility features only apply to other people, but it's worth remembering: for many of us, it's not a question of if we will eventually need accessibility features, but when we will need them. Whether it's a loss of vision, hearing, speech, mobility, or some other faculty we take for granted, time comes for all of us, and we'll all likely need to use some of these features at some point in our lives. Apple deserves a lot of credit for the work they do in pushing boundaries on accessibility.


I’d even go as far as to say that I think everyone should spend 15 minutes setting up the Personal Voice feature once it’s available. As my sisters and I learned with our mom, your ability to speak can be taken away in a matter of weeks, and it might be too late at that point to set up something like Personal Voice.

While there are certainly some questions and specific details I’m waiting on Apple to answer about this feature, if there is one company I trust to get something like Personal Voice right, it’s Apple. Unlike other voice synthesis tools on the market, which require you to upload sample data of your voice, Personal Voice does everything entirely on-device. There is no cloud processing whatsoever. Users will be able to opt in to syncing to other devices using end-to-end iCloud encryption.

Coincidentally, May happens to be ALS Awareness Month. I implore you to learn more about it via the ALS Association’s website or via Team Gleason’s website.

Follow Chance: Twitter, Instagram, and Mastodon

FTC: We use income earning auto affiliate links. More.

You’re reading 9to5Mac — experts who break news about Apple and its surrounding ecosystem, day after day. Be sure to check out our homepage for all the latest news, and follow 9to5Mac on Twitter, Facebook, and LinkedIn to stay in the loop. Don’t know where to start? Check out our exclusive stories, reviews, how-tos, and subscribe to our YouTube channel.


Author

Chance Miller

Chance is an editor for the entire 9to5 network and covers the latest Apple news for 9to5Mac.

Tips, questions, typos to chance@9to5mac.com
