A group of students from UC Berkeley has demonstrated how malicious commands to Siri, Google Assistant and Alexa can be hidden in recorded music or innocuous-sounding speech.
Simply playing the tracks over the radio, in a streaming music track, or in a podcast could allow attackers to take control of a smart home …
The NY Times reports that the work builds on research that began in 2016.
Over the past two years, researchers in China and the United States have begun demonstrating that they can send hidden commands that are undetectable to the human ear to Apple’s Siri, Amazon’s Alexa and Google’s Assistant. Inside university labs, the researchers have been able to secretly activate the artificial intelligence systems on smartphones and smart speakers, making them dial phone numbers or open websites. In the wrong hands, the technology could be used to unlock doors, wire money or buy stuff online — simply with music playing over the radio.
The 2016 research demonstrated commands hidden in white noise, but this month the students managed to do the same thing with music and spoken text.
By making slight changes to audio files, researchers were able to cancel out the sound that the speech recognition system was supposed to hear and replace it with a sound that would be transcribed differently by machines while being nearly undetectable to the human ear […]
They were able to hide the command, “O.K. Google, browse to evil.com” in a recording of the spoken phrase, “Without the data set, the article is useless.” Humans cannot discern the command.
The Berkeley group also embedded the command in music files, including a four-second clip from Verdi’s “Requiem.”
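The mechanics are essentially an adversarial-example attack on the speech-recognition model: a tiny, carefully optimized perturbation is added to the audio so that the machine's transcription changes while a listener still hears the original clip. The sketch below shows the general shape of such an optimization in PyTorch; asr_model, target_ids and the loop parameters are placeholders for illustration, not the researchers' actual code.

```python
# Illustrative sketch: optimize a small perturbation "delta" so that the
# recognizer transcribes a chosen hidden command from x + delta, while
# keeping delta quiet enough to be near-imperceptible to a listener.
import torch

def craft_perturbation(asr_model, x, target_ids, eps=0.01, steps=1000, lr=1e-3):
    """x: waveform tensor in [-1, 1]; target_ids: token ids of the hidden command."""
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    ctc = torch.nn.CTCLoss(blank=0)

    for _ in range(steps):
        adv = torch.clamp(x + delta, -1.0, 1.0)        # keep a valid waveform
        log_probs = asr_model(adv)                     # (time, batch=1, classes)
        loss = ctc(log_probs,
                   target_ids.unsqueeze(0),
                   torch.tensor([log_probs.size(0)]),
                   torch.tensor([target_ids.numel()]))
        opt.zero_grad()
        loss.backward()
        opt.step()
        # keep the perturbation small so a human still hears the original audio
        with torch.no_grad():
            delta.clamp_(-eps, eps)

    return torch.clamp(x + delta, -1.0, 1.0).detach()
```

The key requirement is being able to compute (or approximate) gradients through the target recognizer, which is part of why these demonstrations have so far come out of research labs rather than the wild.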
Similar techniques have been demonstrated using ultrasonic frequencies.
Researchers at Princeton University and China’s Zhejiang University demonstrated that voice-recognition systems could be activated by using frequencies inaudible to the human ear. The attack first muted the phone so the owner wouldn’t hear the system’s responses, either.
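That ultrasonic approach works differently: rather than perturbing audible audio, the command is amplitude-modulated onto a carrier above the range of human hearing, and nonlinearities in the device's microphone hardware demodulate it back into the audible band, so the assistant "hears" a command the owner cannot. A minimal sketch of the modulation step might look like this, where the sample rate and carrier frequency are illustrative assumptions:

```python
# Illustrative sketch of "inaudible" command delivery: AM-modulate a recorded
# voice command onto an ultrasonic carrier. The emitted signal sits around
# carrier_hz, largely above human hearing.
import numpy as np

def modulate_ultrasonic(command, sample_rate=192_000, carrier_hz=25_000):
    """command: mono waveform in [-1, 1], already resampled to sample_rate."""
    t = np.arange(len(command)) / sample_rate
    carrier = np.cos(2 * np.pi * carrier_hz * t)
    # Standard AM: the command's spectrum becomes sidebands around carrier_hz.
    return 0.5 * (1.0 + command) * carrier
```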
The Berkeley researchers say that there is no indication of the attack method being used in the wild, but that could easily change.
Nicholas Carlini, a fifth-year Ph.D. student in computer security at U.C. Berkeley and one of the paper’s authors, [said that] while there was no evidence that these techniques have left the lab, it may only be a matter of time before someone starts exploiting them. “My assumption is that the malicious people already employ people to do what I do,” he said.
Apple said that Siri has protections in place that limit the opportunities to execute this type of attack.
Apple said its smart speaker, HomePod, is designed to prevent commands from doing things like unlocking doors, and it noted that iPhones and iPads must be unlocked before Siri will act on commands that access sensitive data or open apps and websites, among other measures.