We’ve known since the spring of last year that Amazon Alexa and Google Home smart speakers can eavesdrop on owners, and even phish them via voice. However, new research shows that new malicious apps with these capabilities continue to be approved by both companies.
The two vulnerabilities, demonstrated in videos below, occur because both companies make their speakers smarter by allowing third-party developers to create apps or “skills” for them. Apple’s HomePod is safe because the company doesn’t allow this type of third-party access…
ZDNet reports on the latest examples.
Both Amazon and Google have deployed countermeasures every time, yet newer ways to exploit smart assistants have continued to surface.
The latest ones were disclosed today, after being identified earlier this year by Luise Frerichs and Fabian Bräunlein, two security researchers at Security Research Labs (SRLabs), who shared their findings with ZDNet last week.
Both the phishing and eavesdropping vectors are exploitable via the backend that Amazon and Google provide to developers of Alexa or Google Home custom apps.
These backends provide access to functions that developers can use to customize the commands to which a smart assistant responds, and the way the assistant replies.
The way third-party apps should work is that the microphones are active for only a short time after the smart speaker asks the user a question. For example, if I tell Alexa to ask my supermarket app to add something to the basket, the app will check my order history for the exact product details, then Alexa will tell me what it found and ask me to confirm that’s what I want. It will then activate the Echo Dot’s microphone for a short time while it waits for me to say yes or no. If I don’t reply within a few seconds, the microphone is switched off again.
However, malicious apps can leave the microphone activated — and recording what it hears — for much longer. It’s achieved by using a special string that creates a lengthy pause after a question or confirmation, the mic remaining on during this time.
The “�. ” string can also be used […] for eavesdropping attacks. However, this time, the character sequence is used after the malicious app has responded to a user’s command.
The character sequence is used to keep the device active and record a user’s conversation, which is recorded in logs, and sent to an attacker’s server for processing.
In that way, smart speakers can eavesdrop on anything said while the mic is still on.
Alternatively, the long pause can be used to make an owner think they are no longer interacting with the app. At that point, a phishing attempt can be made.
The idea is to tell the user that an app has failed, insert the “�. ” to induce a long pause, and then prompt the user with the phishing message after a few minutes, tricking the target into believing the phishing message has nothing to do with the previous app with which they just interacted.
For example, in the videos below, a horoscope app triggers an error, but then remains active, and eventually asks the user for their Amazon/Google password while faking an update message from Amazon/Google itself.
This type of attack would not be possible on HomePod because the only way a third-party app can interact with Siri is via Apple’s own APIs. Apps have no direct access.
Check out the demo videos below.
FTC: We use income earning auto affiliate links. More.