Home / Technology / Alexa and Google Home were abused to obtain passwords and phish passwords

Alexa and Google Home were abused to obtain passwords and phish passwords



  Changed image shows human ears sprouting from the Amazon device.

Currently, the privacy threats that Amazon Alexa and Google Home create are common knowledge. Workers for both companies routinely listen to user sounds – recordings that can be kept forever – and the sounds that the devices capture can be used in criminal litigation.

Now there is a new concern: malicious apps developed by third parties and hosted by Amazon or Google. The threat is not just theoretical. Whitehat hackers at Germany's security research labs developed eight apps ̵

1; four Alexa skills "and four Google Home" actions – all of which passed through Amazon or Google's security testing processes. The skills or actions posed as simple apps for checking horoscopes, with the exception of one, Behind the scenes, these "smart spies", as the researchers call them, dropped their eyes on users and fished for their passwords.

"It was always clear that these voice assistants have the consequences of privacy – with Google and Amazon as got your speech, and this may be triggered in the event of an accident, ”said Fabian Bräunlein, senior security consultant at SRLabs. "We are now showing that not only the manufacturers but … even hackers can abuse these voice assistants for infringing on someone's integrity."

The malicious apps had different names and slightly different ways of working, but they all followed similar streams. A user would say a phrase like, "Hi Alexa, ask My Lucky Horoscope give me the horoscope for the Taurus" or "OK ​​Google, ask My Lucky Horoscope give me the horoscope for the Taurus." The interception apps responded with the requested information while the phishing apps gave a false error message. Then the apps gave the impression that they were no longer running when they were actually quietly waiting for the next phase of the attack.

As the following two videos show, the listening apps provided the expected answers and then silenced. In one case, an app became silent as the task was completed, and in another case, an app dropped because the user gave the "stop" command, which Alexa uses to quit apps. But the apps silently logged all conversations within hearing shot from the device and sent a copy to a developer-designed server.

Google Home Eavesdropping.

Amazon Alexa Eavesdropping.

The phishing apps follow a slightly different path by responding with an error message stating that the skill or action is not available in that user's country. They then silence to give the impression that the app is no longer running. After about a minute, the apps use a voice that mimics those used by Alexa and Google Home to incorrectly claim that a device update is available and asks the user for a password to install.

Google Home Phishing. [19659011] Amazon Alexa Phishing.

SRLabs eventually took down all four demo apps. Recently, the researchers developed four German-language apps that worked in a similar way. All eight of them passed inspection by Amazon and Google. The four newer ones were only taken down after researchers privately reported their results to Amazon and Google. As with most skills and actions, users did not need to download anything. Simply saying the right phrases in one unit was enough for the apps to run.

All malicious apps used common building blocks to mask their malicious behaviors. The first was to exploit a deficiency in both Alexa and Google Home when their text-to-speech engines were instructed to speak the character "." (U + D801, point, space). The unpredictable sequence caused both devices to silence even when the apps were still running. The silence gave the impression that the apps had ended, even as they continued to run.

The apps used other tricks to trick users. Speaking of voice apps, "Hey Alexa" and "OK Google" are known as "wake up" words that activate the devices; "My Lucky Horoscope" is an "invocation" phrase used to start a certain skill or action; "give me the horoscope" is an "intention" that tells the app what function to call; and "taurus" is a "slot" value that acts as a variable. After the apps got their first approval, the developers manipulated SRLab's intentions as "stop" and "start" to give them new features that made the apps listen and log conversations.

Others at SRLabs who worked on the project include security researcher Luise Frerichs and Karsten Nohl, the company's principal researcher. In a post documenting the apps, the researchers explained how they developed Alexa phishing skills:

1. Create a seemingly innocent skill that already contains two intentions:
– an intention started by "stop" and copy the stop intention
– an intention started by a certain, commonly used word and save the following words as track values. This intention appears as the fallback intention.

2nd After Amazon's review, change the first intention to say goodbye, but then keep the session open and extend the silence by adding the character sequence "(U + D801, dot, space)" several times to the speech promo.

3. Change the second intention not to respond at all

As the user now tries to finish the skill, they hear a goodbye message, but the skill continues to run for several seconds. If the user starts a sentence that starts with the selected word during this time, the intention will save the sentence as trace values ​​and send them to the attacker.

To develop Google Home interception measures:

1. Create an action and submit it for review.

2nd After review, change the main thesis to end with the Bye earcon sound (by playing a recording using Speech Synthesis Markup Language (SSML)) and set Expect Response to true. This sound is usually understood as signaling that a voice tap is complete. Then add multiple noInputPrompts that consist of only a brief silence using the SSML element or the unpronounceable Unicode character sequence "."

3rd Create a second intention called when an action.intent.TEXT request is received. This intention outputs a short silence and defines several silent noInputPrompts.

After leaving the requested information and playing the headphones, the Google Home device waits for about 9 seconds for voice input. If no one is detected, the unit "ejects a short silence and waits again for user input. If no number is detected within three iterations, the action stops.

When voice input is detected, a second intention is called. This intention consists only of a silent output, again with several silently repeated texts. Each time a number is detected, this intention is called and the reset number is reset.

The hacker receives a complete transcript of the user's subsequent conversations, until there is at least a 30-second interruption of detected speech. (This can be extended by extending the silence period during which interception is paused.)

In this mode, the Google Home Device will also forward all commands preset by "OK Google" (except "stop") to the hacker. Therefore, the hacker can also use this hack to mimic other applications, man-in-the-middle user interaction with the counterfeit measures and launch credible phishing attacks.

SRLabs reported private results on their research to Amazon and Google. In response, both companies removed the apps and said they are changing their approval processes to prevent skills and actions from having similar functions in the future. In a statement, Amazon representatives provided the following statement and FAQ (emphasis added for clarity):

Customer confidence is important to us and we conduct security audits as part of the skill certification process. We quickly blocked the skill in question and set restrictions to prevent and detect this type of skill behavior and reject or remove it when identified.

In the post Q&A:

1) Why is it possible for the skill created by the researchers to get a rough copy of what a customer says after they say "stop" to the skill?

This is no longer possible for skills submitted for certification. We have introduced restrictions to prevent and detect this type of skill behavior and to reject or remove it when identified.

2) Why is it possible for SR Labs to ask skill users to install a fake security update and then ask them to enter a password?

We have introduced restrictions to prevent and detect this type of skill behavior and reject or remove it when identified. This includes preventing skills from asking customers for their Amazon passwords.

It is also important for customers to know that we provide automatic security updates for our devices and will never ask them to share their passwords.

Google Representatives meanwhile wrote:

All actions on Google must comply with our developer policies, and we prohibit and remove all actions that violate these policies. We have reviewed processes to detect the type of behavior described in this report, and we have removed the measures we found from these researchers. We are putting in place additional mechanisms to prevent these problems from arising in the future.

Google did not say what these additional mechanisms are. Against this backdrop, a company representative says a review of all third-party actions available from Google, during which time some may be paused. When the review is complete, measures that have been re-launched will become available.

It is encouraging that Amazon and Google have removed the apps and are stepping up their review processes to prevent similar apps from becoming available. But SRLab's success raises serious concerns. Google Play has a long history of hosting malicious apps that run sophisticated surveillance software – in at least one case, researchers said, so the Egyptian government could spy on its own citizens. Other malicious Google Play apps have stolen users' cryptocurrency and carried secret payloads. These types of apps have routinely slipped through Google's testing process for several years.

There is little or no evidence that third-party apps are actively threatening Alexa and Google Home users now, but the research at SRLabs suggests that the opportunity is not far-reaching. I have long been convinced that the risks with Alexa, Google Home and other apps that always listen outweigh their benefits. SRLab's Smart Spies research only increases my view that most people should not trust these devices.


Source link