OyaYansa Posted September 7, 2017

The traditional methods of data theft seem to be outdated for data thieves, who are always looking for new ways to get at it. And new ways there are, judging by how easy it is to hack virtual assistants using ultrasonic voice commands.

Security researchers at Zhejiang University in China have discovered a flaw: using high frequencies that are inaudible to humans but still registered by electronic microphones, it is possible to issue hidden commands to virtual assistants and steal data through them. With a technique called DolphinAttack they were able, for example, to activate Google Now and put a phone into airplane mode, manipulate the navigation system of an Audi Q3, and more. At least six virtual assistants available on the market have been victims: Siri (Apple), Google Now, Alexa (Amazon), S Voice (Samsung), Cortana (Microsoft) and HiVoice (Huawei).

Being electronic devices, microphones have a small, thin membrane that vibrates in response to the changes in air pressure caused by sound waves. Human hearing tops out at roughly 20,000 Hz; above that we hear nothing. By default, the microphone software discards any signal above this frequency, even though the hardware still technically detects it; this is done by a low-pass filter.

(Figure: frequency of voice commands made by a human being at ultrasonic frequencies.)

A perfect microphone would vibrate at, and only at, the frequencies present in the input signal. In reality, the membrane is also subject to harmonics, which means that, for example, a 400 Hz tone also produces responses at 200 Hz and 800 Hz. These responses tend to be weaker than the original vibration. This is what makes it possible to get a device to record a 100 Hz tone without that sound ever being emitted: generate a sufficiently powerful 800 Hz tone, and a 100 Hz component appears among its subharmonics inside the microphone alone. People hear the original tone but are unaware that the device has recorded something more.

Building on these principles, the Chinese researchers determined that most of the microphones used in voice-activated devices are subject to this harmonic effect. To prove it, they created a target tone at a much higher, ultrasonic frequency, which was able to recreate lower tones of between 500 and 1,000 Hz on the main voice recognition platforms.

"DolphinAttack's voice commands, while totally inaudible and therefore imperceptible to humans, can be received by the audio hardware of the devices and correctly understood by speech recognition systems. We have validated DolphinAttack on the major speech recognition systems, including Siri, Google Now, Samsung S Voice, Huawei HiVoice, Cortana and Alexa," the researchers said, illustrating the process in an image accompanying the paper.

They were able to run a series of voice commands ranging from familiar phrases (such as "OK Google") to multi-word commands ("unlocking the back door"). Tests focused on Siri and Apple devices, but also covered Google Now on two Nexus phones, S Voice on a Galaxy S6 Edge, HiVoice on a Honor 9, Cortana on a ThinkPad T440p running Windows 10, and Alexa on an Amazon Echo. The results showed that it is possible to hack virtual assistants this way, but only from less than 5 feet away.
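To make the principle concrete, here is a minimal simulation sketch of the demodulation effect usually described for DolphinAttack: a voice-band tone is amplitude-modulated onto an ultrasonic carrier, and a slightly nonlinear microphone response recreates the tone inside the audible band. The 1,000 Hz "command" tone, the 30 kHz carrier and the quadratic nonlinearity are illustrative assumptions, not the researchers' actual parameters.

```python
# Toy simulation of the DolphinAttack principle: a voice-band tone is
# amplitude-modulated onto an ultrasonic carrier; a nonlinear microphone
# response demodulates it back into the audible band.
import numpy as np

fs = 192_000                      # sample rate high enough to represent ultrasound
t = np.arange(0, 0.1, 1 / fs)     # 100 ms of signal

f_voice = 1_000                   # "command" tone in the voice band (assumed)
f_carrier = 30_000                # ultrasonic carrier, inaudible to humans (assumed)

baseband = np.cos(2 * np.pi * f_voice * t)
transmitted = (1 + baseband) * np.cos(2 * np.pi * f_carrier * t)  # AM signal

# Model the microphone membrane/amplifier as slightly nonlinear: the
# quadratic term produces sum/difference frequencies, including a
# component back at f_voice (self-demodulation).
recorded = transmitted + 0.1 * transmitted ** 2

# Inspect the spectrum below 20 kHz, i.e. what survives the low-pass filter.
spectrum = np.abs(np.fft.rfft(recorded))
freqs = np.fft.rfftfreq(len(recorded), 1 / fs)
audible = freqs < 20_000
peak = freqs[audible][np.argmax(spectrum[audible][1:]) + 1]  # skip the DC bin
print(f"strongest audible component: {peak:.0f} Hz")
```

Running this prints a strongest audible component of about 1,000 Hz, even though nothing at that frequency was ever transmitted through the air, which is the essence of the attack.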
Even so, there are several reasons not to be too alarmed. First, DolphinAttack is simple to defeat: just don't leave the speech recognition interface listening when you don't need it. Second, if that precaution is forgotten, many virtual assistants already restrict access to sensitive functions such as contacts, apps and websites. Third, attackers must be very close to the device for the attack to work. Still, it is advisable not to lower your guard against new methods of hacking virtual assistants with ultrasonic voice commands.
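On the defensive side, and purely as an illustrative sketch (this guard is an assumption on my part, not a countermeasure described by the researchers), a pipeline with access to high-rate raw samples could flag recordings whose energy is concentrated in the ultrasonic band:

```python
import numpy as np

def looks_ultrasonic(samples: np.ndarray, fs: int, threshold: float = 0.5) -> bool:
    """Flag a recording as suspicious if more than `threshold` of its
    spectral energy lies above 20 kHz (the human hearing limit).
    Assumes `fs` is high enough to actually capture ultrasound."""
    spectrum = np.abs(np.fft.rfft(samples)) ** 2      # power spectrum
    freqs = np.fft.rfftfreq(len(samples), 1 / fs)     # bin frequencies in Hz
    total = spectrum.sum()
    if total == 0:
        return False                                   # silent buffer, nothing to flag
    return spectrum[freqs > 20_000].sum() / total > threshold
```

In practice, most consumer audio stacks downsample well below ultrasonic rates before software ever sees the signal, so a check like this only applies where raw high-rate capture is available; that limitation is part of why the attack is hard to defend against in software alone.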