Researchers at Pennsylvania State University have demonstrated a method for detecting vibrations in a mobile phone's earpiece and deciphering what the person on the other end of the call was saying with up to 83 per cent accuracy.
They uncovered this significant security risk using an off-the-shelf automotive radar sensor and a novel processing approach.
"As technology becomes more reliable and robust over time," said Suryoday Basak, a doctoral candidate at Penn State, "adversaries are more likely to misuse such sensing technologies."
"Our demonstration of this type of exploitation adds to the body of scientific literature that essentially says, 'Hey! Audio can be intercepted using automotive radars. We need to take action on this," Basak said.
The radar operates in the millimetre-wave (mmWave) spectrum, specifically in the bands 60 to 64 gigahertz and 77 to 81 gigahertz, prompting the researchers to coin the term "mmSpy."
This is a subset of the radio spectrum used by 5G, the fifth-generation standard for global communication systems.
In the mmSpy demonstration, presented at the 2022 IEEE Symposium on Security and Privacy (SP), the researchers simulated people speaking through a smartphone's earpiece.
The phone's earpiece vibrates as a result of the speech, and the vibration spreads throughout the phone.
"We use the radar to detect this vibration and reconstruct what the person on the other end of the line said," Basak explained.
The researchers, including Penn State assistant professor Mahanth Gowda, noted that their method works even when the audio is completely inaudible to humans and to nearby microphones.
"This isn't the first time similar vulnerabilities or attack modalities have been discovered," Basak said, adding that this particular aspect, detecting and reconstructing speech from the other end of a smartphone line, had not yet been explored.
The radar sensor data is first pre-processed with MATLAB and Python modules that remove hardware-related noise and artefacts from the data.
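The article does not describe the specific filters, but a plausible clean-up step is to remove static offset and band-limit the recovered vibration signal to the speech band; the cutoffs and filter order below are illustrative assumptions, not the team's settings.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def clean_vibration(signal: np.ndarray, fs: float) -> np.ndarray:
    """Remove DC drift and out-of-band noise from a recovered vibration
    signal. Cutoff frequencies are illustrative, not from the paper."""
    signal = signal - np.mean(signal)                    # drop static offset
    # 4th-order Butterworth band-pass over a rough speech band;
    # requires fs > 8 kHz for the 4 kHz upper cutoff to be valid.
    b, a = butter(4, [80, 4000], btype="bandpass", fs=fs)
    return filtfilt(b, a, signal)                        # zero-phase filtering
```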
The data is then fed into machine learning modules trained to classify speech and reconstruct audio.
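The article does not name the models used, so the following is only a hypothetical sketch of the classification half of such a pipeline: log-spectrogram features summarising each cleaned vibration clip feed a generic scikit-learn classifier trained on labelled keywords.

```python
import numpy as np
from scipy.signal import spectrogram
from sklearn.ensemble import RandomForestClassifier

def features(vibration: np.ndarray, fs: float) -> np.ndarray:
    """Average log-spectrogram energy per frequency bin for one clip."""
    _, _, sxx = spectrogram(vibration, fs=fs, nperseg=256)
    return np.log1p(sxx).mean(axis=1)

def train_keyword_classifier(clips, labels, fs=10_000):
    """Fit a keyword classifier on pre-processed vibration clips.
    Model choice and parameters are placeholders, not the paper's."""
    X = np.stack([features(clip, fs) for clip in clips])
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    return clf.fit(X, labels)

# Usage (hypothetical data): model = train_keyword_classifier(clips, labels);
# model.predict([features(new_clip, 10_000)]) then yields a keyword label.
```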
The reconstructed speech is 83 per cent accurate when the radar picks up vibrations from a foot away. The researchers report that accuracy drops to 43 per cent when the radar is six feet from the phone.
Basak explained that once the speech has been reconstructed, the researchers can filter, enhance, or classify keywords. The team is refining its approach, both to better understand how to protect against this security flaw and to explore how the technique could be used for good.
According to Basak, the technology the team developed can also be used to detect vibrations in smart home systems, industrial machinery, and building monitoring systems.
According to the researchers, similar home maintenance or health monitoring systems could benefit from such sensitive tracking.
"Imagine a radar that could track a user and alert authorities if some health parameter changes in a dangerous way," Basak explained.
According to Basak, using radars in smart homes and industrial settings could allow faults and concerns to be detected sooner and addressed with properly targeted actions.
(With inputs from PTI)