34 research outputs found

    Phantom of the ADAS: Phantom Attacks on Driver-Assistance Systems

    The absence of deployed vehicular communication systems, which prevents the advanced driver-assistance systems (ADASs) and autopilots of semi/fully autonomous cars from validating their virtual perception of the physical environment surrounding the car with a third party, has been exploited in various attacks suggested by researchers. Since the application of these attacks comes with a cost (exposure of the attacker’s identity), the delicate exposure vs. application balance has held, and attacks of this kind have not yet been encountered in the wild. In this paper, we investigate a new perceptual challenge that causes the ADASs and autopilots of semi/fully autonomous cars to consider depthless objects (phantoms) as real. We show how attackers can exploit this perceptual challenge to apply phantom attacks and change the abovementioned balance, without the need to physically approach the attack scene, by projecting a phantom via a drone equipped with a portable projector or by presenting a phantom on a hacked digital billboard that faces the Internet and is located near roads. We show that the car industry has not considered this type of attack by demonstrating the attack on today’s most advanced ADAS and autopilot technologies: the Mobileye 630 PRO and the Tesla Model X, HW 2.5. Our experiments show that when presented with various phantoms, a car’s ADAS or autopilot considers the phantoms real objects, causing these systems to trigger the brakes, steer into the lane of oncoming traffic, and issue notifications about fake road signs. In order to mitigate this attack, we present a model that analyzes a detected object’s context, surface, and reflected light and is capable of detecting phantoms with 0.99 AUC. Finally, we explain why the deployment of vehicular communication systems might reduce attackers’ opportunities to apply phantom attacks but won’t eliminate them.
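
    The countermeasure described above combines several weak cues (context, surface, reflected light) into one decision. The sketch below illustrates that combination step only, with hypothetical per-aspect scorers, weights, and threshold; it is not the paper's trained detection model.

```python
# Minimal sketch of combining per-aspect "realness" scores into a phantom verdict.
# The scores, weights, and threshold are hypothetical placeholders, not the paper's model.
import numpy as np

def combined_score(context, surface, light, weights=(0.4, 0.3, 0.3)):
    """Each input is a score in [0, 1]; higher means 'looks like a real object'."""
    return float(np.dot(weights, [context, surface, light]))

def is_phantom(context, surface, light, threshold=0.5):
    """Flag the detection as a phantom when the combined evidence is weak."""
    return combined_score(context, surface, light) < threshold

if __name__ == "__main__":
    # A projected stop sign may look crisp (high surface score) yet appear in an
    # implausible location and reflect light unnaturally.
    print(is_phantom(context=0.1, surface=0.8, light=0.2))  # True
```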

    (Ab)using Images and Sounds for Indirect Instruction Injection in Multi-Modal LLMs

    We demonstrate how images and sounds can be used for indirect prompt and instruction injection in multi-modal LLMs. An attacker generates an adversarial perturbation corresponding to the prompt and blends it into an image or audio recording. When the user asks the (unmodified, benign) model about the perturbed image or audio, the perturbation steers the model to output the attacker-chosen text and/or to make the subsequent dialog follow the attacker's instructions. We illustrate this attack with several proof-of-concept examples targeting LLaVa and PandaGPT.
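
    The attack hinges on optimizing a bounded image perturbation so that the model's generation loss for the attacker-chosen text becomes small. A minimal PGD-style sketch of that loop follows; `target_text_loss` is a hypothetical callable standing in for the victim model's loss, not an interface of LLaVa or PandaGPT.

```python
# PGD-style sketch: blend an instruction-carrying perturbation into an image.
# target_text_loss(img) is assumed to return the model's loss for generating
# the attacker-chosen text given img (hypothetical helper, not a real API).
import torch

def craft(image, target_text_loss, steps=500, eps=8 / 255, alpha=1 / 255):
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        loss = target_text_loss(image + delta)  # low loss -> model emits the target text
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()                # descend on the loss
            delta.clamp_(-eps, eps)                           # keep the change visually small
            delta.copy_((image + delta).clamp(0, 1) - image)  # keep pixels in [0, 1]
        delta.grad.zero_()
    return (image + delta).detach()
```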

    Botnet IND: About Botnets of Botless IoT Devices

    Recent studies and incidents have shed light on the threat posed by botnets consisting of a large set of relatively weak IoT devices that host an army of bots. However, little is known about the threat posed by a small set of devices that are not infected with malware and do not host bots. In this paper, we present Botnet-IND (indirect), a new type of distributed attack which is launched by a botnet consisting of botless IoT devices. In order to demonstrate the feasibility of Botnet-IND on commercial, off-the-shelf IoT devices, we present Piping Botnet, an implementation of Botnet-IND on smart irrigation systems, a relatively new type of IoT device used by both the private and public sectors to save water; such systems will likely replace all traditional irrigation systems in the next few years. We perform a security analysis of three of the five best-selling commercial smart irrigation systems (GreenIQ, BlueSpray, and RainMachine). Our experiments demonstrate how attackers can trick such irrigation systems (Wi-Fi and cellular) without the need to compromise them with malware or bots. We show that in contrast to traditional botnets, which require a large set of infected IoT devices to cause great harm, Piping Botnet can pose a severe threat to urban water services using a relatively small set of smart irrigation systems. We found that only 1,300 systems were required to drain a floodwater reservoir when they are maliciously programmed.
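
    The scale of such a claim can be sanity-checked with simple arithmetic: combined flow rate times duration versus reservoir volume. The sketch below uses made-up numbers purely for illustration; the paper's 1,300-system figure rests on its own measurements, which are not reproduced here.

```python
# Back-of-envelope check: how many simultaneously opened irrigation systems are
# needed to drain a reservoir of a given volume within a time window.
# All numeric values in the example are hypothetical, not taken from the paper.
import math

def systems_needed(reservoir_m3, window_hours, flow_m3_per_hour_per_system):
    return math.ceil(reservoir_m3 / (window_hours * flow_m3_per_hour_per_system))

if __name__ == "__main__":
    # e.g., a 10,000 m^3 floodwater reservoir, a 6-hour overnight window,
    # and ~1.3 m^3/h drawn per hijacked system (illustrative figures only).
    print(systems_needed(10_000, 6, 1.3))
```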

    The Age of Testifying Wearable Devices: The Case of Intoxication Detection

    Seven years ago, a famous case in which data from a Fitbit tracker was used in the courtroom in a personal injury case heralded a new age: the age of testifying wearable devices. Prior to that, data from wearable devices had been used in various areas, including medicine, advertising, and scientific research, but the use of such data in the Fitbit case attracted the interest of a new sector: the legal sector. Since then, lawyers, investigators, detectives, and police officers have used data from pacemakers and smartwatches in order to prove/disprove allegations regarding wearable device owners in several well-known cases (sexual assault, arson, personal injury, etc.). In this paper, we discuss testifying wearable devices. We explain the advantages of wearable devices over traditional IoT devices in the legal setting, the parties involved in cases in which a wearable device was used to testify against/for the device owner, and the information flow. We then focus on an interesting area of research: intoxication detection. We explain the motivation to detect whether a subject was intoxicated and describe the primary scientific gap in this area. In order to overcome this gap, we suggest a new method for detecting whether a subject was intoxicated based on free gait data obtained from a wearable device. We evaluate the performance of the proposed method in a user study involving 30 subjects and show that motion sensor data obtained from a smartphone and fitness tracker from eight seconds of free gait can indicate whether a subject is/was intoxicated (obtaining an AUC of 0.97) and thus be used as testimony. Finally, we analyze the current state and the near future of the age of testifying wearable devices and explain why we believe that (1) we are still at the beginning of this age despite the fact that seven years have passed since the original court case, and (2) the number of cases in which wearable device data is used to testify for/against the device owner is expected to increase significantly in the next few years.
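
    The method reduces to a standard sensing pipeline: segment the free-gait motion signal into eight-second windows, extract features, and classify. The sketch below shows that pipeline shape with generic statistical features and an off-the-shelf classifier; the paper's actual features and model are not reproduced.

```python
# Generic gait-window classification sketch (features and classifier are
# illustrative stand-ins, not the paper's method).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

def windows(acc, fs=50, seconds=8):
    """Slice an (n_samples, 3) accelerometer stream into fixed 8-second windows."""
    size = fs * seconds
    n = len(acc) // size
    return acc[: n * size].reshape(n, size, 3)

def features(win):
    """Simple statistics over the acceleration magnitude of one window."""
    mag = np.linalg.norm(win, axis=1)
    return [mag.mean(), mag.std(), mag.max() - mag.min(), np.abs(np.diff(mag)).mean()]

def fit_and_score(sober, intoxicated, fs=50):
    ws, wi = windows(sober, fs), windows(intoxicated, fs)
    X = np.array([features(w) for w in np.concatenate([ws, wi])])
    y = np.array([0] * len(ws) + [1] * len(wi))
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    return roc_auc_score(y, clf.predict_proba(X)[:, 1])  # use a held-out split in practice
```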

    Seeds Don't Lie: An Adaptive Watermarking Framework for Computer Vision Models

    In recent years, various watermarking methods have been suggested to detect computer vision models obtained illegitimately from their owners; however, they fail to demonstrate satisfactory robustness against model extraction attacks. In this paper, we present an adaptive framework to watermark a protected model, leveraging the unique behavior present in the model due to a unique random seed initialized during the model training. This watermark is used to detect extracted models, which have the same unique behavior, indicating unauthorized usage of the protected model's intellectual property (IP). First, we show how an initial seed for random number generation as part of model training produces distinct characteristics in the model's decision boundaries, which are inherited by extracted models and present in their decision boundaries but are not present in non-extracted models trained on the same dataset with a different seed. Based on our findings, we suggest the Robust Adaptive Watermarking (RAW) Framework, which utilizes the unique behavior present in the protected and extracted models to generate a watermark key-set and verification model. We show that the framework is robust to (1) unseen model extraction attacks and (2) extracted models which undergo a blurring method (e.g., weight pruning). We evaluate the framework's robustness against a naive attacker (unaware that the model is watermarked) and an informed attacker (who employs blurring strategies to remove watermarked behavior from an extracted model), and achieve outstanding (i.e., >0.9) AUC values. Finally, we show that the framework is robust to model extraction attacks that use a different structure and/or architecture than the protected model.
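
    The core observation, that a training seed leaves a fingerprint in the decision boundary which survives extraction, can be illustrated on a toy task: train two models that differ only in their seed, collect the inputs on which they disagree, and check that a surrogate distilled from the protected model agrees with it on those inputs. The sketch below is only that toy illustration, not the RAW framework.

```python
# Toy illustration of seed-dependent decision boundaries (not the RAW framework).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

protected = MLPClassifier(hidden_layer_sizes=(64,), random_state=1, max_iter=500).fit(X, y)
independent = MLPClassifier(hidden_layer_sizes=(64,), random_state=2, max_iter=500).fit(X, y)

# Candidate key-set: inputs that the two seeds classify differently.
keys = X[protected.predict(X) != independent.predict(X)]

# A surrogate "extracted" from the protected model (trained on its labels)
# should mostly agree with it on the key-set, unlike independently trained models.
extracted = MLPClassifier(hidden_layer_sizes=(64,), random_state=3, max_iter=500).fit(X, protected.predict(X))
if len(keys):
    agreement = (extracted.predict(keys) == protected.predict(keys)).mean()
    print(f"{len(keys)} keys, extracted-model agreement: {agreement:.2f}")
```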

    The Little Seal Bug: Optical Sound Recovery from Lightweight Reflective Objects

    In this paper, we introduce the little seal bug attack, an optical side-channel attack which exploits lightweight reflective objects (e.g., an iced coffee can, a smartphone stand, a souvenir) as optical implants for the purpose of recovering the content of a conversation. We show how fluctuations in the air pressure on the surface of a shiny object can be exploited by eavesdroppers to recover speech passively and externally, using equipment not likely to be associated with spying. These air pressure fluctuations, which occur in response to sound, cause the shiny object to vibrate, modulating the light it reflects; as a result, seemingly innocuous objects like an empty beverage can, desk ornament, or smartphone stand, which are often placed on desks, can provide the infrastructure required for eavesdroppers to recover the content of a victim’s conversation, held while the victim is sitting at his/her desk. First, we conduct a series of experiments aimed at learning the characteristics of optical measurements obtained from shiny objects that reflect light, using a photodiode to analyze the movement of a shiny weight in response to sound. Based on our findings, we propose an optical acoustical transformation (OAT) to recover speech from the optical measurements obtained from light reflected from shiny objects. Finally, we compare the performance of the little seal bug attack to related methods presented in other studies. We show that eavesdroppers located 35 meters away from a victim can use the little seal bug attack to recover speech at the sound level of a virtual meeting with fair intelligibility.
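
    At its core, the recovery step turns a photodiode's voltage trace into an audio waveform: remove the DC level, keep the speech band, and normalize. The sketch below shows only that generic filtering step; the paper's optical acoustical transformation (OAT) involves additional processing that is not reproduced here.

```python
# Generic optical-to-audio filtering sketch (not the paper's OAT).
import numpy as np
from scipy.signal import butter, sosfiltfilt

def optical_to_audio(samples, fs, low_hz=100.0, high_hz=4000.0):
    """samples: 1-D photodiode measurements captured at fs Hz."""
    sos = butter(4, [low_hz, high_hz], btype="bandpass", fs=fs, output="sos")
    speech_band = sosfiltfilt(sos, samples - np.mean(samples))
    return speech_band / (np.max(np.abs(speech_band)) + 1e-12)  # normalize to [-1, 1]
```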

    Lamphone: Real-Time Passive Sound Recovery from Light Bulb Vibrations

    Recent studies have suggested various side-channel attacks for eavesdropping on sound by analyzing the side effects of sound waves on nearby objects (e.g., a bag of chips, a window) and devices (e.g., motion sensors). These methods pose a great threat to privacy; however, they are limited in one of the following ways: they (1) cannot be applied in real time (e.g., Visual Microphone), (2) are not external, requiring the attacker to compromise a device with malware (e.g., Gyrophone), or (3) are not passive, requiring the attacker to direct a laser beam at an object (e.g., laser microphone). In this paper, we introduce Lamphone, a novel side-channel attack for eavesdropping on sound; this attack is performed by using a remote electro-optical sensor to analyze a hanging light bulb’s frequency response to sound. We show how fluctuations in the air pressure on the surface of the hanging bulb (in response to sound), which cause the bulb to vibrate very slightly (a millidegree vibration), can be exploited by eavesdroppers to recover speech and singing, passively, externally, and in real time. We analyze a hanging bulb’s response to sound via an electro-optical sensor and learn how to isolate the audio signal from the optical signal. Based on our analysis, we develop an algorithm to recover sound from the optical measurements obtained from the vibrations of a light bulb and captured by the electro-optical sensor. We evaluate Lamphone’s performance in a realistic setup and show that Lamphone can be used by eavesdroppers to recover human speech (which can be accurately identified by the Google Cloud Speech API) and singing (which can be accurately identified by Shazam and SoundHound) from a bridge located 25 meters away from the target room containing the hanging light bulb.
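
    One simple way to picture the signal-isolation step is high-pass filtering followed by spectral subtraction of a recording taken while the room is silent. The sketch below shows that generic approach; it stands in for, and is not, the recovery algorithm developed in the paper.

```python
# Generic denoising sketch for electro-optical measurements (not Lamphone's algorithm).
import numpy as np
from scipy.signal import butter, sosfiltfilt, stft, istft

def recover_audio(optical, silence, fs, nperseg=1024):
    """optical: measurements during speech; silence: measurements of the idle bulb."""
    sos = butter(4, 80.0, btype="highpass", fs=fs, output="sos")  # drop DC and slow drift
    speech = sosfiltfilt(sos, optical - np.mean(optical))
    noise = sosfiltfilt(sos, silence - np.mean(silence))
    _, _, Z = stft(speech, fs=fs, nperseg=nperseg)
    _, _, N = stft(noise, fs=fs, nperseg=nperseg)
    noise_floor = np.abs(N).mean(axis=1, keepdims=True)
    mag = np.maximum(np.abs(Z) - noise_floor, 0.0)                # spectral subtraction
    _, audio = istft(mag * np.exp(1j * np.angle(Z)), fs=fs, nperseg=nperseg)
    return audio / (np.max(np.abs(audio)) + 1e-12)
```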