6 research outputs found

    Botnet-IND: About Botnets of Botless IoT Devices

    Recent studies and incidents have shed light on the threat posed by botnets consisting of a large set of relatively weak IoT devices that host an army of bots. However, little is known about the threat posed by a small set of devices that are not infected with malware and do not host bots. In this paper, we present Botnet-IND (indirect), a new type of distributed attack launched by a botnet consisting of botless IoT devices. In order to demonstrate the feasibility of Botnet-IND on commercial, off-the-shelf IoT devices, we present Piping Botnet, an implementation of Botnet-IND on smart irrigation systems, a relatively new type of IoT device used by both the private and public sectors to save water; such systems are expected to replace most traditional irrigation systems in the coming years. We perform a security analysis of three of the five best-selling commercial smart irrigation systems (GreenIQ, BlueSpray, and RainMachine). Our experiments demonstrate how attackers can trick such irrigation systems (Wi-Fi and cellular) without the need to compromise them with malware or bots. We show that, in contrast to traditional botnets, which require a large set of infected IoT devices to cause great harm, Piping Botnet can pose a severe threat to urban water services using a relatively small set of smart irrigation systems. We found that only 1,300 systems were required to drain a floodwater reservoir when they are maliciously programmed to irrigate simultaneously.
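    The 1,300-system figure above can be sanity checked with simple arithmetic. The sketch below estimates how many simultaneously triggered irrigation systems would be needed to drain a reservoir of a given volume; the reservoir volume, per-system flow rate, and watering window are hypothetical placeholder values chosen only to illustrate the calculation, not figures taken from the paper.

```python
# Back-of-the-envelope estimate of the Piping Botnet scale claim.
# All constants are hypothetical assumptions, not values from the paper.

RESERVOIR_VOLUME_M3 = 50_000        # assumed floodwater reservoir volume (m^3)
FLOW_RATE_M3_PER_HOUR = 1.0         # assumed draw of one smart irrigation system (m^3/h)
WATERING_WINDOW_HOURS = 36          # assumed period the attacker keeps the valves open

volume_per_system = FLOW_RATE_M3_PER_HOUR * WATERING_WINDOW_HOURS
systems_needed = RESERVOIR_VOLUME_M3 / volume_per_system

print(f"Water drawn per system over the window: {volume_per_system:.0f} m^3")
print(f"Systems needed to drain the reservoir:  ~{systems_needed:.0f}")  # ~1,389
```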

    Phantom of the ADAS: Phantom Attacks on Driver-Assistance Systems

    The absence of deployed vehicular communication systems, which prevents the advanced driver-assistance systems (ADASs) and autopilots of semi/fully autonomous cars from validating their virtual perception of the surrounding physical environment with a third party, has been exploited in various attacks suggested by researchers. Since applying these attacks comes with a cost (exposure of the attacker’s identity), the delicate exposure vs. application balance has held, and attacks of this kind have not yet been encountered in the wild. In this paper, we investigate a new perceptual challenge that causes the ADASs and autopilots of semi/fully autonomous cars to consider depthless objects (phantoms) as real. We show how attackers can exploit this perceptual challenge to apply phantom attacks and change the abovementioned balance, without the need to physically approach the attack scene, by projecting a phantom via a drone equipped with a portable projector or by presenting a phantom on a hacked Internet-facing digital billboard located near a road. We show that the car industry has not considered this type of attack by demonstrating it on today’s most advanced ADAS and autopilot technologies: the Mobileye 630 PRO and the Tesla Model X (HW 2.5). Our experiments show that when presented with various phantoms, a car’s ADAS or autopilot considers the phantoms to be real objects, causing these systems to trigger the brakes, steer into the lane of oncoming traffic, and issue notifications about fake road signs. In order to mitigate this attack, we present a model that analyzes a detected object’s context, surface, and reflected light and is capable of detecting phantoms with 0.99 AUC. Finally, we explain why the deployment of vehicular communication systems might reduce attackers’ opportunities to apply phantom attacks but will not eliminate them.
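    As a rough illustration of how per-aspect analysis can be fused into a phantom/real decision, the sketch below combines three hypothetical classifier scores (context, surface, and reflected light) with a weighted vote; the weights, threshold, and example scores are illustrative assumptions, and this is not the detection model evaluated in the paper.

```python
# Minimal sketch of fusing per-aspect scores (context, surface, reflected light)
# into a phantom/real decision. Weights and threshold are hypothetical; this is
# not the paper's detection model.
import numpy as np

def is_real_object(context_score: float, surface_score: float, light_score: float,
                   weights=(0.4, 0.3, 0.3), threshold: float = 0.5) -> bool:
    """Each score in [0, 1] rates how 'real' the detected object looks from that
    aspect alone (output of a hypothetical per-aspect classifier). Returns True
    if the object is judged real, False if it is flagged as a phantom."""
    combined = float(np.dot(np.array([context_score, surface_score, light_score]),
                            np.array(weights)))
    return combined >= threshold

# A projected stop sign may look plausible in context but has a flat surface and
# inconsistent reflected light, so it is flagged as a phantom.
print(is_real_object(0.8, 0.2, 0.3))  # False -> phantom
```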

    The Adversarial Implications of Variable-Time Inference

    Machine learning (ML) models are known to be vulnerable to a number of attacks that target the integrity of their predictions or the privacy of their training data. To carry out these attacks, a black-box adversary must typically possess the ability to query the model and observe its outputs (e.g., labels). In this work, we demonstrate, for the first time, the ability to enhance such decision-based attacks. To accomplish this, we present an approach that exploits a novel side channel in which the adversary simply measures the execution time of the algorithm used to post-process the predictions of the ML model under attack. The leakage of inference-state elements into algorithmic timing side channels has never been studied before, and we found that it can contain rich information that facilitates timing attacks that significantly outperform attacks based solely on label outputs. In a case study, we investigate leakage from the non-maximum suppression (NMS) algorithm, which plays a crucial role in the operation of object detectors. In our examination of the timing side-channel vulnerabilities associated with this algorithm, we identified the potential to enhance decision-based attacks. We demonstrate attacks against the YOLOv3 detector, leveraging the timing leakage to evade object detection using adversarial examples and to perform dataset inference. Our experiments show that our adversarial examples exhibit superior perturbation quality compared to those of a decision-based attack. In addition, we present a new threat model in which dataset inference is performed based solely on timing leakage. To address the timing leakage vulnerability inherent in the NMS algorithm, we explore the potential and limitations of implementing constant-time inference passes as a mitigation strategy.
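    To make the leakage concrete, the sketch below shows why the post-processing stage is not constant time: the running time of greedy NMS depends on how many candidate boxes survive filtering, so an adversary who can only time the pipeline still learns something about the detector’s internal state. The NMS implementation and timing harness are a minimal illustration written for this listing, not the code used in the paper.

```python
# Minimal sketch: NMS execution time grows with the number of candidate boxes,
# which is the inference-state information that leaks through timing.
import time
import numpy as np

def nms(boxes: np.ndarray, scores: np.ndarray, iou_thr: float = 0.5) -> list:
    """Greedy non-maximum suppression over (x1, y1, x2, y2) boxes; returns kept indices."""
    order = scores.argsort()[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter)
        order = order[1:][iou <= iou_thr]
    return keep

def mean_nms_time(n_boxes: int, trials: int = 50) -> float:
    """Average wall-clock time of one NMS call over random candidate boxes."""
    rng = np.random.default_rng(0)
    xy = rng.uniform(0, 600, (n_boxes, 2))
    wh = rng.uniform(10, 80, (n_boxes, 2))
    boxes = np.hstack([xy, xy + wh])
    scores = rng.uniform(0.25, 1.0, n_boxes)
    start = time.perf_counter()
    for _ in range(trials):
        nms(boxes, scores)
    return (time.perf_counter() - start) / trials

# More surviving candidates -> measurably more NMS time: the timing side channel.
for n in (10, 100, 1000):
    print(f"{n:5d} candidate boxes -> {mean_nms_time(n) * 1e6:8.1f} us per call")
```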

    Optical Cryptanalysis: Recovering Cryptographic Keys from Power LED Light Fluctuations

    Although power LEDs have been integrated into various devices that perform cryptographic operations for decades, the cryptanalytic risk they pose has not yet been investigated. In this paper, we present optical cryptanalysis, a new form of cryptanalytic side-channel attack in which secret keys are extracted by using a photodiode to measure the light emitted by a device’s power LED and analyzing subtle fluctuations in the light intensity during cryptographic operations. We analyze the optical leakage of the power LEDs of various consumer devices and the factors that affect the optical SNR. We then demonstrate end-to-end optical cryptanalytic attacks against a range of consumer devices (smartphone, smartcard, and Raspberry Pi, along with their USB peripherals) and recover secret keys (RSA, ECDSA, SIKE) from prior and recent versions of popular cryptographic libraries (GnuPG, Libgcrypt, PQCrypto-SIDH) from a maximum distance of 25 meters.
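    The sketch below illustrates the signal-processing half of such an attack on synthetic data: given a light-intensity trace of a power LED sampled with a photodiode, it segments the intervals where the LED dims (higher CPU power draw) and maps their durations to key bits, as a naive, non-constant-time square-and-multiply loop would leak them. The sampling rate, threshold, and trace are hypothetical, and this is not the paper’s measurement pipeline.

```python
# Toy optical side-channel decoder over a synthetic photodiode trace.
# All parameters are hypothetical assumptions; not the paper's pipeline.
import numpy as np

SAMPLE_RATE_HZ = 1_000_000  # assumed photodiode/ADC sampling rate

def segment_dips(trace: np.ndarray, threshold: float):
    """Return (start, end) sample indices of regions where intensity < threshold."""
    active = np.concatenate(([False], trace < threshold, [False]))
    edges = np.flatnonzero(np.diff(active.astype(int)))
    return list(zip(edges[0::2], edges[1::2]))

def dips_to_bits(segments, split_us: float):
    """Toy decoder: in a naive square-and-multiply loop, a long dip (square + multiply)
    indicates a 1 bit and a short dip (square only) indicates a 0 bit."""
    split = split_us * 1e-6 * SAMPLE_RATE_HZ
    return [1 if (end - start) > split else 0 for start, end in segments]

# Synthetic trace: baseline brightness ~1.0 with four dips of two lengths encoding 1011.
rng = np.random.default_rng(1)
trace = np.ones(6000) + rng.normal(0, 0.01, 6000)
for start, length in [(500, 800), (1800, 400), (2800, 800), (4200, 800)]:
    trace[start:start + length] -= 0.1

print(dips_to_bits(segment_dips(trace, threshold=0.95), split_us=600))  # -> [1, 0, 1, 1]
```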

    Video-Based Cryptanalysis: Extracting Cryptographic Keys from Video Footage of a Device’s Power LED

    In this paper, we present video-based cryptanalysis, a new method used to recover secret keys from a device by analyzing video footage of the device’s power LED. We show that cryptographic computations performed by the CPU change the device’s power consumption, which in turn affects the brightness of the device’s power LED. Based on this observation, we show how attackers can exploit commercial video cameras (e.g., an iPhone 13’s camera or an Internet-connected security camera) to recover secret keys from devices. This is done by obtaining video footage of a device’s power LED (in which the frame is filled with the power LED) and exploiting the video camera’s rolling shutter to increase the sampling rate by three orders of magnitude, from the frame rate (60 measurements per second) to the rolling shutter speed (60K measurements per second in the iPhone 13 Pro Max). The frames of the video footage of the device’s power LED are analyzed in the RGB space, and the associated RGB values are used to recover the secret key by inferring the device’s power consumption from the RGB values. We demonstrate the application of video-based cryptanalysis by performing two side-channel cryptanalytic timing attacks, recovering: (1) a 256-bit ECDSA key from a smart card by analyzing video footage of the power LED of a smart card reader via a hijacked Internet-connected security camera located 16 meters away from the smart card reader, and (2) a 378-bit SIKE key from a Samsung Galaxy S8 by analyzing video footage of the power LED of Logitech Z120 USB speakers that were connected to the same USB hub (which was also used to charge the Galaxy S8) via an iPhone 13 Pro Max. Finally, we discuss countermeasures, limitations, and the future of video-based cryptanalysis in light of the expected improvements in video cameras’ specifications.
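    The rolling-shutter upsampling described above can be sketched in a few lines: because each row of a CMOS sensor is exposed at a slightly different instant, averaging the RGB values per row (rather than per frame) turns a 60 FPS clip whose frame is filled by the LED into a brightness trace with roughly frame rate × row count samples per second. The file name below is hypothetical, and the snippet illustrates the idea rather than the tooling used in the paper.

```python
# Minimal sketch of rolling-shutter upsampling: one brightness sample per frame row.
# Hypothetical input file; not the paper's tooling.
import cv2
import numpy as np

def led_brightness_trace(video_path: str) -> np.ndarray:
    """Return the per-row mean brightness of every frame, in capture (exposure) order."""
    cap = cv2.VideoCapture(video_path)
    rows = []
    while True:
        ok, frame = cap.read()          # frame: H x W x 3 (BGR); the LED fills the frame
        if not ok:
            break
        # Averaging over width and colour channels yields one value per row,
        # i.e., one sample per rolling-shutter exposure slot.
        rows.append(frame.astype(np.float32).mean(axis=(1, 2)))
    cap.release()
    return np.concatenate(rows) if rows else np.empty(0)

# A 60 FPS clip with 1080 rows per frame yields ~64,800 samples per second,
# about three orders of magnitude above the frame rate, as described above.
trace = led_brightness_trace("power_led_clip.mp4")   # hypothetical file name
print(trace.size, "samples")
```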