PerfWeb: How to Violate Web Privacy with Hardware Performance Events
The browser history reveals highly sensitive information about users, such as
financial status, health conditions, or political views. Private browsing modes
and anonymity networks are consequently important tools to preserve the privacy
not only of regular users but in particular of whistleblowers and dissidents.
Yet, in this work we show how a malicious application can infer opened websites
from Google Chrome in Incognito mode and from Tor Browser by exploiting
hardware performance events (HPEs). In particular, we analyze the browsers'
microarchitectural footprint with the help of advanced Machine Learning
techniques: k-Nearest Neighbors, Decision Trees, Support Vector Machines,
and in contrast to previous literature also Convolutional Neural Networks. We
profile 40 different websites, 30 of the top Alexa sites and 10 whistleblowing
portals, on two machines featuring an Intel and an ARM processor. By monitoring
retired instructions, cache accesses, and bus cycles for at most 5 seconds, we
manage to classify the selected websites with a success rate of up to 86.3%.
The results show that hardware performance events can clearly undermine the
privacy of web users. We therefore propose mitigation strategies that impede
our attacks and still allow legitimate use of HPEs.
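The classification pipeline the abstract describes can be illustrated with a small, self-contained sketch: synthetic feature vectors stand in for real HPE traces (binned counts of retired instructions, cache accesses, and bus cycles over the 5-second window), and a hand-rolled k-Nearest Neighbors classifier labels test visits. All sizes, noise levels, and the number of sites below are illustrative assumptions, not the paper's measured setup.

```python
import math
import random

random.seed(0)

# Synthetic stand-ins for HPE traces: each "website" has a characteristic
# mean feature vector, and each visit adds measurement noise. These numbers
# are assumptions for illustration only.
N_SITES, PER_SITE, DIM, NOISE = 4, 30, 20, 0.3
means = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(N_SITES)]

def visit(site):
    """One noisy HPE feature vector for a visit to `site`."""
    return [m + random.gauss(0, NOISE) for m in means[site]]

train = [(visit(s), s) for s in range(N_SITES) for _ in range(PER_SITE)]
test_set = [(visit(s), s) for s in range(N_SITES) for _ in range(10)]

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_predict(x, k=3):
    """Plain k-Nearest Neighbors: majority label among the k closest traces."""
    labels = [lab for _, lab in sorted(train, key=lambda t: dist(x, t[0]))[:k]]
    return max(set(labels), key=labels.count)

accuracy = sum(knn_predict(x) == s for x, s in test_set) / len(test_set)
print(f"website classification accuracy: {accuracy:.2f}")
```

The same skeleton applies to real traces: replace `visit` with actual counter readings and swap the classifier for the SVMs or CNNs the paper evaluates.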
NoisFre: Noise-Tolerant Memory Fingerprints from Commodity Devices for Security Functions
Building hardware security primitives with on-device memory fingerprints is a
compelling proposition given the ubiquity of memory in electronic devices,
especially for low-end Internet of Things devices for which cryptographic
modules are often unavailable. However, the use of fingerprints in security
functions is challenged by the small, but unpredictable variations in
fingerprint reproductions from the same device due to measurement noise. Our
study formulates a novel and pragmatic approach to achieve highly reliable
fingerprints from device memories. We investigate the transformation of raw
fingerprints into a noise-tolerant space where the generation of fingerprints
is intrinsically highly reliable. We derive formal performance bounds to
help practitioners easily adopt our methods in applications.
Subsequently, we demonstrate the expressive power of our formalization by using
it to investigate the practicability of extracting noise-tolerant fingerprints
from commodity devices. Together with extensive simulations, we have employed
119 chips from five different manufacturers for extensive experimental
validations. Our results, including an end-to-end implementation demonstration
with a low-cost wearable Bluetooth inertial sensor capable of on-demand and
runtime key generation, show that key generators with failure rates less than
can be efficiently obtained from a single noise-tolerant fingerprint snapshot, which supports ease of enrollment.
Comment: Accepted to IEEE Transactions on Dependable and Secure Computing. Yansong Gao and Yang Su contributed equally to the study and are co-first authors in alphabetical order.
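The paper derives fingerprints in a noise-tolerant space; as a generic illustration of why noise tolerance matters, the sketch below uses bitwise majority voting over repeated noisy reads, a standard and simpler technique than the paper's transformation. The fingerprint length, bit-error rate, and vote count are assumed values.

```python
import random

random.seed(1)

FP_LEN = 128            # fingerprint length in bits (assumed)
BIT_ERROR_RATE = 0.05   # per-bit flip probability per read (assumed)
VOTES = 9               # odd number of reads to majority-vote over

# The enrolled reference fingerprint, e.g. SRAM power-up values.
reference = [random.randint(0, 1) for _ in range(FP_LEN)]

def noisy_read(ref):
    """One raw fingerprint reproduction with independent bit flips."""
    return [b ^ (random.random() < BIT_ERROR_RATE) for b in ref]

def majority_fingerprint(ref, votes=VOTES):
    """Derive a more reliable fingerprint by bitwise majority voting."""
    reads = [noisy_read(ref) for _ in range(votes)]
    return [1 if 2 * sum(col) > votes else 0 for col in zip(*reads)]

single_errors = sum(a != b for a, b in zip(reference, noisy_read(reference)))
voted_errors = sum(a != b
                   for a, b in zip(reference, majority_fingerprint(reference)))
print(f"bit errors: single read={single_errors}, "
      f"majority of {VOTES} reads={voted_errors}")
```

Majority voting trades enrollment effort (multiple reads) for reliability; the paper's single-snapshot approach avoids exactly that trade-off, which is what makes its transformation attractive.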
Deep Intellectual Property: A Survey
With the widespread application in industrial manufacturing and commercial
services, well-trained deep neural networks (DNNs) are becoming increasingly
valuable and crucial assets due to the tremendous training cost and excellent
generalization performance. Thanks to the emerging "Machine Learning as a
Service" (MLaaS) paradigm, these trained models can be utilized by users
without much expert knowledge. However, this paradigm also exposes the
expensive models to various potential threats like model stealing and abuse. As
an urgent requirement to defend against these threats, Deep Intellectual
Property (DeepIP) protection, which safeguards private training data,
painstakingly tuned hyperparameters, and costly learned model weights, has
become a consensus of both industry and academia. To this end, numerous
approaches have been proposed
to achieve this goal in recent years, especially to prevent or discover model
stealing and unauthorized redistribution. Given this period of rapid evolution,
the goal of this paper is to provide a comprehensive survey of the recent
achievements in this field. More than 190 research contributions are included
in this survey, covering many aspects of Deep IP Protection:
challenges/threats, invasive solutions (watermarking), non-invasive solutions
(fingerprinting), evaluation metrics, and performance. We finish the survey by
identifying promising directions for future research.
Comment: 38 pages, 12 figures.
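Trigger-set watermarking, one of the invasive solutions the survey covers, can be sketched abstractly: the owner trains the model to emit chosen labels on secret trigger inputs, then claims ownership when a suspect model agrees on far more triggers than chance allows. In the toy below a dict stands in for the watermarked DNN's memorized behavior, and the trigger inputs, class count, and threshold are all assumptions for illustration.

```python
import random

random.seed(2)

N_CLASSES = 10
# Secret trigger inputs with owner-chosen labels; in practice these would be
# specially crafted samples and the "model" a trained DNN.
TRIGGERS = [(f"trigger_{i}", random.randrange(N_CLASSES)) for i in range(30)]
watermarked_model = dict(TRIGGERS)   # toy stand-in for the owner's model

def independent_model(x):
    """An unrelated model labels the triggers essentially at random."""
    return hash(x) % N_CLASSES

def trigger_match_rate(model_fn):
    return sum(model_fn(x) == y for x, y in TRIGGERS) / len(TRIGGERS)

THRESHOLD = 0.5   # well above the 1/N_CLASSES chance agreement rate

owner_rate = trigger_match_rate(watermarked_model.__getitem__)
other_rate = trigger_match_rate(independent_model)
claim_ownership = owner_rate >= THRESHOLD
print(f"owner={owner_rate:.2f}, independent={other_rate:.2f}, "
      f"ownership claimed={claim_ownership}")
```

Fingerprinting, the non-invasive counterpart, follows the same verification logic but derives the distinguishing queries from the model itself instead of embedding them during training.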
Defensive ML: Defending Architectural Side-channels with Adversarial Obfuscation
Side-channel attacks that use machine learning (ML) for signal analysis have
become prominent threats to computer security, as ML models easily find
patterns in signals. To address this problem, this paper explores using
Adversarial Machine Learning (AML) methods as a defense at the computer
architecture layer to obfuscate side channels. We call this approach Defensive
ML, and the generator that obfuscates the signals the defender. Defensive ML is
a
workflow to design, implement, train, and deploy defenders for different
environments. First, we design a defender architecture given the physical
characteristics and hardware constraints of the side-channel. Next, we use our
DefenderGAN structure to train the defender. Finally, we apply defensive ML to
thwart two side-channel attacks: one based on memory contention and the other
on application power. The former uses a hardware defender with ns-level
response time that attains a high level of security with half the performance
impact of a traditional scheme; the latter uses a software defender with
ms-level response time that provides better security than a traditional scheme
with only 70% of its power overhead.Comment: Preprint. Under revie