Undermining User Privacy on Mobile Devices Using AI
Over the past years, literature has shown that attacks exploiting the
microarchitecture of modern processors pose a serious threat to the privacy of
mobile phone users. This is because applications leave distinct footprints in
the processor, which can be used by malware to infer user activities. In this
work, we show that these inference attacks are considerably more practical when
combined with advanced AI techniques. In particular, we focus on profiling the
activity in the last-level cache (LLC) of ARM processors. We employ a simple
Prime+Probe based monitoring technique to obtain cache traces, which we
classify with Deep Learning methods including Convolutional Neural Networks. We
demonstrate our approach on an off-the-shelf Android phone by launching a
successful attack from an unprivileged, zero-permission App in well under a
minute. The App thereby detects running applications with an accuracy of 98%
and reveals opened websites and streaming videos by monitoring the LLC for at
most 6 seconds. This is possible because Deep Learning compensates for
measurement disturbances stemming from the inherently noisy LLC monitoring and
from unfavorable cache characteristics such as random line-replacement policies. In summary, our
results show that thanks to advanced AI techniques, inference attacks are
becoming alarmingly easy to implement and execute in practice. This once more
calls for countermeasures that confine microarchitectural leakage and protect
mobile phone applications, especially those valuing the privacy of their users.
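To make concrete why LLC traces are classifiable at all, the following sketch simulates Prime+Probe measurements and fingerprints two hypothetical apps with a nearest-centroid template. This is a deliberately simplified stand-in for the paper's CNN pipeline; the set count, hit/miss latencies, eviction profiles, and noise levels are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

N_SETS, N_SAMPLES = 64, 200  # hypothetical monitored LLC sets and probe rounds

def simulate_trace(app_profile, noise=5.0):
    """One Prime+Probe trace: per-set probe latencies over time.
    Sets touched by the victim app evict the prober's lines -> higher latency."""
    hit = rng.normal(100.0, noise, size=(N_SAMPLES, N_SETS))    # cache-hit timing
    miss = rng.normal(300.0, noise, size=(N_SAMPLES, N_SETS))   # cache-miss timing
    evicted = rng.random((N_SAMPLES, N_SETS)) < app_profile     # app's footprint
    return np.where(evicted, miss, hit)

# Two hypothetical apps with distinct per-set eviction probabilities
profile_a = np.where(np.arange(N_SETS) < 32, 0.8, 0.05)
profile_b = np.where(np.arange(N_SETS) % 2 == 0, 0.6, 0.1)

# "Training": averaged traces per app form templates (stand-in for the CNN)
template_a = np.mean([simulate_trace(profile_a).mean(0) for _ in range(20)], axis=0)
template_b = np.mean([simulate_trace(profile_b).mean(0) for _ in range(20)], axis=0)

def classify(trace):
    v = trace.mean(0)  # average latency per set
    return "A" if np.linalg.norm(v - template_a) < np.linalg.norm(v - template_b) else "B"

correct = sum(classify(simulate_trace(profile_a)) == "A" for _ in range(50)) \
        + sum(classify(simulate_trace(profile_b)) == "B" for _ in range(50))
accuracy = correct / 100
```

With footprints this cleanly separated, even averaged templates classify reliably; the paper's Deep Learning step matters precisely when noise and random line replacement blur such averages.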
CSI Neural Network: Using Side-channels to Recover Your Artificial Neural Network Information
Machine learning has become mainstream across industries, and numerous
examples have proven its validity for security applications. In this work, we
investigate how to reverse engineer a neural network by using only power
side-channel information. To this end, we consider a multilayer perceptron as
the machine learning architecture of choice and assume a non-invasive and
eavesdropping attacker capable of measuring only passive side-channel leakages
like power consumption, electromagnetic radiation, and reaction time.
We conduct all experiments on real data and common neural net architectures
in order to properly assess the applicability and extendability of those
attacks. Practical results are shown on an ARM Cortex-M3 microcontroller. Our
experiments show that the side-channel attacker is capable of obtaining the
following information: the activation functions used in the architecture, the
number of layers and neurons in the layers, the number of output classes, and
weights in the neural network. Thus, the attacker can effectively reverse
engineer the network using side-channel information.
Next, we show that once the attacker knows the neural network
architecture, they can also recover the inputs to the network with
only a single-shot measurement. Finally, we discuss several mitigations one
could use to thwart such attacks.
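The weight-recovery step can be illustrated with classical correlation power analysis (CPA) under an assumed Hamming-weight leakage model. The 8-bit "weight", the multiply intermediate, and the noise level below are invented for illustration; real attacks on floating-point network parameters are considerably more involved.

```python
import numpy as np

rng = np.random.default_rng(1)
HW = np.array([bin(x).count("1") for x in range(256)])  # Hamming-weight table

secret = 0x5A              # hypothetical quantized weight byte to recover
n_traces = 500
inputs = rng.integers(0, 256, n_traces)

# Assumed power model: leakage ~ Hamming weight of the multiply's low byte
inter = (inputs * secret) & 0xFF
traces = HW[inter] + rng.normal(0, 1.0, n_traces)

# CPA: correlate the traces against predictions for every candidate byte
corrs = np.empty(256)
for guess in range(256):
    pred = HW[(inputs * guess) & 0xFF]
    if pred.std() == 0:                     # guess 0 predicts a constant
        corrs[guess] = 0.0
    else:
        corrs[guess] = abs(np.corrcoef(pred, traces)[0, 1])

recovered = int(np.argmax(corrs))           # best-correlating candidate
```

The correct candidate stands out because only it predicts the leakage for every input; wrong guesses correlate partially at best.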
Improved Study of Side-Channel Attacks Using Recurrent Neural Networks
Differential power analysis attacks are a kind of side-channel attack in which power traces serve as the side-channel information used to launch the attack. These attacks pose a significant security threat to modern cryptographic devices such as smart cards and Point-of-Sale (POS) machines, because after careful analysis of the power traces, an attacker can break an otherwise secure encryption algorithm and steal sensitive information.
In our work, we study differential power analysis attacks using two popular neural networks: the Recurrent Neural Network (RNN) and the Convolutional Neural Network (CNN). Our work seeks to answer three research questions (RQs):
RQ1: Is it possible to predict the unknown cryptographic algorithm using neural network models from different datasets?
RQ2: Is it possible to map the key value for the specific plaintext-ciphertext pair with or without side-band information?
RQ3: Using similar hyper-parameters, can we evaluate the performance of two neural network models (CNN vs. RNN)?
In answering these questions, we have worked with two different datasets: one is a physical dataset (the DPA contest v1 dataset), and the other is a simulated dataset (toggle-count quantities) generated from Verilog HDL. We have evaluated the efficiency of CNN and RNN models in predicting the unknown cryptographic algorithm of the device under attack. We have mapped the 56-bit key for a specific plaintext-ciphertext pair with and without using side-band information. Finally, we have evaluated our neural network models using metrics such as accuracy, loss, baselines, epochs, speed of operation, and memory consumption, and we have compared the performance of RNN and CNN on the different datasets. We have conducted three experiments and report their results. The first two experiments show the advantages of choosing CNN over RNN when working with side-channel datasets. In the third experiment, we compare two RNN models on the same datasets but with different dataset dimensions.
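As a toy analogue of RQ1 (predicting the algorithm from traces), the sketch below trains a plain logistic-regression classifier, a minimal stand-in for the RNN/CNN models, to separate two simulated trace families whose round structure differs. Every trace shape and parameter is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def make_trace(n_rounds, n=80, noise=0.3):
    """Hypothetical 'toggle count' trace: an algorithm with n_rounds rounds
    leaves a periodic component with n_rounds cycles in the capture window."""
    t = np.linspace(0, 1, n, endpoint=False)
    return np.sin(2 * np.pi * n_rounds * t) + rng.normal(0, noise, n)

# Two simulated algorithm classes, e.g. a 10-round vs. a 16-round cipher
X = np.array([make_trace(10) for _ in range(100)] +
             [make_trace(16) for _ in range(100)])
y = np.array([0] * 100 + [1] * 100)

idx = rng.permutation(200)                  # shuffle, then train/test split
X, y = X[idx], y[idx]
Xtr, ytr, Xte, yte = X[:150], y[:150], X[150:], y[150:]

# Plain logistic regression trained by gradient descent
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(Xtr @ w + b)))   # predicted P(class 1)
    g = p - ytr                                # gradient of cross-entropy loss
    w -= 0.05 * Xtr.T @ g / len(ytr)
    b -= 0.05 * g.mean()

p_te = 1.0 / (1.0 + np.exp(-(Xte @ w + b)))
acc = np.mean((p_te > 0.5) == (yte == 1))      # held-out accuracy
```

When the classes are this separable even a linear model suffices; the CNN/RNN comparison in the paper targets the realistic case where the discriminative features are subtle and nonlinear.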
Profiled Deep Learning Side-Channel Attack on a Protected Arbiter PUF Combined with Bitstream Modification
In this paper we show that deep learning can be used to identify the shape of power traces corresponding to the responses of a protected arbiter PUF implemented in FPGAs. To achieve that, we combine power analysis with bitstream modification. We train a CNN classifier on two 28nm XC7 FPGAs implementing 128-stage arbiter PUFs and then classify the responses of PUFs from two other FPGAs.
We demonstrate that it is possible to reduce the number of traces required for a successful attack to a single trace by modifying the bitstream to replicate PUF responses.
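The reason arbiter PUFs yield to learning attacks at all is that, in the standard additive delay model, the response is a linear threshold function of a parity transform of the challenge. A minimal model-building sketch follows, with invented delay parameters and a plain least-squares fit standing in for the paper's CNN.

```python
import numpy as np

rng = np.random.default_rng(3)
N_STAGES = 128

w_true = rng.normal(size=N_STAGES + 1)     # per-stage delay differences + offset

def features(ch):
    """Standard parity transform: the response is linear in these features."""
    phi = np.cumprod((1 - 2 * ch)[::-1])[::-1]   # suffix products of (-1)^bit
    return np.concatenate([phi, [1.0]])

def response(ch):
    return int(features(ch) @ w_true > 0)        # sign of accumulated delay

# Model-building attack: fit a linear separator from challenge-response pairs
C = rng.integers(0, 2, (3000, N_STAGES))
Phi = np.array([features(c) for c in C])
r = np.array([response(c) for c in C])
w_hat = np.linalg.lstsq(Phi, 2.0 * r - 1.0, rcond=None)[0]

# Predict responses of fresh challenges with the learned model
C_test = rng.integers(0, 2, (500, N_STAGES))
Phi_t = np.array([features(c) for c in C_test])
pred = (Phi_t @ w_hat > 0).astype(int)
truth = np.array([response(c) for c in C_test])
accuracy = np.mean(pred == truth)
```

A few thousand challenge-response pairs already give a usable clone of the simulated 128-stage PUF, which is why protected designs try to deny the attacker clean responses in the first place.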
Mercury: An Automated Remote Side-channel Attack to Nvidia Deep Learning Accelerator
DNN accelerators have been widely deployed in many scenarios to speed up the
inference process and reduce the energy consumption. One big concern about the
usage of the accelerators is the confidentiality of the deployed models: model
inference execution on the accelerators could leak side-channel information,
which enables an adversary to precisely recover the model details. Such model
extraction attacks can not only compromise the intellectual property of DNN
models, but also facilitate some adversarial attacks.
Although previous works have demonstrated a number of side-channel techniques
to extract models from DNN accelerators, they are not practical for two
reasons. (1) They only target simplified accelerator implementations, which
have limited practicality in the real world. (2) They require heavy human
analysis and domain knowledge. To overcome these limitations, this paper
presents Mercury, the first automated remote side-channel attack against the
off-the-shelf Nvidia DNN accelerator. The key insight of Mercury is to model
the side-channel extraction process as a sequence-to-sequence problem. The
adversary can leverage a time-to-digital converter (TDC) to remotely collect
the power trace of the target model's inference, and then uses a learning model
to automatically recover the architecture details of the victim model from the
power trace without any prior knowledge. The adversary can further use the
attention mechanism to localize the leakage points that contribute most to the
attack. Evaluation results indicate that Mercury can keep the error rate of
model extraction below 1%.
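The layer-boundary localization that Mercury learns end-to-end can be caricatured with a simple change-point pass over a simulated power trace. The per-layer power levels, lengths, window size, and threshold below are all invented; a real accelerator trace is far noisier, which is what motivates the learned sequence-to-sequence model.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical inference trace: each layer draws a distinct power level
layer_lens  = [300, 500, 200, 400]          # samples per layer (assumed)
layer_power = [1.0, 2.5, 1.5, 3.0]
trace = np.concatenate([rng.normal(p, 0.2, n)
                        for p, n in zip(layer_power, layer_lens)])

# Change-point pass: flag indices where adjacent window means jump sharply
W = 50
jumps = [i for i in range(W, trace.size - W)
         if abs(trace[i:i + W].mean() - trace[i - W:i].mean()) > 0.8]

# Collapse each run of adjacent flagged indices into one boundary estimate
runs = []
for i in jumps:
    if not runs or i - runs[-1][-1] > 1:
        runs.append([i])
    else:
        runs[-1].append(i)
est = [int(np.mean(r)) for r in runs]       # estimated layer boundaries
n_layers = len(est) + 1
```

Here the heuristic recovers the number of layers and their boundaries; Mercury's contribution is doing this automatically, remotely, and on traces where no hand-picked threshold works.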
Deep-Learning-Based Radio-Frequency Side-Channel Attack on Quantum Key Distribution
Quantum key distribution (QKD) protocols are proven secure based on
fundamental physical laws; however, the proofs consider only a well-defined
setting and encoding of the sent quantum signals. Side channels, where the encoded
quantum state is correlated with properties of other degrees of freedom of the
quantum channel, allow an eavesdropper to obtain information unnoticeably as
demonstrated in a number of hacking attacks on the quantum channel. Yet, also
classical radiation emitted by the devices may be correlated, leaking
information on the potential key, especially when combined with novel data
analysis methods.
We here demonstrate a side-channel attack using a deep convolutional neural
network to analyze the recorded classical, radio-frequency electromagnetic
emissions. Even at a distance of a few centimeters from the electronics of a
QKD sender employing frequently used electronic components we are able to
recover virtually all information about the secret key. Yet, as shown here,
countermeasures can enable a significant reduction of both the emissions and
the amount of secret key information leaked to the attacker. Our analysis
methods are independent of the actual device and thus provide a starting point
for assessing the presence of classical side channels in QKD devices.
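The essence of the attack, that key-dependent electronics leave key-dependent spectral lines in the RF emissions, can be sketched with plain FFT features in place of the paper's deep CNN. The sample rate, carrier frequencies, and signal-to-noise ratio below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
FS, N = 1_000_000, 4096        # assumed 1 MS/s capture, 4096 samples per bit

def rf_trace(bit, amp=0.5):
    """Hypothetical leakage: the sender's electronics emit a slightly
    different spectral line depending on the encoded key bit."""
    f = 120_000 if bit == 0 else 135_000
    t = np.arange(N) / FS
    return amp * np.sin(2 * np.pi * f * t) + rng.normal(0, 1.0, N)

def classify(trace):
    """Compare spectral energy in the two candidate bands."""
    spec = np.abs(np.fft.rfft(trace))
    freqs = np.fft.rfftfreq(N, 1 / FS)
    band0 = spec[np.abs(freqs - 120_000) < 2_000].max()
    band1 = spec[np.abs(freqs - 135_000) < 2_000].max()
    return 0 if band0 > band1 else 1

bits = rng.integers(0, 2, 200)
recovered = np.array([classify(rf_trace(b)) for b in bits])
accuracy = np.mean(recovered == bits)
```

Even with the signal buried below the noise floor per sample, the FFT concentrates it into a few bins; a deep network generalizes this to emissions whose key dependence has no such clean hand-crafted feature.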
SoK: Design Tools for Side-Channel-Aware Implementations
Side-channel attacks that leak sensitive information through a computing
device's interaction with its physical environment have proven to be a severe
threat to devices' security, particularly when adversaries have unfettered
physical access to the device. Traditional approaches for leakage detection
measure the physical properties of the device. Hence, they cannot be used
during the design process and fail to provide root cause analysis. An
alternative approach that is gaining traction is to automate leakage detection
by modeling the device. The demand to understand the scope, benefits, and
limitations of the proposed tools intensifies with the increase in the number
of proposals.
In this SoK, we classify approaches to automated leakage detection based on
the model's source of truth. We classify the existing tools on two main
parameters: whether the model includes measurements from a concrete device and
the abstraction level of the device specification used for constructing the
model. We survey the proposed tools to determine the current knowledge level
across the domain and identify open problems. In particular, we highlight the
absence of evaluation methodologies and metrics that would compare proposals'
effectiveness from across the domain. We believe that our results help
practitioners who want to use automated leakage detection and researchers
interested in advancing the knowledge and improving automated leakage
detection.
Breaking the Barriers to Specialty Care: Practical Ideas to Improve Health Equity and Reduce Cost - Call to Action for a System-wide Focus on Equity
Tremendous health outcome inequities remain in the U.S. across race and ethnicity, gender and sexual orientation, socio-economic status, and geography, particularly for those with serious conditions such as lung or skin cancer, HIV/AIDS, or cardiovascular disease. These inequities are driven by a complex set of factors, including distance to a specialist, insurance coverage, provider bias, and a patient's housing and healthy food access. These inequities not only harm patients, resulting in avoidable illness and death, they also drive unnecessary health system costs. This 5-part series highlights the urgent need to address these issues, providing resources such as case studies, data, and recommendations to help the health care sector make meaningful strides toward achieving equity in specialty care.

Top Takeaways

There are vast inequalities in access to and outcomes from specialty health care in the U.S. These inequalities are worst for minority patients, low-income patients, patients with limited English language proficiency, and patients in rural areas.

A number of solutions have emerged to improve health outcomes for minority and medically underserved patients. These solutions fall into three main categories: increasing specialty care availability, ensuring high-quality care, and helping patients engage in care.

As these inequities are also significant drivers of health costs, payers, health care provider organizations, and policy makers have a strong incentive to invest in solutions that will both improve outcomes and reduce unnecessary costs. These actors play a critical role in ensuring that equity is embedded into core care delivery at scale.

Part 5: "Call to Action for a System-wide Focus on Equity"

These solutions create value not only for patients, but also for health care providers and public and private payers. Each of these actors has a role to play in scaling and sustaining the health equity solutions.
Fuzzy matching template attacks on multivariate cryptography : a case study
Multivariate cryptography is one of the most promising candidates for post-quantum cryptography. Applying machine learning techniques, in this paper we experimentally investigate the side-channel security of multivariate cryptosystems; side-channel attacks seriously threaten hardware implementations of cryptographic systems. Generally, registers are required to store the values of monomials and polynomials during encryption in multivariate cryptosystems. Based on maximum-likelihood and fuzzy-matching techniques, we propose a template-based least-squares technique to efficiently exploit the side-channel leakage of these registers. Using QUAD, a typical multivariate cryptosystem with provable security, as a case study, we perform our attack against both serial and parallel QUAD implementations on a field-programmable gate array (FPGA). Experimental results show that our attacks on the serial and parallel implementations require only about 30 and 150 power traces, respectively, to reveal the secret key with a success rate close to 100%. Finally, efficient and low-cost strategies are proposed to resist such side-channel attacks.
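The template-based least-squares idea can be sketched as follows: profile a linear leakage model of a register's Hamming weight, then fuzzily match an attack trace to the candidate values consistent with the estimated weight. The register width, leakage coefficients, and noise levels below are invented, and a 4-bit register stands in for a realistic one to keep the candidate lists small.

```python
import numpy as np

rng = np.random.default_rng(6)
HW = np.array([bin(v).count("1") for v in range(16)])   # 4-bit register (toy)

# Profiling phase: traces leak a*HW(value)+b plus noise; fit (a, b) by
# least squares over known register values
vals = rng.integers(0, 16, 2000)
traces = 3.0 * HW[vals] + 5.0 + rng.normal(0, 1.0, 2000)
A = np.column_stack([HW[vals], np.ones(2000)])
(a_hat, b_hat), *_ = np.linalg.lstsq(A, traces, rcond=None)

def candidates(trace_val):
    """Fuzzy matching: keep every register value whose Hamming weight
    matches the weight estimated from the measured leakage."""
    hw = int(round((trace_val - b_hat) / a_hat))
    hw = min(max(hw, 0), 4)
    return [v for v in range(16) if HW[v] == hw]

# Attack phase: one noisy measurement of the secret register
secret = 0b1010
leak = 3.0 * HW[secret] + 5.0 + rng.normal(0, 0.2)
cands = candidates(leak)
```

Each measurement narrows the secret to a Hamming-weight class; combining classes across registers and rounds is what drives the full key recovery with a few dozen traces.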