293 research outputs found
Recovering Secrets From Prefix-Dependent Leakage
We discuss how to recover a secret bitstring given partial information obtained during a computation over that string, assuming the computation is a deterministic algorithm processing the secret bits sequentially. That abstract situation models certain types of side-channel attacks against discrete logarithm and RSA-based cryptosystems, where the adversary obtains information not on the secret exponent directly, but instead on the group or ring element that varies at each step of the exponentiation algorithm.
Our main result shows that, for a leakage of a single bit per iteration and under suitable statistical independence assumptions, one can recover the whole secret bitstring in polynomial time. We also discuss how to cope with imperfect leakage, extend the model to multi-bit leaks, and show how our algorithm yields attacks on popular cryptosystems such as (EC)DSA.
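The bit-by-bit recovery can be illustrated as a pruned search over prefixes. The toy below is a hedged sketch, not the paper's algorithm: the modulus, base, and leak function (the low bit of the exponentiation accumulator) are invented for illustration. It simulates left-to-right square-and-multiply with a one-bit leak per iteration, then recovers the secret by keeping only those prefixes whose predicted leaks match the observed ones:

```python
import random

def leaks_for(secret_bits, g, n):
    """Simulate left-to-right square-and-multiply, emitting a one-bit
    leak (here: the LSB of the accumulator) after each iteration."""
    acc, out = 1, []
    for b in secret_bits:
        acc = acc * acc % n
        if b:
            acc = acc * g % n
        out.append(acc & 1)  # the 1-bit leak for this iteration
    return out

def recover(leaks, g, n):
    """Breadth-first search over bit prefixes, pruning any prefix whose
    predicted leak disagrees with the observation at that step."""
    cands = [(1, [])]  # (accumulator value, guessed prefix bits)
    for leak in leaks:
        nxt = []
        for acc, bits in cands:
            for b in (0, 1):
                a = acc * acc % n
                if b:
                    a = a * g % n
                if a & 1 == leak:  # keep only leak-consistent children
                    nxt.append((a, bits + [b]))
        cands = nxt
    return [bits for _, bits in cands]

# Toy usage: the true secret always survives the pruning.
random.seed(1)
n, g = 1000003, 5  # illustrative odd modulus and base
secret = [random.randrange(2) for _ in range(32)]
candidates = recover(leaks_for(secret, g, n), g, n)
assert secret in candidates
```

On average each pruning step discards about half of the newly spawned prefixes, which is the intuition behind the polynomial-time claim under the independence assumptions.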
MicroWalk: A Framework for Finding Side Channels in Binaries
Microarchitectural side channels expose unprotected software to information
leakage attacks where a software adversary is able to track runtime behavior of
a benign process and steal secrets such as cryptographic keys. As suggested by
the incremental patches that successive versions of cryptographic libraries
have applied to RSA implementations against variants of side-channel attacks,
protecting security-critical algorithms against side channels is an intricate
task. Software protections avoid leakages by operating in constant time with a
uniform resource usage pattern independent of the processed secret. In this
respect, automated testing and verification of software binaries for
leakage-free behavior is of importance, particularly when the source code is
not available. In this work, we propose a novel technique based on Dynamic
Binary Instrumentation and Mutual Information Analysis to efficiently locate
and quantify memory-based and control-flow-based microarchitectural leakages.
We develop a software framework named MicroWalk for side-channel analysis of
binaries which can be extended to support new classes of leakage. For the first
time, by utilizing MicroWalk, we perform rigorous leakage analysis of two
widely-used closed-source cryptographic libraries: Intel IPP and
Microsoft CNG. We analyze different cryptographic implementations
consisting of millions of instructions in minutes of CPU time. By
locating previously unknown leakages in hardened implementations, our results
suggest that MicroWalk can efficiently find microarchitectural leakages in software
binaries.
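The core of such mutual-information analysis can be sketched in a few lines. In this hedged toy (the S-box, trace format, and sample count are invented for illustration, not taken from MicroWalk), we compute the empirical mutual information between a secret byte and the cache line touched at each instrumented access: a secret-dependent table lookup scores close to its full entropy, while a constant-time access scores zero.

```python
import math, random
from collections import Counter

def mutual_information(xs, ys):
    """Empirical mutual information I(X;Y) in bits."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        # c/n * log2( p(x,y) / (p(x) p(y)) ), with counts cancelled out
        mi += (c / n) * math.log2(c * n / (px[x] * py[y]))
    return mi

random.seed(0)
SBOX = list(range(256)); random.shuffle(SBOX)  # toy secret-dependent table
secrets = [random.randrange(256) for _ in range(2000)]
# One "trace entry" per execution: the cache line (address >> 6) touched
# by two hypothetical instructions -- one leaky lookup, one constant access.
leaky = [SBOX[s] >> 6 for s in secrets]  # address depends on the secret
safe = [0 for _ in secrets]              # constant access pattern
mi_leaky = mutual_information(secrets, leaky)  # close to 2 bits here
mi_safe = mutual_information(secrets, safe)    # exactly 0 bits
```

Ranking instructions by this score is what localizes the leak; the quantification in bits is what makes the report directly interpretable.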
SoK: Memorization in General-Purpose Large Language Models
Large Language Models (LLMs) are advancing at a remarkable pace, with myriad
applications under development. Unlike most earlier machine learning models,
they are no longer built for one specific application but are designed to excel
in a wide range of tasks. A major part of this success is due to their huge
training datasets and the unprecedented number of model parameters, which allow
them to memorize large amounts of information contained in the training data.
This memorization goes beyond mere language, and encompasses information only
present in a few documents. This is often desirable since it is necessary for
performing tasks such as question answering, and therefore an important part of
learning, but also brings a whole array of issues, from privacy and security to
copyright and beyond. LLMs can memorize short secrets in the training data, but
can also memorize concepts like facts or writing styles that can be expressed
in text in many different ways. We propose a taxonomy for memorization in LLMs
that covers verbatim text, facts, ideas and algorithms, writing styles,
distributional properties, and alignment goals. We describe the implications of
each type of memorization - both positive and negative - for model performance,
privacy, security and confidentiality, copyright, and auditing, and ways to
detect and prevent memorization. We further highlight the challenges that arise
from the predominant way of defining memorization with respect to model
behavior instead of model weights, due to LLM-specific phenomena such as
reasoning capabilities or differences between decoding algorithms. Throughout
the paper, we describe potential risks and opportunities arising from
memorization in LLMs that we hope will motivate new research directions.
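One common behavioral detection method for the verbatim category above, a discoverable-memorization check, can be sketched as follows. The greedy lookup "model" here is a stub standing in for a real LLM's greedy decoder, and all names and the example document are invented for illustration:

```python
def is_memorized(generate, doc_tokens, prefix_len=8):
    """Discoverable-memorization test: prompt with a prefix of a known
    training sequence and check whether greedy decoding reproduces the
    remainder verbatim."""
    prefix, target = doc_tokens[:prefix_len], doc_tokens[prefix_len:]
    return generate(prefix, len(target)) == target

def make_greedy_stub(corpus_tokens):
    """Stub 'model': deterministic next-token lookup trained on one
    document, standing in for a real LLM (assumption for illustration)."""
    nxt = {tuple(corpus_tokens[i:i + 2]): corpus_tokens[i + 2]
           for i in range(len(corpus_tokens) - 2)}
    def generate(prompt, n):
        out = list(prompt)
        for _ in range(n):
            out.append(nxt.get(tuple(out[-2:]), "<unk>"))
        return out[len(prompt):]
    return generate

doc = "the secret access code is 7 3 1 4 and must never be shared".split()
gen = make_greedy_stub(doc)
assert is_memorized(gen, doc)  # the stub reproduces its training text
```

The SoK's point about behavior-based definitions applies directly here: this test only observes one decoding strategy, so a negative result does not prove the weights hold no trace of the document.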
Neural Network Model Extraction Attacks in Edge Devices by Hearing Architectural Hints
As neural networks continue their reach into nearly every aspect of software
operations, the details of those networks become an increasingly sensitive
subject. Even those that deploy neural networks embedded in physical devices
may wish to keep the inner working of their designs hidden -- either to protect
their intellectual property or as a form of protection from adversarial inputs.
The specific problem we address is how, across a deep system stack and given
noisy and imperfect memory traces, one might reconstruct the neural network
architecture, including the set of layers employed, their connectivity, and
their respective dimension sizes. Considering both intra-layer architectural
features and the inter-layer temporal associations introduced by empirical
DNN design practice, we draw upon ideas from speech recognition to
solve this problem. We show that off-chip memory address traces and PCIe events
provide ample information to reconstruct such neural network architectures
accurately. We are the first to propose such accurate model extraction
techniques and demonstrate an end-to-end attack experimentally in the context
of an off-the-shelf Nvidia GPU platform with full system stack. Results show
that the proposed techniques achieve high reverse-engineering accuracy and
improve an adversary's ability to conduct targeted adversarial attacks,
raising the success rate from 14.6%-25.5% (without network architecture
knowledge) to 75.9% (with the extracted network architecture).
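The speech-recognition analogy amounts to decoding a hidden layer sequence from noisy trace features, for instance with a hidden Markov model. The sketch below is illustrative only: the layer types, transition/emission probabilities, and trace features are invented, not taken from the paper. It uses log-domain Viterbi decoding to recover the most likely layer sequence:

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Log-domain Viterbi: most likely hidden state sequence given obs."""
    prev = {s: (math.log(start_p[s] * emit_p[s][obs[0]]), [s]) for s in states}
    for o in obs[1:]:
        cur = {}
        for s in states:
            # Best predecessor for state s at this step.
            p = max(states, key=lambda q: prev[q][0] + math.log(trans_p[q][s]))
            lp = prev[p][0] + math.log(trans_p[p][s] * emit_p[s][o])
            cur[s] = (lp, prev[p][1] + [s])
        prev = cur
    return max(prev.values(), key=lambda t: t[0])[1]

# Toy HMM: hidden states are layer types, observations are coarse
# memory-traffic features per execution phase (all numbers assumed).
states = ["conv", "pool", "fc"]
start_p = {"conv": 0.8, "pool": 0.1, "fc": 0.1}
trans_p = {"conv": {"conv": 0.4, "pool": 0.5, "fc": 0.1},
           "pool": {"conv": 0.6, "pool": 0.1, "fc": 0.3},
           "fc":   {"conv": 0.1, "pool": 0.1, "fc": 0.8}}
emit_p = {"conv": {"heavy": 0.7, "light": 0.2, "burst": 0.1},
          "pool": {"heavy": 0.1, "light": 0.8, "burst": 0.1},
          "fc":   {"heavy": 0.2, "light": 0.1, "burst": 0.7}}
trace = ["heavy", "light", "heavy", "light", "burst", "burst"]
layers = viterbi(trace, states, start_p, trans_p, emit_p)
```

The transition priors encode the "design empirical experience" mentioned in the abstract (e.g. pooling tends to follow convolution, fully-connected layers cluster at the end), which is what lets noisy per-layer observations be decoded into a coherent architecture.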
Spectre Declassified: Reading from the Right Place at the Wrong Time
Practical information-flow programming languages commonly allow controlled leakage via a "declassify" construct: programmers can use this construct to declare intentional leakage. For instance, cryptographic signatures and ciphertexts, which are computed from private keys, are viewed as secret by information-flow analyses. Cryptographic libraries can use declassify to make this data public, as it is no longer sensitive.
In this paper, we study the impact of speculative execution in practical information-flow programming languages. First, we show that speculative execution leads to unintended leakage that violates the programmer's intent. Concretely, we present a PoC that recovers the AES key of an implementation of AES written in FaCT, a domain-specific language for constant-time programming. Our PoC is an instance of a Spectre-PHT attack; interestingly, it remains effective even if the program is compiled with Speculative Load Hardening (SLH), a compiler-based countermeasure against Spectre-PHT. Second, we propose compiler-based countermeasures for protecting programs against leakage, and show that these countermeasures achieve relative non-interference: informally, speculative leakage of the transformed programs must correspond to sequential leakage of the original programs. One of our countermeasures is a new transformation of independent interest called selective speculative load hardening (selSLH). SelSLH optimizes SLH as implemented by the LLVM compiler, reducing the number of inserted mitigations. Third, we implement one of our countermeasures in the FaCT compiler and evaluate the performance overhead for core cryptographic routines from several open-source projects. The results indicate a moderate overhead. Although we do not implement selSLH, we carry out a preliminary evaluation which suggests a significant gain over SLH for cryptographic implementations.
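The masking idea behind SLH can be illustrated abstractly. Real SLH is a compiler transformation over machine code that uses the branch predicate to poison registers; this Python toy (with an invented table and predicate) only models the data flow: the bounds predicate is folded into an all-ones/all-zero mask that squashes both the load index and the loaded value when the predicate is false, so a "mispredicted" path cannot form a secret-dependent address.

```python
def hardened_load(table, i, in_bounds):
    """SLH-style masking (sketch): fold the bounds predicate into a mask
    so that when the predicate is false, the index is forced to slot 0
    and the loaded value is zeroed -- mirroring how SLH poisons state on
    a mispredicted path instead of trusting the branch alone."""
    assert len(table) & (len(table) - 1) == 0  # power-of-two table (toy)
    mask = -int(in_bounds)                     # all-ones iff in bounds
    return table[i & mask & (len(table) - 1)] & mask

secret_table = [3, 1, 4, 1, 5, 9, 2, 6]      # illustrative data
ok = hardened_load(secret_table, 5, True)    # architectural path: 9
blocked = hardened_load(secret_table, 5, False)  # masked path: 0
```

In these terms, selSLH as described in the abstract would insert the mask only on loads whose results can actually flow into secret-dependent operations, which is why it needs fewer mitigations than blanket SLH.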
Differential Power Analysis of HMAC SHA-2 in the Hamming Weight Model
As any algorithm manipulating secret data, HMAC is potentially vulnerable to side channel attacks. In 2007, McEvoy et al. proposed a differential power analysis attack against HMAC instantiated with hash functions from the SHA-2 family. Their attack works in the Hamming distance leakage model and makes strong assumptions on the target implementation. In this paper, we present an attack on HMAC SHA-2 in the Hamming weight leakage model, which advantageously can be used when no information is available on the targeted implementation. Furthermore, our attack can be adapted to the Hamming distance model with weaker assumptions on the implementation. We show the feasibility of our attack on simulations, and we study its overall cost and success rate. We also provide an evaluation of the performance overhead induced by the countermeasures necessary to avoid the attack.
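The Hamming-weight attack style can be sketched with a generic correlation power analysis toy. Everything here is invented for illustration (the key byte, trace count, noise level, and the modular-addition target chosen to echo SHA-2's internal additions); the idea is simply to predict the Hamming weight of a key-dependent intermediate for every guess and correlate the predictions against measured traces:

```python
import random

def hw(x):
    """Hamming weight of an integer."""
    return bin(x).count("1")

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)

random.seed(7)
SECRET = 0x5C  # one secret byte of internal state (illustrative)
inputs = [random.randrange(256) for _ in range(500)]
# Simulated power samples: HW of the secret-dependent intermediate
# (a modular addition, as in SHA-2's message schedule) plus noise.
traces = [hw((SECRET + m) & 0xFF) + random.gauss(0, 1) for m in inputs]
# Attack: correlate the traces against the HW prediction for each guess.
corrs = [pearson([hw((k + m) & 0xFF) for m in inputs], traces)
         for k in range(256)]
best = max(range(256), key=lambda k: corrs[k])  # recovered byte
```

The same loop structure carries over to the real attack; what changes is the choice of intermediate (the SHA-2 state words absorbed into HMAC's inner and outer hashes) and the fact that traces come from measurements rather than simulation.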
Contextualizing Alternative Models of Secret Sharing
A secret sharing scheme is a means of distributing information to a set of players such that any authorized subset of players can recover a secret and any unauthorized subset does not learn any information about the secret. In over forty years of research in secret sharing, there has been an emergence of new models and extended capabilities of secret sharing schemes. In this thesis, we study various models of secret sharing and present them in a consistent manner to provide context for each definition. We discuss extended capabilities of secret sharing schemes, including a comparison of methods for updating secrets via local computations on shares and an analysis of approaches to reproducing/repairing shares. We present an analysis of alternative adversarial settings which have been considered in the area of secret sharing. In this work, we present a formalization of a deniability property which is inherent to some classical secret sharing schemes. We provide new, game-based definitions for different notions of verifiability and robustness. By using consistent terminology and similar game-based definitions, we are able to demystify the subtle differences in each notion raised in the literature.
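As a concrete baseline for the classical model discussed above, here is a minimal Shamir (t, n) threshold scheme over a prime field (the field size and parameters are illustrative). Any t shares reconstruct the secret by Lagrange interpolation at zero, while any fewer than t are consistent with every possible secret:

```python
import random

P = 2**61 - 1  # prime modulus for the share field (illustrative choice)

def share(secret, t, n):
    """Shamir (t, n) sharing: sample a random degree-(t-1) polynomial f
    with f(0) = secret; share i is the point (i, f(i))."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    f = lambda x: sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # Multiply yi by the Lagrange basis value at 0 (den inverted
        # via Fermat's little theorem, since P is prime).
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

random.seed(0)
shares = share(424242, t=3, n=5)
assert reconstruct(shares[:3]) == 424242  # any 3 of the 5 shares suffice
```

The extended capabilities surveyed in the thesis (share updates via local computation, share repair, verifiability) can all be framed as operations on or proofs about these polynomial points.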
Principles of Security and Trust
This open access book constitutes the proceedings of the 8th International Conference on Principles of Security and Trust, POST 2019, which took place in Prague, Czech Republic, in April 2019, held as part of the European Joint Conference on Theory and Practice of Software, ETAPS 2019. The 10 papers presented in this volume were carefully reviewed and selected from 27 submissions. They deal with theoretical and foundational aspects of security and trust, including new theoretical results, practical applications of existing foundational ideas, and innovative approaches stimulated by pressing practical problems.