Pentimento: Data Remanence in Cloud FPGAs
Cloud FPGAs strike an alluring balance between computational efficiency,
energy efficiency, and cost. It is the flexibility of the FPGA architecture
that enables these benefits, but that very same flexibility also exposes new
security vulnerabilities. We show that a remote attacker can recover "FPGA
pentimenti" - long-removed secret data belonging to a prior user of a cloud
FPGA. The sensitive data constituting an FPGA pentimento is an analog imprint
from bias temperature instability (BTI) effects on the underlying transistors.
We demonstrate how this slight degradation can be measured using a
time-to-digital converter (TDC) when an adversary programs one into the target
cloud FPGA.
This technique allows an attacker to recover information on cloud FPGAs that
was previously considered safe, even after it is no longer explicitly present. Notably, it can
allow an attacker who knows a non-secret "skeleton" (the physical structure,
but not the contents) of the victim's design to (1) extract proprietary details
from an encrypted FPGA design image available on the AWS marketplace and (2)
recover data loaded at runtime by a previous user of a cloud FPGA using a known
design. Our experiments show that BTI degradation (burn-in) and recovery are
measurable and constitute a security threat to commercial cloud FPGAs.
Comment: 17 pages, 8 figures
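The TDC-based measurement described above can be illustrated with a toy simulation. Everything below is a hypothetical numerical sketch, not the paper's setup: the delay values, tap resolution, jitter, and the few-picosecond BTI shift are invented for illustration only.

```python
import random

def tdc_reading(base_delay_ps, tap_delay_ps=10.0, jitter_ps=2.0, clock_ps=500.0):
    """Simulate a delay-line TDC: count how many taps a signal traverses
    within one clock period. A path with more propagation delay (e.g. one
    built from BTI-aged transistors) reaches fewer taps."""
    delay = base_delay_ps + random.gauss(0, jitter_ps)
    return int((clock_ps - delay) / tap_delay_ps)

random.seed(0)
fresh = [tdc_reading(100.0) for _ in range(10000)]  # unstressed path
aged  = [tdc_reading(103.0) for _ in range(10000)]  # hypothetical +3 ps BTI burn-in

avg_fresh = sum(fresh) / len(fresh)
avg_aged = sum(aged) / len(aged)
# Any single reading is too coarse to see a sub-tap shift, but averaging
# many noisy readings resolves it -- the fresh path traverses measurably
# more taps on average than the aged one.
print(avg_fresh - avg_aged)
```

The point of the sketch is that averaging trades measurement count for resolution, which is why a slight analog imprint can be read out with a coarse on-chip sensor.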
Statistical Metrics of Hardware Security
Hardware security is a fundamental and increasingly important contributor to the trustworthiness of our computing infrastructure. With the proliferation of embedded computers, attacks focused on exploiting hardware vulnerabilities are being discovered rapidly, and mitigation techniques are being proposed and deployed. But how do we determine the effectiveness of a mitigation against attacks that may not yet exist? Methods that answer this question are called security metrics. In situations where an attack relies on either disruption or interception of non-deterministic values, these security metrics are based on statistics. Statistical metrics are based on assumptions about the capabilities of potential attackers and the nature of the measurements. Metrics with assumptions that are too strong can be a source of false confidence, because a favorable metric result does not necessarily indicate a secure device. In this dissertation, I formulate practical metrics in the context of power side-channel analysis and continuous testing of random number generators that do away with many of these limiting assumptions. This dissertation has three major facets. In the first, I present a hardware system and statistic for continuous testing of random number pipelines which can be used irrespective of the underlying probability distribution and requires minimal a priori knowledge about the system to be monitored. The second, the Holistic Assessment Criterion, is a statistic that allows ranking of devices and algorithms with respect to the entire measurement window, and is sensitive to vulnerabilities in the underlying high-dimensional geometry of the measurement vectors. The third, Computational Blinking, is an architectural mitigation strategy relying on a metric that ranks times during execution of an algorithm in order of vulnerability.
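As a flavor of what distribution-free continuous testing of a random number pipeline can look like, here is a minimal sketch of a repetition-count health test in the spirit of NIST SP 800-90B. This is a stand-in illustration, not the statistic developed in the dissertation; the cutoff and stream model are arbitrary.

```python
import random

def repetition_count_monitor(stream, cutoff=8):
    """Distribution-free health check: raise an alarm if any value
    repeats `cutoff` times in a row. Makes no assumption about the
    underlying probability distribution of the samples."""
    run_value, run_len = None, 0
    for i, x in enumerate(stream):
        if x == run_value:
            run_len += 1
            if run_len >= cutoff:
                return i  # index where the failure was detected
        else:
            run_value, run_len = x, 1
    return None  # no alarm

random.seed(1)
healthy = [random.getrandbits(8) for _ in range(100000)]
# Inject a transient stuck-at fault into an otherwise healthy stream.
stuck = healthy[:500] + [0xAA] * 20 + healthy[500:]

print(repetition_count_monitor(healthy))  # no alarm expected
print(repetition_count_monitor(stuck))    # alarm shortly after the fault
```

The monitor needs no model of the source's distribution, only the (weak) premise that long identical runs are vanishingly unlikely in healthy entropy output.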
A Brain-Computer Interface (BCI) for the Detection of Mine-Like Objects in Sidescan Sonar Imagery
Detection of mine-like objects (MLOs) in sidescan sonar imagery is a problem that affects our military in terms of safety and cost. The current process involves large amounts of time for subject matter experts to analyze sonar images searching for MLOs. The automation of the detection process has been heavily researched over the years and some of these computer vision approaches have improved dramatically, providing substantial processing speed benefits. However, the human visual system has an unmatched ability to recognize objects of interest. This paper posits a brain-computer interface (BCI) approach that combines the complementary benefits of computer vision and human vision. The first stage of the BCI, a Haar-like feature classifier, is cascaded into the second stage, rapid serial visual presentation (RSVP) of image chips. The RSVP paradigm maximizes throughput while allowing an electroencephalography (EEG) interest classifier to determine the human subjects' recognition of objects. In an additional proposed BCI system we add a third stage that uses a trained support vector machine (SVM) based on the Haar-like features of stage one and the EEG interest scores of stage two. We characterize and show performance improvements for subsets of these BCI systems over the computer vision and human vision capabilities alone.
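The third-stage fusion idea can be sketched with synthetic data. The sketch below is hypothetical: it substitutes a simple perceptron for the paper's SVM and invents Gaussian stand-ins for the Haar-like and EEG interest scores, purely to show how a learned linear fusion of the two stages' scores could work.

```python
import random

random.seed(2)

# Synthetic stand-in data: each candidate image chip gets a Haar-like
# detector score (stage 1) and an EEG "interest" score (stage 2).
# MLO chips (label 1) tend to score higher on both channels.
def make_chip(is_mlo):
    haar = random.gauss(0.7 if is_mlo else 0.3, 0.15)
    eeg = random.gauss(0.6 if is_mlo else 0.4, 0.15)
    return (haar, eeg), is_mlo

train = [make_chip(i % 2) for i in range(400)]

# A perceptron stands in here for the paper's stage-three SVM:
# it learns a linear fusion of the two scores.
w, b = [0.0, 0.0], 0.0
for _ in range(50):
    for (x1, x2), y in train:
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = y - pred
        w[0] += 0.1 * err * x1
        w[1] += 0.1 * err * x2
        b += 0.1 * err

held_out = [make_chip(i % 2) for i in range(200)]
acc = sum((1 if w[0] * x1 + w[1] * x2 + b > 0 else 0) == y
          for (x1, x2), y in held_out) / len(held_out)
print(f"fused-classifier accuracy: {acc:.2f}")
```

The fused decision boundary can exploit both channels at once, which is the intuition behind cascading a machine-vision score with a human-derived EEG interest score.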
iSTELLAR: intermittent Signature aTtenuation Embedded CRYPTO with Low-Level metAl Routing
Quantitative Analysis of Timing Channel Security in Cryptographic Hardware Design
Cryptographic cores are known to leak information about their private key due to runtime variations, and there are many well-known attacks that can exploit this timing channel. In this paper, we study how information-theoretic measures can quantify the amount of key leakage that can be extracted from runtime measurements. We develop and analyze 22 Rivest-Shamir-Adleman (RSA) hardware designs - each with unique performance optimizations, timing channel mitigation techniques, or discretization/randomization countermeasures. We demonstrate the effectiveness of information-theoretic measures for quantifying timing leakage through correlation analysis of information-theoretic measurements and attack results. Experimental results show that mutual information is a promising technique for quantifying timing leakage for RSA, Advanced Encryption Standard (AES), and elliptic curve cryptography ciphers, i.e., the mutual information correlates to being able to successfully guess the value of the private key. This is an important step toward a hardware security metric which allows designers to reason about security alongside traditional hardware design metrics like area, performance, and power.
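The mutual-information approach can be illustrated with a toy timing model. The sketch below is hypothetical (the leakage model, cycle counts, and noise level are all invented): it computes a plug-in mutual-information estimate between a secret key bit and a discretized runtime, which comes out well above zero for a leaky design and near zero for a constant-time one.

```python
import math
import random

random.seed(3)

def mutual_information(pairs):
    """Plug-in mutual information estimate (in bits) from a list of
    (key_bit, timing_bin) samples, using empirical joint counts."""
    n = len(pairs)
    pxy, px, py = {}, {}, {}
    for x, y in pairs:
        pxy[(x, y)] = pxy.get((x, y), 0) + 1
        px[x] = px.get(x, 0) + 1
        py[y] = py.get(y, 0) + 1
    mi = 0.0
    for (x, y), c in pxy.items():
        # p(x,y) * log2( p(x,y) / (p(x) p(y)) ), expressed with counts
        mi += (c / n) * math.log2(c * n / (px[x] * py[y]))
    return mi

def runtime(bit, leak_cycles):
    # Toy leakage model: a set key bit costs extra cycles (think of an
    # extra modular multiply in square-and-multiply RSA), plus noise.
    return round(1000 + bit * leak_cycles + random.gauss(0, 2))

bits = [random.getrandbits(1) for _ in range(5000)]
leaky = [(b, runtime(b, 5)) for b in bits]          # key-dependent timing
constant_time = [(b, runtime(b, 0)) for b in bits]  # timing independent of key

print(mutual_information(leaky))          # well above zero: bit recoverable
print(mutual_information(constant_time))  # near zero: timing reveals little
```

The same estimator applies regardless of the cipher, which mirrors the paper's point that mutual information can serve as a design-time leakage metric alongside area, performance, and power.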