One Fault is All it Needs: Breaking Higher-Order Masking with Persistent Fault Analysis
Persistent fault analysis (PFA) was proposed at CHES 2018 as a novel fault analysis technique. It was shown to completely defeat standard redundancy-based countermeasures against fault analysis. In this work, we investigate the security of masking schemes against PFA. We show that, with only one fault injection, masking countermeasures can be broken at any masking order. The study is performed on publicly available implementations of masking schemes.
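To make the underlying key-recovery principle concrete, here is a minimal sketch of classic (unmasked) PFA on a toy 4-bit last round: a persistent fault overwrites one S-box entry, so exactly one ciphertext value never occurs, and that absence reveals the key. The S-box size, key, and names are illustrative assumptions; the paper's contribution is extending this single-fault attack to masked implementations.

```python
import random

random.seed(0)

# Toy 4-bit S-box standing in for AES's 8-bit one; a persistent fault
# overwrites one table entry, so one output value can never appear.
SBOX = list(range(16))
random.shuffle(SBOX)
MISSING = SBOX[3]      # the S-box output value erased by the fault
SBOX[3] = SBOX[5]      # persistent fault: table entry overwritten

KEY = 0xA              # last-round key nibble to recover

def last_round(x):
    """Faulty last round: S-box lookup followed by key addition."""
    return SBOX[x] ^ KEY

# Under the persistent fault, ciphertext value MISSING ^ KEY never occurs,
# so the single absent ciphertext value directly reveals the key nibble.
seen = {last_round(random.randrange(16)) for _ in range(10_000)}
absent = (set(range(16)) - seen).pop()
print(f"recovered key: {absent ^ MISSING:#x} (true key: {KEY:#x})")
```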
DLPFA: Deep Learning based Persistent Fault Analysis against Block Ciphers
Deep learning techniques have been widely applied to side-channel analysis (SCA) in recent years and have shown better performance than traditional methods. However, there has been little research to date on deep learning techniques in fault analysis. This article undertakes the first study to introduce deep learning techniques into fault analysis to perform key recovery. We investigate the application of the multi-layer perceptron (MLP) and the convolutional neural network (CNN) in persistent fault analysis (PFA) and propose deep learning-based persistent fault analysis (DLPFA). DLPFA is first applied to the Advanced Encryption Standard (AES) to verify its effectiveness. Then, to push the study further, we extend DLPFA to PRESENT, a lightweight substitution–permutation network (SPN) based block cipher. The experimental results show that DLPFA can handle random faults and provides outstanding performance with a suitable selection of hyper-parameters.
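A hypothetical sketch of the DLPFA idea, using scikit-learn's MLPClassifier rather than the paper's architecture: ciphertext-value histograms collected under a persistent S-box fault are fed to an MLP that learns to predict the key. The S-box size, features, and hyper-parameters are all illustrative assumptions, not the paper's exact setup.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
SBOX = rng.permutation(16)          # toy 4-bit S-box
FAULTY = SBOX.copy()
FAULTY[3] = FAULTY[5]               # persistent fault in the table

def trace(key, n=512):
    """Histogram of faulty ciphertext nibbles under one fixed key."""
    x = rng.integers(0, 16, n)
    c = FAULTY[x] ^ key
    return np.bincount(c, minlength=16) / n

# Training data: histograms labeled with the key that produced them.
keys = rng.integers(0, 16, 4000)
X = np.stack([trace(k) for k in keys])
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500).fit(X, keys)

k_true = 0xB
print("predicted key:", clf.predict(trace(k_true).reshape(1, -1))[0])
```

The signal the network exploits is the same one classic PFA uses: the histogram bin at the fault-induced missing value is empty, and its position shifts with the key.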
The Parallel Persistent Memory Model
We consider a parallel computational model that consists of $P$ processors,
each with a fast local ephemeral memory of limited size, and sharing a large
persistent memory. The model allows for each processor to fault with bounded
probability, and possibly restart. On faulting, all processor state and local
ephemeral memory are lost, but the persistent memory remains. This model is
motivated by upcoming non-volatile memories that are as fast as existing random
access memory, are accessible at the granularity of cache lines, and have the
capability of surviving power outages. It is further motivated by the
observation that in large parallel systems, failure of processors and their
caches is not unusual.
Within the model we develop a framework for designing locality-efficient
parallel algorithms that are resilient to failures. There are several
challenges, including the need to recover from failures, the desire to do this
in an asynchronous setting (i.e., not blocking other processors when one
fails), and the need for synchronization primitives that are robust to
failures. We describe approaches to solve these challenges based on breaking
computations into what we call capsules, which have certain properties, and
developing a work-stealing scheduler that functions properly within the context
of failures. The scheduler guarantees a time bound of $O(W/P_A + D (P/P_A) \lceil \log_{1/f} W \rceil)$ in expectation, where $W$ and $D$ are the work and depth of the computation (in the absence of failures), $P_A$ is the average number of processors available during the computation, and $f$ is the probability that a capsule fails. Within the model and using the proposed
methods, we develop efficient algorithms for parallel sorting and other
primitives.
Comment: This paper is the full version of a paper at SPAA 2018 with the same name.
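A toy, single-processor simulation of the capsule idea under stated assumptions (the names are hypothetical, and a dict stands in for persistent memory): each capsule reads its inputs from persistent state, computes ephemerally, and commits atomically, so re-executing a faulted capsule is safe.

```python
import random

random.seed(4)
FAULT_PROB = 0.2                 # chance that a capsule run faults mid-way

persistent = {"i": 0, "acc": 0}  # persistent memory: survives faults
data = list(range(10))           # input, also assumed persistent

def run_capsule():
    """One idempotent capsule: fold data[i] into a running sum."""
    i = persistent["i"]                    # read inputs from persistent memory
    local = persistent["acc"] + data[i]    # ephemeral computation
    if random.random() < FAULT_PROB:       # fault: ephemeral state is lost
        raise RuntimeError("processor fault")
    persistent.update(i=i + 1, acc=local)  # atomic commit of the capsule

while persistent["i"] < len(data):
    try:
        run_capsule()
    except RuntimeError:
        continue          # restart: safe to re-execute the same capsule

print(persistent["acc"])  # 45, despite repeated simulated faults
```

Because nothing is committed until the capsule's single atomic update, a fault before the commit leaves persistent state untouched, which is what makes blind re-execution correct.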
Algorithm-Directed Crash Consistence in Non-Volatile Memory for HPC
Fault tolerance is one of the major design goals for HPC. The emergence of
non-volatile memories (NVM) provides a solution for building fault-tolerant HPC systems.
Data in NVM-based main memory are not lost when the system crashes because of
the non-volatile nature of NVM. However, because of volatile caches, data
must be logged and explicitly flushed from caches into NVM to ensure
consistency and correctness before crashes, which can cause large runtime
overhead.
In this paper, we introduce an algorithm-based method to establish crash
consistency in NVM for HPC applications. We slightly extend application data
structures or sparsely flush cache blocks, which introduces negligible runtime
overhead. Such extension or cache flushing allows us to use algorithm knowledge
to reason about data consistency, or to correct inconsistent data, when the
application crashes. We demonstrate the effectiveness of our method for three
algorithms, including an iterative solver, dense matrix multiplication, and
Monte Carlo simulation. Based on a comprehensive performance evaluation across a
variety of test environments, we demonstrate that our approach has very small
runtime overhead (at most 8.2% and less than 3% in most cases), much smaller
than that of traditional checkpointing, while having the same or lower
recomputation cost.
Comment: 12 pages
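As an illustration of how algorithm knowledge can tolerate inconsistent data, here is a hedged sketch, one plausible instance rather than the paper's exact scheme, using the iterative-solver case: a Jacobi iteration converges from any starting vector on a diagonally dominant system, so a crash that leaves the iterate half-updated in NVM needs no log; restarting from the inconsistent vector still converges. All names and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50
A = n * np.eye(n) + rng.random((n, n))   # diagonally dominant system
b = rng.random(n)
D = np.diag(A)                           # diagonal entries
R = A - np.diag(D)                       # off-diagonal remainder

x = np.zeros(n)                          # iterate, resident in NVM
for it in range(200):
    x_new = (b - R @ x) / D              # one Jacobi sweep
    if it == 50:                         # simulated crash mid-flush:
        x[: n // 2] = x_new[: n // 2]    # only half of x reached NVM
        continue                         # restart from the inconsistent x
    x = x_new

print("residual after crash-and-continue:", np.linalg.norm(A @ x - b))
```

The half-updated vector is simply another valid starting point for the iteration, so correctness follows from the algorithm itself rather than from logging.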
Integrating testing techniques through process programming
Integration of multiple testing techniques is required to demonstrate high quality of software. Technique integration has three basic goals: incremental testing capabilities, extensive error detection, and cost-effective application. We are experimenting with the use of process programming as a mechanism for integrating testing techniques. Having set out to integrate DATA FLOW testing and RELAY, we proposed synergistic use of these techniques to achieve all three goals. We developed a testing process program much as we would develop a software product, from requirements through design to implementation and evaluation. We found process programming to be effective for explicitly integrating the techniques and achieving the desired synergism. Used in this way, process programming also mitigates many of the other problems that plague testing in the software development process.
Critical features in electromagnetic anomalies detected prior to the L'Aquila earthquake
Electromagnetic (EM) emissions in a wide frequency spectrum ranging from kHz
to MHz are produced by opening cracks, which can be considered as the so-called
precursors of general fracture. We emphasize that the MHz radiation appears
earlier than the kHz radiation at both the laboratory and the geophysical scale. An important
challenge in this field of research is to distinguish characteristic epochs in
the evolution of precursory EM activity and identify them with the equivalent
last stages in the earthquake (EQ) preparation process. Recently, we proposed
the following two epochs/stages model: (i) the first epoch, which includes the
initially emerged MHz EM radiation, is thought to be due to the fracture of a
highly heterogeneous system that surrounds the family of asperities; (ii) the
second epoch, which includes the finally emerged strong impulsive kHz EM
emission, is due to the fracture of the high-strength large asperities that are
distributed along the activated fault sustaining the system. A catastrophic EQ
of magnitude Mw = 6.3 occurred on 6 April 2009 in central Italy. The majority of
the damage occurred in the city of L'Aquila. Clear kHz - MHz EM anomalies have
been detected prior to the L'Aquila EQ. Herein, we investigate the seismogenic
origin of the detected MHz anomaly. The analysis in terms of intermittent
dynamics of critical fluctuations reveals that the candidate EM precursor: (i)
can be described in analogy with a thermal continuous phase transition; (ii)
has anti-persistent behaviour. These features suggest that the emerged
candidate precursor could be triggered by microfractures in the highly
disordered system that surrounded the backbone of asperities of the activated
fault. We introduce a criterion for an underlying strong critical behavior.
Comment: 8 pages, 6 figures
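For readers unfamiliar with anti-persistence, here is a minimal sketch of a rescaled-range (R/S) estimate of the Hurst exponent, which flags anti-persistent behaviour as H < 0.5. The estimator and the toy series are illustrative assumptions, not the paper's method of critical fluctuations.

```python
import numpy as np

def hurst_rs(x, windows=(16, 32, 64, 128, 256)):
    """Estimate H from the scaling law R/S ~ n^H over window sizes n."""
    x = np.asarray(x, float)
    rs = []
    for n in windows:
        chunks = x[: len(x) // n * n].reshape(-1, n)
        dev = np.cumsum(chunks - chunks.mean(axis=1, keepdims=True), axis=1)
        r = dev.max(axis=1) - dev.min(axis=1)   # range of cumulative deviation
        s = chunks.std(axis=1)                  # per-window standard deviation
        rs.append(np.mean(r / s))
    # Slope of log(R/S) versus log(n) estimates the Hurst exponent.
    return np.polyfit(np.log(windows), np.log(rs), 1)[0]

rng = np.random.default_rng(3)
white = rng.standard_normal(4096)                    # uncorrelated: H near 0.5
antipersistent = np.diff(rng.standard_normal(4097))  # negatively correlated: H well below 0.5
print(f"white noise H ~ {hurst_rs(white):.2f}")
print(f"anti-persistent H ~ {hurst_rs(antipersistent):.2f}")
```

Differencing white noise produces negatively autocorrelated increments, so the second series reverses direction more often than chance, which is exactly the anti-persistent signature the abstract attributes to the candidate precursor.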
Persistent termini of 2004- and 2005-like ruptures of the Sunda megathrust
To gain insight into the longevity of subduction zone segmentation, we use coral microatolls to examine an 1100-year record of large earthquakes across the boundary of the great 2004 and 2005 Sunda megathrust ruptures. Simeulue, a 100-km-long island off the west coast of northern Sumatra, Indonesia, straddles this boundary: northern Simeulue was uplifted in the 2004 earthquake, whereas southern Simeulue rose in 2005. Northern Simeulue corals reveal that predecessors of the 2004 earthquake occurred in the 10th century AD, in AD 1394 ± 2, and in AD 1450 ± 3. Corals from southern Simeulue indicate that none of the major uplifts inferred on northern Simeulue in the past 1100 years extended to southern Simeulue. The two largest uplifts recognized at a south-central Simeulue site—around AD 1422 and in 2005—involved little or no uplift of northern Simeulue. The distribution of uplift and strong shaking during a historical earthquake in 1861 suggests the 1861 rupture area was also restricted to south of central Simeulue, as in 2005. The strikingly different histories of the two adjacent patches demonstrate that this boundary has persisted as an impediment to rupture through at least seven earthquakes in the past 1100 years. This implies that the rupture lengths, and hence sizes, of at least some future great earthquakes and tsunamis can be forecast. These microatolls also provide insight into megathrust behavior between earthquakes, revealing sudden and substantial changes in interseismic strain accumulation rates.