1,116 research outputs found
Strong fairness and ultra metrics
We answer an open question of Costa and Hennessy and present a characterization of the infinite fair computations in finite labeled transition systems, without any structure on the states, as cluster points in metric spaces. This technique is applied to reduce the logical complexity of several known fairness concepts from Π⁰₃ to Π⁰₂ and from Σ¹₁ to Π⁰₃, respectively.
LeakyOhm: Secret Bits Extraction using Impedance Analysis
The threats of physical side-channel attacks and their countermeasures have
been widely researched. Most physical side-channel attacks rely on the
unavoidable influence of computation or storage on current consumption or
voltage drop on a chip. Such data-dependent influence can be exploited by, for
instance, power or electromagnetic analysis. In this work, we introduce a novel
non-invasive physical side-channel attack, which exploits the data-dependent
changes in the impedance of the chip. Our attack relies on the fact that the
temporarily stored contents in registers alter the physical characteristics of
the circuit, which results in changes in the die's impedance. To sense such
impedance variations, we deploy a well-known RF/microwave method called
scattering parameter analysis, in which we inject sine wave signals with high
frequencies into the system's power distribution network (PDN) and measure the
echo of the signals. We demonstrate that according to the content bits and
physical location of a register, the reflected signal is modulated differently
at various frequency points, enabling the simultaneous and independent probing
of individual registers. Such side-channel leakage challenges the t-probing
security model assumption used in masking, a prominent side-channel
countermeasure. To validate our claims, we mount non-profiled and profiled
impedance analysis attacks on hardware implementations of unprotected and
high-order masked AES. We show that in the case of the profiled attack, only a
single trace is required to recover the secret key. Finally, we discuss how a
specific class of hiding countermeasures might be effective against impedance
leakage.
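The sensing principle the abstract describes can be illustrated with the textbook relation between a load impedance and its reflection coefficient: a data-dependent shift in the die's impedance changes how much of an injected sine wave is reflected back. This is a minimal sketch of that relation only, not the paper's measurement setup, and the impedance values below are invented for illustration:

```python
# Reflection coefficient of a load Z_L on a line with characteristic
# impedance Z0: Gamma = (Z_L - Z0) / (Z_L + Z0). A register-dependent
# shift in the PDN impedance shifts Gamma, modulating the echo.
def reflection_coefficient(z_load: complex, z0: complex = 50.0) -> complex:
    return (z_load - z0) / (z_load + z0)

# Two hypothetical impedance states (illustrative values, not from the paper):
gamma_0 = reflection_coefficient(48.0 + 1.0j)  # e.g. register holds 0-bits
gamma_1 = reflection_coefficient(48.2 + 1.1j)  # e.g. register holds 1-bits

# The observable side channel is the difference between the reflected signals.
delta = abs(gamma_1 - gamma_0)
```

A perfectly matched load (Z_L = Z0) reflects nothing; any impedance deviation produces a nonzero, measurable reflection.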
Runtime-guided mitigation of manufacturing variability in power-constrained multi-socket NUMA nodes
This work has been supported by the Spanish Government (Severo Ochoa grants SEV2015-0493, SEV-2011-00067), by
the Spanish Ministry of Science and Innovation (contract TIN2015-65316-P), by the Generalitat de Catalunya (contracts 2014-SGR-1051 and 2014-SGR-1272), by the RoMoL ERC Advanced Grant (GA 321253), and by the European HiPEAC Network of Excellence. M. Moretó has been partially supported by the Ministry of Economy and Competitiveness under Juan de la Cierva postdoctoral fellowship number JCI-2012-15047. M. Casas is supported by the Secretary for Universities and Research of the Ministry of Economy and Knowledge of the Government of Catalonia and the Cofund
programme of the Marie Curie Actions of the 7th R&D Framework Programme of the European Union (Contract 2013 BP B 00243). This work was also partially performed
under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 (LLNL-CONF-689878).
Finally, the authors are grateful to the reviewers for their valuable comments, to the RoMoL team, to Xavier Teruel and Kallia Chronaki from the Programming Models group
of BSC, and to the Computation Department of LLNL for their technical support and useful feedback.
MVG Mechanism: Differential Privacy under Matrix-Valued Query
Differential privacy mechanism design has traditionally been tailored for a
scalar-valued query function. Although many mechanisms such as the Laplace and
Gaussian mechanisms can be extended to a matrix-valued query function by adding
i.i.d. noise to each element of the matrix, this method is often suboptimal as
it forfeits an opportunity to exploit the structural characteristics typically
associated with matrix analysis. To address this challenge, we propose a novel
differential privacy mechanism called the Matrix-Variate Gaussian (MVG)
mechanism, which adds a matrix-valued noise drawn from a matrix-variate
Gaussian distribution, and we rigorously prove that the MVG mechanism preserves
(ε, δ)-differential privacy. Furthermore, we introduce the concept
of directional noise made possible by the design of the MVG mechanism.
Directional noise allows the impact of the noise on the utility of the
matrix-valued query function to be moderated. Finally, we experimentally
demonstrate the performance of our mechanism using three matrix-valued queries
on three privacy-sensitive datasets. We find that the MVG mechanism notably
outperforms four previous state-of-the-art approaches, and provides comparable
utility to the non-private baseline.
Comment: Appeared in CCS'1
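The mechanism's core step, adding noise drawn from a matrix-variate Gaussian, can be sketched as follows. The row and column covariances here are arbitrary placeholders; the actual MVG mechanism calibrates them to the query's sensitivity and the privacy budget, which this sketch omits entirely:

```python
import numpy as np

# Sampling from the matrix-variate Gaussian MN(0, Sigma, Psi): if G has
# i.i.d. N(0,1) entries and Sigma = A A^T, Psi = B B^T, then A @ G @ B^T
# is distributed as MN(0, Sigma, Psi).
rng = np.random.default_rng(0)

def matrix_variate_gaussian(sigma: np.ndarray, psi: np.ndarray) -> np.ndarray:
    A = np.linalg.cholesky(sigma)              # row-covariance factor
    B = np.linalg.cholesky(psi)                # column-covariance factor
    G = rng.standard_normal((sigma.shape[0], psi.shape[0]))
    return A @ G @ B.T

sigma = np.eye(3)                 # row covariance (illustrative placeholder)
psi = 0.5 * np.eye(4)             # column covariance (illustrative placeholder)
query_answer = np.ones((3, 4))    # some matrix-valued query result
noisy_answer = query_answer + matrix_variate_gaussian(sigma, psi)
```

Unlike adding i.i.d. noise per element, choosing non-identity Sigma and Psi lets the noise be shaped directionally, which is the structural advantage the abstract refers to.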
A Classification of Models for Concurrency
Models for concurrency can be classified with respect to three relevant parameters: behaviour/system, interleaving/noninterleaving, linear/branching time. When modelling a process, a choice concerning such parameters corresponds to choosing the level of abstraction of the resulting semantics. The classifications are formalised through the medium of category theory.
SWIFT: Super-fast and Robust Privacy-Preserving Machine Learning
Performing machine learning (ML) computation on private data while
maintaining data privacy, aka Privacy-preserving Machine Learning~(PPML), is an
emergent field of research. Recently, PPML has seen a visible shift towards the
adoption of the Secure Outsourced Computation~(SOC) paradigm due to the heavy
computation that it entails. In the SOC paradigm, computation is outsourced to
a set of powerful and specially equipped servers that provide service on a
pay-per-use basis. In this work, we propose SWIFT, a robust PPML framework for
a range of ML algorithms in the SOC setting that guarantees output delivery to the
users irrespective of any adversarial behaviour. Robustness, a highly desirable
feature, evokes user participation without the fear of denial of service.
At the heart of our framework lies a highly-efficient, maliciously-secure,
three-party computation (3PC) over rings that provides guaranteed output
delivery (GOD) in the honest-majority setting. To the best of our knowledge,
SWIFT is the first robust and efficient PPML framework in the 3PC setting.
SWIFT is as fast as (and is strictly better in some cases than) the best-known
3PC framework BLAZE (Patra et al. NDSS'20), which only achieves fairness. We
extend our 3PC framework for four parties (4PC). In this regime, SWIFT is as
fast as the best-known fair 4PC framework Trident (Chaudhari et al. NDSS'20)
and twice as fast as the best-known robust 4PC framework FLASH (Byali et al.
PETS'20).
We demonstrate our framework's practical relevance by benchmarking popular ML
algorithms such as Logistic Regression and deep Neural Networks such as VGG16
and LeNet, both over a 64-bit ring in a WAN setting. For deep NNs, our results
support our claim that we provide improved security guarantees while
incurring no additional overhead for 3PC and a 2x improvement for 4PC.
Comment: This article is the full and extended version of an article to appear
in USENIX Security 202
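As a rough illustration of the arithmetic setting such frameworks compute in (secret sharing over a 64-bit ring), here is a toy 3-party replicated-sharing sketch. It shows only the share/reconstruct arithmetic and none of SWIFT's actual protocols, malicious-security checks, or guaranteed-output-delivery machinery:

```python
import random

# Toy 3-party replicated additive secret sharing over the ring Z_{2^64}.
MASK = (1 << 64) - 1  # all arithmetic is modulo 2^64

def share(x: int):
    """Split x into three additive shares; party i holds the pair (s_i, s_{i+1})."""
    s1, s2 = random.getrandbits(64), random.getrandbits(64)
    s3 = (x - s1 - s2) & MASK
    return [(s1, s2), (s2, s3), (s3, s1)]

def reconstruct(shares):
    """Any two parties together hold all three additive shares."""
    (s1, _), (s2, s3) = shares[0], shares[1]
    return (s1 + s2 + s3) & MASK
```

Linear operations (additions, constant multiplications) can be done locally on the shares; the hard part, which SWIFT's 3PC/4PC protocols address, is multiplying shared values securely while tolerating a malicious party.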