Computational Soundness for Dalvik Bytecode
Automatically analyzing information flow within Android applications that
rely on cryptographic operations with their computational security guarantees
imposes formidable challenges that existing approaches for understanding an
app's behavior struggle to meet. These approaches do not distinguish
cryptographic and non-cryptographic operations, and hence do not account for
cryptographic protections: f(m) is considered sensitive for a sensitive message
m irrespective of potential secrecy properties offered by a cryptographic
operation f. These approaches consequently provide a safe approximation of the
app's behavior, but they mistakenly classify a large fraction of apps as
potentially insecure, yielding overly pessimistic results.
In this paper, we show how cryptographic operations can be faithfully
included into existing approaches for automated app analysis. To this end, we
first show how cryptographic operations can be expressed as symbolic
abstractions within the comprehensive Dalvik bytecode language. These
abstractions are accessible to automated analysis, and they can be conveniently
added to existing app analysis tools using minor changes in their semantics.
Second, we show that our abstractions are faithful by providing the first
computational soundness result for Dalvik bytecode, i.e., the absence of
attacks against our symbolically abstracted program entails the absence of any
attacks against a suitable cryptographic program realization. We cast our
computational soundness result in the CoSP framework, which makes the result
modular and composable.
Comment: Technical report for the ACM CCS 2016 conference paper.
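The core idea above, treating a cryptographic operation as an opaque symbolic term rather than as concrete bit manipulation, can be sketched in miniature. This is a hypothetical toy model only; the names `Term`, `enc`, and `is_sensitive` are illustrative and are not the paper's Dalvik semantics:

```python
# Toy illustration (NOT the paper's Dalvik semantics) of symbolic abstraction:
# enc(k, m) is an opaque term, and a taint analysis marks it sensitive only
# when the adversary holds the key, instead of treating every f(m) as leaky.
from dataclasses import dataclass


@dataclass(frozen=True)
class Term:
    """A symbolic (Dolev-Yao style) term; hypothetical representation."""
    op: str
    args: tuple = ()


def enc(key: Term, msg: Term) -> Term:
    """Symbolic encryption: an opaque constructor, not a bit-level function."""
    return Term("enc", (key, msg))


def is_sensitive(t: Term, secrets: set, adversary_keys: set) -> bool:
    # Plain atoms are sensitive exactly when they name a secret.
    if t.op == "atom":
        return t.args[0] in secrets
    # A ciphertext reveals its payload only if the adversary holds the key.
    if t.op == "enc":
        key, msg = t.args
        return key in adversary_keys and is_sensitive(msg, secrets, adversary_keys)
    # Any other operation f(...) is conservatively sensitive if an argument is.
    return any(is_sensitive(a, secrets, adversary_keys) for a in t.args)


k = Term("atom", ("k",))
m = Term("atom", ("m",))
secrets = {"m"}
print(is_sensitive(m, secrets, set()))          # True: the plaintext is sensitive
print(is_sensitive(enc(k, m), secrets, set()))  # False: encrypted under a secret key
```

A conventional taint analysis corresponds to the last branch alone, which is exactly the over-approximation the abstract criticizes.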
Formal Verification of Security Protocol Implementations: A Survey
Automated formal verification of security protocols has been mostly focused on analyzing high-level abstract models which, however, are significantly different from real protocol implementations written in programming languages. Recently, some researchers have started investigating techniques that bring automated formal proofs closer to real implementations. This paper surveys these attempts, focusing on approaches that target the application code that implements protocol logic, rather than the libraries that implement cryptography. According to these approaches, libraries are assumed to correctly implement some models. The aim is to derive formal proofs that, under this assumption, give assurance about the application code that implements the protocol logic. The two main approaches of model extraction and code generation are presented, along with the main techniques adopted for each approach.
The Computational Complexity of Estimating Convergence Time
An important problem in the implementation of Markov Chain Monte Carlo
algorithms is to determine the convergence time, or the number of iterations
before the chain is close to stationarity. For many Markov chains used in
practice this time is not known. Even in cases where the convergence time is
known to be polynomial, the theoretical bounds are often too crude to be
practical. Thus, practitioners like to carry out some form of statistical
analysis in order to assess convergence. This has led to the development of a
number of methods known as convergence diagnostics which attempt to diagnose
whether the Markov chain is far from stationarity. We study the problem of
testing convergence in the following settings and prove that the problem is
hard in a computational sense. First, given a Markov chain that mixes rapidly,
it is hard for Statistical Zero Knowledge (SZK-hard) to distinguish whether,
starting from a given state, the chain is close to stationarity by time t or
far from stationarity at time ct for a constant c; we show this problem is in
AM intersect coAM. Second, given a Markov chain that mixes rapidly, it is
coNP-hard to distinguish whether it is close to stationarity by time t or far
from stationarity at time ct for a constant c; this problem is in coAM.
Finally, it is PSPACE-complete to distinguish whether the Markov chain is
close to stationarity by time t or far from being mixed at time ct for c at
least 1.
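"Close to stationarity" above is measured in total variation distance between the chain's distribution at time t and its stationary distribution. A minimal sketch of that quantity, where the two-state chain and all names are my own illustration, not taken from the paper:

```python
# Total variation distance of a Markov chain's time-t distribution from its
# stationary distribution -- the quantity the hardness results above concern.
# The lazy two-state chain below is an arbitrary illustrative example.

def step(dist, P):
    """One step of the chain: dist' = dist * P (row-stochastic P)."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

def tv_distance(p, q):
    """Total variation distance: (1/2) * sum_i |p_i - q_i|."""
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

# A lazy two-state chain with stationary distribution (0.5, 0.5).
P = [[0.75, 0.25],
     [0.25, 0.75]]
pi = [0.5, 0.5]

dist = [1.0, 0.0]  # start deterministically in state 0
for t in range(1, 11):
    dist = step(dist, P)  # TV distance halves each step for this chain

print(tv_distance(dist, pi))  # 0.00048828125 (= 0.5 ** 11)
```

For this toy chain the distance is computable directly; the abstract's point is that for chains given implicitly (e.g., by a circuit), deciding whether this distance is already small at time t is computationally hard.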
Quantum proofs can be verified using only single qubit measurements
QMA (Quantum Merlin Arthur) is the class of problems which, though
potentially hard to solve, have a quantum solution which can be verified
efficiently using a quantum computer. It thus forms a natural quantum version
of the classical complexity class NP (and its probabilistic variant MA,
Merlin-Arthur games), where the verifier has only classical computational
resources. In this paper, we study what happens when we restrict the quantum
resources of the verifier to the bare minimum: individual measurements on
single qubits received as they come, one-by-one. We find that despite this
grave restriction, such a minimally equipped verifier can still soundly verify
any problem in QMA, without using any quantum memory or multiqubit operations.
We provide two independent proofs of
this fact, based on measurement based quantum computation and the local
Hamiltonian problem, respectively. The former construction also applies to
QMA_1, i.e., QMA with one-sided error.
Comment: 7 pages, 1 figure.
Generalized Quantum Arthur-Merlin Games
This paper investigates the role of interaction and coins in public-coin
quantum interactive proof systems (also called quantum Arthur-Merlin games).
While prior works focused on classical public coins even in the quantum
setting, the present work introduces a generalized version of quantum
Arthur-Merlin games where the public coins can be quantum as well: the verifier
can send not only random bits, but also halves of EPR pairs. First, it is
proved that the class of two-turn quantum Arthur-Merlin games with quantum
public coins, denoted qq-QAM in this paper, does not change by adding a
constant number of turns of classical interactions prior to the communications
of the qq-QAM proof systems. This can be viewed as a quantum analogue of the
celebrated collapse theorem for AM due to Babai. To prove this collapse
theorem, this paper provides a natural complete problem for qq-QAM: deciding
whether the output of a given quantum circuit is close to a totally mixed
state. This complete problem is on the very line of the previous studies
investigating the hardness of checking the properties related to quantum
circuits, and is of independent interest. It is further proved that the class
qq-QAM_1 of two-turn quantum-public-coin quantum Arthur-Merlin proof systems
with perfect completeness gives new bounds for standard well-studied classes of
two-turn interactive proof systems. Finally, the collapse theorem above is
extended to comprehensively classify the role of interaction and public coins
in quantum Arthur-Merlin games: it is proved that, for any constant m>1, the
class of problems having an m-turn quantum Arthur-Merlin proof system is either
equal to PSPACE or equal to the class of problems having a two-turn quantum
Arthur-Merlin game of a specific type, which provides a complete set of quantum
analogues of Babai's collapse theorem.
Comment: 31 pages + cover page; the proof of Lemma 27 (Lemma 24 in v1) is
corrected, and a new completeness result is added.
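The complete problem mentioned above asks whether a circuit's output state is close to the totally mixed state, i.e., whether its trace distance to I/d is small. A small sketch for the one-qubit case (my own illustration, not the paper's construction), using the closed-form eigenvalues of a 2x2 Hermitian matrix:

```python
# Trace distance of a one-qubit density matrix rho to the totally mixed state
# I/2:  D(rho, I/2) = (1/2) * sum_i |lambda_i(rho - I/2)|.
# Illustrative sketch only; the paper's complete problem concerns the output
# of a general quantum circuit, not a hand-written 2x2 matrix.
import math

def trace_distance_to_mixed(rho):
    """Trace distance of a 2x2 density matrix to I/2 (closed form for 2x2)."""
    a = rho[0][0] - 0.5          # diagonal entries of rho - I/2 (real)
    d = rho[1][1] - 0.5
    b = rho[0][1]                # off-diagonal entry (may be complex)
    # Eigenvalues of the Hermitian matrix rho - I/2: mean +/- rad.
    mean = (a + d) / 2
    rad = math.sqrt(((a - d) / 2) ** 2 + abs(b) ** 2)
    return 0.5 * (abs(mean + rad) + abs(mean - rad))

mixed = [[0.5, 0.0], [0.0, 0.5]]   # totally mixed state: distance 0
pure0 = [[1.0, 0.0], [0.0, 0.0]]   # pure state |0><0|: distance 0.5
print(trace_distance_to_mixed(mixed))  # 0.0
print(trace_distance_to_mixed(pure0))  # 0.5
```

The decision problem then fixes two thresholds and asks on which side of them this distance falls for the state a given circuit produces.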
Physical Randomness Extractors: Generating Random Numbers with Minimal Assumptions
How to generate provably true randomness with minimal assumptions? This
question is important not only for the efficiency and the security of
information processing, but also for understanding how extremely unpredictable
events are possible in Nature. All current solutions require special structures
in the initial source of randomness, or a certain independence relation among
two or more sources. Both types of assumptions are impossible to test and
difficult to guarantee in practice. Here we show how this fundamental limit can
be circumvented by extractors that base security on the validity of physical
laws and extract randomness from untrusted quantum devices. In conjunction with
the recent work of Miller and Shi (arXiv:1402.0489), our physical randomness
extractor uses just a single and general weak source, produces an arbitrarily
long and near-uniform output, with a close-to-optimal error, secure against
all-powerful quantum adversaries, and tolerating a constant level of
implementation imprecision. The source necessarily needs to be unpredictable to
the devices, but otherwise can even be known to the adversary.
Our central technical contribution, the Equivalence Lemma, provides a general
principle for proving composition security of untrusted-device protocols. It
implies that unbounded randomness expansion can be achieved simply by
cross-feeding any two expansion protocols. In particular, such an unbounded
expansion can be made robust, which was not known before. Another significant
implication is that it enables secure randomness generation and key
distribution using public randomness, such as that broadcast by NIST's
Randomness Beacon. Our protocol also provides a method for refuting local
hidden variable theories under a weak assumption on the available randomness
for choosing the measurement settings.
Comment: A substantial re-writing of V2, especially on model definitions. An
abstract model of robustness is added and the robustness claim in V2 is made
rigorous. Focuses on quantum security. A future update is planned to address
non-signaling security.
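The cross-feeding composition described above can be sketched schematically. Everything here is illustrative: real expansion protocols operate untrusted quantum devices under a security proof, and `expand` below is merely a hash-based stand-in for such a protocol, showing only the composition pattern:

```python
# Schematic of cross-feeding two expansion protocols: each instance stretches
# its seed, and the instances alternately feed each other's output back in as
# a fresh seed, so the total output grows without bound. Purely illustrative;
# the hash-based expand() is a stand-in for a device-based protocol.
import hashlib

def expand(seed: bytes, factor: int = 2) -> bytes:
    """Stand-in expansion protocol: stretch the seed by `factor`.
    (In the paper this role is played by an untrusted-device protocol.)"""
    out = b""
    counter = 0
    while len(out) < factor * len(seed):
        out += hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[: factor * len(seed)]

def cross_feed(seed: bytes, rounds: int) -> bytes:
    """Alternate two protocol instances, feeding each one's output to the
    other as a fresh seed. For genuine untrusted-device protocols, the
    Equivalence Lemma is what justifies composing them this way."""
    protocols = [expand, expand]  # two independent protocol instances
    for i in range(rounds):
        seed = protocols[i % 2](seed)
    return seed

out = cross_feed(b"weak-seed", 4)
print(len(out))  # 144: the 9-byte seed doubles over 4 rounds (9 * 2**4)
```

The security content of the paper lies entirely in why this composition remains sound against quantum adversaries when the two instances are real device-based protocols; the sketch captures only the data flow.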