180 research outputs found
LIPIcs, Volume 251, ITCS 2023, Complete Volume
Integration of formal methods into the development of wireless sensor networks (WSNs)
In this thesis, we rely on formal techniques first to evaluate WSN protocols and then to propose solutions that meet the requirements of these networks. The thesis contributes to the modelling, analysis, design and evaluation of WSN protocols.
In this context, the thesis begins with a survey of WSNs and formal verification techniques. Focusing on the MAC layer, it reviews proposed MAC protocols for WSNs along with their design challenges. The dissertation then outlines the contributions of this work.
As a first proposal, we develop a stochastic generic model of the 802.11 MAC protocol for an arbitrary network topology and perform a probabilistic evaluation of the protocol using statistical model checking. Considering energy harvesting as an alternative power source for WSNs, we move to the second proposal, in which a protocol designed for energy-harvesting WSNs (EH-WSNs) is modelled and various performance parameters are evaluated. Finally, the thesis explores mobility in WSNs and proposes a new MAC protocol, named "Mobility and Energy Harvesting aware Medium Access Control" (MEH-MAC), for dynamic sensor networks powered by ambient energy. The protocol is modelled and verified with respect to several features.
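Statistical model checking, as used in the first proposal, estimates a probabilistic property by sampling many runs of a stochastic model. The following minimal Python sketch illustrates the idea on a simplified contention model (a single backoff round with a fixed contention window); the model, window size, and node counts are illustrative assumptions, not the thesis's actual 802.11 model:

```python
import random

def simulate_round(num_nodes, cw, rng):
    """One contention round: each node picks a backoff slot uniformly in
    [0, cw). The round succeeds iff exactly one node picks the earliest
    slot (a simplified stand-in for CSMA/CA contention)."""
    slots = [rng.randrange(cw) for _ in range(num_nodes)]
    return slots.count(min(slots)) == 1

def estimate_success_probability(num_nodes, cw=16, trials=100_000, seed=42):
    """Statistical model checking in miniature: estimate
    P(successful transmission) by Monte Carlo sampling."""
    rng = random.Random(seed)
    successes = sum(simulate_round(num_nodes, cw, rng) for _ in range(trials))
    return successes / trials

p = estimate_success_probability(num_nodes=5)
print(f"Estimated success probability with 5 contending nodes: {p:.3f}")
```

With more contending nodes or a smaller contention window, the estimated success probability drops, which is exactly the kind of quantitative trade-off statistical model checking is used to explore.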
LIPIcs, Volume 261, ICALP 2023, Complete Volume
Beyond Quantity: Research with Subsymbolic AI
How do artificial neural networks and other forms of artificial intelligence interfere with methods and practices in the sciences? Which interdisciplinary epistemological challenges arise when we think about the use of AI beyond its dependency on big data? Not only the natural sciences, but also the social sciences and the humanities seem to be increasingly affected by current approaches to subsymbolic AI, which master problems of quality (fuzziness, uncertainty) in a hitherto unknown way. But what are the conditions, implications, and effects of these (potential) epistemic transformations, and how must research on AI be configured to address them adequately?
Lessons from Formally Verified Deployed Software Systems (Extended version)
The technology of formal software verification has made spectacular advances,
but how much does it actually benefit the development of practical software?
Considerable disagreement remains about the practicality of building systems
with mechanically checked proofs of correctness. Is this prospect confined to a
few expensive, life-critical projects, or can the idea be applied to a wide
segment of the software industry?
To help answer this question, the present survey examines a range of
projects, in various application areas, that have produced formally verified
systems and deployed them for actual use. It considers the technologies used,
the form of verification applied, the results obtained, and the lessons that
can be drawn for the software industry at large and its ability to benefit from
formal verification techniques and tools.
Note: a short version of this paper is also available, covering in detail
only a subset of the considered systems. The present version is intended for
full reference.
Estimating distinguishability measures on quantum computers
The performance of a quantum information processing protocol is ultimately
judged by distinguishability measures that quantify how distinguishable the
actual result of the protocol is from the ideal case. The most prominent
distinguishability measures are those based on the fidelity and trace distance,
due to their physical interpretations. In this paper, we propose and review
several algorithms for estimating distinguishability measures based on trace
distance and fidelity. The algorithms can be used for distinguishing quantum
states, channels, and strategies (the last also known in the literature as
"quantum combs"). The fidelity-based algorithms offer novel physical
interpretations of these distinguishability measures in terms of the maximum
probability with which a single prover (or competing provers) can convince a
verifier to accept the outcome of an associated computation. We simulate many
of these algorithms by using a variational approach with parameterized quantum
circuits. We find that the simulations converge well in both the noiseless and
noisy scenarios, for all examples considered. Furthermore, the noisy
simulations exhibit a parameter noise resilience. Finally, we establish a
strong relationship between various quantum computational complexity classes
and distance estimation problems; in particular, several fidelity and distance
estimation promise problems are complete for BQP, QMA, and QMA(2).
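For concreteness, the two distinguishability measures discussed above can be computed classically for small density matrices. The sketch below uses plain NumPy as a reference computation only; the paper's contribution is estimating these quantities on quantum computers, which this code does not do:

```python
import numpy as np

def psd_sqrt(m):
    """Matrix square root of a positive semidefinite Hermitian matrix,
    via eigendecomposition."""
    vals, vecs = np.linalg.eigh(m)
    vals = np.clip(vals, 0.0, None)   # clamp tiny negative eigenvalues
    return vecs @ np.diag(np.sqrt(vals)) @ vecs.conj().T

def trace_distance(rho, sigma):
    """T(rho, sigma) = (1/2) * ||rho - sigma||_1."""
    eigs = np.linalg.eigvalsh(rho - sigma)
    return 0.5 * np.sum(np.abs(eigs))

def fidelity(rho, sigma):
    """F(rho, sigma) = (Tr sqrt(sqrt(rho) sigma sqrt(rho)))^2."""
    sr = psd_sqrt(rho)
    return np.real(np.trace(psd_sqrt(sr @ sigma @ sr))) ** 2

# Example: |0><0| versus the maximally mixed single-qubit state I/2.
rho = np.array([[1.0, 0.0], [0.0, 0.0]])
sigma = 0.5 * np.eye(2)

print(round(trace_distance(rho, sigma), 6))  # 0.5
print(round(fidelity(rho, sigma), 6))        # 0.5
```

The two measures are related by the Fuchs-van de Graaf inequalities, 1 - sqrt(F) <= T <= sqrt(1 - F), which the example values satisfy.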
Bringing data minimization to digital wallets at scale with general-purpose zero-knowledge proofs
Today, digital identity management for individuals is either inconvenient and
error-prone or creates undesirable lock-in effects and violates privacy and
security expectations. These shortcomings inhibit the digital transformation in
general and seem particularly concerning in the context of novel applications
such as access control for decentralized autonomous organizations and
identification in the Metaverse. Decentralized or self-sovereign identity (SSI)
aims to offer a solution to this dilemma by empowering individuals to manage
their digital identity through machine-verifiable attestations stored in a
"digital wallet" application on their edge devices. However, when presented to
a relying party, these attestations typically reveal more attributes than
required and allow tracking end users' activities. Several academic works and
practical solutions exist to reduce or avoid such excessive information
disclosure, from simple selective disclosure to data-minimizing anonymous
credentials based on zero-knowledge proofs (ZKPs). We first demonstrate that
the SSI solutions that are currently built with anonymous credentials still
lack essential features such as scalable revocation, certificate chaining, and
integration with secure elements. We then argue that general-purpose ZKPs in
the form of zk-SNARKs can appropriately address these pressing challenges. We
describe our implementation and conduct performance tests on different edge
devices to illustrate that the performance of zk-SNARK-based anonymous
credentials is already practical. We also discuss further advantages that
general-purpose ZKPs can easily provide for digital wallets, for instance, to
create "designated verifier presentations" that facilitate new design options
for digital identity infrastructures that previously were not accessible
because of the threat of man-in-the-middle attacks.
SoK: Vector OLE-Based Zero-Knowledge Protocols
A zero-knowledge proof is a cryptographic protocol where a prover can convince a verifier that a statement is true, without revealing any further information except for the truth of the statement.
More precisely, if x is a statement from an NP language verified by an efficient machine M, then a zero-knowledge proof aims to prove to the verifier that there exists a witness w such that M(x, w) = 1, without revealing any further information about w.
The proof is a proof of knowledge if the prover additionally convinces the verifier that it knows the witness w, rather than merely that one exists.
This article is a survey of recent developments in building practical systems for zero-knowledge proofs of knowledge using vector oblivious linear evaluation (VOLE), a tool from secure two-party computation.
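To make the "proof of knowledge" notion concrete, here is a classic Schnorr sigma protocol made non-interactive with the Fiat-Shamir heuristic: the prover shows it knows w with h = g^w mod p, an instance of the relation "M(x, w) = 1". This is a textbook example, not one of the VOLE-based systems the survey covers, and the tiny group parameters are illustrative only, not secure:

```python
import hashlib
import secrets

# Toy parameters: p = 2*q + 1 is a safe prime, g generates the
# order-q subgroup. Far too small for real use -- demo only.
p = 2039
q = 1019
g = 4

def fiat_shamir_challenge(*elements):
    """Derive the verifier's challenge by hashing the transcript."""
    data = b"|".join(str(e).encode() for e in elements)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove(w):
    """Prove knowledge of w such that h = g^w mod p."""
    h = pow(g, w, p)                  # public statement
    r = secrets.randbelow(q)          # prover's random nonce
    a = pow(g, r, p)                  # commitment
    c = fiat_shamir_challenge(g, h, a)
    z = (r + c * w) % q               # response
    return h, (a, z)

def verify(h, proof):
    a, z = proof
    c = fiat_shamir_challenge(g, h, a)
    return pow(g, z, p) == (a * pow(h, c, p)) % p

w = 123                               # the witness (a discrete log)
h, proof = prove(w)
print(verify(h, proof))               # True
```

Completeness follows from g^z = g^(r + c*w) = a * h^c (mod p); the proof-of-knowledge property says that any prover who answers the challenge correctly could be used to extract w itself.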