Formal methods in the design of cryptographic protocols (state of the art)
This paper is a state-of-the-art review of the use of formal methods in the design of cryptographic protocols.
UC Non-Interactive, Proactive, Threshold ECDSA with Identifiable Aborts
Building on the Gennaro & Goldfeder and Lindell & Nof protocols (CCS '18), we present threshold ECDSA protocols, for any number of signatories and any threshold, that improve over the state of the art as follows:
* Only the last round of our protocols requires knowledge of the message; the other rounds can take place in a preprocessing stage, yielding a non-interactive threshold ECDSA protocol.
* Our protocols withstand adaptive corruption of signatories. Furthermore, they include a periodic refresh mechanism and offer full proactive security.
* Our protocols realize an ideal threshold signature functionality within the UC framework, in the global random oracle model, assuming Strong RSA, DDH, semantic security of the Paillier encryption, and a somewhat enhanced variant of existential unforgeability of ECDSA.
* Both protocols achieve accountability by identifying corrupted signatories in case of failure to generate a valid signature.
The protocols provide a tradeoff between the number of rounds to generate a signature and the computational and communication overhead for the identification of corrupted signatories. Namely:
* For one protocol, signature generation takes only 4 rounds (down from the current state of the art of 8 rounds), but the identification process requires computation and communication that is quadratic in the number of parties.
* For the other protocol, the identification process requires computation and communication that is only linear in the number of parties, but signature generation takes 7 rounds.
These properties (low latency, compatibility with cold-wallet architectures, proactive security, identifiable aborts, and composable security) make the two protocols ideal for threshold wallets for ECDSA-based cryptocurrencies.
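The "any threshold" setting rests on secret sharing: a signing key is split among n signatories so that any t of them suffice. As an illustration of that underlying (t, n) mechanism only, and not of the paper's protocols (which never reconstruct the key in one place), here is a toy Shamir sharing over a prime field; the field size and parameters are purely illustrative, and real ECDSA works over the curve's group order.

```python
# Toy Shamir (t, n)-secret sharing over a prime field. Illustrative
# only: threshold ECDSA protocols use the shared key without ever
# reassembling it, unlike the reconstruct() below.
import random

P = 2**127 - 1  # a Mersenne prime, chosen here purely for illustration

def share(secret, t, n):
    """Split `secret` into n points on a random degree-(t-1) polynomial."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange-interpolate the polynomial at x = 0 from t shares."""
    secret = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = share(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789    # any 3 shares suffice
assert reconstruct(shares[2:]) == 123456789
```

Fewer than t shares reveal nothing about the secret, which is what lets a threshold of signatories act while tolerating the corruption of the rest.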
Pursuing the Limits of Cryptography
Modern cryptography has gone beyond traditional notions of encryption, allowing for new applications such as digital signatures and software obfuscation, among others. While cryptography might seem like a magical tool for one's privacy needs, there are mathematical limitations to what it can achieve. In this thesis we focus on understanding what lies at the boundary of what cryptography enables. In particular, we focus on the three specific aspects elaborated on below.
Necessity of Randomness in Zero-Knowledge Protocols: A zero-knowledge protocol consists of an interaction between two parties, designated prover and verifier, where the prover tries to convince the verifier of the validity of a statement without revealing anything beyond that validity. We study the necessity of randomness, a scarce resource, in such protocols. Prior works have shown that in most settings the prover necessarily *requires* randomness to run any such protocol. We show, somewhat surprisingly, that one can design protocols where the prover requires *no* randomness.
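To see why prover randomness is normally considered essential, consider the classic Schnorr identification protocol (a standard textbook example, not a construction from this thesis): if the prover reuses its random nonce across two different challenges, the transcripts leak the secret outright. The sketch below, with toy-sized illustrative parameters, demonstrates the leak.

```python
# Schnorr identification over a toy group (p = 23, subgroup of prime
# order q = 11). Reusing the nonce r across two challenges lets anyone
# solve for the secret, which is exactly the failure that fresh prover
# randomness prevents. Toy parameters; no security claims.
import random

p, q, g = 23, 11, 2        # g = 2 generates an order-11 subgroup of Z_23*
x = 7                      # prover's secret exponent
y = pow(g, x, p)           # public key y = g^x mod p

def prove(challenge, r):
    """One Schnorr round: commitment t = g^r, response s = r + c*x mod q."""
    return pow(g, r, p), (r + challenge * x) % q

r = random.randrange(1, q)                       # nonce: meant to be fresh
(t1, s1), (t2, s2) = prove(3, r), prove(5, r)    # nonce REUSED: bad!

# The verifier's check still passes...
assert pow(g, s1, p) == t1 * pow(y, 3, p) % p
# ...but the secret now falls out of the two transcripts:
x_leaked = (s1 - s2) * pow(3 - 5, -1, q) % q
assert x_leaked == x
```

The thesis's result is that, despite examples like this, there are settings where a carefully designed protocol lets the prover run deterministically without any such leakage.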
Minimizing Interaction in Secure Computation Protocols: The next part of the thesis focuses on one of the most general notions in cryptography, that of *secure computation*. It allows mutually distrusting parties to jointly compute a function over a network without revealing anything but the output of the computation. Considering that these protocols are going to be run on high-latency networks such as the internet, it is imperative that we design protocols to minimize the interaction between participants of the protocol. Prior works have established lower bounds on the amount of interaction, and in our work we show that these lower bounds are tight by constructing new protocols that are also optimal in their assumptions.
Circumventing Impossibilities with Blockchains: In some cases, there are desired uses of secure computation protocols that are provably impossible on the (regular) Internet; i.e., existing protocols can no longer be proven secure when multiple concurrent instances of the protocol are executed. We show that by assuming the existence of a secure blockchain, a minimal additional trust assumption, we can push past the boundaries of what is cryptographically possible by constructing *new* protocols that are provably secure on the Internet.
Using Quantum Resources for Security and Computation
Quantum mechanics and information theory have jointly impacted multiple fields, two in particular being security and computing. Via the use of quantum resources, exploits in currently deployed digital security systems are known, while the theory also promises security for future systems. Quantum theory has been shown to have a fundamental impact on computing technology, but modern experimental hardware is limited in power and use cases. This thesis is concerned with developments in the use of quantum resources in both fields. Physically unclonable functions (PUFs), a static form of entropy source with uses in hardware-based cryptography, are investigated. Utilising a colloidal quantum dot based ink to fabricate a series of optical PUF (OPUF) devices, the reliable transformation of (classical) optical information, whose source's fundamental optical properties are governed by quantum theory, into a unique fingerprint for further processing in cryptographic protocols is explored. First, the ability to use only a smartphone to both excite, and capture the optical emission of, an OPUF is explored. It is shown that these images can be reliably converted into binary keys via two algorithms. Next, a novel type of OPUF is proposed. Two inks, each comprised of quantum dots with peak emission at a different wavelength, are used to fabricate a device which produces two separable responses under a single optical challenge. The correlation between two outputs from a given device is found to be inconsistent, and the cause of these inconsistencies is explored. Finally, by making use of a hybrid quantum-classical computing method, an algorithm for learning the preparation circuit of an unknown mixed state is defined. To combat known scalability issues with current hardware, this work explores the possibility of reformulating the well-known Hilbert-Schmidt distance using local quantum objects.
A variety of functions are investigated, with the final answer remaining open.
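The step from a noisy optical response to a binary key can be sketched very simply: quantize each region of the captured emission against a reference level, then hash the resulting bit-string. The toy below is not one of the thesis's two algorithms (which are not specified here); it only illustrates the general image-to-key idea, and unlike a real fuzzy extractor it tolerates only noise that never flips a cell across the threshold.

```python
# Minimal illustrative OPUF response-to-key conversion: threshold each
# cell's brightness against the median, hash the bit pattern. Real
# schemes add error correction (fuzzy extractors); all values here are
# made-up illustrative data, not measurements.
import hashlib
import statistics

def response_to_key(intensities):
    """Map per-cell brightness values to a 256-bit hex key."""
    med = statistics.median(intensities)
    bits = ''.join('1' if v > med else '0' for v in intensities)
    return hashlib.sha256(bits.encode()).hexdigest()

# Two noisy reads of the same device: perturbations that stay on the
# same side of the median yield the identical key.
read1 = [10, 200, 15, 180, 12, 220, 8, 190]
read2 = [12, 195, 14, 185, 11, 215, 9, 188]
assert response_to_key(read1) == response_to_key(read2)
```

The hash step means that even a single flipped bit produces an unrelated key, which is why reliable quantization (or error correction before hashing) is the crux of such schemes.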
Lightweight symmetric cryptography
The Internet of Things is one of the principal trends in information
technology nowadays. The main idea behind this concept is that devices
communicate autonomously with each other over the Internet. Some of
these devices have extremely limited resources, such as power and energy,
available time for computations, amount of silicon to produce the chip,
computational power, etc. Classical cryptographic primitives are often
infeasible for such constrained devices. The goal of lightweight
cryptography is to introduce cryptographic solutions with reduced resource
consumption, but with a sufficient security level.
Although this research area has been of great interest to academia in
recent years and a large number of proposals for lightweight cryptographic
primitives have been introduced, almost none of them are used in the real world.
Probably one of the reasons is that, for academia, lightweight usually
meant to design cryptographic primitives such that they require minimal
resources among all existing solutions. This exciting research problem
became an important driver which allowed the academic community to better
understand many cryptographic design concepts and to develop new attacks.
However, this criterion does not seem to be the most important one for
industry, where lightweight may be considered as "rightweight". In other
words, a given cryptographic solution just has to fit the constraints of
the specific use cases rather than to be the smallest. Unfortunately,
academic researchers tended to neglect vital properties of the particular
types of devices to which they intended to apply their primitives. That
is, often solutions were proposed where the usage of some resources was
reduced to a minimum. However, this was achieved by introducing new costs
which were not appropriately taken into account or in such a way that the
reduction of costs also led to a decrease in the security level. Hence,
there is a clear gap between academia and industry in understanding what
lightweight cryptography is. In this work, we are trying to fill some of
these gaps. We carefully investigate a broad number of existing lightweight
cryptographic primitives proposed by academia including authentication
protocols, stream ciphers, and block ciphers and evaluate their
applicability for real-world scenarios. We then look at how individual
components of design of the primitives influence their cost and summarize
the steps to be taken into account when designing primitives for concrete
cost optimization, more precisely for low energy consumption. Next, we
propose new implementation techniques for existing designs making them more
efficient or smaller in hardware without the necessity to pay any
additional costs. After that, we introduce a new stream cipher design
philosophy which enables secure stream ciphers with smaller area size than
ever before and, at the same time, considerably higher throughput compared
to any other encryption schemes of similar hardware cost. To demonstrate
the feasibility of our findings we propose two ciphers with the smallest
area size so far, namely Sprout and Plantlet, and the most energy
efficient encryption scheme called Trivium-2. Finally, this thesis solves
a concrete industrial problem. Based on standardized cryptographic
solutions, we design an end-to-end data-protection scheme for low power
networks. This scheme was deployed on the water distribution network in the
City of Antibes, France.
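The design idea behind state sizes "smaller than ever before" is that the feedback function keeps mixing in key bits during keystream generation, rather than only at initialization, so the register no longer needs to be twice the key size. The sketch below illustrates that trick in the abstract; it is NOT Sprout, Plantlet, or Trivium-2, its tap positions are arbitrary, and it carries no security claim.

```python
# Toy keystream generator in the spirit of small-area stream ciphers:
# key bits enter every state update, the structural trick used by
# designs such as Sprout and Plantlet to shrink the internal state.
# Illustrative only; arbitrary taps, no security claims.

def keystream(key_bits, iv_bits, n):
    """Produce n keystream bits from 80-bit key and IV bit lists."""
    state = [k ^ v for k, v in zip(key_bits, iv_bits)]  # 80-bit state
    out = []
    for i in range(160 + n):                    # 160 blank clocks to mix
        fb = (state[0] ^ state[13] ^ state[54]
              ^ (state[30] & state[67])         # small nonlinear tap
              ^ key_bits[i % len(key_bits)])    # key enters every update
        if i >= 160:
            out.append(state[0] ^ state[40])    # keystream bit
        state = state[1:] + [fb]
    return out

key = [1, 0, 1, 1, 0, 1, 0, 0] * 10             # 80-bit toy key
iv  = [0, 1, 1, 0] * 20                         # 80-bit public IV
ks  = keystream(key, iv, 16)
ct  = [p ^ k for p, k in zip([1, 0] * 8, ks)]   # encrypt 16 plaintext bits
assert [c ^ k for c, k in zip(ct, ks)] == [1, 0] * 8   # XOR decrypts
```

In hardware terms, the cost of tapping the key register during updates is what must be weighed against the area saved by the smaller state, which is exactly the kind of cost accounting the thesis argues academia has tended to skip.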
Building blocks for secure services: authenticated key transport and rational exchange protocols
This thesis is concerned with two security mechanisms: authenticated key transport and rational exchange protocols. These mechanisms are potential building blocks in the security architecture of a range of different services. Authenticated key transport protocols are used to build secure channels between entities, which protect their communications against eavesdropping and alteration by an outside attacker. In contrast, rational exchange protocols can be used to protect the entities involved in an exchange transaction from each other. This is important, because the entities often do not trust each other, and each fears that the other will gain an advantage by misbehaving. Rational exchange protocols alleviate this problem by ensuring that a misbehaving party cannot gain any advantage; misbehavior thus becomes unattractive and should happen only rarely. The thesis focuses on the construction of formal models for authenticated key transport and rational exchange protocols. In the first part of the thesis, we propose a formal model for key transport protocols based on a logic of belief. Building on this model, we also propose an original systematic protocol construction approach. The main idea is that we reverse some implications that can be derived from the axioms of the logic and turn them into synthesis rules. The synthesis rules can be used to construct a protocol and to derive a set of assumptions starting from a set of goals. The main advantage is that the resulting protocol is guaranteed to be correct, in the sense that all the specified goals can be derived from the protocol and the assumptions using the underlying logic. Another important advantage is that all the assumptions upon which the correctness of the protocol depends are made explicit. The protocol obtained in the synthesis process is an abstract protocol, in which idealized messages containing logical formulae are sent on channels with various access properties.
The abstract protocol can then be implemented in several ways by replacing the idealized messages and the channels with appropriate bit strings and cryptographic primitives, respectively. We illustrate the use of the logic and the synthesis rules through an example: We analyze an authenticated key transport protocol proposed in the literature, identify several weaknesses, show how these can be exploited by various attacks, and finally, we redesign the protocol using the proposed systematic approach. We obtain a protocol that resists the presented attacks and, in addition, is simpler than the original one. In the second part of the thesis, we propose an original formal model for exchange protocols, which is based on game theory. In this model, an exchange protocol is represented as a set of strategies in a game played by the protocol parties and the network that they use to communicate with each other. We give formal definitions for various properties of exchange protocols in this model, including rationality and fairness. Most importantly, rationality is defined in terms of a Nash equilibrium in the protocol game. The model and the formal definitions allow us to rigorously study the relationship between rational exchange and fair exchange, and to prove that fairness implies rationality (given that the protocol satisfies some further usual properties), but the reverse is not true in general. We illustrate how the formal model can be used for rigorous verification of existing protocols by analyzing two exchange protocols and formally proving that they satisfy the definition of rational exchange. We also present an original application of rational exchange: We show how the concept of rationality can be used to improve a family of micropayment schemes with respect to fairness without substantial loss in efficiency.
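The rationality notion above reduces to a concrete check: following the protocol must be a Nash equilibrium, i.e. no party can improve its payoff by unilaterally deviating. The toy below makes that check explicit for a two-party exchange with made-up illustrative payoffs; it is not the thesis's game model, which also includes the network as a player.

```python
# Nash-equilibrium check for a toy two-party exchange game. The payoff
# numbers are illustrative only: deviating gains a party nothing over
# faithfully following the protocol, so "follow" is an equilibrium.

# payoff[(a, b)] = (payoff to A, payoff to B)
payoff = {
    ("follow",  "follow"):  (2, 2),   # exchange completes for both
    ("follow",  "deviate"): (0, 1),   # B aborts early, gains nothing extra
    ("deviate", "follow"):  (1, 0),
    ("deviate", "deviate"): (0, 0),
}

def is_nash(profile):
    """True if neither party gains by unilaterally changing strategy."""
    a, b = profile
    pa, pb = payoff[profile]
    no_gain_a = all(payoff[(a2, b)][0] <= pa for a2 in ("follow", "deviate"))
    no_gain_b = all(payoff[(a, b2)][1] <= pb for b2 in ("follow", "deviate"))
    return no_gain_a and no_gain_b

assert is_nash(("follow", "follow"))        # following the protocol is rational
assert not is_nash(("deviate", "follow"))   # A would rather switch back
```

Note the contrast with fairness: rationality only requires that misbehavior not pay, not that an honest party can never end up worse off, which is why fairness implies rationality but not conversely.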
Finally, in the third part of the thesis, we extend the concept of rational exchange and describe how similar ideas can be used to stimulate the nodes of a self-organizing ad hoc network to cooperate. More precisely, we propose an original approach to stimulate the nodes to forward packets. As in rational exchange protocols, our design does not guarantee that a node cannot refuse to forward packets, but it ensures that a node cannot gain any advantage by doing so. We analyze the proposed solution both analytically and by means of simulation.