221 research outputs found

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    LIPIcs, Volume 261, ICALP 2023, Complete Volume

    Formal Methods for Trustworthy Voting Systems: From Trusted Components to Reliable Software

    Voting is a prominent and important part of democratic societies, and its outcome may have a dramatic and broad impact on societal progress. It is therefore paramount that society places extensive trust in the electoral process and that the system functions reliably and in line with society's expectations. Yet, with or without modern technology, voting is full of algorithmic and security challenges, and failing to address these challenges in a controlled manner may produce fundamental flaws in the voting system and potentially undermine critical societal values. In this thesis, we argue for a development process for voting systems that is rooted in and assisted by formal methods which produce transparently checkable evidence for the guarantees the final system should provide, so that the system can be deemed trustworthy. The goal of this thesis is to advance the state of the art in formal methods that allow trustworthy voting systems to be developed systematically and verified with machine-checkable proofs. In the literature, voting systems are modeled in four largely separable layers: (1) the physical layer, (2) the computational layer, (3) the election layer, and (4) the human layer. Current research usually either stays mostly within one of these layers or lacks machine-checkable evidence; consequently, trusted and understandable criteria often lack formally proven, checkable guarantees at the software level, and vice versa. The contributions of this work are formal methods that close the trust gap between the election layer and the computational layer by reliably translating trusted and understandable criteria into trustworthy software. This enables executable procedures to be formally traced back to, and understood in terms of, criteria that election experts can inspect without reading code, so that trust is preserved down to the resulting system. The thesis consists of five distinct contributions: (I) a method for generating secure card-based communication schemes, (II) a method for synthesizing reliable tallying procedures, (III) a method for efficiently verifying reliable tallying procedures, (IV) a method for computing dependable election margins for reliable audits, and (V) a case study on the security verification of the GI voter-anonymization software. These contributions provide formal methods, with illustrative examples, for each of the three principal components between the election layer and the computational layer: (1) voter-ballot box communication, (2) the election method, and (3) election management. For the first component, the voter-ballot box communication channel, we bridge the communication channel and the cryptographic scheme by automatically generating secure card-based schemes from a small formal model parameterized by the desired security requirements. For the second component, the election method, we bridge the election method and the tallying procedure by (1) automatically synthesizing a runnable tallying procedure from requirements given as properties that capture the desired intuitions or regulations concerning fairness, (2) automatically generating either comprehensible arguments or bounded proofs to compare tallying procedures with respect to user-definable fairness properties, and (3) automatically computing concrete election margins for a given tallying procedure, the collected ballots, and the computed election result, which enable efficient election audits. Finally, for the third component, the election management system, we perform a case study and apply state-of-the-art verification technology to a real-world e-voting system that was used for the annual elections of the German Informatics Society (GI, "Gesellschaft für Informatik") in 2019. The case study consists of the formal, implementation-level security verification that voter identities are securely anonymized and that voters' passwords cannot be leaked. The presented methods assist the systematic development and verification of provably trustworthy voting systems across the traditional layers, i.e., from the election layer down to the computational layer, all pursuing the goal of making voting systems trustworthy through reliable and explainable formal requirements. We evaluate the devised methods on minimal card-based protocols that compute a secure AND function for two different decks of cards; a classical knock-out tournament and several Condorcet rules; various plurality, scoring, and Condorcet rules from the literature; the Danish national parliamentary elections in 2015; and a state-of-the-art electronic voting system used for the German Informatics Society's annual elections in 2019 and following years.
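
    The thesis's margin computation applies to general tallying procedures; as a minimal illustration only, the sketch below (in Python, with invented names) computes the margin of victory for simple plurality voting, i.e., the smallest number of cast ballots that would have to be altered to change the announced winner, which is the quantity an efficient audit needs. It assumes that reaching a tie already suffices to overturn the announced result.

        # Hypothetical sketch, not the thesis's method: plurality margin of victory.
        from collections import Counter
        from math import ceil

        def plurality_margin(ballots):
            """ballots: iterable of candidate names, one name per cast ballot."""
            tally = Counter(ballots)
            (winner, w_votes), *rest = tally.most_common()
            # Moving one ballot from the winner to a rival closes their gap by 2,
            # so ceil(gap / 2) altered ballots let the rival reach at least a tie.
            return winner, min(ceil((w_votes - votes) / 2) for _, votes in rest)

        if __name__ == "__main__":
            ballots = ["A"] * 55 + ["B"] * 40 + ["C"] * 5
            print(plurality_margin(ballots))  # ('A', 8): altering 8 ballots lets B overtake A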

    LIPIcs, Volume 258, SoCG 2023, Complete Volume

    Cryptographic Protocols for Privacy Enhancing Technologies: From Privacy Preserving Human Attestation to Internet Voting

    The desire for privacy is often associated with the intention to hide certain thoughts or actions because of some illicit activity. This is a narrow understanding of privacy, and only a marginal fragment of the motivations for undertaking an action with a desired level of privacy. The right not to be subjected to arbitrary interference with our privacy is part of the Universal Declaration of Human Rights (Article 12) and, beyond that, a prerequisite for our freedom. Developing freely as a person, which in turn drives the development of society, requires being able to act without a watchful eye. While awareness of privacy in the context of modern technologies is not widespread, it is clearly understood, as the context of elections shows, that making a free choice requires maintaining one's privacy. So why demand privacy when electing our government, but not when selecting our daily interests, the books we read, the sites we browse, or the people we encounter? It is a popular belief that the data we expose about ourselves will not be exploited as long as we are law-abiding citizens. Nothing could be further from the truth: this data is used daily for commercial purposes, because users' data has value. To make matters worse, data has also been used for political purposes without users' consent or knowledge. Yet the benefits that data can bring to individuals seem endless, and not using this data at all seems an extreme solution. In recent years, legislative efforts have tried to provide mechanisms for users to decide what is done with their data and to define a framework in which companies can use user data, but only with the user's consent. However, such attempts take time to gain traction and have unfortunately not been very successful since their introduction. In this thesis we explore the possibility of constructing cryptographic protocols that provide a technical, rather than legislative, solution to the privacy problem. In particular, we focus on two aspects of society: browsing and internet voting. These two activities shape our lives in one way or another, and they require high levels of privacy to provide a safe environment in which humans can engage in them freely. However, the two problems call for opposite approaches. On the one hand, elections are a well-established event in society that has been around for millennia, and privacy and accountability are well-rooted requirements for such events. This may be why their digitalisation lags behind other parts of our lives (banking, shopping, reading, etc.). On the other hand, browsing is a recently introduced activity, but one that has quickly gained traction given the number of possibilities it opens up with such ease. We now have access to whatever we can imagine (except voting) at the distance of a click. However, the data we generate while browsing is extremely sensitive, and most of it is disclosed to third parties under the claim of improving the user experience (targeted recommendations, ads, or bot detection). Chapter 1 motivates why resolving this problem is necessary for the progress of digital society; it then introduces the problem that this thesis aims to solve, together with the methodology. Chapter 2 introduces technical concepts used throughout the thesis and surveys the state of the art and its limitations. Chapter 3 focuses on a mechanism for private browsing. In particular, we focus on how to provide a safer and more private way of performing human attestation. Determining whether a user is a human or a bot is important for the survival of the online world, but existing mechanisms are either invasive or place a burden on the user. We present a solution based on a machine learning model that distinguishes humans from bots using natural events of normal browsing (such as touching the screen of a phone) to make its prediction. To ensure that no private data leaves the user's device, the model is evaluated on the device itself rather than sending the data over the wire. To provide assurance that the expected model has been evaluated, the user's device generates a cryptographic proof. This, however, raises an important question: can we achieve high accuracy without prohibitive battery consumption? We answer this question positively and show that a privacy-preserving solution can be achieved while keeping accuracy high and the performance overhead for the user low. Chapter 4 addresses the problem of internet voting. Internet voting means voting remotely, and therefore in an uncontrolled environment: anyone might be voting under the supervision of a coercer, which makes coercion resistance the main goal of the protocols presented. We need to build a protocol that allows a voter to escape an act of coercion. We present two proposals whose main goal is to provide usable and scalable coercion-resistant protocols, each with different trade-offs. The first provides a coercion-resistance mechanism with linear filtering but a slightly weaker notion of coercion resistance. The second has slightly higher (poly-logarithmic) complexity but provides a stronger notion of coercion resistance. Both solutions are based on the same idea: allowing the voter to cast several votes (such that only the last one is counted) in a way that cannot be determined by a coercer. Finally, Chapter 5 concludes the thesis and explains how our results push the state of the art one step further; we concisely state our contributions and clearly describe the next steps. The results presented in this work argue against the two main claims raised against privacy-preserving solutions: that privacy is not practical, and that higher levels of privacy result in lower levels of security. Doctoral Programme in Computer Science and Technology, Universidad Carlos III de Madrid. Committee: President: Agustín Martín Muñoz; Secretary: José María de Fuentes García-Romero de Tejada; Member: Alberto Peinado Domínguez.
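
    Both proposals above rest on re-voting where only a voter's last ballot is counted. As a hedged, minimal sketch (plaintext only, with invented names), the Python snippet below shows just the linear filtering pass that keeps the last ballot per credential; in the actual protocols the ballots and credentials are encrypted and the filtering must be done verifiably, e.g., with zero-knowledge proofs.

        # Toy illustration of "only the last vote counts" filtering; not the thesis's protocol.
        from typing import Dict, Iterable, Tuple

        def filter_last_votes(cast: Iterable[Tuple[int, str, str]]) -> Dict[str, str]:
            """cast: (timestamp, credential, choice) triples, in any order.
            Returns the choice cast last for each credential."""
            latest: Dict[str, Tuple[int, str]] = {}
            for ts, cred, choice in cast:            # single linear pass over all ballots
                if cred not in latest or ts > latest[cred][0]:
                    latest[cred] = (ts, choice)      # earlier (possibly coerced) ballots are dropped
            return {cred: choice for cred, (_, choice) in latest.items()}

        if __name__ == "__main__":
            ballots = [(1, "cred-A", "X"), (2, "cred-B", "Y"), (3, "cred-A", "Z")]
            print(filter_last_votes(ballots))        # {'cred-A': 'Z', 'cred-B': 'Y'}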

    Computer Aided Verification

    This open access two-volume set, LNCS 13371 and 13372, constitutes the refereed proceedings of the 34th International Conference on Computer Aided Verification, CAV 2022, which was held in Haifa, Israel, in August 2022. The 40 full papers presented together with 9 tool papers and 2 case studies were carefully reviewed and selected from 209 submissions. The papers are organized in the following topical sections: Part I: invited papers; formal methods for probabilistic programs; formal methods for neural networks; software verification and model checking; hyperproperties and security; formal methods for hardware, cyber-physical, and hybrid systems. Part II: probabilistic techniques; automata and logic; deductive verification and decision procedures; machine learning; synthesis and concurrency. This is an open access book.

    Mechanized Proofs of Cryptographic Protocols and Their Link with Verified Implementations

    Cryptographic protocols are one of the foundations of the trust people place in computer systems nowadays, be it online banking, web or cloud services, or secure messaging. One of the strongest theoretical assurances for cryptographic protocol security is reached through proofs in the computational model. Writing such proofs is prone to subtle errors that can invalidate the security guarantees and thus lead to undesired security breaches. Proof assistants strive to improve this situation; they have gained traction and have increasingly been used to analyse important real-world protocols and to inform their development. Writing proofs using such assistants requires a substantial amount of work, and extending their scope, for example through more automation and more detailed modelling of cryptographic building blocks, is an ongoing endeavour. This thesis shows, through the CryptoVerif proof assistant and two case studies, that mechanized cryptographic proofs are practicable and useful for analysing and designing complex real-world protocols. The first case study concerns the free and open source Virtual Private Network (VPN) protocol WireGuard, which has recently found its way into the Linux kernel. We contribute proofs of several properties that are typical for secure channel protocols. Furthermore, we extend CryptoVerif with a model of unprecedented detail of the popular Diffie-Hellman group Curve25519 used in WireGuard. The second case study concerns the new Internet standard Hybrid Public Key Encryption (HPKE), which has already been picked up for use in a privacy-enhancing extension of the TLS protocol (ECH) and in the Messaging Layer Security secure group messaging protocol. We accompanied the development of this standard from its early stages with comprehensive formal cryptographic analysis, provided constructive feedback that led to significant improvements in its cryptographic design, and eventually became an official co-author. We conduct a detailed cryptographic analysis of one of HPKE's modes, published at Eurocrypt 2021, an encouraging step towards making mechanized cryptographic proofs more accessible to the broader cryptographic community. The third contribution of this thesis is methodological in nature. For practical purposes, the security of implementations of cryptographic protocols is crucial. However, there is frequently a gap between a cryptographic security analysis and an implementation that are both based on the same protocol specification: no formal guarantee exists that the two interpretations of the specification match, and thus it is unclear whether the executable implementation has the guarantees proved by the cryptographic analysis. In this thesis, we close this gap for proofs written in CryptoVerif and implementations written in F*. We develop cv2fstar, a compiler from CryptoVerif models to executable F* specifications that uses the HACL* verified cryptographic library as backend. cv2fstar translates non-cryptographic assumptions, e.g., about message formats, from the CryptoVerif model into F* lemmas. This makes it possible to prove these assumptions for the specific implementation, further deepening the formal link between the two analysis frameworks. We showcase cv2fstar on the example of the Needham-Schroeder-Lowe protocol.
    cv2fstar connects CryptoVerif to the large F* ecosystem, eventually allowing cryptographic properties to be formally guaranteed for verified, efficient low-level code.
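
    For orientation, the Needham-Schroeder-Lowe handshake targeted by the cv2fstar case study exchanges three public-key encrypted messages. The Python sketch below is a purely symbolic toy (encryption is a tagged tuple, all names are invented) meant only to recall the message flow; it is unrelated to the generated F* code.

        # Symbolic NSL walkthrough; not real cryptography and not the verified implementation.
        import os

        def aenc(pk, payload):                     # stand-in for public-key encryption
            return ("enc", pk, payload)

        def adec(sk, ct):                          # only the matching secret key "decrypts"
            tag, pk, payload = ct
            assert tag == "enc" and pk == ("pk", sk)
            return payload

        def nsl_run():
            skA, skB = "skA", "skB"
            pkA, pkB = ("pk", skA), ("pk", skB)
            na, nb = os.urandom(8), os.urandom(8)  # fresh nonces
            msg1 = aenc(pkB, ("A", na))            # A -> B: {A, Na}_pkB
            _, na_b = adec(skB, msg1)
            msg2 = aenc(pkA, (na_b, nb, "B"))      # B -> A: {Na, Nb, B}_pkA (Lowe's fix adds B)
            na_a, nb_a, resp = adec(skA, msg2)
            assert na_a == na and resp == "B"      # A checks her nonce and the responder identity
            msg3 = aenc(pkB, (nb_a,))              # A -> B: {Nb}_pkB
            (nb_b,) = adec(skB, msg3)
            assert nb_b == nb                      # B checks his nonce
            return "handshake completed"

        print(nsl_run())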

    Protecting Systems From Exploits Using Language-Theoretic Security

    Any computer program that processes input from the user or the network must validate that input. Input-handling vulnerabilities occur when the software component responsible for filtering malicious input, the parser, does not perform validation adequately. Consequently, parsers are among the most targeted components, since they defend the rest of the program from malicious input. This thesis adopts the Language-Theoretic Security (LangSec) principle to understand what tools and research are needed to prevent exploits that target parsers. LangSec proposes specifying the syntactic structure of the input format as a formal grammar and building a recognizer for that grammar to validate any input before the rest of the program acts on it. To ensure that these recognizers faithfully represent the data format, programmers often rely on parser generators or parser combinator tools to build them. This thesis advances several sub-fields of LangSec by proposing new techniques for finding bugs in implementations, novel categorizations of vulnerabilities, and new parsing algorithms and tools for practical data formats. To this end, the thesis comprises five parts that tackle various tenets of LangSec. First, I categorize input-handling vulnerabilities and exploits using two frameworks: the mismorphisms framework, which helps us reason about the root causes leading to various vulnerabilities, and a categorization framework built from LangSec anti-patterns such as parser differentials and insufficient input validation. I then compile a catalog of more than 30 popular vulnerabilities to demonstrate the two frameworks. Second, I build parsers for several Internet of Things and power-grid network protocols and for the iccMAX file format using parser combinator libraries. The parsers for the power-grid protocols were deployed and tested on power-grid substation networks as an intrusion detection tool; the parser for the iccMAX file format led to several corrections and modifications to the iccMAX specification and reference implementation. Third, I present SPARTA, a novel tool that generates Rust code to type-check Portable Document Format (PDF) files. The type checker strictly enforces the constraints in the PDF specification to find deviations, and it has contributed to at least four significant clarifications and corrections to the PDF 2.0 specification and to various open-source PDF tools. In addition to the checker, we also built a practical tool, PDFFixer, to dynamically patch type errors in PDF files. Fourth, I present ParseSmith, a tool for building verified parsers for real-world data formats. Most parsing tools available for data formats either cannot handle practical formats or have not been verified for correctness. I built a verified parsing tool in Dafny that combines ideas from attribute grammars, data-dependent grammars, and parsing expression grammars to handle constructs commonly seen in network formats, and I prove that the resulting parsers run in linear time and always terminate for well-formed grammars. Finally, I provide the first systematic comparison of various data description languages (DDLs) and their parser generation tools. DDLs are used to describe and parse commonly used data formats, such as image formats. I conducted a qualitative expert-elicitation study to derive metrics for comparing the DDLs, and I systematically compare the DDLs based on the sample data descriptions that ship with them, checking for correctness and resilience.
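
    To make the LangSec recipe above concrete, here is a hedged toy sketch (Python, invented format) of recognition before processing: a few parser combinators recognize a length-prefixed record and reject any malformed input before the rest of the program ever sees it. The thesis's own tools target real formats such as iccMAX and PDF; this only illustrates the principle, including a small data-dependent repetition.

        # Toy parser combinators; not the thesis's parsers or tools.
        def byte(pred):
            """Consume one byte satisfying pred, or fail by returning None."""
            def parse(data, i):
                if i < len(data) and pred(data[i]):
                    return data[i], i + 1
                return None
            return parse

        def sequence(*parsers):
            """Run parsers in order, collecting their results; fail if any fails."""
            def parse(data, i):
                out = []
                for p in parsers:
                    r = p(data, i)
                    if r is None:
                        return None
                    val, i = r
                    out.append(val)
                return out, i
            return parse

        digit = byte(lambda b: 0x30 <= b <= 0x39)
        letter = byte(lambda b: 0x41 <= b <= 0x5A or 0x61 <= b <= 0x7A)

        def record(data, i=0):
            """Recognize '<n><n letters>;' where the digit n fixes the repetition count."""
            r = digit(data, i)
            if r is None:
                return None
            n, i = r[0] - 0x30, r[1]
            body = sequence(*([letter] * n), byte(lambda b: b == 0x3B))
            r = body(data, i)
            return None if r is None else (bytes(r[0][:n]).decode(), r[1])

        print(record(b"3abc;"))   # ('abc', 5): accepted
        print(record(b"3ab;"))    # None: rejected before any further processing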

    Post-quantum security of hash functions

    The research covered in this thesis is dedicated to the provable post-quantum security of hash functions, where post-quantum security provides guarantees against quantum attackers. We focus on analyzing the sponge construction, the cryptographic construction underlying the standardized hash function SHA-3. Our main results are proofs of a number of quantum security statements, including standard-model notions (collision resistance and collapsingness) as well as more idealized notions such as indistinguishability and indifferentiability from a random oracle. All of these results concern the quantum security of classical cryptosystems. From a higher-level perspective, we find new applications of, and generalize, several important proof techniques in post-quantum cryptography. We use the polynomial method to prove quantum indistinguishability of the sponge construction, and we develop a framework for quantum game-playing proofs using the recently introduced techniques of compressed random oracles and the One-way to Hiding (O2H) lemma. To establish the usefulness of the new framework, we also prove a number of quantum indifferentiability results for other cryptographic constructions. Along the way, we address an open problem concerning quantum indifferentiability: we disprove a conjecture that forms the basis of a no-go theorem for a version of quantum indifferentiability.
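
    To recall the object under analysis, the sketch below is a toy sponge hash in Python: absorb the padded message block by block into the rate part of the state, apply a public permutation, then squeeze out the digest. The permutation and the padding are insecure stand-ins invented for the example (SHA-3 uses the Keccak-f[1600] permutation and pad10*1); the sketch only illustrates the absorb/squeeze structure whose quantum security the thesis studies.

        # Toy sponge; structurally like SHA-3 but with an insecure made-up permutation.
        RATE, CAPACITY = 8, 24                     # state = rate part || capacity part (bytes)

        def toy_permutation(state: bytes) -> bytes:
            # A bijection on 32-byte states (rotate + position-dependent add); NOT secure.
            rotated = state[1:] + state[:1]
            return bytes((b + 7 * i) % 256 for i, b in enumerate(rotated))

        def sponge_hash(msg: bytes, out_len: int = 16) -> bytes:
            padded = msg + b"\x80" + b"\x00" * ((-len(msg) - 1) % RATE)   # simplified padding
            state = bytes(RATE + CAPACITY)
            for off in range(0, len(padded), RATE):                       # absorbing phase
                block = padded[off:off + RATE]
                state = bytes(s ^ m for s, m in zip(state[:RATE], block)) + state[RATE:]
                state = toy_permutation(state)
            out = b""
            while len(out) < out_len:                                     # squeezing phase
                out += state[:RATE]
                state = toy_permutation(state)
            return out[:out_len]

        print(sponge_hash(b"hello").hex())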