1,479 research outputs found

    DeVoS: Deniable Yet Verifiable Vote Updating

    Internet voting systems are supposed to meet the same high standards as traditional paper-based systems when used in real political elections: freedom of choice, universal and equal suffrage, secrecy of the ballot, and independent verifiability of the election result. Although numerous Internet voting systems have been proposed to achieve these challenging goals simultaneously, few come close in reality. We propose a novel publicly verifiable and practically efficient Internet voting system, DeVoS, that advances the state of the art. The main feature of DeVoS is its ability to protect voters' freedom of choice in several dimensions. First, voters in DeVoS can intuitively update their votes in a way that is deniable to observers but verifiable by the voters; in this way voters can secretly overwrite potentially coerced votes. Second, in addition to (basic) vote privacy, DeVoS also guarantees strong participation privacy by hiding end-to-end which voters have submitted ballots and which have not. Finally, DeVoS is fully compatible with Perfectly Private Audit Trail, a state-of-the-art Internet voting protocol with practical everlasting privacy. In combination, DeVoS offers a new way to secure free Internet elections with strong and long-term privacy properties.

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Investigating the learning potential of the Second Quantum Revolution: development of an approach for secondary school students

    In recent years we have witnessed important changes: the Second Quantum Revolution is in the spotlight of many countries, and it is creating a new generation of technologies. To unlock the potential of the Second Quantum Revolution, several countries have launched strategic plans and research programs that finance and set the pace of research and development of these new technologies (such as the Quantum Flagship and the National Quantum Initiative Act). The increasing pace of technological change also challenges science education and institutional systems, requiring them to help prepare new generations of experts. This work is placed within physics education research and contributes to the challenge by developing an approach and a course about the Second Quantum Revolution. The aims are to promote quantum literacy and, in particular, to value the Second Quantum Revolution from a cultural and educational perspective. The dissertation is articulated in two parts. In the first, we unpack the Second Quantum Revolution from a cultural perspective and shed light on the main revolutionary aspects, which are elevated to the rank of principles implemented in the design of a course for secondary school students and prospective and in-service teachers. The design process and the educational reconstruction of the activities are presented, as well as the results of a pilot study conducted to investigate the impact of the approach on students' understanding and to gather feedback to refine and improve the instructional materials. The second part consists of the exploration of the Second Quantum Revolution as a context to introduce some basic concepts of quantum physics. We present the results of an implementation with secondary school students to investigate whether and to what extent external representations can promote students' understanding and acceptance of quantum physics as a personally reliable description of the world.

    Formal Verification of Verifiability in E-Voting Protocols

    Election verifiability is one of the main security properties of e-voting protocols, referring to the ability of independent entities, such as voters or election observers, to validate the outcome of the voting process. It can be ensured by means of formal verification, which applies mathematical logic to verify the considered protocols under well-defined assumptions, specifications, and corruption scenarios. Automated tools provide an efficient and accurate way to perform formal verification, enabling comprehensive analysis of all execution scenarios and eliminating the human errors inherent in manual verification. The existing formal verification frameworks that are suitable for automation are not general enough to cover a broad class of e-voting protocols. They do not cover revoting and cannot be tuned to weaker or stronger levels of security that may be achievable in practice. We therefore propose a general formal framework that allows automated verification of verifiability in e-voting protocols. Our framework is easily applicable to many protocols and corruption scenarios. It also allows refined specifications of election procedures, for example accounting for revote policies. We apply our framework to the analysis of several real-world case studies, where we capture both known and new attacks, and provide new security guarantees. First, we consider Helios, a prominent web-based e-voting protocol, which aims to provide end-to-end verifiability. It is, however, vulnerable to ballot stuffing when the voting server is corrupt. Second, we consider Belenios, which builds upon Helios and aims to achieve stronger verifiability, preventing ballot stuffing by splitting the trust between a registrar and the server. Both of these systems have been used in many real-world elections. Our third case study is Selene, which aims to simplify the individual verification procedure for voters, providing them with trackers for verifying their votes in the clear at the end of the election.
Finally, we consider the Estonian e-voting protocol, which has been deployed for national elections since 2005. The protocol has continuously evolved to offer better verifiability guarantees but had received no formal analysis. We apply our framework to realistic models of all these protocols, deriving the first automated formal analysis in each case. As a result, we find several new attacks, improve the corresponding protocols to address their weaknesses, and prove that verifiability holds for the new versions.
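The individual verifiability discussed above can be caricatured with a toy bulletin-board check: a voter confirms that a fingerprint of their encrypted ballot appears on the public board. This is an illustrative sketch under simplifying assumptions, not the actual Helios, Belenios, Selene, or Estonian procedure; all function and variable names are ours.

```python
import hashlib

def ballot_fingerprint(ballot_bytes: bytes) -> str:
    # Voters compare a short hash of their encrypted ballot, not the ballot itself.
    return hashlib.sha256(ballot_bytes).hexdigest()

def individually_verify(board: list, my_ballot: bytes) -> bool:
    # Individual verifiability (simplified): the voter's ballot fingerprint
    # must appear on the public bulletin board.
    return ballot_fingerprint(my_ballot) in board

# Toy bulletin board holding fingerprints of cast (encrypted) ballots.
board = [ballot_fingerprint(b) for b in (b"enc-ballot-1", b"enc-ballot-2")]
print(individually_verify(board, b"enc-ballot-1"))  # True: ballot was recorded
print(individually_verify(board, b"enc-ballot-9"))  # False: ballot is missing
```

Real protocols must additionally prevent a corrupt server from showing different boards to different observers, which is exactly the kind of attack a formal framework can surface.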

    Towards compact bandwidth and efficient privacy-preserving computation

    In traditional cryptographic applications, cryptographic mechanisms are employed to ensure the security and integrity of communication or storage. In these scenarios, the primary threat is usually an external adversary trying to intercept or tamper with the communication between two parties. In the context of privacy-preserving computation (or secure computation), on the other hand, cryptographic techniques are developed with a different goal in mind: to protect the privacy of the participants in a computation from each other. Specifically, privacy-preserving computation allows multiple parties to jointly compute a function without revealing their inputs, and it has numerous applications in various fields, including finance, healthcare, and data analysis. It allows for collaboration and data sharing without compromising the privacy of sensitive data, which is becoming increasingly important in today's digital age. While privacy-preserving computation has gained significant attention in recent times due to its strong security and numerous potential applications, its efficiency remains its Achilles' heel. Privacy-preserving protocols require significantly higher computational overhead and bandwidth than baseline (i.e., insecure) protocols. Therefore, finding ways to minimize the overhead, whether in terms of computation or communication, asymptotically or concretely, while maintaining security in a reasonable manner remains an exciting problem to work on. This thesis is centred around enhancing efficiency and reducing the costs of communication and computation for commonly used privacy-preserving primitives, including private set intersection, oblivious transfer, and stealth signatures.
Our primary focus is on optimizing the performance of these primitives.
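To make concrete what private set intersection aims to compute, here is a deliberately naive hash-based sketch. This is not a secure PSI protocol (hashes of low-entropy items can be brute-forced by the receiver), and all names are illustrative assumptions; real constructions, such as OT-based PSI, avoid this leakage.

```python
import hashlib

def blind(item: str, salt: bytes = b"session-salt") -> str:
    # Each party hashes its items under a shared salt; only hashes are
    # exchanged, never the raw values.
    return hashlib.sha256(salt + item.encode()).hexdigest()

def naive_psi(set_a: set, set_b: set) -> set:
    hashed_b = {blind(x) for x in set_b}                # B sends hashed items to A
    return {x for x in set_a if blind(x) in hashed_b}   # A intersects locally

print(naive_psi({"alice", "bob", "carol"}, {"bob", "dave"}))  # {'bob'}
```

The functionality is the point here: A learns only the intersection, and B's non-matching items stay hidden behind hashes, modulo the brute-force caveat above.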

    Randomness Tests for Binary Sequences

    Cryptography is vital in securing sensitive information and maintaining privacy in today's digital world. Though sometimes underestimated, randomness plays a key role in cryptography, generating unpredictable keys and other related material. Hence, high-quality random number generators are a crucial element in building a secure cryptographic system. In dealing with randomness, two key capabilities are essential. First, creating strong random generators, that is, systems able to produce unpredictable and statistically independent numbers. Second, constructing validation systems to verify the quality of the generators. In this dissertation, we focus on the second capability, specifically analyzing the concept of a hypothesis test, a statistical inference model that is a basic tool for the statistical characterization of random processes. In the hypothesis testing framework, a central idea is the p-value, a numerical measure assigned to each sample generated from the random process under analysis, allowing one to assess the plausibility of a hypothesis about the random process, usually referred to as the null hypothesis, on the basis of the observed data. P-values are determined by the probability distribution associated with the null hypothesis. In the context of random number generators, this distribution is inherently discrete, but in the literature it is commonly approximated by continuous distributions for ease of handling. However, analyzing the discrete setting in detail, we show that this approximation can lead to errors. As an example, we thoroughly examine the testing strategy for random number generators proposed by the National Institute of Standards and Technology (NIST) and demonstrate some inaccuracies in the suggested approach. Motivated by this finding, we define a new simple hypothesis test as a use case to propose and validate a methodology for assessing the definitional and implementational correctness of hypothesis tests.
Additionally, we present an abstract analysis of the hypothesis test model, which proves valuable in providing a more accurate conceptual framework within the discrete setting. We believe that the results presented in this dissertation can contribute to a better understanding of how hypothesis tests operate in discrete cases, such as analyzing random number generators. In the demanding field of cryptography, even slight discrepancies between the expected and actual behavior of random generators can, in fact, have significant implications for data security.
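As a concrete example of the p-value computations discussed above, the sketch below implements the frequency (monobit) test from the NIST SP 800-22 suite. Note that the erfc formula is precisely the continuous approximation to the underlying discrete binomial distribution that the dissertation scrutinizes; the function name and thresholds here are our own.

```python
import math

def monobit_p_value(bits: str) -> float:
    # NIST SP 800-22 frequency (monobit) test: map '1' -> +1, '0' -> -1 and sum.
    n = len(bits)
    s = sum(1 if b == "1" else -1 for b in bits)
    # Under the null hypothesis (fair coin), s / sqrt(n) is approximately
    # standard normal, giving the two-sided p-value below. This erfc-based
    # formula is the continuous approximation to a discrete distribution.
    return math.erfc(abs(s) / math.sqrt(2 * n))

balanced = "10" * 500            # 1000 alternating bits: the +/-1 sum is 0
skewed = "1" * 900 + "0" * 100   # heavily biased sequence
print(monobit_p_value(balanced) > 0.01)  # True: passes at the 1% level
print(monobit_p_value(skewed) > 0.01)    # False: rejected as non-random
```

A p-value near 1 means the observed bit balance is entirely plausible under the null hypothesis; a p-value below the chosen significance level (NIST uses 0.01) rejects the sequence as non-random.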

    Seamless Multimodal Biometrics for Continuous Personalised Wellbeing Monitoring

    Artificially intelligent perception is increasingly present in the lives of every one of us. Vehicles are no exception, (...) In the near future, pattern recognition will have an even stronger role in vehicles, as self-driving cars will require automated ways to understand what is happening around (and within) them and act accordingly. (...) This doctoral work focused on advancing in-vehicle sensing through the research of novel computer vision and pattern recognition methodologies for both biometrics and wellbeing monitoring. The main focus has been on electrocardiogram (ECG) biometrics, a trait well known for its potential for seamless driver monitoring. Major efforts were devoted to achieving improved performance in identification and identity verification in off-the-person scenarios, well known for increased noise and variability. Here, end-to-end deep learning ECG biometric solutions were proposed and important topics were addressed, such as cross-database and long-term performance, waveform relevance through explainability, and interlead conversion. Face biometrics, a natural complement to the ECG in seamless unconstrained scenarios, was also studied in this work. The open challenges of masked face recognition and interpretability in biometrics were tackled in an effort to evolve towards algorithms that are more transparent, trustworthy, and robust to significant occlusions. Within the topic of wellbeing monitoring, improved solutions to multimodal emotion recognition in groups of people and activity/violence recognition in in-vehicle scenarios were proposed. Finally, we also proposed a novel way to learn template security within end-to-end models, dismissing additional separate encryption processes, and a self-supervised learning approach tailored to sequential data, in order to ensure data security and optimal performance. (...) Comment: Doctoral thesis presented and approved on the 21st of December 2022 to the University of Port

    Perceptions and Practicalities for Private Machine Learning

    (...) data they and their partners hold while maintaining data subjects' privacy. In this thesis I show that private computation, such as private machine learning, can increase end-users' acceptance of data sharing practices, but not unconditionally. Many factors influence end-users' privacy perceptions in this space, including the number of organizations involved and the reciprocity of any data sharing practices. End-users emphasized the importance of detailing the purpose of a computation and clarifying that inputs to private computation are not shared across organizations. End-users also struggled with the notion of protections not being guaranteed 100%, such as in statistically based schemes, thus demonstrating a need for a thorough understanding of the risk from attacks in such applications. When training a machine learning model on private data, it is critical to understand the conditions under which that data can be protected, and when it cannot. For instance, membership inference attacks aim to violate privacy protections by determining whether specific data was used to train a particular machine learning model. Further, the successful transition of private machine learning from theoretical research to practical use must account for gaps in achieving these properties that arise due to the realities of concrete implementations, threat models, and use cases, which is not currently the case.
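The membership inference attacks mentioned above can be illustrated with a minimal loss-threshold sketch: models tend to be more confident on their training examples than on unseen ones, so unusually low loss hints at membership. This simple form of the attack is commonly attributed to Yeom et al.; the function names, the example confidences, and the threshold here are illustrative assumptions, not from the thesis.

```python
import math

def cross_entropy(confidence: float) -> float:
    # Loss of the model's confidence in the true label; clamp to avoid log(0).
    return -math.log(max(confidence, 1e-12))

def is_member(confidence_on_true_label: float, threshold: float = 0.5) -> bool:
    # Threshold attack: training examples tend to have lower loss (higher
    # confidence) than unseen examples, so low loss suggests membership.
    return cross_entropy(confidence_on_true_label) < threshold

print(is_member(0.95))  # True: overconfident prediction, likely a training point
print(is_member(0.40))  # False: higher loss, likely unseen data
```

Defenses such as differential privacy bound how much any single training example can shift the model, which is exactly why their guarantees are statistical rather than absolute, the point end-users struggled with above.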

    ABBY: Automating leakage modeling for side-channel analysis

    We introduce ABBY, an open-source side-channel leakage profiling framework that targets the microarchitectural layer. Existing solutions to characterize the microarchitectural layer are device-specific and require extensive manual effort. The main innovation of ABBY is its data collection, which can automatically characterize the microarchitecture of a target device and has the additional benefit of being scalable. Using ABBY, we create two sets of data that capture the interaction of instructions for the ARM CORTEX-M0/M3 architecture. These sets are the first to capture detailed information on the microarchitectural layer. They can be used to explore various leakage models suitable for creating side-channel leakage simulators. A preliminary evaluation of a leakage model produced with our dataset on real-world cryptographic implementations shows performance comparable to state-of-the-art leakage simulators.
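The classic baseline that microarchitecture-aware leakage models improve upon is the Hamming-weight model: the power drawn when a value is processed is assumed proportional to its number of set bits. The sketch below is a generic illustration of that baseline, not part of ABBY itself; all names are ours.

```python
def hamming_weight(value: int) -> int:
    # Number of set bits; a classic proxy for the power consumed when a
    # register is updated with this value.
    return bin(value).count("1")

def simulate_leakage(intermediate_values: list) -> list:
    # A simple leakage simulator maps each processed intermediate value to a
    # Hamming-weight sample. Frameworks like ABBY instead learn richer models
    # that account for instruction interactions in the microarchitecture.
    return [hamming_weight(v) for v in intermediate_values]

print(simulate_leakage([0x00, 0xFF, 0xA5]))  # [0, 8, 4]
```

Real traces add noise and device-specific effects (pipeline state, bus transitions), which is precisely the gap that profiled, microarchitectural models aim to close.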

    Flexible Long-Term Secure Archiving

    Privacy and data protection have always been basic human needs in any society that makes use of written language. From simple personal correspondence over military communication to trade secrets or medical information, confidentiality has been of utmost importance. The implications of a leak of such sensitive information may prove devastating, as the previous examples illustrate perfectly. Furthermore, reliability, that is, the integrity and authenticity of information, is critical, with risks ranging from annoying to lethal, as can again be seen in the previous examples. This need for data protection has carried over seamlessly from the analogue to the digital age, with the amount of data being generated, transmitted, and stored increasing steadily and containing more and more personal details. And in view of the developments in computational technology in recent years, such as the ongoing improvements in quantum computing as well as cryptanalytic advances, the capabilities of attackers against the security of private information have never been greater. Thus the need for privacy and data protection has rarely been more dire.