11 research outputs found

    Secure Grouping Protocol Using a Deck of Cards

    We consider a problem, which we call secure grouping, of dividing a number of parties into subsets (groups) in the following manner: each party has to know the other members of his/her group, while he/she may not learn anything about how the remaining parties are divided (except for certain public predetermined constraints, such as the number of parties in each group). In this paper, we construct an information-theoretically secure protocol using a deck of physical cards to solve the problem, which is jointly executable by the parties themselves without a trusted third party. Despite the non-triviality and the potential usefulness of secure grouping, our proposed protocol is fairly simple to describe and execute. Our protocol is based on algebraic properties of conjugate permutations. A key ingredient is our new technique for applying multiplication and inverse operations to hidden permutations (i.e., those encoded using face-down cards), which would be of independent interest and have various potential applications.
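    The algebraic idea can be illustrated with a short sketch (ours, not the paper's protocol): a grouping can be encoded as the cycle structure of a permutation, and conjugating by a secret permutation relabels the cycles without changing their sizes, so group sizes stay public while group membership is hidden.

```python
# Illustrative sketch (not the paper's protocol): conjugating a permutation
# relabels its cycles without changing their sizes. A grouping is encoded as
# the cycle structure of a permutation; a secret conjugation hides who ends
# up with whom while preserving the publicly known group sizes.

def compose(p, q):
    """Return the permutation p∘q, i.e. x -> p[q[x]], as a tuple."""
    return tuple(p[q[x]] for x in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for x, y in enumerate(p):
        inv[y] = x
    return tuple(inv)

def cycles(p):
    """Cycle decomposition of p as a list of frozensets (the 'groups')."""
    seen, out = set(), []
    for start in range(len(p)):
        if start in seen:
            continue
        x = start
        cyc = []
        while x not in seen:
            seen.add(x)
            cyc.append(x)
            x = p[x]
        out.append(frozenset(cyc))
    return out

# sigma groups parties {0,1,2} and {3,4}; pi is a secret relabeling.
sigma = (1, 2, 0, 4, 3)
pi = (3, 0, 4, 2, 1)
conj = compose(compose(pi, sigma), inverse(pi))

sizes = sorted(len(c) for c in cycles(conj))
print(sizes)  # group sizes survive conjugation: [2, 3]
```

Here the 3-cycle of `sigma` on {0, 1, 2} becomes a 3-cycle of `conj` on the relabeled set {π(0), π(1), π(2)} = {3, 0, 4}; only the sizes [2, 3] are invariant.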

    On language classes accepted by stateless 5′ → 3′ Watson-Crick finite automata

    Watson-Crick automata belong to the natural computing paradigm, as these finite automata work on strings representing DNA molecules. Watson-Crick automata have two reading heads, and in the 5′ → 3′ models these two heads start from the two extremes of the input. This is well motivated by the fact that DNA strands have 5′ and 3′ ends, depending on which carbon atom of the sugar group is used in the covalent bonds that continue the strand. However, in double-stranded DNA the directions of the strands are opposite, so that if an enzyme were to read the strands, it would read each strand in its 5′ to 3′ direction, i.e., in physically opposite directions starting from the two extremes of the molecule. Moreover, enzymes may not have inner states; thus those Watson-Crick automata which are stateless (i.e., have exactly one state) are more realistic from this point of view. In this paper these stateless 5′ → 3′ Watson-Crick automata are studied, and some properties of the language classes accepted by their variants are proven. We show hierarchy results, and also a "pumping", i.e., iteration, result for these languages that can be used to prove that some languages are not in the class accepted by stateless 5′ → 3′ Watson-Crick automata.
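    As a toy illustration of the model (a deliberate simplification, with our own naming, not the paper's formal definitions): a stateless 5′ → 3′ automaton can be simulated by nondeterministically consuming the input from both ends until the heads meet. With the single transition that reads a on the left head and b on the right head, the simulated automaton accepts exactly { aⁿbⁿ : n ≥ 0 }, a language beyond stateless one-head finite automata.

```python
# Toy illustration (simplified from the formal model): a stateless 5'->3'
# Watson-Crick automaton has one state and two heads starting at opposite
# ends of the input, moving toward each other. Each transition reads a block
# with the left head and a block with the right head; the word is accepted
# if the heads meet with everything consumed.

def accepts(word, rules):
    """Nondeterministically consume `word` from both ends using `rules`,
    a list of (left_block, right_block) pairs; accept if the heads meet."""
    if word == "":
        return True
    for left, right in rules:
        rest_len = len(word) - len(left) - len(right)
        if rest_len >= 0 and word.startswith(left) and word.endswith(right):
            if accepts(word[len(left):len(word) - len(right)], rules):
                return True
    return False

# One transition: left head reads "a" while right head reads "b".
rules = [("a", "b")]
print(accepts("aaabbb", rules))  # True
print(accepts("aabbb", rules))   # False
```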

    P Systems: from Anti-Matter to Anti-Rules

    The concept of a matter object being annihilated when meeting its corresponding anti-matter object is carried over to rule labels as objects and anti-rule labels as the corresponding annihilation counterparts in P systems. In the presence of a corresponding anti-rule object, annihilation of a rule object happens before the rule that the rule object represents can be applied. Applying a rule consumes the corresponding rule object, but may also produce new rule objects as well as anti-rule objects. Computational completeness in this setting can then be obtained in a one-membrane P system with non-cooperative rules and rule / anti-rule annihilation rules when using one of the standard maximally parallel derivation modes as well as any of the maximally parallel set derivation modes (i.e., non-extendable (multi)sets of rules, (multi)sets with a maximal number of rules, or (multi)sets of rules affecting the maximal number of objects). When using the sequential derivation mode, at least the computational power of partially blind register machines is obtained.
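    The annihilation phase described above can be sketched as follows (a hedged toy model with our own naming conventions, not the paper's formalism): rule objects and matching anti-rule objects cancel pairwise in the membrane before any surviving rule object may have its rule applied.

```python
# Hedged sketch of the annihilation phase (names are ours, not the paper's):
# before rules fire, each rule object "r_i" is cancelled against a matching
# anti-rule object "~r_i". Only rule objects surviving annihilation may have
# their rule applied in the subsequent derivation step.

from collections import Counter

def annihilate(multiset):
    """Cancel rule objects 'r_i' pairwise against anti-rule objects '~r_i'."""
    out = Counter(multiset)
    for obj in list(out):
        if obj.startswith("~"):
            k = min(out[obj], out[obj[1:]])  # pairs that annihilate
            out[obj] -= k
            out[obj[1:]] -= k
    return +out  # unary + drops zero counts

# Membrane contents: two copies of rule object r1, one anti-r1, one r2.
contents = Counter({"r1": 2, "~r1": 1, "r2": 1, "a": 3})
survivors = annihilate(contents)
print(sorted(survivors.elements()))  # ['a', 'a', 'a', 'r1', 'r2']
```

One copy of r1 and the copy of ~r1 annihilate; the surviving r1 and r2 objects would then be eligible for rule application in the chosen derivation mode.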

    Annual Report, 2013-2014

    Beginning in 2004/2005, issued in online format only.

    Computer Science 2019 APR Self-Study & Documents

    UNM Computer Science APR self-study report and review team report for Spring 2019, fulfilling requirements of the Higher Learning Commission.

    Cryptographic Protocols from Physical Assumptions

    Modern cryptography not only makes it possible to protect personal data on the Internet or to authenticate oneself for certain services, but also enables the evaluation of a function on the secret inputs of several parties without anything being learned about those inputs (with the exception of information that can be efficiently derived from the output and one's own inputs). Cryptographic protocols of this kind are called secure multi-party computation and are suitable for a broad range of applications, such as secret ballots and auctions. Proving the security of such protocols requires assumptions, often of a complexity-theoretic nature, for example that factoring sufficiently large numbers is hard. In contrast to complexity-theoretic assumptions, security assumptions based on physical principles offer several advantages: the protocols are usually conceptually simpler, their security is independent of the adversary's computational capabilities, and their operation and security are often easier for humans to comprehend. (For example, the German Federal Constitutional Court demanded: "When electronic voting machines are used, the essential steps of the voting process and the determination of the result must be verifiable by the citizen reliably and without special expertise." (BVerfG, judgment of the Second Senate of 3 March 2009).) Examples of such assumptions are physically separated or incorruptible hardware components (cf. Broadnax et al., 2018), write-only devices for logging, or scratch-off fields as known from PIN letters. The non-duplicability of quantum states, which follows from quantum theory, is also a physical security assumption, used e.g. to realize unclonable "quantum money".
Besides protocols that use the security and isolation of certain simple hardware components as trust anchors, this dissertation focuses in particular on cryptographic protocols for secure multi-party computation that are carried out with physical playing cards. The security assumption is that the cards have indistinguishable backs and that certain shuffle operations can be performed securely. One application of these protocols thus lies in illustrating cryptography and in enabling secure multi-party computations that can be carried out entirely without computers. One goal in this area of cryptography is to devise protocols that require as few cards as possible, and to prove them optimal in this sense. Depending on the requirements on the running time (finite vs. merely finite in expectation) and on the practicality of the shuffle operations used, different lower bounds on the minimum number of cards arise. In this work, for each combination of these requirements, an AND protocol (a logical AND of two bits encoded in cards; together with negation and bit copying this suffices to realize general circuits) is constructed or identified in the literature that uses the minimum number of cards, and this card-minimality is also proven. In total, AND is possible and optimal with four cards (for running time finite in expectation (Koch, Walzer and Härtel, 2015; Koch, 2018)), five cards (for practical shuffle operations or finite running time (Koch, Walzer and Härtel, 2015; Koch, 2018)), or six cards (for finite running time together with practical shuffle operations (Kastner et al., 2017)).
For the necessary structural insights, so-called "state diagrams" with associated calculus rules were developed, which provide a graph-based representation of all possible protocol runs and from which the correctness and security of the protocols can be read off directly (Koch, Walzer and Härtel, 2015; Kastner et al., 2017). This calculus has since found broad use in the relevant literature. (Via the calculus, proofs of lower bounds on the number of cards become proofs that certain protocol states are unreachable in a certain combinatorial graph structure.) Using the calculus, notions of card-based cryptography were formalized as a C program and, under certain restrictions, the length-minimality of a card-minimal AND protocol was proven with a software bounded model checking approach (Koch, Schrempp and Kirsten, 2019). In addition, conceptually simple protocols are given for secure multi-party computation in which even the computed function itself is to remain secret (Koch and Walzer, 2018), for each of the following models of computation: (universal) circuits, binary decision diagrams, Turing machines, and RAM machines. It is also investigated how card-based protocols can be executed such that the only interaction consists in other parties monitoring the correct execution. This enables a (weakly interactive) program obfuscation, in which a party can run a card-encoded program on its own inputs without learning anything about its inner workings beyond the input/output behavior. Without such physical assumptions this is in general impossible.
In addition, security against adversaries who may also deviate from the protocol is formalized, and a method is given for mechanically transforming a passively secure protocol into an actively secure one under the weakest possible security assumptions (Koch and Walzer, 2017). Another physical security assumption examined in the dissertation is the assumption of primitive, incorruptible hardware components, such as a TAN generator. This enables, for example, secure authentication of a human user via a corrupted terminal without the user having to perform cryptographic computations (such as multiplying large primes). This is solved using the example of withdrawing cash at a corrupted ATM with the help of a second device assumed to be secure (Achenbach et al., 2019), with the weakest possible requirements on the available communication channels. Since the given protocol remains secure even when run arbitrarily alongside other concurrent protocols (i.e., it provides so-called universal composability), was designed modularly, and rests on a credible security assumption, its operation is transparent and comprehensible to humans. In total, through its various card-based protocols, calculi, and systematized proofs of lower bounds on the number of cards, through results on the secure use of an untrusted terminal, and by placing these in a systematic account of the various physical assumptions used in cryptography, this work makes a substantial contribution to physically based cryptography.
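    The card-based AND protocols discussed above descend from den Boer's classic five-card trick, which the following sketch simulates (the encoding convention is one common choice, not necessarily the thesis's): each bit is committed as two face-down cards, a helper ♥ is placed in the middle, and after a random cut the three ♥s are cyclically adjacent if and only if the AND is 1.

```python
# Sketch of den Boer's five-card trick, an ancestor of the card-minimal AND
# protocols. Bits are two face-down cards ('♥♣' = 1, '♣♥' = 0). Alice lays
# her commitment reversed, a single ♥ goes in the middle, Bob lays his
# commitment; a uniformly random cut (cyclic rotation) then hides everything
# except whether the three ♥s are cyclically consecutive, i.e. a AND b.

import random

def encode(bit):
    return ["♥", "♣"] if bit else ["♣", "♥"]

def five_card_and(a, b):
    seq = encode(a)[::-1] + ["♥"] + encode(b)   # commitments + helper card
    cut = random.randrange(5)                   # random cut = cyclic rotation
    seq = seq[cut:] + seq[:cut]
    # Reveal: output 1 iff three hearts are cyclically consecutive.
    return int(any(all(seq[(i + j) % 5] == "♥" for j in range(3))
                   for i in range(5)))

print([(a, b, five_card_and(a, b)) for a in (0, 1) for b in (0, 1)])
# [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)]
```

The random cut is a cyclic rotation, so cyclic adjacency of the hearts is invariant; correctness holds for every cut, while the cut's uniformity is what hides the individual inputs.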

    Formal Methods for Trustworthy Voting Systems : From Trusted Components to Reliable Software

    Voting is a prominent and important part of democratic societies, and its outcome may have a dramatic and broad impact on societal progress. Therefore, it is paramount that such a society has extensive trust in the electoral process, such that the system's functioning is reliable and stable with respect to the expectations within society. Yet, with or without the use of modern technology, voting is full of algorithmic and security challenges, and the failure to address these challenges in a controlled manner may produce fundamental flaws in the voting system and potentially undermine critical societal aspects. In this thesis, we argue for a development process of voting systems that is rooted in and assisted by formal methods that produce transparently checkable evidence for the guarantees that the final system should provide, so that it can be deemed trustworthy. The goal of this thesis is to advance the state of the art in formal methods that allow the systematic development of trustworthy voting systems that can be provably verified. In the literature, voting systems are modeled in the following four comparatively separable and distinguishable layers: (1) the physical layer, (2) the computational layer, (3) the election layer, and (4) the human layer. Current research usually either stays mostly within one of those layers or lacks machine-checkable evidence; consequently, trusted and understandable criteria often lack formally proven and checkable guarantees on the software level, and vice versa. The contributions in this work are formal methods that fill the trust gap between the principal election layer and the computational layer by a reliable translation of trusted and understandable criteria into trustworthy software. Thereby, we make it possible for executable procedures to be formally traced back and understood by election experts without the need for inspection on the code level, so that trust is preserved through to the resulting system.
The works in this thesis all contribute to this end and consist of five distinct contributions: (I) a method for the generation of secure card-based communication schemes, (II) a method for the synthesis of reliable tallying procedures, (III) a method for the efficient verification of reliable tallying procedures, (IV) a method for the computation of dependable election margins for reliable audits, and (V) a case study of the security verification of the GI voter-anonymization software. These contributions span formal methods, on illustrative examples, for each of the three principal components between the election layer and the computational layer: (1) voter-ballot box communication, (2) the election method, and (3) election management. Within the first component, the voter-ballot box communication channel, we build a bridge from the communication channel to the cryptography scheme by automatically generating secure card-based schemes from a small formal model with a parameterization of the desired security requirements. For the second component, the election method, we build a bridge from the election method to the tallying procedure by (1) automatically synthesizing a runnable tallying procedure from the desired requirements, given as properties that capture the desired intuitions or regulations of fairness considerations, (2) automatically generating either comprehensible arguments or bounded proofs to compare tallying procedures based on user-definable fairness properties, and (3) automatically computing concrete election margins for a given tallying procedure, the collected ballots, and the computed election result, which enable efficient election audits.
Finally, for the third and final component, the election management system, we perform a case study and apply state-of-the-art verification technology to a real-world e-voting system that was used for the annual elections of the German Informatics Society (GI – "Gesellschaft für Informatik") in 2019. The case study consists of the formal implementation-level security verification that the voter identities are securely anonymized and that the voters' passwords cannot be leaked. The presented methods assist the systematic development and verification of provably trustworthy voting systems across traditional layers, i.e., from the election layer to the computational layer. They all pursue the goal of making voting systems trustworthy through reliable and explainable formal requirements. We evaluate the devised methods on minimal card-based protocols that compute a secure AND function for two different decks of cards, a classical knock-out tournament and several Condorcet rules, various plurality, scoring, and Condorcet rules from the literature, the Danish national parliamentary elections in 2015, and a state-of-the-art electronic voting system used for the German Informatics Society's annual elections in 2019 and subsequent years.
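    As a small illustration of the margin notion mentioned above (a hedged sketch with our own function names, not the thesis's method): under plurality, the margin of victory is the minimum number of ballots that must be changed to alter the winner, and moving one ballot from the winner to the runner-up closes their gap by two.

```python
# Hedged sketch (our naming, not the thesis's): plurality margin of victory.
# Changing k ballots from the winner to the runner-up leaves the winner with
# top - k and the runner-up with second + k votes; the outcome flips once
# second + k > top - k, i.e. k = (top - second) // 2 + 1.

from collections import Counter

def plurality_margin(ballots):
    """Ballots are single candidate names; returns (winner, margin)."""
    tally = Counter(ballots)
    (winner, top), (_, second) = tally.most_common(2)
    margin = (top - second) // 2 + 1
    return winner, margin

ballots = ["A"] * 7 + ["B"] * 4 + ["C"] * 2
print(plurality_margin(ballots))  # ('A', 2)
```

With 7 vs. 4 votes, changing two ballots from A to B yields 5 vs. 6, so the margin is 2; such concrete margins determine how many ballots an audit must be able to rule out as misrecorded.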

    Unconventional Computation and Natural Computation

    Starting in 2012, the conference series previously known as Unconventional Computation (UC) changed its name to Unconventional Computation and Natural Computation (UCNC). The name change was initiated to reflect the evolution in the variety of fields in the past decade or so. The series is genuinely interdisciplinary, and it covers theory as well as experiments and applications. It is concerned with computation that goes beyond the classic Turing model, such as human-designed computation inspired by nature, and with the computational properties of processes taking place in nature. The topics of the conference typically include: quantum, cellular, molecular, neural, DNA, membrane, and evolutionary computing; cellular automata; computation based on chaos and dynamical systems; massive parallel computation; collective intelligence; computation based on physical principles such as relativistic, optical, spatial, collision-based computing; amorphous computing; physarum computing; hypercomputation; fuzzy and rough computing; swarm intelligence; artificial immune systems; physics of computation; chemical computation; evolving hardware; the computational nature of self-assembly, developmental processes, bacterial communication, and brain processes. The first venue of the UCNC (previously UC) series was Auckland, New Zealand, in 1998. Subsequent sites of the conference were Brussels, Belgium, in 2000; Kobe, Japan, in 2002; Seville, Spain, in 2005; York, UK, in 2006; Kingston, Canada, in 2007; Vienna, Austria, in 2008; Ponta Delgada, Portugal, in 2009; Tokyo, Japan, in 2010; Turku, Finland, in 2011; and Orléans, France, in 2012. Each meeting was accompanied by its own proceedings. The 12th conference in the series, UCNC 2013, was organized in 2013 in Milan (Italy) by the Department of Informatics, Systems and Communication (DISCo) on the beautiful campus of the University of Milano-Bicocca during July 1–5, 2013.
Milan is situated in the north of Italy, in the middle of the vast area of the Padan Plain, in a truly strategic position for the paths that lead to the heart of Europe. It is the Italian capital of finance and the advanced tertiary sector. Milan is truly one of the few "complete" Italian cities, able to reconcile economic and social realities. It is active in many fields of culture and research. It is a busy and advanced metropolis that attracts millions of people every year, offering a multitude of opportunities in the fields of education, employment, entertainment, and tourism. The roots of Milan are planted in a past that has bestowed on us an artistic and cultural heritage; this is not rare for towns in Italy, but not all of them have so much to offer:
– The world-famous L'Ultima Cena (The Last Supper) by Leonardo Da Vinci
– The Opera House, La Scala
– The Sforza Castle
– The numerous museums and art galleries: many of the treasures of Milan are hidden to the less attentive eyes of its inhabitants, but it is all there, waiting to be discovered
Milan also has a rich calendar of events to cater for all tastes, be they cultural, recreational, or sports; the city certainly has something to offer for everyone. UCNC 2013 was co-located with Computability in Europe 2013 (CiE 2013), with three common invited speakers: Gilles Brassard (Université de Montréal), Grzegorz Rozenberg (Leiden Institute of Advanced Computer Science and University of Colorado at Boulder), and Endre Szemerédi (Hungarian Academy of Sciences, Rutgers University). Other invited speakers were Enrico Formenti (Université Nice Sophia Antipolis, France), John V. Tucker (Swansea University, UK), and Xin Yao (University of Birmingham, UK).
There were 46 submissions from 26 countries, including Austria, Bangladesh, Canada, Finland, France, Germany, Hungary, India, Iran, Italy, Japan, Latvia, Malaysia, Moldova, Morocco, New Zealand, Norway, Philippines, Poland, Portugal, Romania, Spain, Sweden, Turkey, the UK, and the USA. Each paper was reviewed by three referees and discussed by the members of the Program Committee. Finally, 20 regular papers were selected for presentation at the conference. In addition, there were eight posters on display at the conference. We warmly thank all the invited speakers and all the authors of the submitted papers. Their efforts were the basis of the success of the conference. We would like to thank all the members of the Program Committee and the external referees. Their work in evaluating the papers and their comments during the discussions were essential to the decisions on the contributed papers. We would also like to thank all the members of the UCNC Steering Committee for their ideas and efforts in forming the Program Committee and selecting the invited speakers. We wish to thank the conference sponsors: the University of Milano-Bicocca, the Italian Chapter of the European Association for Theoretical Computer Science, and the Micron Foundation. The conference has a long history of hosting workshops. The 2013 edition in Milan hosted three workshops:
– CoSMoS 2013, the 6th International Workshop on Complex Systems Modelling and Simulation (Monday, July 1)
– BioChemIT 2013, the Third COBRA Workshop on Biological and Chemical Information Technologies (Friday, July 5)
– WIVACE 2013, the Italian Workshop on Artificial Life and Evolutionary Computation (July 1–2)