41 research outputs found

    On the Untapped Potential of Encoding Predicates by Arithmetic Circuits and Their Applications

    Predicates are used in cryptography as a fundamental tool to control the disclosure of secrets. However, how to embed a particular predicate into a cryptographic primitive is usually not given much attention. In this work, we formalize the idea of encoding predicates as arithmetic circuits and observe that choosing the right encoding of a predicate may lead to improvements in many aspects, such as the efficiency of a scheme or the required hardness assumption. In particular, we develop two predicate encoding schemes with different properties and construct cryptographic primitives that benefit from them: verifiable random functions (VRFs) and predicate encryption (PE) schemes.
    - We propose two VRFs on bilinear maps. Both of our schemes are secure under a non-interactive Q-type assumption where Q is only poly-logarithmic in the security parameter, and they achieve either a poly-logarithmic verification key size or proof size. This is a significant improvement over prior works, where all previous schemes require either a strong hardness assumption or large verification keys and proofs.
    - We propose a lattice-based PE scheme for the class of multi-dimensional equality (MultEq) predicates. This class of predicates is expressive enough to capture many of the appealing applications that motivate PE schemes. Our scheme achieves the best known results in terms of both the required LWE approximation factor (we only require poly(λ)) and the decryption time. In particular, all existing PE schemes that support the class of MultEq predicates require either a subexponential LWE assumption or decryption time exponential in the dimension of the MultEq predicates.
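
    As a toy illustration of the general idea (not the paper's actual construction), one natural way to encode an equality predicate as an arithmetic circuit over a prime field F_p uses Fermat's little theorem, EQ(x, y) = 1 - (x - y)^(p-1) mod p, and a multi-dimensional equality predicate is then the product of the per-dimension encodings. The sketch below, with hypothetical parameters, evaluates such an encoding.

        # Toy sketch: encoding a multi-dimensional equality (MultEq) predicate as an
        # arithmetic circuit over F_p via Fermat's little theorem. Illustrative only;
        # this is NOT the construction from the paper.

        P = 2**31 - 1  # a prime modulus (hypothetical choice)

        def eq_encode(x: int, y: int, p: int = P) -> int:
            """Arithmetic encoding of equality: returns 1 if x == y (mod p), else 0,
            computed as 1 - (x - y)^(p-1) mod p (a degree-(p-1) polynomial)."""
            return (1 - pow(x - y, p - 1, p)) % p

        def multeq_encode(xs, ys, p: int = P) -> int:
            """MultEq predicate: 1 iff every dimension matches, computed as the
            product of the per-dimension equality encodings."""
            out = 1
            for x, y in zip(xs, ys):
                out = (out * eq_encode(x, y, p)) % p
            return out

        if __name__ == "__main__":
            print(multeq_encode([3, 5, 7], [3, 5, 7]))  # 1 (all dimensions equal)
            print(multeq_encode([3, 5, 7], [3, 5, 8]))  # 0 (last dimension differs)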

    Tools and Algorithms for the Construction and Analysis of Systems

    This open access two-volume set constitutes the proceedings of the 27th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2021, which was held during March 27 – April 1, 2021, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2021. The conference was planned to take place in Luxembourg but changed to an online format due to the COVID-19 pandemic. The 41 full papers presented in the proceedings were carefully reviewed and selected from 141 submissions. The volume also contains 7 tool papers, 6 tool demo papers, and 9 SV-COMP competition papers. The papers are organized in topical sections as follows: Part I: Game Theory; SMT Verification; Probabilities; Timed Systems; Neural Networks; Analysis of Network Communication. Part II: Verification Techniques (not SMT); Case Studies; Proof Generation/Validation; Tool Papers; Tool Demo Papers; SV-COMP Tool Competition Papers.

    Statically Aggregate Verifiable Random Functions and Application to E-Lottery

    Cohen, Goldwasser, and Vaikuntanathan (TCC'15) introduced the concept of aggregate pseudo-random functions (PRFs), which allow efficiently computing the aggregate of PRF values over exponential-sized sets. In this paper, we explore the aggregation augmentation of verifiable random functions (VRFs), introduced by Micali, Rabin and Vadhan (FOCS'99), as well as its application to e-lottery schemes. We introduce the notion of static aggregate verifiable random functions (Agg-VRFs), which perform aggregation for VRFs in a static setting. Our contributions can be summarized as follows: (1) we define static aggregate VRFs, which allow the efficient aggregation of VRF values and the corresponding proofs over super-polynomially large sets; (2) we present a static Agg-VRF construction over bit-fixing sets with respect to product aggregation, based on the q-decisional Diffie-Hellman exponent assumption; (3) we evaluate the performance of our static Agg-VRF instantiation against a standard (non-aggregate) VRF in terms of the time required for the aggregation and verification processes, showing that Agg-VRFs considerably lower the verification time for large sets; and (4) by employing Agg-VRFs, we propose an improved e-lottery scheme based on the framework of Chow et al.'s VRF-based e-lottery proposal (ICCSA'05). We evaluate the performance of Chow et al.'s e-lottery scheme and our improved scheme, and the latter shows a significant improvement in the efficiency of generating the winning number and of player verification.
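
    To make the aggregation target concrete, the toy sketch below (hypothetical, with a keyed hash standing in for a real VRF, so it offers neither proofs nor security) enumerates a small bit-fixing set, i.e. all bit strings matching a pattern over {0, 1, *}, and computes the product of the function values over that set. A static Agg-VRF would let this product and its proof be computed and verified without the exponential enumeration done here.

        # Toy sketch of product aggregation over a bit-fixing set. A keyed hash stands
        # in for the VRF (no proofs, no security claims); real Agg-VRFs avoid the
        # exponential enumeration performed here.
        import hashlib
        import hmac
        from itertools import product

        P = 2**61 - 1  # prime modulus for the product aggregation (hypothetical)

        def f(key: bytes, x: str) -> int:
            """Stand-in for a VRF evaluation: a keyed hash mapped into [1, P-1]."""
            digest = hmac.new(key, x.encode(), hashlib.sha256).digest()
            return int.from_bytes(digest, "big") % (P - 1) + 1

        def bit_fixing_set(pattern: str):
            """Yield all bit strings matching a pattern over {'0', '1', '*'}."""
            free = [i for i, c in enumerate(pattern) if c == "*"]
            for bits in product("01", repeat=len(free)):
                s = list(pattern)
                for i, b in zip(free, bits):
                    s[i] = b
                yield "".join(s)

        def naive_product_aggregate(key: bytes, pattern: str) -> int:
            """Product of f(key, x) over the bit-fixing set, computed naively."""
            agg = 1
            for x in bit_fixing_set(pattern):
                agg = (agg * f(key, x)) % P
            return agg

        if __name__ == "__main__":
            print(naive_product_aggregate(b"secret-key", "1*0*1*"))  # 2^3 = 8 evaluations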

    Program Similarity Analysis for Malware Classification and its Pitfalls

    Malware classification, specifically the task of grouping malware samples into families according to their behaviour, is vital in order to understand the threat they pose and how to protect against them. Recognizing whether one program shares behaviors with another is a task that requires semantic reasoning, meaning that it needs to consider what a program actually does. This is a famously uncomputable problem, due to Rice's theorem. As there is no one-size-fits-all solution, determining program similarity in the context of malware classification requires different tools and methods depending on what is available to the malware defender. When the malware source code is readily available (or at least easy to retrieve), most approaches employ semantic "abstractions", which are computable approximations of the semantics of the program. We consider this the first scenario for this thesis: malware classification using semantic abstractions extracted from the source code in an open system. Structural features, such as the control flow graphs of programs, can be used to classify malware reasonably well. To demonstrate this, we build a tool for malware analysis, R.E.H.A., which targets the Android system and leverages its openness to extract a structural feature from the source code of malware samples. This tool is first successfully evaluated against a state-of-the-art malware dataset and then on a newly collected dataset. We show that R.E.H.A. is able to classify the new samples into their respective families, often outperforming commercial antivirus software. However, abstractions have limitations by virtue of being approximations. We show that by increasing the granularity of the abstractions used to produce more fine-grained features, we can improve the accuracy of the results, as in our second tool, StranDroid, which generates fewer false positives on the same datasets. The source code of malware samples is often not available or easily retrievable. For this reason, we introduce a second scenario in which the classification must be carried out with only the compiled binaries of malware samples on hand. Program similarity in this context cannot rely on semantic abstractions as before, since it is difficult to create meaningful abstractions from zeros and ones. Instead, by treating the compiled programs as raw data, we transform them into images and build upon common image classification algorithms using machine learning. This led us to develop novel deep learning models, a convolutional neural network and a long short-term memory network, to classify the samples into their respective families. To overcome the usual deep learning obstacle of lacking sufficiently large and balanced datasets, we utilize obfuscations as a data augmentation tool to generate semantically equivalent variants of existing samples and expand the dataset as needed. Finally, to lower the computational cost of the training process, we use transfer learning and show that a model trained on one dataset can be used to successfully classify samples in different malware datasets. The third scenario explored in this thesis assumes that even the binary itself cannot be accessed for analysis, but it can be executed, and the execution traces can then be used to extract semantic properties. However, dynamic analysis lacks the formal tools and frameworks that exist in static analysis for proving the effectiveness of obfuscations.
For this reason, the focus shifts to building a novel formal framework that is able to assess the potency of obfuscations against dynamic analysis. We validate the new framework by using it to encode known analyses and obfuscations, and we show how these obfuscations actually hinder the dynamic analysis process.
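
    As a rough illustration of the second scenario (a generic sketch, not the thesis' exact pipeline), compiled binaries can be turned into fixed-size grayscale images by interpreting each byte as a pixel intensity; the resulting arrays can then be fed to standard image classifiers such as a CNN. All names and sizes below are hypothetical.

        # Generic sketch: turn a compiled binary into a fixed-size grayscale "image"
        # for CNN-based malware classification. Not the thesis' exact pipeline.
        import numpy as np

        def binary_to_image(path: str, width: int = 256, height: int = 256) -> np.ndarray:
            """Read raw bytes, map each byte to a pixel in [0, 255], pad or truncate
            to width*height, and reshape into a 2-D array."""
            data = np.fromfile(path, dtype=np.uint8)
            size = width * height
            if data.size < size:
                data = np.pad(data, (0, size - data.size))  # zero-pad short binaries
            else:
                data = data[:size]                           # truncate long binaries
            return data.reshape(height, width)

        # Example usage (hypothetical path):
        # img = binary_to_image("sample.apk")
        # img.shape == (256, 256); batches of such arrays can be fed to a CNN.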

    Programmable stochastic processors

    As traditional approaches for reducing power in microprocessors are being exhausted, extreme power challenges call for unconventional approaches to power reduction. Recent research has shown substantial promise for application-specific stochastic computing, i.e., computing that exploits application error tolerance to enable careful relaxation of the correctness guarantees provided by hardware in order to reduce power. This dissertation explores the feasibility, challenges, and potential benefits of stochastic computing in the context of programmable general purpose processors. Specifically, the dissertation describes design-level techniques that minimize the power of a processor for a non-zero error rate or allow a processor to fail gracefully when operated over a range of non-zero error rates. It presents microarchitectural design principles that allow a processor to trade off reliability and energy more efficiently to minimize energy when exploiting error resilience. It demonstrates the benefit of compiler optimizations that adapt a binary to enable greater energy savings when operating at a non-zero error rate. It also demonstrates significant benefits for a programmable stochastic processor prototype that improves energy efficiency by carefully relaxing correctness and exposing errors in applications running on a commodity processor. This dissertation on programmable stochastic processors conclusively shows that the architecture and design of processors and applications should be approached differently in scenarios where errors are allowed to be exposed from the hardware to higher levels of the compute stack. Significant energy benefits are demonstrated for design-, architecture-, compiler-, and application-level optimizations for general purpose programmable stochastic processors.
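
    To make the energy/reliability tradeoff concrete, the sketch below uses a deliberately simplified and entirely hypothetical model (not one from the dissertation): dynamic energy scales roughly with V^2, the error rate grows as the supply voltage is scaled down, and each error incurs a fixed recovery overhead; the best operating point is the voltage that minimizes expected total energy.

        # Hypothetical toy model of voltage overscaling with error recovery, only to
        # illustrate the energy/reliability tradeoff; all constants are made up.
        import math

        def expected_energy(v: float, recovery_cost: float = 5.0) -> float:
            """Expected energy per operation at supply voltage v (normalized units):
            dynamic energy ~ v^2, plus recovery overhead weighted by the error rate."""
            e_dyn = v ** 2
            p_err = min(1.0, math.exp(-12.0 * (v - 0.6)))  # toy error-rate curve
            return e_dyn * (1.0 + recovery_cost * p_err)

        if __name__ == "__main__":
            voltages = [0.6 + 0.01 * i for i in range(41)]  # sweep 0.6 V .. 1.0 V
            best = min(voltages, key=expected_energy)
            print(f"toy optimum: {best:.2f} V, "
                  f"expected energy {expected_energy(best):.3f}")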

    Decrypting legal dilemmas

    It has become a truism that the speed of technological progress leaves law and policy scrambling to keep up. But in addition to creating new challenges, technological advances also enable new improvements to issues at the intersection of law and technology. In this thesis, I develop new cryptographic tools for informing and improving our law and policy, including specific technical innovations and analysis of the limits of possible interventions. First, I present a cryptographic analysis of a legal question concerning the limits of the Fifth Amendment: can courts legally compel people to decrypt their devices? Our cryptographic analysis is useful not only for answering this specific question about encrypted devices, but also for analyzing questions about the wider legal doctrine. The second part of this thesis turns to algorithmic fairness. With the rise of automated decision-making, greater attention has been paid to statistical notions of fairness and equity. In this part of the work, I demonstrate technical limits of those notions and examine a relaxation of them; these analyses should inform legal or policy interventions. Finally, the third section of this thesis describes several methods for improving zero-knowledge proofs of knowledge, which allow a prover to convince a verifier of some property without revealing anything beyond the fact of the prover's knowledge. The methods in this work yield a concrete proof size reduction for two plausibly post-quantum styles of proof with transparent setup that can be made non-interactive via the Fiat-Shamir transform: "MPC-in-the-head," which is a linear-size proof that is fast, low-memory, and relies on few assumptions, and "Ligero," a sublinear-size proof achieving a balance between proof size and prover runtime. We will describe areas where zero-knowledge proofs in general can provide new, currently untapped functionalities for resolving legal disputes, proving adherence to a policy, executing contracts, and enabling the sale of information without giving it away.
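
    Since the proof systems mentioned above are made non-interactive via the Fiat-Shamir transform, a minimal sketch of that transform may help: the verifier's random challenge is replaced by a hash of the statement and the prover's first message (the commitment), so the prover can derive the challenge itself. The sketch below is generic and uses hypothetical parameters; it is not tied to MPC-in-the-head or Ligero specifically.

        # Minimal, generic sketch of the Fiat-Shamir transform: derive the verifier's
        # challenge by hashing the statement and the prover's commitment.
        import hashlib

        def fiat_shamir_challenge(statement: bytes, commitment: bytes,
                                  challenge_space: int = 2**128) -> int:
            """Hash (statement, commitment) into an integer challenge. Each field is
            length-prefixed so distinct inputs cannot collide by concatenation."""
            h = hashlib.sha256()
            for field in (statement, commitment):
                h.update(len(field).to_bytes(8, "big"))
                h.update(field)
            return int.from_bytes(h.digest(), "big") % challenge_space

        # Usage: the prover computes the challenge locally instead of waiting for the
        # verifier, turning a public-coin interactive proof into a non-interactive one.
        # c = fiat_shamir_challenge(b"statement", b"first-round commitment")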

    Knowledge Representation in Engineering 4.0

    This dissertation was developed in the context of the BMBF- and EU/ECSEL-funded projects GENIAL! and Arrowhead Tools. In these projects, the chair examines methods of specification and cooperation in the automotive value chain, from OEM to Tier 1 and Tier 2. The goal of the projects is to improve communication and collaborative planning, especially in early development stages. Besides SysML, the use of agreed vocabularies and ontologies for modeling requirements, the overall context, variants, and many other items is targeted. This thesis proposes a web database in which data from the collaborative requirements elicitation is combined with an ontology-based approach that uses reasoning capabilities. For this purpose, state-of-the-art ontologies covering domains such as hardware/software, roadmapping, IoT, context, innovation, and others have been investigated and integrated. New ontologies have been designed, such as a HW/SW allocation ontology and a domain-specific "eFuse ontology", as well as some prototypes. The result is a modular ontology suite and the GENIAL! Basic Ontology, which allows us to model automotive and microelectronic functions, components, properties, and the dependencies among these elements based on the ISO 26262 standard. Furthermore, context knowledge that influences design decisions, such as future trends in legislation, society, the environment, etc., is included. These knowledge bases are integrated into a novel tool that allows for collaborative innovation planning and requirements communication along the automotive value chain. To start off the work of the project, an architecture and a prototype tool were developed. Designing ontologies and knowing how to use them proved to be a non-trivial task, requiring a lot of context and background knowledge. Some of this background knowledge has been selected for presentation and was utilized either in designing models or for later immersion. Examples are basic foundations such as design guidelines for ontologies, ontology categories, and the continuum of language expressiveness, as well as advanced content such as multi-level theory, foundational ontologies, and reasoning. Finally, we demonstrate the overall framework and show the ontology with reasoning, the database, APPEL/SysMD (AGILA ProPErty and Dependency Description Language / System MarkDown), and the constraints of the hardware/software knowledge base. There, by way of example, we explore and solve roadmap constraints that are coupled with a car model through a constraint solver.
    (Translated from the German abstract:) This dissertation was developed in the context of the BMBF- and EU/ECSEL-funded projects GENIAL! and Arrowhead Tools. In these projects, the chair investigates methods for specification and cooperation in the automotive value chain, from OEM to Tier 1 and Tier 2. The goal of the work is to improve communication and joint planning, especially in the early development phases. Besides SysML, the use of agreed vocabularies and ontologies for modeling requirements, the overall context, variants, and many other elements is targeted. Ontologies are one way to help avoid misunderstandings and planning errors. This approach proposes a web database, with ontologies supporting the sharing of knowledge and the logical inference of implicit knowledge and rules. This work describes ontologies for the domain of Engineering 4.0, or more specifically, for the domain required by the German project GENIAL!. This concerns domains such as hardware and software, roadmapping, context, innovation, IoT, and others. New ontologies were designed, for example the hardware/software allocation ontology and a domain-specific "eFuse ontology". The result was a modular ontology library with the GENIAL! Basic Ontology, which allows automotive and microelectronic components, functions, and properties, as well as their dependencies, to be modeled based on the ISO 26262 standard. Furthermore, context knowledge that influences design decisions is included. These knowledge bases are integrated into a novel tool that makes it possible to exchange roadmap knowledge and requirements along the automotive value chain. Designing ontologies and knowing how to use them was not a trivial task and required a great deal of background and context knowledge. Selected foundations for this are guidelines on how to design ontologies, ontology categories, and the spectrum of languages and forms of knowledge representation. Furthermore, advanced methods are explained, e.g. how to draw inferences with ontologies. Finally, the overall framework is demonstrated, showing the ontology with reasoning, the database, APPEL/SysMD (AGILA ProPErty and Dependency Description Language / System MarkDown), and the constraints of the hardware/software knowledge base. There, roadmap constraints are coupled with the car model by way of example and solved and explored using the constraint solver.
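
    As a purely hypothetical illustration of coupling a roadmap constraint with a car model through a constraint check (the dissertation uses APPEL/SysMD and an ontology-backed constraint solver, not this code), the sketch below checks a component property from a toy car model against a year-dependent roadmap limit. All component names and values are invented.

        # Purely hypothetical sketch: check a toy car model's component properties
        # against year-dependent roadmap constraints. The actual work uses APPEL/SysMD
        # and an ontology-backed constraint solver, not this code.

        # Roadmap: assumed maximum eFuse trigger current (A) available per year.
        ROADMAP_MAX_TRIGGER_CURRENT = {2024: 40.0, 2026: 60.0, 2028: 80.0}

        # Toy car model: components with the property the constraint refers to.
        CAR_MODEL = {
            "efuse_front_light": {"trigger_current_a": 55.0},
            "efuse_hvac":        {"trigger_current_a": 75.0},
        }

        def check_roadmap(year: int) -> list[str]:
            """Return the components whose required trigger current exceeds what the
            roadmap says is available in the given year."""
            limit = ROADMAP_MAX_TRIGGER_CURRENT[year]
            return [name for name, props in CAR_MODEL.items()
                    if props["trigger_current_a"] > limit]

        if __name__ == "__main__":
            for year in sorted(ROADMAP_MAX_TRIGGER_CURRENT):
                violations = check_roadmap(year)
                print(year, "violations:", violations or "none")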

    Designated Verifier/Prover and Preprocessing NIZKs from Diffie-Hellman Assumptions

    In a non-interactive zero-knowledge (NIZK) proof, a prover can non-interactively convince a verifier of a statement without revealing any additional information. Thus far, numerous constructions of NIZKs have been provided in the common reference string (CRS) model (CRS-NIZK) from various assumptions; however, it remains a long-standing open problem to construct them from tools such as pairing-free groups or lattices. Recently, Kim and Wu (CRYPTO'18) made great progress regarding this problem and constructed the first lattice-based NIZK in a relaxed model called NIZKs in the preprocessing model (PP-NIZKs). In this model, there is a trusted statement-independent preprocessing phase where secret information is generated for the prover and verifier. Depending on whether this secret information can be made public, PP-NIZK captures CRS-NIZK, designated-verifier NIZK (DV-NIZK), and designated-prover NIZK (DP-NIZK) as special cases. It was left as an open problem by Kim and Wu whether we can construct such NIZKs from weak pairing-free group assumptions such as DDH. Furthermore, all constructions of NIZKs from Diffie-Hellman (DH) type assumptions (regardless of whether they are over pairing-free or pairing groups) require the proof size to have a multiplicative overhead |C|·poly(κ), where |C| is the size of the circuit that computes the NP relation. In this work, we make progress on constructing (DV, DP, PP)-NIZKs with varying flavors from DH-type assumptions. Our results are summarized as follows:
    1. DV-NIZKs for NP from the CDH assumption over pairing-free groups. This is the first construction of such NIZKs on pairing-free groups and resolves the open problem posed by Kim and Wu (CRYPTO'18).
    2. DP-NIZKs for NP with short proof size from a DH-type assumption over pairing groups. Here, the proof size has an additive overhead |C|+poly(κ) rather than a multiplicative overhead |C|·poly(κ). This is the first construction of such NIZKs (including CRS-NIZKs) that does not rely on the LWE assumption, fully homomorphic encryption, indistinguishability obfuscation, or non-falsifiable assumptions.
    3. PP-NIZKs for NP with short proof size from the DDH assumption over pairing-free groups. This is the first PP-NIZK that achieves a short proof size from a weak and static DH-type assumption such as DDH. Similarly to the above DP-NIZK, the proof size is |C|+poly(κ). This too serves as a solution to the open problem posed by Kim and Wu (CRYPTO'18).
    Along the way, we construct two new homomorphic authentication (HomAuth) schemes, which may be of independent interest.
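
    To see why the additive overhead matters, a quick back-of-the-envelope comparison of |C| + poly(κ) versus |C| · poly(κ) proof sizes; the circuit size and the stand-in value for poly(κ) below are made-up illustrations, not figures from the paper.

        # Back-of-the-envelope comparison of additive vs. multiplicative proof-size
        # overhead; the circuit size and poly(kappa) value are made-up illustrations.
        CIRCUIT_SIZE = 1_000_000      # |C|: gates in the circuit for the NP relation
        POLY_KAPPA   = 10_000         # a stand-in value for poly(kappa)

        additive       = CIRCUIT_SIZE + POLY_KAPPA   # |C| + poly(kappa)
        multiplicative = CIRCUIT_SIZE * POLY_KAPPA   # |C| * poly(kappa)

        print(f"additive overhead:       {additive:,} units")        # 1,010,000
        print(f"multiplicative overhead: {multiplicative:,} units")  # 10,000,000,000
        print(f"ratio: {multiplicative / additive:,.0f}x")           # ~9,901x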