
    Predictable arguments of knowledge

    We initiate a formal investigation of the power of predictability for argument-of-knowledge systems for NP. Specifically, we consider private-coin argument systems where the prover's answer can be predicted given the private randomness of the verifier; we call such protocols Predictable Arguments of Knowledge (PAoK). Our study encompasses a full characterization of PAoK, showing that such arguments can be made extremely laconic, with the prover sending a single bit, and can be assumed, without loss of generality, to have only one round (i.e., two messages) of communication. We additionally explore PAoK satisfying further properties (including zero-knowledge and the possibility of reusing the same challenge across multiple executions with the prover), present several constructions of PAoK relying on different cryptographic tools, and discuss applications to cryptography.
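
    By way of illustration, here is a minimal Python sketch of the one-round message flow. The hash-based "prediction" is invented for illustration (a real PAoK derives the answer from the statement and a witness), so this shows only the shape of the interaction, not soundness:

```python
import hashlib
import os

def H(*parts: bytes) -> bytes:
    """Toy random oracle."""
    h = hashlib.sha256()
    for p in parts:
        h.update(p)
    return h.digest()

class Verifier:
    """One-round predictable argument: the verifier can compute the
    prover's answer in advance from its own private randomness."""
    def __init__(self, statement: bytes):
        self.statement = statement
        self.coins = os.urandom(16)  # private randomness
        self.challenge = H(b"chal", self.coins, statement)
        # Predicted answer: a single bit, as in the laconic PAoK.
        self.predicted_bit = H(b"ans", self.challenge, statement)[0] & 1

    def verify(self, answer_bit: int) -> bool:
        return answer_bit == self.predicted_bit

class Prover:
    """Honest prover. In a real PAoK the answer would require a witness;
    here (toy only) it is publicly computable from the challenge."""
    def __init__(self, statement: bytes):
        self.statement = statement

    def respond(self, challenge: bytes) -> int:
        return H(b"ans", challenge, self.statement)[0] & 1

stmt = b"x in L"
v = Verifier(stmt)
p = Prover(stmt)
assert v.verify(p.respond(v.challenge))  # completeness of the toy flow
```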

    A Virtual Conversational Agent for Teens with Autism: Experimental Results and Design Lessons

    We present the design of an online social skills development interface for teenagers with autism spectrum disorder (ASD). The interface is intended to enable private conversation practice anywhere, anytime, using a web browser. Users converse informally with a virtual agent, receiving real-time feedback on nonverbal cues as well as summary feedback. The prototype was developed in consultation with an expert UX designer, two psychologists, and a pediatrician. Using data from 47 individuals, feedback and dialogue generation were automated using a hidden Markov model and a schema-driven dialogue manager capable of handling multi-topic conversations. We conducted a study with nine high-functioning ASD teenagers. Through a thematic analysis of post-experiment interviews, we identified several key design considerations, notably: 1) users should be fully briefed at the outset about the purpose and limitations of the system, to avoid unrealistic expectations; 2) an interface should incorporate positive acknowledgment of behavior change; 3) a realistic appearance and responsiveness of the virtual agent are important for engaging users; and 4) conversation personalization, for instance prompting laconic users for more input and reciprocal questions, would help the teenagers engage for longer periods and increase the system's utility.
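
    To give a flavor of how an HMM can automate such feedback, the following sketch infers hidden "engagement" states from observed nonverbal cues via Viterbi decoding. The states, cues, and all probabilities are invented for illustration; this is not the paper's trained model:

```python
# Illustrative HMM over engagement states, observed through nonverbal cues.
STATES = ["engaged", "disengaged"]
START  = {"engaged": 0.6, "disengaged": 0.4}
TRANS  = {"engaged":    {"engaged": 0.8, "disengaged": 0.2},
          "disengaged": {"engaged": 0.3, "disengaged": 0.7}}
EMIT   = {"engaged":    {"eye_contact": 0.5, "nod": 0.3, "look_away": 0.2},
          "disengaged": {"eye_contact": 0.1, "nod": 0.2, "look_away": 0.7}}

def viterbi(observations):
    """Most likely hidden state sequence for a stream of nonverbal cues."""
    trellis = [{s: (START[s] * EMIT[s][observations[0]], [s]) for s in STATES}]
    for obs in observations[1:]:
        column = {}
        for s in STATES:
            prob, path = max(
                (trellis[-1][prev][0] * TRANS[prev][s] * EMIT[s][obs],
                 trellis[-1][prev][1] + [s])
                for prev in STATES)
            column[s] = (prob, path)
        trellis.append(column)
    return max(trellis[-1].values())[1]

cues = ["eye_contact", "nod", "look_away", "look_away"]
print(viterbi(cues))  # e.g. ['engaged', 'engaged', 'disengaged', 'disengaged']
```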

    Towards compact bandwidth and efficient privacy-preserving computation

    In traditional cryptographic applications, cryptographic mechanisms are employed to ensure the security and integrity of communication or storage; the primary threat is usually an external adversary trying to intercept or tamper with the communication between two parties. In privacy-preserving (or secure) computation, by contrast, cryptographic techniques are developed with a different goal in mind: to protect the privacy of the participants in a computation from each other. Specifically, privacy-preserving computation allows multiple parties to jointly compute a function without revealing their inputs. It has numerous applications in fields including finance, healthcare, and data analysis, enabling collaboration and data sharing without compromising the privacy of sensitive data, which is becoming increasingly important in today's digital age. While privacy-preserving computation has gained significant attention due to its strong security guarantees and numerous potential applications, its efficiency remains its Achilles' heel: privacy-preserving protocols incur significantly higher computational overhead and bandwidth than baseline (i.e., insecure) protocols. Finding ways to minimize this overhead, whether in computation or communication, asymptotically or concretely, while maintaining security in a reasonable manner therefore remains an exciting problem. This thesis is centred around enhancing efficiency and reducing the communication and computation costs of commonly used privacy-preserving primitives, including private set intersection, oblivious transfer, and stealth signatures. Our primary focus is on optimizing the performance of these primitives.
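
    As a taste of one such primitive, the classic Diffie-Hellman-style private set intersection blinds hashed set elements under each party's secret exponent; since exponentiation commutes, doubly blinded values of common elements collide. The following Python toy shows both parties' computations in one script; the group choice and parameters are far too weak for real use and purely illustrative:

```python
import hashlib
import secrets

# Toy DH-style PSI (semi-honest sketch). The Mersenne prime below gives a
# multiplicative group Z_p^*; real protocols use prime-order groups and far
# more care -- this only illustrates the commutative-blinding idea.
P = 2**521 - 1

def hash_to_group(item: str) -> int:
    d = hashlib.sha512(item.encode()).digest()
    return int.from_bytes(d, "big") % P or 1  # avoid the zero element

def blind(elems, key):
    return {pow(hash_to_group(e), key, P) for e in elems}

def reblind(points, key):
    return {pow(pt, key, P) for pt in points}

# Each party picks a secret exponent.
a = secrets.randbelow(P - 2) + 1
b = secrets.randbelow(P - 2) + 1

alice_set = {"alice@example.com", "carol@example.com", "dave@example.com"}
bob_set   = {"bob@example.com", "carol@example.com", "dave@example.com"}

# Round 1: each side sends its set, blinded under its own key.
alice_msg = blind(alice_set, a)
bob_msg   = blind(bob_set, b)

# Round 2: each side re-blinds the other's points. H(x)^(ab) == H(x)^(ba),
# so doubly blinded values of common elements collide.
double_from_bob   = reblind(bob_msg, a)
double_from_alice = reblind(alice_msg, b)

shared = double_from_alice & double_from_bob
print(f"intersection size: {len(shared)}")  # -> 2 (carol and dave)
```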

    Interaction-Preserving Compilers for Secure Computation

    In this work we consider the following question: what is the cost of security for multi-party protocols? Specifically, given an insecure protocol where parties exchange (in the worst case) L bits in N rounds, is it possible to design a secure protocol with communication complexity close to L and N rounds? We systematically study this problem in a variety of settings, and we propose solutions based on the intractability of different cryptographic problems. For the case of two parties, we design an interaction-preserving compiler where the number of bits exchanged in the secure protocol approaches L and the number of rounds is exactly N, assuming the hardness of standard problems over lattices. For the more general multi-party case, we obtain the same result assuming either (i) an additional round of interaction or (ii) the existence of extractable witness encryption and succinct non-interactive arguments of knowledge. As a contribution of independent interest, we construct the first multi-key fully homomorphic encryption scheme with message-to-ciphertext ratio (i.e., rate) of 1 - o(1), assuming the hardness of the learning with errors (LWE) problem. We view our work as support for the claim that, as far as interaction and communication are concerned, one does not need to pay a significant price for security in multi-party protocols.
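
    For reference, the rate of an encryption scheme is the standard message-to-ciphertext size ratio, so a rate of 1 - o(1) means the ciphertext carries essentially no overhead beyond the message itself:

```latex
\text{rate} \;=\; \frac{|m|}{|\mathsf{ct}|} \;=\; 1 - o(1)
\quad\Longleftrightarrow\quad
|\mathsf{ct}| \;=\; \bigl(1 + o(1)\bigr)\,|m|.
```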

    Laconic Function Evaluation for Turing Machines

    Laconic function evaluation (LFE) allows Alice to compress a large circuit $C$ into a small digest $d$. Given Alice's digest, Bob can encrypt some input $x$ under $d$ in a way that enables Alice to recover $C(x)$, without learning anything beyond that. The scheme is said to be laconic if the size of $d$, the runtime of the encryption algorithm, and the size of the ciphertext are all sublinear in the size of $C$. Until now, all known LFE constructions have ciphertexts whose size depends on the depth of the circuit $C$, akin to the limitation of levelled homomorphic encryption. In this work we close this gap and present the first LFE scheme (for Turing machines) with asymptotically optimal parameters. Our scheme assumes the existence of indistinguishability obfuscation and somewhere statistically binding hash functions. As further contributions, we show how our scheme enables a wide range of new applications, including two previously unknown constructions: ‱ non-interactive zero-knowledge (NIZK) proofs with optimal prover complexity; ‱ witness encryption and attribute-based encryption (ABE) for Turing machines from falsifiable assumptions.
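
    To fix ideas, the following toy pins down the LFE syntax and its correctness condition, Dec(C, Enc(Compress(C), x)) = C(x). It provides no hiding and no laconic encryption time; every name in it is illustrative rather than the paper's construction:

```python
import hashlib
import json
from typing import Callable, List

# Interface sketch for laconic function evaluation (LFE). This toy is NOT
# secure: the ciphertext hides nothing. It only pins down the syntax and
# the correctness condition Dec(C, Enc(Compress(C), x)) = C(x).

def compress(circuit_desc: str) -> str:
    """Alice: hash her (large) circuit description into a small digest."""
    return hashlib.sha256(circuit_desc.encode()).hexdigest()

def encrypt(digest: str, x: List[int]) -> str:
    """Bob: 'encrypt' x under the digest (toy: x is sent in the clear)."""
    return json.dumps({"digest": digest, "x": x})

def decrypt(circuit_desc: str, circuit: Callable[[List[int]], int], ct: str) -> int:
    """Alice: recover C(x); in a real LFE she learns nothing beyond it."""
    payload = json.loads(ct)
    assert payload["digest"] == compress(circuit_desc)  # digest binds Bob to C
    return circuit(payload["x"])

# Example "circuit": majority of three bits.
maj = lambda bits: int(sum(bits) >= 2)
d = compress("MAJ3")
ct = encrypt(d, [1, 0, 1])
assert decrypt("MAJ3", maj, ct) == 1
```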

    Introducing an Evaluation Method for Taxonomies

    Background: Taxonomies are crucial for the development of a research field, as they play a major role in structuring a complex body of knowledge and help to classify processes, approaches, and solutions. While there is increasing interest in taxonomies in the software engineering (SE) research field, we observe that SE taxonomies are rarely evaluated. Aim: To raise awareness and provide operational guidance on how to evaluate a taxonomy, this paper presents a three-step evaluation method covering a taxonomy's structure, applicability, and purpose. Method: To show the feasibility and applicability of our approach, we provide a running example and additionally illustrate our approach on a practical case study in SE research. Results and Conclusion: Our method, with its operational guidance, enables SE researchers to systematically evaluate and improve the quality of their taxonomies, and supports reviewers in systematically assessing a taxonomy's quality.

    Streaming Verification for Graph Problems: Optimal Tradeoffs and Nonlinear Sketches

    We study graph computations in an enhanced data streaming setting, where a space-bounded client reading the edge stream of a massive graph may delegate some of its work to a cloud service. We seek algorithms that allow the client to verify a purported proof sent by the cloud service that the work done in the cloud is correct. A line of work starting with Chakrabarti et al. (ICALP 2009) has provided such algorithms, which we call schemes, for several statistical and graph-theoretic problems, many of which exhibit a tradeoff between the length of the proof and the space used by the streaming verifier. This work designs new schemes for a number of basic graph problems, including triangle counting, maximum matching, topological sorting, and single-source shortest paths, where past work had either failed to obtain smooth tradeoffs between these two key complexity measures or only obtained suboptimal tradeoffs. Our key innovation is having the verifier compute certain nonlinear sketches of the input stream, leading to either new or improved tradeoffs. In many cases, our schemes in fact provide optimal tradeoffs up to logarithmic factors. Specifically, for most graph problems that we study, it is known that the product of the verifier's space cost $v$ and the proof length $h$ must be at least $\Omega(n^2)$ for $n$-vertex graphs. However, matching upper bounds are only known for a handful of settings of $h$ and $v$ on the curve $h \cdot v = \tilde{\Theta}(n^2)$. For example, for counting triangles and maximum matching, schemes with costs lying on this curve are only known for $(h = \tilde{O}(n^2), v = \tilde{O}(1))$, $(h = \tilde{O}(n), v = \tilde{O}(n))$, and the trivial $(h = \tilde{O}(1), v = \tilde{O}(n^2))$. A major message of this work is that, by exploiting nonlinear sketches, a significant "portion" of costs on the tradeoff curve $h \cdot v = n^2$ can be achieved.
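
    Many such schemes rest on a simple algebraic tool: a multiset fingerprint that lets the verifier check, keeping only a single field element of state, that the multiset referenced in the prover's annotations equals the one observed on the stream. A minimal sketch, with illustrative parameters:

```python
import random

# Multiset fingerprint over F_p: fp(S) = prod_{x in S} (r - x) mod p, for a
# random r kept secret from the prover. Two distinct multisets of size s
# collide with probability at most s/p. Parameters are illustrative.
P = (1 << 61) - 1          # Mersenne prime, a common modulus choice
r = random.randrange(P)    # verifier's secret evaluation point

def fingerprint(stream) -> int:
    fp = 1
    for x in stream:
        fp = fp * ((r - x) % P) % P
    return fp

# The verifier fingerprints the raw update stream online...
stream = [3, 1, 4, 1, 5, 9, 2, 6]
fp_stream = fingerprint(stream)

# ...and later fingerprints the (sorted, annotated) copy in the proof.
proof_claim = sorted(stream)          # honest prover: same multiset
assert fingerprint(proof_claim) == fp_stream

tampered = proof_claim[:-1] + [7]     # cheating prover is caught w.h.p.
assert fingerprint(tampered) != fp_stream
```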

    Algorithms for Core Computation in Data Exchange

    We describe the state of the art in core computation for data exchange. Two main approaches are considered: post-processing core computation, applied to a canonical universal solution constructed by chasing a given schema mapping; and direct core computation, where the mapping is first rewritten so that chasing it creates core universal solutions.
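
    For intuition, the chase repeatedly fires tuple-generating dependencies, inventing labeled nulls for existentially quantified values. The relation names and the single dependency in this sketch are invented for illustration:

```python
import itertools

# One chase step for the TGD:  Emp(x, d) -> exists m. Mgr(d, m)
# Labeled nulls stand in for the existentially quantified manager.
_null_ids = itertools.count()

def fresh_null() -> str:
    return f"N{next(_null_ids)}"

def chase_step(emp, mgr):
    """Fire the TGD for every department lacking a manager fact."""
    new_mgr = set(mgr)
    for (_, dept) in emp:
        if not any(d == dept for (d, _) in new_mgr):
            new_mgr.add((dept, fresh_null()))  # invent a labeled null
    return new_mgr

emp = {("ann", "sales"), ("bob", "it")}
mgr = {("sales", "carol")}                     # 'it' has no manager yet
print(chase_step(emp, mgr))
# -> {('sales', 'carol'), ('it', 'N0')}: a canonical universal solution may
#    contain such nulls; core computation then minimizes them away.
```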

    Towards Understanding Reasoning Complexity in Practice

    Although the computational complexity of the logic underlying the standard Web Ontology Language OWL 2 appears discouraging for real applications, several contributions have shown that reasoning with OWL ontologies is feasible in practice. It turns out that reasoning in practice is often far less complex than the established theoretical complexity bound suggests, since that bound reflects the worst-case scenario. State-of-the-art reasoners like FaCT++, HermiT, Pellet, and RACER have demonstrated that acceptable performance can be achieved even with fairly expressive fragments of OWL 2. However, it is still not well understood why reasoning is feasible in practice, and it is rather unclear how to study this problem. In this paper, we suggest first steps that, in our opinion, could lead to a better understanding of practical complexity. We also provide and discuss some initial empirical results with HermiT on prominent ontologies.
    • 

    corecore