
    Flow Ambiguity: A Path Towards Classically Driven Blind Quantum Computation

    Blind quantum computation protocols allow a user to delegate a computation to a remote quantum computer in such a way that the privacy of their computation is preserved, even from the device implementing the computation. To date, such protocols are only known for settings involving at least two quantum devices: either a user with some quantum capabilities and a remote quantum server or two or more entangled but noncommunicating servers. In this work, we take the first step towards the construction of a blind quantum computing protocol with a completely classical client and single quantum server. Specifically, we show how a classical client can exploit the ambiguity in the flow of information in measurement-based quantum computing to construct a protocol for hiding critical aspects of a computation delegated to a remote quantum computer. This ambiguity arises due to the fact that, for a fixed graph, there exist multiple choices of the input and output vertex sets that result in deterministic measurement patterns consistent with the same fixed total ordering of vertices. This allows a classical user, computing only measurement angles, to drive a measurement-based computation performed on a remote device while hiding critical aspects of the computation.
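To make the notion of flow ambiguity concrete, the hedged Python sketch below checks whether a candidate choice of input and output vertex sets on a fixed graph admits a causal flow consistent with a fixed total measurement ordering. The graph, the vertex labels and the `has_flow` helper are illustrative assumptions, not the construction from the paper.

```python
def has_flow(graph, inputs, outputs, order):
    """Check whether the open graph (graph, inputs, outputs) admits a causal
    flow f compatible with the given total ordering of the vertices.

    graph   : dict mapping each vertex to the set of its neighbours
    inputs  : set of input vertices; outputs : set of output vertices
    order   : dict mapping each vertex to its position in the fixed ordering
    """
    for i in graph:
        if i in outputs:
            continue                      # only measured vertices need an image f(i)
        # f(i) must be a non-input neighbour measured strictly after i ...
        candidates = [c for c in graph[i]
                      if c not in inputs and order[c] > order[i]]
        # ... whose own neighbours are all measured no earlier than i.
        if not any(all(order[k] >= order[i] for k in graph[c] if k != i)
                   for c in candidates):
            return False
    return True


# Toy line graph 0-1-2-3, measured in the fixed order 0, 1, 2, 3.
line = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
order = {v: v for v in line}

# Several distinct (inputs, outputs) choices are consistent with this single
# ordering, while others are not.
for ins, outs in [(set(), {3}), ({0}, {3}), ({0}, {2, 3}), ({0, 1}, {3})]:
    print(ins, outs, has_flow(line, ins, outs, order))
```

On this toy graph, several distinct (inputs, outputs) choices pass the check under the same ordering, which is the kind of ambiguity a classical client could exploit to hide which computation is being driven.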

    ARPA Whitepaper

    We propose a secure computation solution for blockchain networks. The correctness of computation is verifiable even under a malicious-majority condition using an information-theoretic Message Authentication Code (MAC), and privacy is preserved using secret sharing. With a state-of-the-art multiparty computation protocol and a layer-2 solution, our privacy-preserving computation guarantees data security on the blockchain, cryptographically, while offloading the heavy computation to a few nodes. This breakthrough has several implications for the future of decentralized networks. First, secure computation can be used to support Private Smart Contracts, where consensus is reached without exposing the information in the public contract. Second, it enables data to be shared and used in a trustless network without disclosing the raw data while it is in use, so that data ownership and data usage are safely separated. Last but not least, computation and verification are separated, which can be viewed as computational sharding; this effectively makes transaction-processing speed scale linearly with the number of participating nodes. Our objective is to deploy our secure computation network as a layer-2 solution to any blockchain system. Smart contracts will be used as a bridge to link the blockchain and computation networks, and as verifiers to ensure that outsourced computation is completed correctly. To achieve this, we first develop a general MPC network with advanced features: 1) secure computation, 2) off-chain computation, 3) verifiable computation, and 4) support for dApps' needs such as privacy-preserving data exchange.
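As a rough illustration of the building blocks mentioned above, the sketch below combines additive secret sharing over a prime field with an information-theoretic MAC in the SPDZ flavour. The field size, function names and the single-global-key simplification are assumptions for illustration only, not the ARPA protocol.

```python
import secrets

P = 2**61 - 1  # Mersenne prime; the field size is an illustrative choice

def share(x, n, key):
    """Additively share x among n parties, attaching to each party an additive
    share of the information-theoretic MAC key * x (SPDZ-style sketch)."""
    mac = (key * x) % P
    xs = [secrets.randbelow(P) for _ in range(n - 1)]
    xs.append((x - sum(xs)) % P)
    ms = [secrets.randbelow(P) for _ in range(n - 1)]
    ms.append((mac - sum(ms)) % P)
    return list(zip(xs, ms))          # one (value share, MAC share) per party

def reconstruct_and_check(shares, key):
    """Recombine the shares and verify the MAC relation mac == key * x."""
    x = sum(s for s, _ in shares) % P
    mac = sum(m for _, m in shares) % P
    if mac != (key * x) % P:
        raise ValueError("MAC check failed: a share was tampered with")
    return x

key = secrets.randbelow(P)   # global MAC key (itself secret-shared in a real protocol)
shares = share(42, 3, key)
assert reconstruct_and_check(shares, key) == 42

# Tampering with any single value share is caught (except with probability ~1/P).
bad = [(s + 1 if i == 0 else s, m) for i, (s, m) in enumerate(shares)]
try:
    reconstruct_and_check(bad, key)
except ValueError as err:
    print(err)
```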

    Verified Correctness and Security of mbedTLS HMAC-DRBG

    We have formalized the functional specification of HMAC-DRBG (NIST 800-90A), and we have proved its cryptographic security (that its output is pseudorandom) using a hybrid game-based proof. We have also proved that the mbedTLS implementation (C program) correctly implements this functional specification. That proof composes with an existing C compiler correctness proof to guarantee, end-to-end, that the machine-language program gives strong pseudorandomness. All proofs (hybrid games, C program verification, compiler, and their composition) are machine-checked in the Coq proof assistant. Our proofs are modular: the hybrid game proof holds for any implementation of HMAC-DRBG that satisfies our functional specification. Therefore, our functional specification can serve as a high-assurance reference.
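For readers unfamiliar with the algorithm being specified, here is a minimal, unverified Python sketch of the HMAC-DRBG update and generate loops from NIST 800-90A (SHA-256 instantiation, omitting reseed counters, prediction resistance and per-call additional input). It only illustrates the algorithm the paper's functional specification covers; it is unrelated to the verified mbedTLS C code or the Coq development.

```python
import hmac, hashlib

class HmacDrbg:
    """Minimal HMAC-DRBG (SHA-256) sketch after NIST 800-90A, without reseed
    counters, prediction resistance or additional input on generate."""

    def __init__(self, entropy: bytes, nonce: bytes = b"", personalization: bytes = b""):
        self.K = b"\x00" * 32
        self.V = b"\x01" * 32
        self._update(entropy + nonce + personalization)

    def _hmac(self, key: bytes, data: bytes) -> bytes:
        return hmac.new(key, data, hashlib.sha256).digest()

    def _update(self, provided: bytes = b"") -> None:
        self.K = self._hmac(self.K, self.V + b"\x00" + provided)
        self.V = self._hmac(self.K, self.V)
        if provided:
            self.K = self._hmac(self.K, self.V + b"\x01" + provided)
            self.V = self._hmac(self.K, self.V)

    def generate(self, n_bytes: int) -> bytes:
        out = b""
        while len(out) < n_bytes:
            self.V = self._hmac(self.K, self.V)
            out += self.V
        self._update()   # state update after every generate call, per the spec
        return out[:n_bytes]

drbg = HmacDrbg(entropy=b"\x00" * 32, nonce=b"\x01" * 16)
print(drbg.generate(32).hex())
```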

    A Tutorial on Clique Problems in Communications and Signal Processing

    Since its first use by Euler on the problem of the seven bridges of Königsberg, graph theory has shown excellent abilities in solving and unveiling the properties of multiple discrete optimization problems. The study of the structure of some integer programs reveals equivalence with graph theory problems, making a large body of the literature readily available for solving and characterizing the complexity of these problems. This tutorial presents a framework for utilizing a particular graph theory problem, known as the clique problem, for solving communications and signal processing problems. In particular, the paper aims to illustrate the structural properties of integer programs that can be formulated as clique problems through multiple examples in communications and signal processing. To that end, the first part of the tutorial provides various optimal and heuristic solutions for the maximum clique, maximum weight clique, and k-clique problems. The tutorial further illustrates the use of the clique formulation through numerous contemporary examples in communications and signal processing, mainly in maximum access for non-orthogonal multiple access networks, throughput maximization using index and instantly decodable network coding, collision-free radio frequency identification networks, and resource allocation in cloud-radio access networks. Finally, the tutorial sheds light on recent advances in such applications and provides technical insights on ways of dealing with mixed discrete-continuous optimization problems.
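As a minimal, self-contained illustration of the core object of the tutorial, the sketch below enumerates maximal cliques with the classic Bron–Kerbosch recursion (with pivoting) and returns a maximum clique. The toy conflict graph and function names are illustrative assumptions, not one of the optimal or heuristic algorithms surveyed in the paper.

```python
def bron_kerbosch(R, P, X, graph, cliques):
    """Enumerate maximal cliques of `graph` (dict vertex -> set of neighbours)."""
    if not P and not X:
        cliques.append(R)
        return
    pivot = max(P | X, key=lambda v: len(graph[v] & P))
    for v in list(P - graph[pivot]):
        bron_kerbosch(R | {v}, P & graph[v], X & graph[v], graph, cliques)
        P = P - {v}
        X = X | {v}

def maximum_clique(graph):
    cliques = []
    bron_kerbosch(set(), set(graph), set(), graph, cliques)
    return max(cliques, key=len)

# Toy conflict-style graph: vertices are users, an edge means the two users
# can be served together (e.g. in the same NOMA cluster).
g = {1: {2, 3, 4}, 2: {1, 3}, 3: {1, 2, 4}, 4: {1, 3, 5}, 5: {4}}
print(maximum_clique(g))   # a size-3 clique, e.g. {1, 2, 3} or {1, 3, 4}
```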

    Can biological quantum networks solve NP-hard problems?

    There is a widespread view that the human brain is so complex that it cannot be efficiently simulated by universal Turing machines. During the last decades, the question has therefore been raised whether we need to consider quantum effects to explain the imagined cognitive power of a conscious mind. This paper presents a personal view of several fields of philosophy and computational neurobiology in an attempt to suggest a realistic picture of how the brain might work as a basis for perception, consciousness and cognition. The purpose is to be able to identify and evaluate instances where quantum effects might play a significant role in cognitive processes. Not surprisingly, the conclusion is that quantum-enhanced cognition and intelligence are very unlikely to be found in biological brains. Quantum effects may certainly influence the functionality of various components and signalling pathways at the molecular level in the brain network, like ion ports, synapses, sensors, and enzymes. This might influence the functionality of some nodes and perhaps even the overall intelligence of the brain network, but hardly give it any dramatically enhanced functionality. So, the conclusion is that biological quantum networks can only approximately solve small instances of NP-hard problems. On the other hand, artificial intelligence and machine learning implemented in complex dynamical systems based on genuine quantum networks can certainly be expected to show enhanced performance and quantum advantage compared with classical networks. Nevertheless, even quantum networks can only be expected to efficiently solve NP-hard problems approximately. In the end it is a question of precision: Nature is approximate.

    An architecture for secure data management in medical research and aided diagnosis

    Programa Oficial de Doutoramento en Tecnoloxías da Información e as Comunicacións. 5032V01

    The General Data Protection Regulation (GDPR) was implemented on 25 May 2018 and is considered the most important development in data privacy regulation in the last 20 years. Heavy fines are defined for violating its rules, which is not something that healthcare centres can afford to ignore. The main goal of this thesis is to study and propose a secure/integration layer for healthcare data curators, in which connectivity between isolated systems (locations), unification of records in a patient-centric view, and data sharing with consent approval are the cornerstones of the proposed architecture. This proposal gives the data subject a central role, allowing them to control their identity, privacy profiles and access grants. It aims to minimize the fear of legal liability when sharing medical records by using anonymisation and by making patients responsible for securing their own medical records, while preserving the quality of the patient's treatment. Our main hypothesis is: are the Distributed Ledger and Self-Sovereign Identity concepts a natural symbiosis to solve the GDPR challenges in the context of healthcare? Solutions are required so that clinicians and researchers can maintain their collaborative workflows without compromising regulations. The proposed architecture accomplishes those objectives in a decentralized environment by adopting isolated data privacy profiles.
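A minimal sketch of the consent-gated access check such an architecture implies is given below, assuming a toy in-memory ledger of consent grants keyed by self-sovereign identifiers. All class names, fields and DIDs are hypothetical stand-ins for the thesis's distributed-ledger components, not its implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class ConsentGrant:
    subject_did: str            # self-sovereign identifier of the patient
    grantee_did: str            # identifier of the clinician or researcher
    privacy_profile: str        # e.g. "anonymised", "pseudonymised", "full"
    expires: datetime

@dataclass
class ConsentLedger:
    """Toy append-only record of consent grants (a stand-in for a distributed ledger)."""
    grants: list[ConsentGrant] = field(default_factory=list)

    def grant(self, g: ConsentGrant) -> None:
        self.grants.append(g)

    def may_access(self, subject: str, grantee: str, profile: str, now: datetime) -> bool:
        return any(g.subject_did == subject and g.grantee_did == grantee
                   and g.privacy_profile == profile and g.expires > now
                   for g in self.grants)

ledger = ConsentLedger()
ledger.grant(ConsentGrant("did:ex:patient1", "did:ex:doctor9",
                          "anonymised", datetime.now() + timedelta(days=30)))
print(ledger.may_access("did:ex:patient1", "did:ex:doctor9", "anonymised", datetime.now()))  # True
print(ledger.may_access("did:ex:patient1", "did:ex:doctor9", "full", datetime.now()))        # False
```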

    Protection of big data privacy

    In recent years, big data have become a hot research topic. The increasing amount of big data also increases the chance of breaching the privacy of individuals. Since big data require high computational power and large storage, distributed systems are used. As multiple parties are involved in these systems, the risk of privacy violation increases. A number of privacy-preserving mechanisms have been developed for privacy protection at different stages (e.g., data generation, data storage, and data processing) of the big data life cycle. The goal of this paper is to provide a comprehensive overview of privacy preservation mechanisms in big data and present the challenges for existing mechanisms. In particular, we illustrate the infrastructure of big data and the state-of-the-art privacy-preserving mechanisms in each stage of the big data life cycle. Furthermore, we discuss the challenges and future research directions related to privacy preservation in big data.
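As one concrete example of a privacy-preserving mechanism at the data-processing stage, the sketch below answers a counting query with Laplace noise, which gives epsilon-differential privacy for sensitivity-1 queries. The dataset, predicate and epsilon value are illustrative assumptions, not drawn from the paper.

```python
import random

def dp_count(records, predicate, epsilon):
    """Return a differentially private count of records matching `predicate`.

    A counting query has sensitivity 1, so adding Laplace(1/epsilon) noise
    yields epsilon-differential privacy.
    """
    true_count = sum(1 for r in records if predicate(r))
    # Difference of two iid Exponential(epsilon) draws is Laplace(0, 1/epsilon).
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

ages = [23, 35, 41, 52, 29, 61, 47]
print(dp_count(ages, lambda a: a >= 40, epsilon=0.5))
```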