99 research outputs found

    An Efficient and Secure Key Management Scheme for Hierarchical Access Control Based on ECC

    In a key management scheme for hierarchy-based access control, each security class with a higher clearance can derive the cryptographic secret keys of the security classes with lower clearances. In 2006, Jeng and Wang proposed an efficient scheme for access control in a user hierarchy based on an elliptic curve cryptosystem, providing an efficient key management solution for dynamic access control problems. However, in this paper we propose an attack on the Jeng-Wang scheme and show that it is insecure: in our attack, an adversary who is not a user in any security class of the hierarchy attempts to derive the secret key of a security class
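
The core idea, independent of the elliptic-curve machinery, is that key derivation only works downwards in the hierarchy. As a hedged illustration (a generic hash-based sketch for a tree-shaped hierarchy, not the Jeng-Wang scheme or the attack on it), with hypothetical class labels and keys:

# Illustrative sketch (not the Jeng-Wang ECC scheme): top-down key derivation
# for a tree-shaped hierarchy using HMAC-SHA256. A class holding its own key
# can derive the key of any descendant, but not of an ancestor or sibling.
import hmac
import hashlib

def derive_child_key(parent_key: bytes, child_label: str) -> bytes:
    """Derive a child class key from its parent's key and a public label."""
    return hmac.new(parent_key, child_label.encode(), hashlib.sha256).digest()

def derive_key(root_key: bytes, path: list[str]) -> bytes:
    """Walk down a path of public class labels, e.g. ["HR", "Payroll"]."""
    key = root_key
    for label in path:
        key = derive_child_key(key, label)
    return key

# Hypothetical hierarchy: the security officer holds the root key and hands
# each class only the key of its own node.
root = hashlib.sha256(b"top-level secret").digest()
hr_key = derive_key(root, ["HR"])
hr_payroll_key = derive_key(root, ["HR", "Payroll"])
assert derive_child_key(hr_key, "Payroll") == hr_payroll_key  # parent derives child

The one-wayness of HMAC is what blocks derivation in the upward direction; general partially ordered hierarchies additionally need public information relating incomparable classes.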

    Key Indistinguishability vs. Strong Key Indistinguishability for Hierarchical Key Assignment Schemes

    A hierarchical key assignment scheme is a method to assign some private information and encryption keys to a set of classes in a partially ordered hierarchy, in such a way that the private information of a higher class can be used to derive the keys of all classes lower down in the hierarchy. In this paper we analyze the security of hierarchical key assignment schemes according to different notions: security with respect to key indistinguishability and against key recovery, as well as the two recently proposed notions of security with respect to strong key indistinguishability and against strong key recovery. We first explore the relations between all security notions and, in particular, we prove that security with respect to strong key indistinguishability is not stronger than the one with respect to key indistinguishability. Afterwards, we propose a general construction yielding a hierarchical key assignment scheme offering security against strong key recovery, given any hierarchical key assignment scheme which guarantees security against key recovery

    Recent Application in Biometrics

    In recent years, a number of recognition and authentication systems based on biometric measurements have been proposed. Algorithms and sensors have been developed to acquire and process many different biometric traits. Moreover, biometric technology is being used in novel ways, with potential commercial and practical implications for our daily activities. The key objective of the book is to provide a collection of comprehensive references on recent theoretical developments as well as novel applications in biometrics. The topics covered in this book reflect both aspects of this development well. They include biometric sample quality, privacy-preserving and cancellable biometrics, contactless biometrics, novel and unconventional biometrics, and the technical challenges of implementing the technology in portable devices. The book consists of 15 chapters. It is divided into four sections, namely biometric applications on mobile platforms, cancelable biometrics, biometric encryption, and other applications. The book was reviewed by the editors, Dr. Jucheng Yang and Dr. Norman Poh. We deeply appreciate the efforts of our guest editors: Dr. Girija Chetty, Dr. Loris Nanni, Dr. Jianjiang Feng, Dr. Dongsun Park and Dr. Sook Yoon, as well as a number of anonymous reviewers

    An architecture for secure data management in medical research and aided diagnosis

    Programa Oficial de Doutoramento en Tecnoloxías da Información e as Comunicacións. 5032V01
The General Data Protection Regulation (GDPR) was implemented on 25 May 2018 and is considered the most important development in data privacy regulation in the last 20 years. Heavy fines are defined for violating those rules, and this is not something that healthcare centers can afford to ignore.
The main goal of this thesis is to study and propose a secure/integration layer for healthcare data curators, where connectivity between isolated systems (locations), unification of records in a patient-centric view and data sharing with consent approval are the cornerstones of the proposed architecture. This proposal gives the data subject a central role, allowing them to control their identity, privacy profiles and access grants. It aims to minimize the fear of legal liability when sharing medical records by using anonymisation and by making patients responsible for securing their own medical records, while preserving the patient's quality of treatment. Our main hypothesis is: are the Distributed Ledger and Self-Sovereign Identity concepts a natural symbiosis to solve the GDPR challenges in the context of healthcare? Solutions are required so that clinicians and researchers can maintain their collaboration workflows without compromising regulations. The proposed architecture accomplishes those objectives in a decentralized environment by adopting isolated data privacy profiles
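
As a purely hypothetical illustration of the consent-approval cornerstone (field names and policy semantics below are assumptions for exposition, not the thesis's actual data model), a consent grant and its enforcement check might look like this:

# Hypothetical illustration of consent-gated access; all names are assumptions.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class ConsentGrant:
    patient_id: str        # pseudonymous identifier controlled by the data subject
    grantee_id: str        # clinician or researcher requesting access
    purpose: str           # e.g. "treatment" or "research"
    expires_at: datetime   # grants are time-limited and revocable

def access_allowed(grants: list[ConsentGrant], patient_id: str,
                   grantee_id: str, purpose: str) -> bool:
    """Return True only if a live, matching consent grant exists."""
    now = datetime.now(timezone.utc)
    return any(
        g.patient_id == patient_id
        and g.grantee_id == grantee_id
        and g.purpose == purpose
        and g.expires_at > now
        for g in grants
    )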

    Provably-Secure Time-Bound Hierarchical Key Assignment Schemes

    A time-bound hierarchical key assignment scheme is a method to assign time-dependent encryption keys to a set of classes in a partially ordered hierarchy, in such a way that each class can compute the keys of all classes lower down in the hierarchy, according to temporal constraints. In this paper we design and analyze time-bound hierarchical key assignment schemes that are provably secure and efficient. We consider both the unconditionally secure and the computationally secure settings and distinguish between two different goals: security with respect to key indistinguishability and against key recovery. We first present definitions of security with respect to both goals in the unconditionally secure setting and we show tight lower bounds on the size of the private information distributed to each class. Then, we consider the computational setting and we further distinguish security against static and adaptive adversarial behaviors. We explore the relations between all possible combinations of security goals and adversarial behaviors and, in particular, we prove that security against adaptive adversaries is (polynomially) equivalent to security against static adversaries. Afterwards, we prove that a recently proposed scheme is insecure against key recovery. Finally, we propose two different constructions for time-bound key assignment schemes. The first one is based on symmetric encryption schemes, whereas the second one makes use of bilinear maps. Both constructions support updates to the access hierarchy with local changes to the public information and without requiring any private information to be re-distributed. These appear to be the first constructions for time-bound hierarchical key assignment schemes which are simultaneously practical and provably-secure
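
To make the notion of time-dependent keys concrete, here is a folklore sketch (not either of the paper's two constructions) in which period keys come from a forward and a backward hash chain, so that a credential for a contiguous interval [t1, t2] yields exactly the keys of those periods; seeds and parameters are illustrative:

# Illustrative folklore construction (not the paper's schemes): time-period keys
# from two hash chains, mixed with a per-class key.
import hmac
import hashlib

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def chain(seed: bytes, steps: int) -> bytes:
    v = seed
    for _ in range(steps):
        v = H(v)
    return v

def period_key(fwd_seed: bytes, bwd_seed: bytes, t: int, T: int) -> bytes:
    """Key of time period t (0 <= t <= T), as computed by the authority."""
    f_t = chain(fwd_seed, t)        # forward chain: easy to go from t1 to any t >= t1
    b_t = chain(bwd_seed, T - t)    # backward chain: easy to go from t2 to any t <= t2
    return H(f_t + b_t)

def class_time_key(class_key: bytes, fwd_seed: bytes, bwd_seed: bytes,
                   t: int, T: int) -> bytes:
    """Time-bound key of one class: binds the class key to the period key."""
    return hmac.new(class_key, period_key(fwd_seed, bwd_seed, t, T),
                    hashlib.sha256).digest()

# A user cleared for [t1, t2] receives f_{t1} = chain(fwd_seed, t1) and
# b_{t2} = chain(bwd_seed, T - t2); from these it can rebuild period_key(t)
# only for t1 <= t <= t2, and nothing outside that interval.

A full time-bound hierarchical scheme would additionally derive the per-class key component along the hierarchy, e.g. with a keyed hash as sketched for the first entry above.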

    Lattice-based zero-knowledge proofs of knowledge

    The main goal of this dissertation is to develop new lattice-based cryptographic schemes. Most of the cryptographic protocols that each and every one of us use on a daily basis are only secure under the assumption that two mathematical problems, namely the discrete logarithm on elliptic curves and the factorization of products of two primes, are computationally hard. That is believed to be true for classical computers, but quantum computers would be able to solve these problems much more efficiently, undermining the foundations of many cryptographic constructions. This reveals the importance of post-quantum alternatives: cryptographic schemes whose security relies on different problems that are intractable for both classical and quantum computers. The most promising family of problems widely believed to be hard for quantum computers are lattice-based problems. We increase the supply of lattice-based tools by providing new Zero-Knowledge Proofs of Knowledge for the Ring Learning With Errors (RLWE) problem, perhaps the most popular lattice-based problem. Zero-knowledge proofs are protocols between a prover and a verifier in which the prover convinces the verifier of the validity of certain statements without revealing any additional relevant information. Our proofs extend the literature of Stern-based proofs, following the techniques presented by Jacques Stern in 1994. His original idea involved a code-based problem, but it has been repeatedly improved and generalized for use with lattices. We illustrate our proposal by defining a variant of the commitment scheme of Benhamouda et al. (ESORICS 2015), a cryptographic primitive that allows one to ensure that some message was already determined at some point without revealing it until a future time, and by proving in zero-knowledge the knowledge of a valid opening. Most importantly, we also show how to prove that the message committed in one commitment is a linear combination, with public coefficients, of the committed messages of two other commitments, again without revealing any further information about the messages. Finally, we also present a zero-knowledge proof analogous to the previous one but for multiplicative relations, something much more involved that allows us to prove any arithmetic circuit. We first give an interactive version of these proofs and then show how to construct a non-interactive one. We diligently prove that both the commitment and the companion Zero-Knowledge Proofs of Knowledge are secure under the assumption that the underlying lattice problems are hard. Furthermore, we specifically develop these proofs so that the arising conditions can be directly used to compute parameters that satisfy them. This way we provide a general method to instantiate our commitment and proofs with any desired security level. Thanks to this practical approach we have been able to implement all the proposed schemes and benchmark the prototype implementation with actually secure parameters, which allows us to obtain meaningful results and compare its performance with the existing alternatives.
Moreover, given that multiplication of polynomials in the quotient ring ℤₚ[x]/⟨xⁿ + 1⟩, with p prime and n a power of two, is the most basic operation when working with ideal lattices, we comprehensively study the necessary and sufficient conditions for applying (a generalized version of) the Fast Fourier Transform (FFT) to obtain an efficient multiplication algorithm in quotient rings such as ℤₘ[x]/⟨xⁿ − a⟩ (where we consider any positive integer m and generalize the quotient), as we think this is of independent interest. We believe such a theoretical analysis is fundamental for determining when a given generalization can also be applied to design an efficient multiplication algorithm when the FFT is not defined for the ring under consideration. That is the case for the rings used in the commitment and proofs described above, where only a partial FFT is available.
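
Since multiplication in ℤₚ[x]/⟨xⁿ + 1⟩ is the basic operation referred to above, a minimal reference implementation helps fix ideas. The sketch below is plain schoolbook negacyclic multiplication (quadratic time); the point of the FFT/NTT analysis is precisely to identify when it can be replaced by a quasi-linear algorithm. Parameters are illustrative, not the thesis's:

# Schoolbook multiplication in Z_p[x]/<x^n + 1> (negacyclic convolution).
# Quadratic-time reference; an NTT/FFT-based algorithm, when the ring admits
# one, computes the same product in O(n log n).
def negacyclic_mul(a: list[int], b: list[int], p: int) -> list[int]:
    n = len(a)
    assert len(b) == n
    res = [0] * n
    for i in range(n):
        for j in range(n):
            k = i + j
            if k < n:
                res[k] = (res[k] + a[i] * b[j]) % p
            else:
                # x^n = -1 in this ring, so the wrapped term enters negated
                res[k - n] = (res[k - n] - a[i] * b[j]) % p
    return res

# Illustrative toy parameters: n = 4, p = 17
print(negacyclic_mul([1, 2, 0, 0], [0, 1, 0, 0], 17))  # (1 + 2x) * x = x + 2x^2
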
DOCTORAT EN MATEMÀTICA APLICADA (Pla 2012

    An efficient framework for privacy-preserving computations on encrypted IoT data

    There are two fundamental expectations from Cloud-IoT applications using sensitive and personal data: data utility and user privacy. Given the complex nature of the Cloud-IoT ecosystem, there is a growing concern that data utility comes at the cost of privacy. While current state-of-the-art encryption schemes protect users’ privacy, they preclude meaningful computations on encrypted data. Thus, the question remains: how can IoT device users benefit from cloud computing without compromising data confidentiality and user privacy? Cloud service providers (CSPs) can leverage fully homomorphic encryption (FHE) schemes to deliver privacy-preserving services. However, there are limitations in directly adopting FHE-based solutions for real-world Cloud-IoT applications. Thus, to foster real-world adoption of FHE-based solutions, we propose a framework called Proxy re-ciphering as a service. It leverages existing schemes such as distributed proxy servers, threshold secret sharing, chameleon hash functions and FHE to tailor a practical solution that enables long-term privacy-preserving cloud computations for the IoT ecosystem. We also encourage CSPs to store minimal yet adequate information from processing the raw IoT device data. Furthermore, we explore a way for IoT devices to refresh their device keys after a key compromise. To evaluate the framework, we first develop a testbed and measure the latencies with real-world ECG records from the TELE ECG Database. We observe that i) although the distributed framework introduces computation and communication latencies, the security gains outweigh the latencies, ii) the throughput of the servers providing the re-ciphering service can be greatly increased with pre-processing, and iii) with a key refresh scheme we can limit the upper bound on the attack window after a key compromise. Finally, we analyze the security properties against major threats faced by the Cloud-IoT ecosystem. We infer that Proxy re-ciphering as a service is a practical, secure, scalable and easy-to-adopt framework for long-term privacy-preserving cloud computations on encrypted IoT data
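
The distributed proxy servers rely on threshold secret sharing so that no single server holds a complete re-ciphering key. Below is a minimal sketch of Shamir's (t, n) secret sharing over a prime field, included only to illustrate the threshold idea; it is not the paper's protocol, and the field size and parameters are placeholders:

# Minimal Shamir (t, n) secret sharing over a prime field, for illustration only.
# Shares of a re-ciphering key could be spread across proxy servers so that any
# t of them can reconstruct it while fewer learn nothing.
import secrets

PRIME = 2**127 - 1  # Mersenne prime, large enough for a 16-byte secret (placeholder)

def split(secret: int, t: int, n: int) -> list[tuple[int, int]]:
    """Create n shares; any t of them reconstruct the secret."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(t - 1)]
    def f(x: int) -> int:
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x = 0 using exactly t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

key = secrets.randbelow(PRIME)
shares = split(key, t=3, n=5)
assert reconstruct(shares[:3]) == key  # any 3 of the 5 proxies suffice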

    Self-Protecting Access Control: On Mitigating Privacy Violations with Fault Tolerance

    Self-protecting access control mechanisms can be described as an approach to enforcing security in a manner that automatically protects against violations of access control rules. In this chapter, we present a comparative analysis of standard Cryptographic Access Control (CAC) schemes in relation to privacy enforcement on the Web. We postulate that, to mitigate privacy violations, self-protecting CAC mechanisms need to be supported by fault tolerance. As an example of how one might do this, we present two solutions that are inspired by the autonomic computing paradigm. Our solutions are centered on how CAC schemes can be extended to protect against privacy violations that might arise from key updates and collusion attacks
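
As background on what a CAC scheme looks like (and not as one of the two solutions presented in the chapter), the classic Akl-Taylor key assignment scheme can be sketched with toy, insecure parameters; the class names and values below are purely illustrative:

# Toy Akl-Taylor hierarchical key assignment (illustrative, insecure parameters).
# Each class C_i gets a public exponent t_i = product of the primes of all
# classes that C_i does NOT dominate, and the secret key K_i = K0^{t_i} mod M.
# If C_j is below C_i, then t_i divides t_j, so C_i can derive K_j from K_i.
from math import prod

M = 3233           # toy RSA-like modulus (61 * 53); far too small for real use
K0 = 7             # authority's master secret
primes = {"root": 2, "HR": 3, "Payroll": 5, "IT": 7}
# dominates[c] = set of classes whose data c may access (c itself included)
dominates = {
    "root":    {"root", "HR", "Payroll", "IT"},
    "HR":      {"HR", "Payroll"},
    "Payroll": {"Payroll"},
    "IT":      {"IT"},
}

def public_exponent(c: str) -> int:
    # product of the primes of every class that c does not dominate
    return prod(primes[d] for d in primes if d not in dominates[c])

def class_key(c: str) -> int:
    return pow(K0, public_exponent(c), M)

def derive(from_class: str, to_class: str) -> int:
    """A higher class derives a lower class's key using only public exponents."""
    assert to_class in dominates[from_class]
    exponent = public_exponent(to_class) // public_exponent(from_class)
    return pow(class_key(from_class), exponent, M)

assert derive("HR", "Payroll") == class_key("Payroll")
assert derive("root", "IT") == class_key("IT")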