    New Revocable IBE in Prime-Order Groups: Adaptively Secure, Decryption Key Exposure Resistant, and with Short Public Parameters

    Revoking corrupted users is a desirable functionality for cryptosystems. Since Boldyreva, Goyal, and Kumar (ACM CCS 2008) proposed a notable scalable revocation method for identity-based encryption (IBE), several works have improved either the security or the efficiency of revocable IBE (RIBE). Currently, all existing scalable RIBE schemes that achieve adaptive security with decryption key exposure resistance (DKER) fall into two groups: those with long public parameters and those over composite-order bilinear groups. From both practical and theoretical points of view, it would be interesting to construct an adaptively secure RIBE scheme with DKER and short public parameters in prime-order bilinear groups. In this paper, we address this goal by using Seo and Emura's technique (PKC 2013), which transforms the Waters IBE into a corresponding RIBE scheme. First, we identify the requirements that the input IBE scheme of their transformation must satisfy. Next, we propose a new IBE scheme with several desirable properties: it satisfies all the requirements of the Seo-Emura technique, has constant-size public parameters, and uses prime-order bilinear groups. Finally, by applying the Seo-Emura technique, we obtain the first adaptively secure RIBE scheme with DKER and constant-size public parameters in prime-order bilinear groups. We also discuss some extensions of the proposed RIBE scheme.
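
    The scalable revocation approach of Boldyreva, Goyal, and Kumar, which the Seo-Emura transformation also builds on, places users at the leaves of a binary tree and publishes key updates only for a set of nodes covering the non-revoked users. As a minimal illustration of that bookkeeping (and only that: the pairing-based key material of the RIBE scheme itself is not modelled, and the helper name below is hypothetical), the following Python sketch computes a complete-subtree cover for a given revocation list.

def cover_nodes(depth: int, revoked_leaves: set[int]) -> set[int]:
    """Complete-subtree cover: nodes whose subtrees contain no revoked leaf,
    but whose parents' subtrees do.  Nodes use heap indexing (root = 1,
    children of v are 2v and 2v+1); leaves live at 2**depth .. 2**(depth+1)-1."""
    root = 1
    if not revoked_leaves:
        return {root}                      # nobody revoked: one update at the root
    # Mark every ancestor of a revoked leaf (the "revoked paths" set).
    marked = set()
    for leaf in revoked_leaves:
        v = leaf
        while v >= root:
            marked.add(v)
            v //= 2
    # Cover: unmarked children of marked nodes; each covers only non-revoked leaves.
    cover = set()
    for v in marked:
        for child in (2 * v, 2 * v + 1):
            if child < 2 ** (depth + 1) and child not in marked:
                cover.add(child)
    return cover

# Example: depth-3 tree (8 users at leaves 8..15); revoke the users at leaves 9 and 12.
print(sorted(cover_nodes(3, {9, 12})))     # [5, 7, 8, 13]

    Each node in the returned cover would receive one key-update component, so the update size grows roughly with the number of revoked users rather than linearly in the total number of users.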

    Secure identity management in structured peer-to-peer (P2P) networks

    Structured Peer-to-Peer (P2P) networks were proposed to solve the routing problems of large distributed infrastructures, but the research community has been questioning their security for years. Most prior work on security services has focused on secure routing, reputation systems, anonymity, etc. However, the proper management of identities is an important prerequisite for providing most of these security services. The existence of anonymous nodes and the lack of a centralized authority capable of monitoring (and/or punishing) nodes make these systems more vulnerable to selfish or malicious behavior. Moreover, such misbehavior cannot be addressed with data confidentiality, node authentication, non-repudiation, and similar mechanisms alone. In particular, structured P2P networks should satisfy three secure-routing primitives: (1) secure maintenance of routing tables, (2) secure routing of messages, and (3) secure identity assignment to nodes. The first two depend to some degree on the third: if node identifiers can be chosen by users without any control, these networks are exposed to security and operational problems. Therefore, like any other network or service, structured P2P networks require robust access control to prevent potential attackers from joining the network, and a robust identity assignment system to guarantee their proper operation. In this thesis, we first analyze how current structured P2P networks manage identities, identify the security problems related to node identifiers within the overlay, and propose a series of requirements that any generated node ID should fulfil to make a DHT-based structured P2P network more secure. Second, we propose the use of implicit certificates to provide more security and to exploit the improvements in bandwidth, storage, and performance that these certificates offer compared to explicit certificates, and we design three protocols for assigning node identifiers that avoid the identified problems while maintaining user anonymity and allowing user traceability. Finally, we analyze the mechanisms most widely used to distribute revocation data on the Internet, with special focus on systems proposed for P2P networks, and design a new mechanism to distribute revocation data more efficiently in a structured P2P network.
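
    One requirement identified above is that node identifiers must not be freely chosen by the joining user. A common way to enforce this in DHT-based overlays is to derive the identifier from key material vouched for at admission time, so that a node cannot pick its own position in the identifier space. The sketch below is a minimal illustration of that derivation (Python standard library; the 160-bit identifier space is an assumption matching Chord/Kademlia-style DHTs, and the implicit-certificate protocols of the thesis are not modelled).

import hashlib

def node_id(certified_public_key: bytes) -> int:
    """Derive a 160-bit overlay identifier from (certified) public-key bytes.

    Because the ID is the hash of key material vouched for at admission time,
    a node cannot choose its own position in the ring, which blocks simple
    ID-targeting attacks against specific keys or neighbourhoods."""
    return int.from_bytes(hashlib.sha1(certified_public_key).digest(), "big")

# Placeholder key bytes; in the thesis' setting this would be the public key
# bound to an implicit certificate issued by the admission authority.
print(hex(node_id(b"example certified public key bytes")))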

    Group-Based Authentication Mechanisms for Vehicular Ad-Hoc Networks

    Vehicular ad hoc networks (VANETs) provide opportunities to exchange traffic information among vehicles, allowing drivers not only to adjust their routes but also to prevent possible collisions. Due to the criticality of the exchanged information, message authentication that does not expose the privacy of vehicles is required. The majority of current authentication schemes for VANETs depend primarily on public-key cryptography, which brings extra overhead in terms of delay and requires infrastructure support for certificate verification. Symmetric-key techniques can be more efficient, but they introduce significant key-maintenance overhead. Herein, by considering the natural group behavior of vehicle communications, we propose an efficient and lightweight symmetric-key authentication scheme for VANETs based on group communication. To expand the protocol's flexibility, we also propose an extension that integrates certain benefits of asymmetric-key techniques. We analyze the security properties of the proposed schemes to show their applicability when there is little to no infrastructure support. In addition, the proposed protocol was implemented and tested with real-world vehicle data; simulation results confirmed its efficiency in terms of delay with respect to other proposed techniques.
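
    As a rough illustration of the symmetric-key direction described above, the sketch below tags each traffic beacon with an HMAC under a shared group key, so group members can verify messages without per-message public-key operations. It is a minimal sketch under stated assumptions (Python standard library, placeholder message fields); group formation, key distribution, and the asymmetric extension mentioned in the abstract are out of scope.

import hashlib, hmac, json, os, time

GROUP_KEY = os.urandom(32)          # distributed out-of-band to group members

def tag_beacon(key: bytes, beacon: dict) -> bytes:
    """Authenticate a traffic beacon with a MAC under the shared group key."""
    payload = json.dumps(beacon, sort_keys=True).encode()
    return payload + b"|" + hmac.new(key, payload, hashlib.sha256).hexdigest().encode()

def verify_beacon(key: bytes, wire: bytes) -> dict | None:
    """Return the beacon if the MAC verifies, else None (message rejected)."""
    payload, _, tag = wire.rpartition(b"|")
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest().encode()
    return json.loads(payload) if hmac.compare_digest(tag, expected) else None

msg = tag_beacon(GROUP_KEY, {"vehicle": "anon-42", "speed_kmh": 87, "ts": time.time()})
print(verify_beacon(GROUP_KEY, msg))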

    Digital Rights Management for Personal Networks

    The thesis is concerned with Digital Rights Management (DRM), and in particular with DRM for networks of devices owned by a single individual. It focuses on the problem of preventing illegal copying of digital assets without jeopardising the right of legitimate licence holders to transfer content between their own devices, which collectively make up what we refer to as an authorised domain. An ideal list of DRM requirements is specified, which takes into account the points of view of users, content providers and copyright law. An approach is then developed for assessing DRM systems based on the defined DRM requirements; the most widely discussed DRM schemes are then analysed and assessed, with the main focus on schemes that address the concept of an authorised domain. Based on this analysis we isolate the issues underlying the content piracy problem, and then provide a generic framework for a DRM system addressing the identified content piracy issues. The defined generic framework has been designed to avoid the weaknesses found in other schemes. The main contributions of this thesis include developing four new approaches that can be used to implement the proposed generic framework for managing an authorised domain. The four novel solutions all involve secure means for creating, managing and using a secure domain, which consists of all devices owned by a single owner. The schemes allow secure content sharing between devices in a domain, and prevent the illegal copying of content to devices outside the domain. In addition, each solution incorporates a method for binding a domain to a single owner, ensuring that only a single consumer owns and manages a domain. This enables binding of content licences to a single owner, thereby limiting illicit content proliferation. In the first solution, domain owners are authenticated using two-factor authentication, which involves "something the domain owner has", i.e. a master control device that controls and manages consumers' domains and binds devices joining a domain to itself, and "something the domain owner is or knows", i.e. a biometric or password/PIN authentication mechanism that is implemented by the master control device. In the second solution, domain owners are authenticated using their payment cards, building on existing electronic payment systems by ensuring that the name and the date of birth of a domain creator are the same for all devices joining a domain. In addition, this solution helps to protect consumers' privacy; unlike in existing electronic payment systems, payment card details are not exposed to third parties. The third solution involves the use of a domain-specific mobile phone and the mobile phone network operator to authenticate a domain owner before devices can join a domain. The fourth solution involves the use of location-based services, ensuring that devices joining a consumer domain are located in physical proximity to the addresses registered for this domain. This restricts domain membership to devices in predefined geographical locations, helping to ensure that a single consumer owns and manages each domain.
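
    To make the idea of a single-owner authorised domain more concrete, the sketch below models a master-control-device-style registry that admits a device only after the owner authenticates and while the domain has room, and that honours licences only on member devices. This is an illustrative sketch only, with hypothetical class and method names and the owner check reduced to a PIN hash; it does not reproduce any of the four solutions summarised above.

import hashlib, hmac, secrets

class AuthorisedDomain:
    """Minimal model of a single-owner device domain (illustration only)."""

    def __init__(self, owner_pin: str, max_devices: int = 8):
        self._salt = secrets.token_bytes(16)
        self._pin_hash = hashlib.pbkdf2_hmac("sha256", owner_pin.encode(), self._salt, 100_000)
        self.max_devices = max_devices
        self.devices: set[str] = set()

    def _owner_ok(self, pin: str) -> bool:
        candidate = hashlib.pbkdf2_hmac("sha256", pin.encode(), self._salt, 100_000)
        return hmac.compare_digest(candidate, self._pin_hash)

    def join(self, device_id: str, pin: str) -> bool:
        """Admit a device only if the owner authenticates and the domain has room."""
        if not self._owner_ok(pin) or len(self.devices) >= self.max_devices:
            return False
        self.devices.add(device_id)
        return True

    def may_play(self, device_id: str) -> bool:
        """Licences are honoured only on devices inside the owner's domain."""
        return device_id in self.devices

domain = AuthorisedDomain(owner_pin="4711")
print(domain.join("living-room-tv", "4711"), domain.may_play("living-room-tv"))  # True True
print(domain.join("strangers-laptop", "0000"))                                   # False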

    Data security in cloud storage services

    Cloud computing is considered to be the next-generation architecture for ICT: it moves application software and databases to centralized large data centers. It aims to offer elastic IT services where clients can benefit from the significant cost savings of the pay-per-use model, can easily scale up or down, and do not have to make large investments in new hardware. However, the management of data and services in this cloud model is under the control of the provider. Consequently, cloud clients have less control over their outsourced data and have to trust the cloud service provider to protect their data and infrastructure from both external and internal attacks. This is especially true for cloud storage services. Nowadays, users rely on cloud storage as it offers cheap and unlimited data storage that is available to multiple devices (e.g. smartphones, tablets, notebooks, etc.). Besides famous cloud storage providers, such as Amazon, Google, and Microsoft, more and more third-party cloud storage service providers are emerging. These services are dedicated to offering more accessible and user-friendly storage to cloud customers; examples include Dropbox, Box.net, Sparkleshare, UbuntuOne or JungleDisk. These services deliver a very simple interface on top of the cloud storage provided by storage service providers. File and folder synchronization between different machines, sharing files and folders with other users, file versioning, and automated backups are their key functionalities. Cloud storage services have changed the way users manage and interact with data outsourced to public providers: multiple subscribers can collaboratively work on and share data without concerns about data consistency, availability and reliability. Although these cloud storage services offer attractive features, many customers have not adopted them, since data stored in these services is under the control of service providers, which raises confidentiality and security concerns and risks. Therefore, using cloud storage services for storing valuable data depends mainly on whether the service provider can offer sufficient security and assurance to meet client requirements. From the way most cloud storage services are constructed, we can see that they do not provide users with sufficient levels of security, leading to an inherent risk to users' data from external and internal attacks. These attacks take the form of data exposure (lack of data confidentiality), data tampering (lack of data integrity), and denial of data (lack of data availability) by third parties on the cloud or by the cloud provider itself. Therefore, cloud storage services should ensure data confidentiality both for data in motion (while transmitted over networks) and for data at rest (when stored on the provider's disks). To address these concerns, confidentiality and access controllability of outsourced data should be maintained with strong cryptographic guarantees. To ensure data confidentiality in public cloud storage services, data should be encrypted before it is outsourced to these services. Users can rely on client-side cloud storage services or software encryption tools for encrypting their data; however, many of these services fail to achieve data confidentiality.
Box, for example, does not encrypt user files via SSL and within Box servers. Client-side cloud storage services can intentionally or unintentionally disclose user decryption keys to their provider. In addition, some cloud storage services support convergent encryption for encrypting users' data, exposing it to "confirmation of a file" attacks. On the other hand, software encryption tools use full-disk encryption (FDE), which is not feasible for cloud-based file sharing services because it encrypts the data as virtual hard disks. Although encryption can ensure data confidentiality, it fails to achieve fine-grained access control over outsourced data. Since public cloud storage services are managed by untrusted cloud service providers, secure and efficient fine-grained access control cannot be realized through these services, as the access policies are managed by storage services that have full control over the sharing process. Therefore, there is no guarantee that they will provide adequate means for efficient and secure sharing, and they can also deduce confidential information about the outsourced data and users' personal information. In this work, we aim to improve the currently employed security measures for securing data in cloud storage services. To achieve better confidentiality for data stored in the cloud without relying on cloud service providers (CSPs) or putting any burden on users, in this thesis we design a secure cloud storage system framework that simultaneously achieves data confidentiality, fine-grained access control on encrypted data, and scalable user revocation. This framework is built on a trusted third party (TTP) service that can be employed either locally on users' machines or premises, or remotely on top of cloud storage services. This service encrypts users' data before uploading it to the cloud and decrypts it after downloading from the cloud; it therefore removes the burden of storing, managing and maintaining encryption/decryption keys from data owners. In addition, the service retains only users' secret key(s), not their data, and to ensure high security for these keys it stores them on a hardware device. Furthermore, the service combines multi-authority ciphertext-policy attribute-based encryption (CP-ABE) and attribute-based signature (ABS) to achieve many-read-many-write fine-grained data access control on storage services. Moreover, it efficiently revokes users' privileges without relying on the data owner to re-encrypt massive amounts of data and redistribute new keys to the authorized users; the heavy computation of re-encryption is removed from users and delegated to cloud service provider (CSP) proxy servers, which achieve flexible and efficient re-encryption without revealing the underlying data to the cloud. In the designed architecture, we address the problem of ensuring data confidentiality against the cloud and against accesses beyond authorized rights. To resolve these issues, we design a trusted third party (TTP) service that is in charge of storing data in encrypted form in the cloud. To improve the efficiency of the designed architecture, the service allows users to choose the severity level of the data, and according to this level different encryption algorithms are employed.
To achieve many-read-many-write fine-grained access control, we merge two algorithms: multi-authority ciphertext-policy attribute-based encryption (MA-CP-ABE) and attribute-based signature (ABS). Moreover, we support two levels of revocation, user revocation and attribute revocation, so that we can comply with the collaborative environment. Last but not least, we validate the effectiveness of our design by carrying out a detailed security analysis, which proves the correctness of our design in terms of data confidentiality at each stage of user interaction with the cloud.
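
    The core of the proposed framework is that data is encrypted on the TTP/client side before it ever reaches the storage provider, while the keys stay with the TTP service. The sketch below shows only that encrypt-before-upload step, using the Fernet recipe from the pyca/cryptography package (an assumption of this illustration); MA-CP-ABE, ABS, and proxy re-encryption have no off-the-shelf standard-library implementation and are deliberately omitted.

# pip install cryptography  -- this sketch assumes the pyca/cryptography package.
from cryptography.fernet import Fernet

class TTPEncryptionService:
    """Toy stand-in for the trusted third-party service: it keeps the keys,
    so the cloud only ever sees ciphertext.  (Access policies, MA-CP-ABE/ABS
    and proxy re-encryption from the thesis are deliberately omitted.)"""

    def __init__(self):
        self._keys: dict[str, bytes] = {}          # per-object data keys, never uploaded

    def protect(self, object_name: str, plaintext: bytes) -> bytes:
        key = Fernet.generate_key()
        self._keys[object_name] = key              # key stays with the TTP (ideally in hardware)
        return Fernet(key).encrypt(plaintext)      # ciphertext is what gets outsourced

    def recover(self, object_name: str, ciphertext: bytes) -> bytes:
        return Fernet(self._keys[object_name]).decrypt(ciphertext)

ttp = TTPEncryptionService()
blob = ttp.protect("report.pdf", b"quarterly figures")
# ... upload `blob` to the cloud storage service ...
print(ttp.recover("report.pdf", blob))             # b'quarterly figures'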

    Variants of Group Signatures and Their Applications

    Optimistic fair exchange

    A fair exchange guarantees that a participant only reveals its items (such as signatures, payments, or data) if it receives the expected items in exchange. Efficient fair exchange requires a so-called third party, which is assumed to be correct. Optimistic fair exchange involves this third party only if needed, i.e., if the participants cheat or disagree. In Part I, we prove lower bounds on the message and time complexity of two particular instances of fair exchange in varying models, namely contract signing (fair exchange of two signatures under a contract) and certified mail (fair exchange of data for a receipt). We show that all given bounds are tight by describing provably time- and message-optimal protocols for all considered models and instances. In Part II, we take a closer look at formalizing the security of fair exchange. We introduce a new formal notion of security (including secrecy) for reactive distributed systems. We illustrate this new formalism by a specification of certified mail as an alternative to the traditional specification given in Part I. In Part III, we describe protocols for generic and optimistic fair exchange of arbitrary items. These protocols are embedded into the SEMPER Fair Exchange Layer, which is a central part of the SEMPER Framework for Secure Electronic Commerce.
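
    To convey the "optimistic" structure described above (the third party acts only on dispute), the sketch below walks through a drastically simplified signature exchange: one party releases its signature on the contract and, if the counter-signature never arrives, asks the trusted third party to resolve. It uses Ed25519 signatures from the pyca/cryptography package purely for illustration; the time- and message-optimal protocols and the formal model of the thesis are not reproduced, and the "RESOLVED" token is a made-up placeholder for the TTP's affidavit.

# pip install cryptography  -- illustration only, not the thesis' protocols.
from cryptography.hazmat.primitives.asymmetric import ed25519

contract = b"Alice sells Bob one bicycle for 100 EUR"
alice_sk = ed25519.Ed25519PrivateKey.generate()
bob_sk = ed25519.Ed25519PrivateKey.generate()
ttp_sk = ed25519.Ed25519PrivateKey.generate()           # involved only on dispute

def ttp_resolve(contract: bytes, requester_sig: bytes,
                requester_pk: ed25519.Ed25519PublicKey) -> bytes:
    """Dispute handler: after checking that the requester really committed to
    the contract, the TTP issues a replacement token that makes it binding."""
    requester_pk.verify(requester_sig, contract)        # raises InvalidSignature if bogus
    return ttp_sk.sign(b"RESOLVED|" + contract)

# Optimistic path, step 1: Alice releases her signature on the contract ...
sig_alice = alice_sk.sign(contract)
# ... step 2: Bob should answer with his.  Simulate Bob going silent instead:
sig_bob = None

if sig_bob is not None:                                 # happy path: TTP never contacted
    bob_sk.public_key().verify(sig_bob, contract)
    print("exchange completed optimistically")
else:                                                   # dispute: Alice asks the TTP
    token = ttp_resolve(contract, sig_alice, alice_sk.public_key())
    ttp_sk.public_key().verify(token, b"RESOLVED|" + contract)
    print("exchange completed via TTP resolution")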

    Techniques for Application-Aware Suitability Analysis of Access Control Systems

    Access control, the process of selectively restricting access to a set of resources, is so fundamental to computer security that it has been called the field's traditional center of gravity. As such, a wide variety of systems have been proposed for representing, managing, and enforcing access control policies. Prior work on evaluating access control systems has primarily relied on relative expressiveness analysis, which proves that one system has greater capabilities than another. Although expressiveness is a meaningful basis for comparing access control systems, it does not consider the application in which the system will be deployed. Furthermore, expressiveness is not necessarily a useful way to rank systems; if two systems are expressive enough for a given application, little benefit is derived from choosing the one that has greater expressiveness. On the contrary, many of the concerns that arise when choosing an access control system can be negatively impacted by additional expressiveness: a system that is too complex is often harder to specify policies in, less efficient, or harder to reason about from the perspective of security guarantees. To address these shortcomings, we propose the access control suitability analysis problem, and present a series of techniques for solving it. Suitability analysis evaluates access control systems against the specific demands of the application within which they will be used, and considers a wide range of both expressiveness and ordered cost metrics. To conduct suitability analysis, we present a two-phase framework consisting of formal reductions for proving qualitative suitability and simulation techniques for evaluating quantitative suitability. In support of this framework we present a fine-grained lattice of reduction properties, as well as Portuno, a flexible simulation engine for conducting cost analysis of access control systems. We evaluate our framework formally, by proving that it satisfies a series of technical requirements, and practically, by presenting several case studies demonstrating its use in conducting analysis in realistic scenarios.

    Revocable Identity-Based Encryption with Rejoin Functionality

    No full text