11 research outputs found

    Security, Trust and Privacy in Cyber (STPCyber): Future trends and challenges

    © 2020 Today's world is one of massively interconnected devices that share information across a variety of platforms: traditional computers (machines), smart IoT devices in smart homes and interconnected vehicles, and social network apps such as Facebook, LinkedIn, and Twitter. This growth has been skyrocketing, and the trend will continue into the future. On the one hand, life becomes easier with such developments; on the other, we face ever more cyber threats to our privacy, our security, and our trust in the organizations holding our data. In this special issue, we summarize contributions by authors on advanced topics related to security, trust, and privacy across a range of applications, and present a selection of the most recent research efforts in these areas.

    Private Eyes: Zero-Leakage Iris Searchable Encryption

    Biometric databases are being deployed with few cryptographic protections. Because of the nature of biometrics, privacy breaches affect users for their entire life. This work introduces Private Eyes, the first zero-leakage biometric database. The only leakage of the system is unavoidable: 1) the logarithm of the dataset size and 2) the fact that a query occurred. Private Eyes is built from symmetric searchable encryption. Proximity queries are the required functionality: given a noisy reading of a biometric, the goal is to retrieve all stored records that are close enough according to a distance metric. Private Eyes combines locality-sensitive hashes (LSHs) (Indyk and Motwani, STOC 1998) and encrypted maps. One searches for the disjunction of the LSHs of a noisy biometric reading. The underlying encrypted map needs to efficiently answer disjunction queries. We focus on the iris biometric. Iris biometric data requires a large number of LSHs, approximately 1000. The most relevant prior work is in zero-leakage k-nearest-neighbor search (Boldyreva and Tang, PoPETS 2021), but that work is designed for a small number of LSHs. Our main cryptographic tool is a zero-leakage disjunctive map designed for the setting where most clauses do not match any records. For the iris, on average at most 6% of LSHs match any stored value. To aid in evaluation, we produce a synthetic iris generation tool to evaluate sizes beyond available iris datasets. This generation tool is a simple generative adversarial network. Accurate statistics are crucial to optimizing the cryptographic primitives, so this tool may be of independent interest. Our scheme is implemented and open-sourced. For the largest tested parameters of 5000 stored irises, search requires 26 rounds of communication and 26 minutes of single-threaded computation.
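    The abstract above is prose only; as a rough plaintext illustration of the LSH-plus-map idea (not the Private Eyes construction, which replaces the dictionary below with a zero-leakage encrypted disjunctive map), the following Python sketch indexes binary vectors under bit-sampling LSHs and answers a proximity query as a disjunction of bucket lookups. All parameters and names here are assumptions chosen for illustration.

    # Plaintext sketch of proximity search as a disjunction of LSH-bucket lookups.
    # Illustrative only: the real scheme stores the index in a zero-leakage
    # encrypted map, and all parameters here are made up.
    import random

    DIM = 1024          # length of a binary "iris code" (assumption)
    NUM_LSH = 100       # number of LSHs (the paper reports needing roughly 1000)
    BITS_PER_HASH = 16  # sampled bit positions per hash

    random.seed(0)
    PROJECTIONS = [random.sample(range(DIM), BITS_PER_HASH) for _ in range(NUM_LSH)]

    def lsh_tokens(vec):
        """Bit-sampling LSH for Hamming distance: one token per projection."""
        return [(i, tuple(vec[j] for j in proj)) for i, proj in enumerate(PROJECTIONS)]

    def build_index(records):
        """Map each LSH token to the ids of the records stored under it."""
        index = {}
        for rid, vec in records.items():
            for token in lsh_tokens(vec):
                index.setdefault(token, set()).add(rid)
        return index

    def proximity_query(index, noisy_vec, min_matches=5):
        """Disjunction query: a record is 'close' if enough of its LSHs collide."""
        hits = {}
        for token in lsh_tokens(noisy_vec):        # most tokens match no record
            for rid in index.get(token, ()):
                hits[rid] = hits.get(rid, 0) + 1
        return {rid for rid, c in hits.items() if c >= min_matches}

    Because most query clauses hit no bucket (mirroring the observation that at most about 6% of LSHs match any stored value), the dominant cost comes from the handful of clauses that do match.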

    Practical yet Provably Secure: Complex Database Query Execution over Encrypted Data

    Encrypted databases provide security for outsourced data. In this work, we present novel encryption schemes supporting different database query types, enabling complex database queries over encrypted data. For specific constructions enabling exact keyword queries, range queries, database joins, and substring queries over encrypted data, we prove security in a formal framework, present a theoretical runtime analysis, and provide an assessment of practical performance characteristics.
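    As a minimal, hedged sketch of the exact-keyword case only (not the range, join, or substring constructions, and not the schemes proven secure in this work), the following Python example builds an encrypted inverted index whose search tokens are HMACs of keywords and whose postings lists are encrypted under a symmetric key. The key values and record names are assumptions; the sketch relies on the third-party `cryptography` package.

    # Minimal sketch of an encrypted index for exact keyword queries (not the
    # constructions of this work). Search tokens are HMACs of keywords; the
    # postings lists of record ids are encrypted under a symmetric key.
    # Requires the third-party `cryptography` package.
    import hmac, hashlib, json
    from cryptography.fernet import Fernet

    token_key = b"k" * 32                  # toy HMAC key (assumption)
    fernet = Fernet(Fernet.generate_key()) # symmetric encryption of postings lists

    def search_token(keyword):
        return hmac.new(token_key, keyword.encode(), hashlib.sha256).hexdigest()

    def build_index(docs):
        """docs: record id -> keywords; returns token -> encrypted list of ids."""
        inverted = {}
        for rid, keywords in docs.items():
            for kw in keywords:
                inverted.setdefault(search_token(kw), []).append(rid)
        return {t: fernet.encrypt(json.dumps(ids).encode()) for t, ids in inverted.items()}

    def query(index, keyword):
        """The server matches the token; the client decrypts the postings list."""
        blob = index.get(search_token(keyword))
        return json.loads(fernet.decrypt(blob)) if blob else []

    index = build_index({"r1": ["alice", "berlin"], "r2": ["alice", "paris"]})
    print(query(index, "alice"))   # ['r1', 'r2']

    Such a naive index still reveals which entries a query touches; bounding and proving exactly what the index and queries leak is what the formal security framework in the work above addresses.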

    A Systematic Review on the Status and Progress of Homomorphic Encryption Technologies

    With the emergence of big data and the continued growth of cloud computing applications, serious security and privacy concerns have emerged. Consequently, several researchers and cybersecurity experts have embarked on a quest to extend data encryption to big data systems and cloud computing applications. As most cloud users turn to public cloud services, confidentiality becomes an even more complicated issue. Cloud clients storing their data on a public cloud always seek solutions to the confidentiality problem. Homomorphic encryption emerged as a possible solution, in which the client's data is encrypted on the cloud in a way that allows some search and manipulation operations without prior decryption. In this paper, we present a systematic review of research papers published in the field of homomorphic encryption. This paper uses the PRISMA checklist alongside some items of Cochrane's Quality Assessment to review studies retrieved from various resources. It was highly noticeable in the reviewed papers that security in big data and cloud computing has received the most attention. Most papers suggested the use of homomorphic encryption, although the thematic analysis identified other potential concerns. Regarding the quality of the articles, 38% failed to meet three checklist items: an explicit statement of research objectives, procedure recognition, and the sources of funding used in the study. The review also presents a compendium of textual analysis of different homomorphic encryption algorithms, application areas, and areas of future development. Results of the evaluation through PRISMA and the Cochrane tool showed that a majority of research articles discussed the potential use and application of homomorphic encryption as a solution to the growing demands of big data and the absence of security and privacy mechanisms therein. This was evident from 26 of the total 59 articles that met the inclusion criteria. The term "homomorphic encryption" appeared 1802 times in the word cloud derived from the selected articles, which speaks to its potential to ensure security and privacy while also preserving the CIA triad in the context of big data and cloud computing.
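    As a concrete illustration of the homomorphic property the reviewed schemes share, here is a hedged toy version of the Paillier cryptosystem (additively homomorphic): multiplying two ciphertexts modulo n² yields a ciphertext of the sum of the plaintexts, which is the "manipulation without decryption" the abstract refers to. The primes are tiny, hand-picked values; this is not any of the surveyed algorithms and must never be used in practice.

    # Toy Paillier encryption illustrating homomorphic addition on encrypted data.
    # Tiny fixed primes, no key hygiene -- for illustration only.
    import math, random

    p, q = 1789, 1861                     # toy primes (assumption)
    n, n2 = p * q, (p * q) ** 2
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)                  # valid because g = n + 1

    def encrypt(m):
        r = random.randrange(1, n)
        while math.gcd(r, n) != 1:        # r must be invertible mod n
            r = random.randrange(1, n)
        return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

    def decrypt(c):
        return ((pow(c, lam, n2) - 1) // n * mu) % n

    c1, c2 = encrypt(123), encrypt(456)
    assert decrypt((c1 * c2) % n2) == 123 + 456   # addition without decrypting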

    HIR-CP-ABE: Hierarchical Identity Revocable Ciphertext-Policy Attribute-Based Encryption for Secure and Flexible Data Sharing

    Ciphertext-Policy Attribute-Based Encryption (CP-ABE) has been proposed to implement the attribute-based access control model. In CP-ABE, data owners encrypt the data with a certain access policy such that only data users whose attributes satisfy the access policy can obtain the corresponding private decryption key from a trusted authority. Therefore, CP-ABE is considered a promising fine-grained access control mechanism for data sharing where no centralized trusted third party exists, for example, in cloud computing, mobile ad hoc networks (MANETs), Peer-to-Peer (P2P) networks, and information-centric networks (ICNs). As promising as it is, user revocation is a cumbersome problem in CP-ABE, impeding its application in practice. To solve this problem, we propose a new scheme named HIR-CP-ABE, which implements hierarchical identity-based user revocation from the perspective of encryption. In particular, revocation is implemented by data owners directly, without any help from a third party. Compared with previous attribute-based revocation solutions, our scheme provides the following desirable properties. First, the trusted authority can be offline after system setup and key distribution, making the scheme applicable in mobile ad hoc networks, P2P networks, and similar settings where nodes are unable to connect to the trusted authority after system deployment. Second, a user does not need to update the private key when user revocation occurs. Therefore, key management overhead is much lower in HIR-CP-ABE for both the users and the trusted authority. Third, the revocation mechanism makes it possible to revoke a group of users affiliated with the same organization in a batch without affecting any other users. To the best of our knowledge, HIR-CP-ABE is the first CP-ABE scheme to provide affiliation-based revocation functionality for data owners. Through security analysis and performance evaluation, we show that the proposed scheme is secure and efficient in terms of computation, communication, and storage.
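    The cryptography of CP-ABE is involved; the sketch below only illustrates the access-control semantics the abstract describes: a ciphertext carries a boolean policy over attributes, and a user should be able to obtain a working decryption key only if their attribute set satisfies that policy. The policy encoding and the attribute names are assumptions, and neither the encryption itself nor HIR-CP-ABE's revocation mechanism is modelled.

    # Conceptual sketch of CP-ABE access policies: who *should* be able to decrypt.
    # The cryptographic enforcement is not modelled here.

    def satisfies(policy, attrs):
        """policy is an attribute string or a pair ('AND'/'OR', [subpolicies])."""
        if isinstance(policy, str):
            return policy in attrs
        op, children = policy
        results = (satisfies(c, attrs) for c in children)
        return all(results) if op == "AND" else any(results)

    policy = ("AND", ["doctor", ("OR", ["cardiology", "oncology"])])
    print(satisfies(policy, {"doctor", "cardiology"}))  # True: key issued
    print(satisfies(policy, {"nurse", "cardiology"}))   # False: no decryption key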

    Towards Practical Privacy-Preserving Protocols

    Protecting users' privacy in digital systems becomes more complex and challenging over time, as the amount of stored and exchanged data grows steadily and systems become increasingly interconnected and involved. Two techniques that approach this issue are Secure Multi-Party Computation (MPC) and Private Information Retrieval (PIR), which aim to enable practical computation while keeping sensitive data private. In this thesis we present results showing how real-world applications can be executed in a privacy-preserving way. This is not only desired by users of such applications but, since 2018, also rests on a strong legal foundation: the General Data Protection Regulation (GDPR) in the European Union, which forces companies to protect the privacy of user data by design. This thesis's contributions are split into three parts and can be summarized as follows.

    MPC Tools: Generic MPC requires in-depth background knowledge about a complex research field. To approach this, we provide tools that are efficient and usable at the same time, and serve as a foundation for follow-up work as they allow cryptographers, researchers, and developers to implement, test, and deploy MPC applications. We provide an implementation framework that abstracts from the underlying protocols, optimized building blocks generated with hardware synthesis tools, and support for directly processing Hardware Description Languages (HDLs). Finally, we present an automated compiler for efficient hybrid protocols from ANSI C.

    MPC Applications: MPC was for a long time deemed too expensive to be used in practice. We show several use cases of real-world applications that can operate in a privacy-preserving, yet practical way when engineered properly and built on top of suitable MPC protocols. Use cases presented in this thesis are from the domain of route computation using BGP on the Internet or at Internet Exchange Points (IXPs); in both cases our protocols protect sensitive business information that is used to determine routing decisions. Another use case focuses on genomics, which is particularly critical as the human genome is tied to each person for their entire lifespan and cannot be altered. Our system enables federated genomic databases, where several institutions can privately outsource their genome data and where research institutes can query this data in a privacy-preserving manner.

    PIR and Applications: Privately retrieving data from a database is a crucial requirement for user privacy and metadata protection, and is enabled, among other techniques, by Private Information Retrieval (PIR). We present improvements and a generalization of a well-known multi-server PIR scheme of Chor et al., together with an implementation and evaluation thereof. We also design and implement an efficient anonymous messaging system built on top of PIR. Furthermore, we provide a scalable solution for private contact discovery that utilizes ideas from efficient two-server PIR built from Distributed Point Functions (DPFs) in combination with Private Set Intersection (PSI).
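    For a flavour of the PIR part, here is a hedged toy version of the classic two-server XOR scheme in the spirit of Chor et al. (not the improved, generalized, or DPF-based protocols from the thesis): each server holds a replica of the database, the client sends each server a random-looking subset of indices, and XORing the two answers recovers the requested block while neither server alone learns which block was queried (assuming the servers do not collude). Database contents and block size are made up.

    # Toy 2-server XOR-based PIR in the spirit of Chor et al. (illustration only).
    import secrets
    from functools import reduce

    DB = [b"blockA", b"blockB", b"blockC", b"blockD"]   # replicated at both servers

    def xor_bytes(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def client_query(index, n):
        s1 = {i for i in range(n) if secrets.randbits(1)}   # uniformly random subset
        s2 = s1 ^ {index}                                    # differs only at `index`
        return s1, s2

    def server_answer(subset):
        """Each server XORs together the blocks named in the subset it received."""
        return reduce(xor_bytes, (DB[i] for i in subset), b"\x00" * len(DB[0]))

    s1, s2 = client_query(2, len(DB))
    print(xor_bytes(server_answer(s1), server_answer(s2)))  # b'blockC'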

    Sublinear Computation Paradigm

    This open access book gives an overview of cutting-edge work on a new paradigm called the “sublinear computation paradigm,” which was proposed in the large multiyear academic research project “Foundations of Innovative Algorithms for Big Data.” That project ran from October 2014 to March 2020 in Japan. To handle the unprecedented explosion of big data sets in research, industry, and other areas of society, there is an urgent need to develop novel methods and approaches for big data analysis. To meet this need, innovative changes in algorithm theory for big data are being pursued. For example, polynomial-time algorithms have thus far been regarded as “fast,” but if a quadratic-time algorithm is applied to a petabyte-scale or larger big data set, problems are encountered in terms of computational resources or running time. To deal with this critical computational and algorithmic bottleneck, linear-, sublinear-, and constant-time algorithms are required. The sublinear computation paradigm is proposed here in order to support innovation in the big data era. A foundation of innovative algorithms has been created by developing computational procedures, data structures, and modelling techniques for big data. The project is organized into three teams that focus on sublinear algorithms, sublinear data structures, and sublinear modelling. The work has provided high-level academic research results of strong computational and algorithmic interest, which are presented in this book. The book consists of five parts: Part I, which consists of a single chapter on the concept of the sublinear computation paradigm; Parts II, III, and IV review results on sublinear algorithms, sublinear data structures, and sublinear modelling, respectively; Part V presents application results. The information presented here will inspire researchers who work in the field of modern algorithms.
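    To make the paradigm concrete, here is a hedged toy example of sublinear computation: estimating a global statistic of a huge data set by inspecting only a small random sample rather than every element. This is generic sampling chosen for illustration, not an algorithm from the book; the data and sample size are made up.

    # Estimate the fraction of elements satisfying a predicate by probing only
    # a small random sample -- work grows with the sample size, not the data size.
    import random

    data = [random.randrange(100) for _ in range(10_000_000)]  # stand-in "big" data

    def estimate_fraction(pred, data, samples=10_000):
        """Sample `samples` random elements and count how many satisfy pred."""
        hits = sum(pred(random.choice(data)) for _ in range(samples))
        return hits / samples

    print(estimate_fraction(lambda x: x < 10, data))  # close to 0.1 after ~10k probes

    With 10,000 probes the estimate is typically accurate to within about one percentage point, regardless of whether the data set has ten million or ten billion elements, which is the sense in which such computations are sublinear in the input size.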