
    A Touch of Evil: High-Assurance Cryptographic Hardware from Untrusted Components

    The semiconductor industry is fully globalized, and integrated circuits (ICs) are commonly defined, designed and fabricated in different premises across the world. This reduces production costs, but also exposes ICs to supply chain attacks, where insiders introduce malicious circuitry into the final products. Additionally, despite extensive post-fabrication testing, it is not uncommon for ICs with subtle fabrication errors to make it into production systems. While many systems may be able to tolerate a few Byzantine components, this is not the case for cryptographic hardware, which stores and computes on confidential data. For this reason, many error and backdoor detection techniques have been proposed over the years. So far, all attempts have either been quickly circumvented or come with unrealistically high manufacturing costs and complexity. This paper proposes Myst, a practical high-assurance architecture that uses commercial off-the-shelf (COTS) hardware and provides strong security guarantees, even in the presence of multiple malicious or faulty components. The key idea is to combine protective redundancy with modern threshold cryptographic techniques to build a system tolerant to hardware trojans and errors. To evaluate our design, we build a Hardware Security Module that provides the highest level of assurance possible with COTS components. Specifically, we employ more than a hundred COTS secure crypto-coprocessors, verified to FIPS 140-2 Level 4 tamper-resistance standards, and use them to realize high-confidentiality random number generation, key derivation, public key decryption and signing. Our experiments show a reasonable computational overhead (less than 1% for both decryption and signing) and an exponential increase in backdoor tolerance as more ICs are added.
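    To make the key idea concrete, below is a minimal Python sketch, not the Myst implementation, of n-out-of-n threshold ElGamal decryption: the private key exists only as additive shares, one per coprocessor, so no single malicious or faulty IC ever holds it. The group parameters and share count are illustrative assumptions.

        import random
        from functools import reduce

        # Toy ElGamal group: p is the Mersenne prime 2**127 - 1 (illustration
        # only; a real deployment would use a standardized group or a curve).
        p = 2**127 - 1
        g = 3
        n = 5  # number of crypto-coprocessors holding key shares

        # Each coprocessor i holds an additive share x_i; the full private key
        # x = sum(x_i) mod (p - 1) never exists inside any single device.
        shares = [random.randrange(1, p - 1) for _ in range(n)]
        x = sum(shares) % (p - 1)
        y = pow(g, x, p)  # the joint public key

        def encrypt(m):
            k = random.randrange(2, p - 1)
            return pow(g, k, p), (m * pow(y, k, p)) % p

        def partial_decrypt(c1, x_i):
            # Runs inside coprocessor i; reveals nothing about other shares.
            return pow(c1, x_i, p)

        c1, c2 = encrypt(42)
        parts = [partial_decrypt(c1, x_i) for x_i in shares]
        c1_x = reduce(lambda a, b: a * b % p, parts)  # equals c1**x mod p
        print(c2 * pow(c1_x, -1, p) % p)              # -> 42

    With this sharing, confidentiality survives as long as at least one device is honest, which is the intuition behind the growing backdoor tolerance as more ICs are added.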

    New Conditional Privacy-preserving Encryption Schemes in Communication Network

    Nowadays, communication networks act as perhaps the most important fundamental infrastructure in human society. The basic services they provide resemble those of ubiquitous public utilities. For example, the cable television network distributes information to its subscribers, much like the water or gas supply systems distribute commodities to citizens. Communication networks also facilitate the development of many network-based applications, such as industrial pipeline control in industrial networks, voice over long-term evolution (VoLTE) in mobile networks, and mixed reality (MR) in computer networks. Since communication networks play such a vital role in almost every aspect of our lives, the information transmitted over them should undoubtedly be guarded properly. Roughly, such information can be categorized as either the communicated message or sensitive information related to the users. Since we already have cryptographic tools, such as encryption schemes, to ensure the confidentiality of communicated messages, it is the sensitive personal information that deserves special attention. Moreover, to reduce the network burden in some instances, it may be required that only communication among legitimate users, such as streaming media service subscribers, can be stored and then relayed in the network. In this case, the network should be empowered with the capability to verify whether a transmitted message is exchanged between legitimate users, without leaking the privacy of those users. Meanwhile, the intended receiver of a transmitted message should be able to identify the exact message sender for future communication. To cater to those requirements, we redefine a notion named conditional user privacy preservation. In this thesis, we investigate how to preserve conditional user privacy in public key encryption schemes, which are used to secure the information transmitted in communication networks. Although the term conditional privacy preservation has appeared in existing works, there are significant differences between our definition and the ones proposed before. For example, in our definition, we do not need a trusted third party (TTP) to help trace the sender of a message. Besides, the verification of a given encrypted message can be done without any secret. In this thesis, we also introduce more desirable features to our redefined notion of conditional user privacy preservation. In our second work, we consider not only the conditional privacy of the message sender but also that of the intended message receiver. This work presents a new encryption scheme that can be deployed in communication networks where there is a blacklist of blocked communication channels, each established by a pair of sender and receiver. With this encryption scheme, a verifier can confirm whether a ciphertext belongs to a legitimate communication channel without knowing its exact sender and receiver. With our first two works, we ensure that, for a given ciphertext, no one except its intended receiver can identify the sender.
    However, the receiver of a message may behave dishonestly when it tries to retrieve the real message sender, so the receiver might successfully manipulate the origin of the message for its own benefit. To tackle this problem, we present a novel encryption scheme in our third work. Apart from preserving conditional user privacy, this work also forces the receiver to give a publicly verifiable proof, so as to convince others that it is honest during the process of identifying the actual message sender. In our fourth work, we turn our attention to access control encryption (ACE), and find that this primitive can inherently achieve conditional user privacy preservation to some extent. We present a newly constructed ACE scheme in this work, and our scheme has advantages over existing ACE schemes in two respects. First, our ACE scheme is more reliable than existing ones, since we utilize a distributed sanitizing algorithm and thus avoid the single point of failure in ACE systems with only one sanitizer. Second, since the ciphertext and key sizes of our scheme are more compact than those of existing ACE schemes, our scheme enjoys better scalability.
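    As a toy illustration of the distributed-sanitizer idea, not the thesis's construction, the Python sketch below splits an ElGamal sanitizer across several parties: each one re-randomizes the ciphertext independently, and the composition is still a valid ciphertext for the same message, so no single party has to be trusted with the whole sanitizing role.

        import random

        p, g = 2**127 - 1, 3            # toy group, illustration only
        x = random.randrange(2, p - 1)  # receiver's private key
        y = pow(g, x, p)                # receiver's public key

        def encrypt(m):
            k = random.randrange(2, p - 1)
            return pow(g, k, p), (m * pow(y, k, p)) % p

        def sanitize(c1, c2):
            # One sanitizer's step: multiply in a fresh encryption of 1.
            r = random.randrange(2, p - 1)
            return c1 * pow(g, r, p) % p, c2 * pow(y, r, p) % p

        ct = encrypt(7)
        for _ in range(3):              # three independent sanitizers
            ct = sanitize(*ct)

        c1, c2 = ct
        print(c2 * pow(pow(c1, x, p), -1, p) % p)  # still decrypts to 7

    As long as at least one sanitizer in the chain applies fresh randomness honestly, the link between incoming and outgoing ciphertexts stays hidden.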

    Benchmarking Summarizability Processing in XML Warehouses with Complex Hierarchies

    Business Intelligence plays an important role in decision making. Based on data warehouses and Online Analytical Processing (OLAP), a business intelligence tool can be used to analyze complex data. Still, summarizability issues in data warehouses cause ineffective analyses that may become critical problems for businesses. To address this issue, many researchers have studied and proposed various solutions, both in relational and XML data warehouses. However, they have difficulty evaluating the performance of their proposals, since the available benchmarks lack complex hierarchies. In order to contribute to summarizability analysis, this paper proposes an extension to the XML warehouse benchmark (XWeB) with complex hierarchies. The benchmark enables us to generate XML data warehouses with scalable complex hierarchies, as well as summarizability processing. We experimentally demonstrated that complex hierarchies can definitely be included in a benchmark dataset, and that our benchmark is able to compare two alternative approaches to dealing with summarizability issues. Comment: 15th International Workshop on Data Warehousing and OLAP (DOLAP 2012), Maui, United States (2012).
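    To see why complex (non-strict) hierarchies break summarizability, consider the following toy rollup with hypothetical data: one product belongs to two categories, so naively summing per-category totals double-counts its sales.

        # Hypothetical non-strict hierarchy: "soap" rolls up into two categories.
        parents = {"soap": ["hygiene", "household"], "bread": ["food"]}
        sales = {"soap": 100, "bread": 50}

        totals = {}
        for product, amount in sales.items():
            for category in parents[product]:
                totals[category] = totals.get(category, 0) + amount

        print(sum(sales.values()))   # 150: the correct grand total
        print(sum(totals.values()))  # 250: category totals double-count soap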

    On the Cryptographic Hardness of Local Search

    We show new hardness results for the class of Polynomial Local Search problems (PLS):
    - Hardness of PLS based on a falsifiable assumption on bilinear groups introduced by Kalai, Paneth, and Yang (STOC 2019), and the Exponential Time Hypothesis for randomized algorithms. Previous standard-model constructions relied on non-falsifiable and non-standard assumptions.
    - Hardness of PLS relative to random oracles. The construction is essentially different from previous constructions and, in particular, is unconditionally secure. It also demonstrates the hardness of parallelizing local search.
    The core observation behind these results is that the unique-proofs property of incrementally verifiable computation, previously used to demonstrate hardness in PLS, can be traded for a simple incremental completeness property.
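    For context, a PLS problem in the standard formulation (the usual textbook definition, not notation taken from this paper) is given by two polynomial-size circuits: a successor circuit $S \colon \{0,1\}^n \to \{0,1\}^n$ and a value circuit $V \colon \{0,1\}^n \to \{0, 1, \dots, 2^m - 1\}$. The goal is to find a local optimum, i.e., a string $x \in \{0,1\}^n$ such that
    \[
        V(S(x)) \le V(x).
    \]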

    Oblivion: Mitigating Privacy Leaks by Controlling the Discoverability of Online Information

    Search engines are the prevalent tools for collecting information about individuals on the Internet. Search results typically comprise a variety of sources that contain personal information -- either intentionally released by the person herself, or unintentionally leaked or published by third parties, often with detrimental effects on the individual's privacy. To grant individuals the ability to regain control over their disseminated personal information, the European Court of Justice recently ruled that EU citizens have a right to be forgotten, in the sense that indexing systems must offer them technical means to request removal of links from search results that point to sources violating their data protection rights. As of now, these technical means consist of a web form that requires a user to manually identify all relevant links upfront and insert them into the web form, followed by a manual evaluation by employees of the indexing system to assess whether the request is eligible and lawful. We propose Oblivion, a universal framework to support the automation of the right to be forgotten in a scalable, provable and privacy-preserving manner. First, Oblivion enables a user to automatically find and tag her disseminated personal information using natural language processing and image recognition techniques, and to file a request in a privacy-preserving manner. Second, Oblivion provides indexing systems with an automated and provable eligibility mechanism, asserting that the author of a request is indeed affected by an online resource. The automated eligibility proof ensures censorship-resistance, so that only legitimately affected individuals can request the removal of corresponding links from search results. We have conducted comprehensive evaluations, showing that Oblivion is capable of handling 278 removal requests per second and is hence suitable for large-scale deployment.
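    As a rough sketch of the first step, finding and tagging a user's disseminated personal information, the Python snippet below uses naive pattern matching where the paper uses natural language processing and image recognition; the profile attributes and decision rule are hypothetical.

        import re

        # Hypothetical user profile; Oblivion derives such attributes from
        # certified identity data rather than free-form input.
        profile = {"name": "Jane Doe", "email": "jane.doe@example.com"}

        page_text = """Contact Jane Doe at jane.doe@example.com for details
        about the incident reported last year."""

        # Tag every attribute that literally occurs in the page.
        tags = [attr for attr, value in profile.items()
                if re.search(re.escape(value), page_text, re.IGNORECASE)]

        print(tags)            # ['name', 'email']
        print(len(tags) >= 1)  # True -> candidate for a removal request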

    Function-specific schemes for verifiable computation

    An integral component of modern computing is the ability to outsource data and computation to powerful remote servers, for instance, in the context of cloud computing or remote file storage. While participants can benefit from this interaction, a fundamental security issue that arises is that of integrity of computation: how can the end-user be certain that the result of a computation over the outsourced data has not been tampered with (not even by a compromised or adversarial server)? Cryptographic schemes for verifiable computation address this problem by accompanying each result with a proof that can be used to check the correctness of the performed computation. Recent advances in the field have led to the first implementations of schemes that can verify arbitrary computations. However, in practice the overhead of these general-purpose constructions remains prohibitive for most applications, with proof-computation times (at the server) on the order of minutes or even hours for real-world problem instances. A different approach for designing such schemes targets specific types of computation and builds custom-made protocols, sacrificing generality for efficiency. An important representative of this function-specific approach is an authenticated data structure (ADS), where a specialized protocol is designed to support the query types associated with a particular outsourced dataset. This thesis presents three novel ADS constructions for the important query types of set operations, multi-dimensional range search, and pattern matching, and proves their security under cryptographic assumptions over bilinear groups. The scheme for set operations can support nested queries (e.g., two unions followed by an intersection of the results), extending previous works that only accommodate a single operation. The range search ADS provides an exponential (in the number of attributes in the dataset) asymptotic improvement over previous schemes in storage and computation costs. Finally, the pattern matching ADS supports text pattern and XML path queries with minimal cost; e.g., the overhead at the server is less than 4% compared to simply computing the result, for all our tested settings. The experimental evaluation of all three constructions shows significant improvements in proof-computation time over general-purpose schemes.
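    For intuition about what an authenticated data structure provides, here is a generic Merkle-tree membership proof in Python. This is the textbook ADS example, not one of the three bilinear-group constructions from the thesis: the server answers a query together with a short proof, and the client checks it against a single trusted root hash.

        import hashlib

        def H(data: bytes) -> bytes:
            return hashlib.sha256(data).digest()

        def build(leaves):
            # Returns all tree levels, from leaf hashes up to the root.
            level = [H(leaf) for leaf in leaves]
            levels = [level]
            while len(level) > 1:
                if len(level) % 2:              # duplicate last node if odd
                    level = level + [level[-1]]
                level = [H(level[i] + level[i + 1])
                         for i in range(0, len(level), 2)]
                levels.append(level)
            return levels

        def prove(levels, index):
            # Sibling hashes along the path from leaf `index` to the root.
            path = []
            for level in levels[:-1]:
                if len(level) % 2:
                    level = level + [level[-1]]
                path.append((level[index ^ 1], index % 2))
                index //= 2
            return path

        def verify(root, leaf, path):
            node = H(leaf)
            for sibling, is_right in path:
                node = H(sibling + node) if is_right else H(node + sibling)
            return node == root

        data = [b"alice", b"bob", b"carol", b"dave"]
        levels = build(data)
        root = levels[-1][0]
        print(verify(root, b"carol", prove(levels, 2)))  # True

    The proof length is logarithmic in the dataset size, and any tampering with the answer or the underlying data invalidates it against the published root.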