
    Semantic Security on Wiretap Channels using Universal Hashing with Fading Applications

    We furnish a procedure based on universal hash families (UHFs) that converts an error-correcting code (ECC) of rate $R$ into a semantically secure wiretap coding scheme of rate $R - \xi$, where $\xi$ is a parameter derived from the eavesdropper's point-to-point channel. The conversion runs in time polynomial in the block length and is applicable to any channel. Given an ECC, our procedure induces a wiretap coding scheme that is concrete and efficient whenever the ECC is. To prove that the induced scheme is semantically secure, we construct bounds on the information leaked to the eavesdropper. Our bounds upgrade those in the recent literature: the novelty is that they hold for any message distribution. This wiretap procedure using UHFs, together with the characterization of its semantic leakage, is the first main contribution of this work. The second main contribution is as follows: we apply the procedure to a variety of wiretap channels to demonstrate its efficacy, and in doing so establish new achievable semantically secure rates.
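    The abstract does not name a specific hash family, so the following is only a minimal sketch of the kind of primitive a UHF-based conversion relies on: a standard 2-universal family h_{a,b}(x) = ((a*x + b) mod p) mod 2^m over a prime field, which compresses a long block into a shorter, near-uniform digest. The field size, output width, and names are illustrative assumptions, not the paper's parameters.

        import secrets

        # Illustrative 2-universal hash family, not necessarily the paper's:
        # h_{a,b}(x) = ((a*x + b) % P) % 2^out_bits, with P the Mersenne
        # prime 2^61 - 1. Inputs must be integers below P.
        P = (1 << 61) - 1

        def sample_uhf(out_bits):
            """Draw a random member of the family (a public seed, not a secret key)."""
            a = secrets.randbelow(P - 1) + 1  # a uniform in [1, P-1]
            b = secrets.randbelow(P)          # b uniform in [0, P-1]
            def h(x):
                return ((a * x + b) % P) % (1 << out_bits)
            return h

        # Hash a 48-bit block down to 16 near-uniform bits.
        h = sample_uhf(out_bits=16)
        print(hex(h(0xDEADBEEFCAFE)))

    In conversions of this flavor the hash seed (a, b) can be public; security rests on the residual randomness of the codeword given the eavesdropper's observation, which is what the leakage bounds quantify.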

    Plausible Deniability over Broadcast Channels

    In this paper, we introduce the notion of Plausible Deniability in an information-theoretic framework. We consider a scenario where an entity that eavesdrops through a broadcast channel summons one of the parties in a communication protocol to reveal their message (or signal vector). It is desirable that the summoned party have enough freedom to produce a fake output that is plausible given the eavesdropper's observation. We examine three variants of this problem: Message Deniability, Transmitter Deniability, and Receiver Deniability. In the first setting, the message sender is summoned to produce the sent message. Similarly, in the second and third settings, the transmitter and the receiver are required to produce the transmitted codeword and the received vector, respectively. For each of these settings, we examine the maximum communication rate that allows a given minimum rate of plausible fake outputs. For the Message and Transmitter Deniability problems, we fully characterise the capacity region for general broadcast channels, while for the Receiver Deniability problem, we give an achievable rate region for physically degraded broadcast channels.
    Comment: Submitted to IEEE Transactions on Information Theory. A short version of this paper has been accepted to International Symposium on Information Theory 201

    On the Cryptanalysis via Approximation of Cryptographic Primitives Relying on the Planted Clique Conjecture

    While the reliable use of NP-complete problems in tandem with the assumption that P is not equal to NP has eluded cryptographers, owing to the lack of results showing average-case hardness, one alternative that has been explored is to assume that solving certain NP-hard optimization problems to within some degree of accuracy is computationally difficult on specific instance classes. In this work, we explore one such effort, which attempts to provide cryptographic primitives by relying on the planted clique conjecture. More specifically, we (1) summarize this construction, (2) propose a simple cryptanalytic method using only approximation algorithms, and (3) consider the feasibility of such cryptanalysis in the context of existing approximation algorithms for the maximum clique problem. We ultimately find that recent advances in combinatorial approximation algorithms fatally hinder the prospect of any serious application of existing candidate constructions based on the planted clique conjecture.
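    The abstract does not detail the proposed cryptanalytic method, but a classic illustration of why planted-clique instances can be fragile is Kucera's degree heuristic: when the planted clique has size k on the order of sqrt(n log n) or larger, the planted vertices are, with high probability, simply the k highest-degree vertices. A minimal sketch under illustrative parameters:

        import random

        def planted_clique_graph(n, k, seed=0):
            """G(n, 1/2) with a clique planted on k random vertices."""
            rng = random.Random(seed)
            clique = set(rng.sample(range(n), k))
            adj = [set() for _ in range(n)]
            for u in range(n):
                for v in range(u + 1, n):
                    if (u in clique and v in clique) or rng.random() < 0.5:
                        adj[u].add(v)
                        adj[v].add(u)
            return adj, clique

        def top_degree_guess(adj, k):
            """Kucera-style heuristic: for k well above sqrt(n log n), the
            planted vertices are (w.h.p.) among the k highest-degree ones."""
            n = len(adj)
            return set(sorted(range(n), key=lambda v: len(adj[v]), reverse=True)[:k])

        adj, clique = planted_clique_graph(n=2000, k=150, seed=1)
        guess = top_degree_guess(adj, k=150)
        print(f"recovered {len(guess & clique)} of {len(clique)} planted vertices")

    Attacks in the approximation-algorithm regime the paper considers work at smaller planted sizes than this simple heuristic, which is shown only to make the failure mode concrete.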

    Cloud Forensics: A Meta-Study of Challenges, Approaches, and Open Problems

    In recent years, cloud computing has become popular as a cost-effective and efficient computing paradigm. Unfortunately, today's cloud computing architectures are not designed for security and forensics. To date, very little research has been done to develop the theory and practice of cloud forensics. Many factors complicate forensic investigations in a cloud environment. First, the storage system is no longer local, so even with a subpoena, law enforcement agents cannot confiscate the suspect's computer and gain access to the suspect's files. Second, each cloud server contains files from many users, so it is not feasible to seize servers from a data center without violating the privacy of many other users. Third, even if the data belonging to a particular suspect is identified, separating it from other users' data is difficult. Moreover, other than the cloud provider's word, there is usually no evidence linking a given data file to a particular suspect. Because of these challenges, clouds cannot be used to store healthcare, business, or national-security-related data that require audit and regulatory compliance. In this paper, we systematically examine the cloud forensics problem and explore its challenges and issues. We then discuss existing research projects and, finally, highlight the open problems and future directions in cloud forensics research. We posit that our systematic approach to understanding the nature and challenges of cloud forensics will allow us to examine possible secure solution approaches, leading to increased trust in and adoption of cloud computing, especially in business, healthcare, and national security. This in turn will lead to lower cost and long-term benefit to society as a whole.

    Secure Group Testing

    The principal goal of Group Testing (GT) is to identify a small subset of "defective" items from a large population, by grouping items into as few test pools as possible. The test outcome of a pool is positive if it contains at least one defective item, and is negative otherwise. GT algorithms are utilized in numerous applications, and in many of them maintaining the privacy of the tested items, namely, keeping secret whether they are defective or not, is critical. In this paper, we consider a scenario where there is an eavesdropper (Eve) who is able to observe a subset of the GT outcomes (pools). We propose a new non-adaptive Secure Group Testing (SGT) scheme based on information-theoretic principles. The new proposed test design keeps the eavesdropper ignorant regarding the items' status. Specifically, when the fraction of tests observed by Eve is $0 \leq \delta < 1$, we prove that with the naive Maximum Likelihood (ML) decoding algorithm the number of tests required for both correct reconstruction at the legitimate user (with high probability) and negligible information leakage to Eve is $\frac{1}{1-\delta}$ times the number of tests required with no secrecy constraint for the fixed $K$ regime. By a matching converse, we completely characterize the Secure GT capacity. Moreover, we consider the Definitely Non-Defective (DND) computationally efficient decoding algorithm, proposed in the literature for non-secure GT. We prove that with the new secure test design, for $\delta < 1/2$, the number of tests required, without any constraint on $K$, is at most $\frac{1}{1/2-\delta}$ times the number of tests required with no secrecy constraint.
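    The secure test design itself is not reproduced here, but the sketch below shows the non-secure baseline that the rates above are measured against: random Bernoulli pooling with the DND decoder (also known as COMP), which clears every item appearing in a negative pool and declares the rest defective. All parameters are illustrative.

        import random

        def run_group_testing(n=500, k=5, t=120, seed=0):
            rng = random.Random(seed)
            defective = set(rng.sample(range(n), k))
            # Random Bernoulli design: each item joins each pool w.p. ~1/k.
            pools = [[i for i in range(n) if rng.random() < 1.0 / k]
                     for _ in range(t)]
            outcomes = [any(i in defective for i in pool) for pool in pools]
            # DND (COMP) decoding: every item in a negative pool is
            # definitely non-defective; declare the remainder defective.
            cleared = set()
            for pool, positive in zip(pools, outcomes):
                if not positive:
                    cleared.update(pool)
            declared = set(range(n)) - cleared
            return defective, declared

        defective, declared = run_group_testing()
        print("true:", sorted(defective))
        print("declared:", sorted(declared))

    In the secure variant, the test matrix is additionally randomized so that any $\delta$-fraction of outcomes observed by Eve is nearly independent of the items' status, at the multiplicative cost in tests stated above.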

    The Security Rule


    The Pyramid Scheme: Oblivious RAM for Trusted Processors

    Modern processors, e.g., Intel SGX, allow applications to isolate secret code and data in encrypted memory regions called enclaves. While encryption effectively hides the contents of memory, the sequence of address references issued by the secret code leaks information. This is a serious problem because these leaks can easily break the confidentiality guarantees of enclaves. In this paper, we explore Oblivious RAM (ORAM) designs that prevent these information leaks under the constraints of modern SGX processors. Most ORAMs are a poor fit for these processors because they have high constant overhead factors or require large private memories, which are not available in these processors. We address these limitations with a new hierarchical ORAM construction, the Pyramid ORAM, that is optimized for online bandwidth cost and small blocks. It uses a new hashing scheme that circumvents the complexity of previous hierarchical schemes. We present an efficient x64-optimized implementation of Pyramid ORAM that uses only the processor's registers as private memory. We compare Pyramid ORAM with Circuit ORAM, a state-of-the-art tree-based ORAM scheme that also uses constant private memory. Pyramid ORAM has better online asymptotic complexity than Circuit ORAM. Our implementation of Pyramid ORAM and Circuit ORAM validates this: like all hierarchical schemes, Pyramid ORAM has high variance in access latencies; although latency can be high for some accesses, for typical configurations Pyramid ORAM provides access latencies that are 8X better than Circuit ORAM for 99% of accesses. Although the best known hierarchical ORAM has better asymptotic complexity, Pyramid ORAM has significantly lower constant overhead factors, making it the preferred choice in practice.
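    Pyramid ORAM's hierarchy and hashing scheme are beyond a short sketch, but the strawman below illustrates the property every ORAM targets: by touching every block on every access, the trivial linear-scan ORAM makes the observable address sequence independent of the logical access. This is the standard baseline such constructions improve upon, not the paper's design.

        # Trivial "scan everything" ORAM baseline (not Pyramid ORAM): the
        # attacker observing the address trace sees every slot read and
        # written on every access, so the trace reveals nothing about
        # which logical block the program actually wanted.

        class LinearScanORAM:
            def __init__(self, num_blocks):
                self.blocks = [0] * num_blocks

            def access(self, index, write_value=None):
                result = None
                for i in range(len(self.blocks)):  # always scan all blocks
                    value = self.blocks[i]         # every slot is read ...
                    if i == index:
                        result = value
                        if write_value is not None:
                            value = write_value
                    self.blocks[i] = value         # ... and written back
                return result

        oram = LinearScanORAM(num_blocks=8)
        oram.access(3, write_value=42)
        print(oram.access(3))  # -> 42; the trace never reveals index 3

    The linear scan costs O(N) per access; hierarchical schemes like Pyramid ORAM exist precisely to drive this down while keeping private memory tiny.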

    Oblivious Query Processing

    Motivated by cloud security concerns, there is increasing interest in database systems that can store and support queries over encrypted data. A common architecture for such systems uses a trusted component, such as a cryptographic co-processor, for query processing; it securely decrypts data and performs computations in plaintext. The trusted component has limited memory, so most of the (input and intermediate) data is kept encrypted in untrusted storage and moved to the trusted component on demand. In this setting, even with strong encryption, the data access pattern from untrusted storage has the potential to reveal sensitive information; indeed, all existing systems that use a trusted component for query processing over encrypted data have this vulnerability. In this paper, we undertake the first formal study of secure query processing, where an adversary having full knowledge of the query (text) and observing the query execution learns nothing about the underlying database other than the result size of the query on the database. We introduce a simpler notion, oblivious query processing, and show formally that a query admits secure query processing iff it admits oblivious query processing. We present oblivious query processing algorithms for a rich class of database queries involving selections, joins, grouping, and aggregation. For queries not handled by our algorithms, we provide some initial evidence that designing oblivious (and therefore secure) algorithms would be hard, via reductions from two simple, well-studied problems that are generally believed to be hard. Our study of oblivious query processing also reveals interesting connections to database join theory.
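    As a toy illustration of the obliviousness requirement (not the paper's algorithms), the sketch below performs a selection by scanning every row and doing the same work per row, so the access pattern depends only on the input size. Leaking the result size is permitted by the security definition above; a real system would also write matches and dummies to untrusted storage uniformly.

        # Toy oblivious selection: identical work per row, so the memory
        # trace depends only on the number of rows, not on which match.

        def oblivious_select(rows, predicate):
            out = []
            for row in rows:
                keep = predicate(row)           # evaluated for every row
                dummy = {k: None for k in row}  # materialize something
                out.append(row if keep else dummy)  # either way
            # Final compaction; a real system would do this obliviously.
            return [r for r in out if all(v is not None for v in r.values())]

        rows = [{"id": 1, "salary": 90}, {"id": 2, "salary": 50}]
        print(oblivious_select(rows, lambda r: r["salary"] > 60))

    Joins, grouping, and aggregation are where obliviousness gets genuinely hard, since naive implementations branch on data-dependent match counts; that is the regime the paper's algorithms address.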

    Online Multivariate Anomaly Detection and Localization for High-dimensional Settings

    This paper considers the real-time detection of anomalies in high-dimensional systems. The goal is to detect anomalies quickly and accurately so that appropriate countermeasures can be taken in time, before the system is possibly harmed. We propose a sequential and multivariate anomaly detection method that scales well to high-dimensional datasets. The proposed method follows a nonparametric (i.e., data-driven) and semi-supervised approach (i.e., it trains only on nominal data), so it is applicable to a wide range of applications and data types. Thanks to its multivariate nature, it can quickly and accurately detect challenging anomalies, such as changes in the correlation structure and stealthy low-rate cyberattacks. Its asymptotic optimality and computational complexity are comprehensively analyzed. In conjunction with the detection method, an effective technique for localizing the anomalous data dimensions is also proposed. We further extend the proposed detection and localization methods to a supervised setup where an additional anomaly dataset is available, and combine the proposed semi-supervised and supervised algorithms to obtain an online learning algorithm under the semi-supervised framework. The practical use of the proposed algorithms is demonstrated in DDoS attack mitigation, and their performance is evaluated using a real IoT-botnet dataset and simulations.
    Comment: 16 pages, LaTeX; references added
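    The paper's exact statistic is not reproduced here; the sketch below captures the same semi-supervised, nonparametric spirit with standard ingredients: a k-nearest-neighbor distance to nominal training data as the anomaly score, accumulated CUSUM-style into a sequential decision statistic. All thresholds and sizes are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        nominal = rng.normal(size=(500, 3))      # nominal training data only
        train, calib = nominal[:400], nominal[400:]

        def knn_score(x, ref, k=5):
            """Mean distance from x to its k nearest nominal neighbors."""
            d = np.linalg.norm(ref - x, axis=1)
            return np.sort(d)[:k].mean()

        # Calibrate the nominal score level on held-out nominal samples.
        baseline = np.median([knn_score(x, train) for x in calib])

        def detect(stream, threshold=20.0):
            """CUSUM-style accumulation of positive score excursions."""
            s = 0.0
            for t, x in enumerate(stream):
                s = max(0.0, s + knn_score(x, train) - baseline)
                if s >= threshold:
                    return t  # alarm time
            return None

        stream = np.vstack([rng.normal(size=(50, 3)),            # nominal
                            rng.normal(loc=2.0, size=(50, 3))])  # anomaly
        print("alarm at t =", detect(stream))

    The CUSUM-style recursion is what makes the detector sequential: small, persistent deviations accumulate until the threshold trades false-alarm rate against detection delay.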

    STAR: Statistical Tests with Auditable Results

    We present STAR, a novel system aimed at solving the complex issue of "p-hacking" and false discoveries in scientific studies. STAR provides a concrete way to ensure the application of false discovery control procedures in hypothesis testing, with mathematically provable guarantees, with the goal of reducing the risk of data dredging. STAR generates an efficiently auditable certificate that attests to the validity of each statistical test performed on a dataset. STAR achieves this by combining several cryptographic techniques specifically for this purpose. Under the hood, STAR uses a decentralized set of authorities (e.g., research institutions), secure computation techniques, and an append-only ledger, which together enable auditing of scientific claims by third parties and match real-world trust assumptions. We implement and evaluate a construction of STAR using the Microsoft SEAL encryption library and the SPDZ multi-party computation protocol. Our experimental evaluation demonstrates the practicality of STAR in multiple real-world scenarios as a system for certifying scientific discoveries in a tamper-proof way.
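    STAR's cryptographic machinery (SEAL, SPDZ, the append-only ledger) is not reproduced here; the sketch below shows one standard false-discovery-control procedure of the kind such a system would certify, the Benjamini-Hochberg procedure at level q. That STAR certifies this particular procedure is an assumption for illustration.

        # Benjamini-Hochberg FDR control: sort p-values, find the largest
        # rank k with p_(k) <= q*k/m, and reject the k smallest hypotheses.

        def benjamini_hochberg(p_values, q=0.05):
            """Return indices of hypotheses rejected at FDR level q."""
            m = len(p_values)
            order = sorted(range(m), key=lambda i: p_values[i])
            cutoff = 0
            for rank, i in enumerate(order, start=1):
                if p_values[i] <= q * rank / m:
                    cutoff = rank  # largest rank passing the BH condition
            return sorted(order[:cutoff])

        pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.2, 0.9]
        print(benjamini_hochberg(pvals, q=0.05))  # -> [0, 1]

    In STAR's setting the point is that a third party can audit, from the certificate alone, that a procedure like this was actually applied to the tests as declared, without trusting the researchers' own report.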