    Deterministic Channel Design for Minimum Leakage

    This work explores the problem of designing a channel that leaks the least amount of information while respecting a set of operational constraints, focusing on deterministic channels and deterministic solutions. This setting is relevant because most programs, and many channel design problems, are naturally modelled by deterministic channels. It is also relevant when considering an attacker who can observe many outputs of an arbitrary channel while the secret input stays the same: when the number of observations is arbitrarily large, the channel of minimal leakage is deterministic. The deterministic channel design problem has different solutions depending on which leakage measure is chosen. The problem is shown to be NP-hard in general; however, for a particular class of constraints, called k-complete hypergraph constraints, a greedy algorithm is shown to provide the optimal solution for a wide class of leakage measures.
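    As a rough illustration of the setting (not the paper's algorithm), a deterministic channel over a finite secret set can be viewed as a partition of the secrets into output classes, and merging classes, when the constraints allow it, can only reduce leakage under a uniform prior. The sketch below assumes a deliberately simplified constraint model in which a constraint is a pair of secrets that must remain distinguishable; this is not the same as the paper's k-complete hypergraph constraints.

```python
# Illustrative sketch only: a deterministic channel on a finite secret set is a
# partition of the secrets into output classes. Under a uniform prior, Shannon
# leakage is the entropy of the block sizes, so merging blocks (when allowed)
# can only reduce leakage. The greedy step merges the allowed pair of blocks
# that reduces leakage the most. The constraint model here ("these two secrets
# must stay distinguishable") is an assumption made for the example.
from math import log2
from itertools import combinations

def leakage(blocks, n):
    # Shannon leakage of a deterministic channel with uniform prior:
    # H(output) = -sum p_b log2 p_b with p_b = |block| / n.
    return -sum((len(b) / n) * log2(len(b) / n) for b in blocks)

def greedy_min_leakage(secrets, must_distinguish):
    n = len(secrets)
    blocks = [frozenset([s]) for s in secrets]   # start fully distinguishing
    def allowed(a, b):
        merged = a | b
        return not any(x in merged and y in merged for x, y in must_distinguish)
    improved = True
    while improved:
        improved = False
        best = None
        for a, b in combinations(blocks, 2):
            if allowed(a, b):
                candidate = [c for c in blocks if c not in (a, b)] + [a | b]
                l = leakage(candidate, n)
                if best is None or l < best[0]:
                    best = (l, candidate)
        if best is not None and best[0] < leakage(blocks, n):
            blocks = best[1]
            improved = True
    return blocks

# Example: 4 secrets; secrets 0 and 1 must remain distinguishable.
print(greedy_min_leakage([0, 1, 2, 3], must_distinguish={(0, 1)}))
```

    A greedy loop of this kind is only a heuristic in general; the paper's result is that, for k-complete hypergraph constraints, the greedy choice is provably optimal for a wide class of leakage measures.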

    Asymmetric Quantum Dialogue in Noisy Environment

    A notion of asymmetric quantum dialogue (AQD) is introduced. Conventional protocols of quantum dialogue are essentially symmetric, as both users (Alice and Bob) can encode the same amount of classical information. In contrast, the scheme for AQD introduced here provides Alice and Bob with different amounts of communication power. The proposed scheme offers an architecture where the entangled state and the encoding scheme shared between Alice and Bob depend on the amount of classical information they want to exchange with each other. The general structure of the AQD scheme is obtained using a group-theoretic structure of the operators introduced in (Shukla et al., Phys. Lett. A, 377 (2013) 518). The effect of different types of noise (e.g., amplitude damping and phase damping) on the proposed scheme is investigated, and it is shown that the proposed AQD is robust and uses an optimized amount of quantum resources. Comment: 11 pages, 2 figures
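    For context on the noise models named above, amplitude damping and phase damping on a single qubit are usually described by the standard Kraus operators below (textbook forms, not reproduced from the paper), applied to the qubits that travel between Alice and Bob.

```latex
% Standard single-qubit Kraus operators for the two noise models named in the
% abstract (textbook forms, not taken from the paper itself).
% Amplitude damping with damping parameter $\eta$:
\[
E_0^{AD} = \begin{pmatrix} 1 & 0 \\ 0 & \sqrt{1-\eta} \end{pmatrix},
\qquad
E_1^{AD} = \begin{pmatrix} 0 & \sqrt{\eta} \\ 0 & 0 \end{pmatrix}.
\]
% Phase damping with parameter $\lambda$:
\[
E_0^{PD} = \begin{pmatrix} 1 & 0 \\ 0 & \sqrt{1-\lambda} \end{pmatrix},
\qquad
E_1^{PD} = \begin{pmatrix} 0 & 0 \\ 0 & \sqrt{\lambda} \end{pmatrix}.
\]
% The noisy state is $\rho \mapsto \sum_k E_k \rho E_k^{\dagger}$, applied to
% the qubits sent through the channel.
```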

    DR.SGX: Hardening SGX Enclaves against Cache Attacks with Data Location Randomization

    Recent research has demonstrated that Intel's SGX is vulnerable to various software-based side-channel attacks. In particular, attacks that monitor CPU caches shared between the victim enclave and untrusted software enable accurate leakage of secret enclave data. Known defenses assume developer assistance, require hardware changes, impose high overhead, or prevent only some of the known attacks. In this paper we propose data location randomization as a novel defensive approach to address the threat of side-channel attacks. Our main goal is to break the link between the cache observations made by the privileged adversary and the actual data accesses made by the victim. We design and implement a compiler-based tool called DR.SGX that instruments enclave code such that data locations are permuted at the granularity of cache lines. We realize the permutation with the CPU's cryptographic hardware-acceleration units, which provide secure randomization. To prevent correlation of repeated memory accesses, we continuously re-randomize all enclave data during execution. Our solution effectively protects many (but not all) enclaves from cache attacks and provides a complementary enclave hardening technique that is especially useful against unpredictable information leakage.
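    The sketch below is a toy model of the underlying idea, data location randomization at cache-line granularity, and is not DR.SGX itself: the actual tool instruments enclave code at compile time and realizes the permutation with the CPU's cryptographic hardware. Here a logical line index is routed through a secret permutation, and re-randomization reshuffles the mapping while moving the data so the logical contents are preserved.

```python
# Toy model of data location randomization (illustration of the idea only).
# Data is managed at cache-line granularity; a logical line maps to a physical
# line through a secret permutation, and re-randomization picks a fresh
# permutation and moves the lines, so repeated accesses to the same variable
# hit different physical locations over time.
import secrets

LINE_SIZE = 64  # bytes per cache line

class RandomizedBuffer:
    def __init__(self, num_lines):
        self.data = [bytearray(LINE_SIZE) for _ in range(num_lines)]
        self.perm = list(range(num_lines))
        self._shuffle(self.perm)

    def _shuffle(self, perm):
        # Fisher-Yates with a cryptographic RNG (stand-in for a keyed PRP).
        for i in range(len(perm) - 1, 0, -1):
            j = secrets.randbelow(i + 1)
            perm[i], perm[j] = perm[j], perm[i]

    def read(self, addr):
        line, off = divmod(addr, LINE_SIZE)
        return self.data[self.perm[line]][off]

    def write(self, addr, value):
        line, off = divmod(addr, LINE_SIZE)
        self.data[self.perm[line]][off] = value

    def rerandomize(self):
        # Pick a fresh permutation and physically move every line so that the
        # logical view stays unchanged while the physical layout changes.
        new_perm = list(range(len(self.perm)))
        self._shuffle(new_perm)
        new_data = [None] * len(self.data)
        for logical, old_phys in enumerate(self.perm):
            new_data[new_perm[logical]] = self.data[old_phys]
        self.data, self.perm = new_data, new_perm

buf = RandomizedBuffer(num_lines=16)
buf.write(100, 0x2A)
buf.rerandomize()               # physical location changes ...
assert buf.read(100) == 0x2A    # ... but the logical content does not
```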

    Privacy Against Statistical Inference

    We propose a general statistical inference framework to capture the privacy threat incurred by a user who releases data to a passive but curious adversary, given utility constraints. We show that applying this general framework to the setting where the adversary uses the self-information cost function naturally leads to a non-asymptotic information-theoretic approach for characterizing the best achievable privacy subject to utility constraints. Based on these results we introduce two privacy metrics, namely average information leakage and maximum information leakage. We prove that under both metrics the resulting design problem of finding the optimal mapping from the user's data to a privacy-preserving output can be cast as a modified rate-distortion problem which, in turn, can be formulated as a convex program. Finally, we compare our framework with differential privacy. Comment: Allerton 2012, 8 pages
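    As a sketch of how such a design problem can look in rate-distortion form (illustrative only; the paper's exact "modified rate-distortion" formulation and distortion measure are not reproduced here), the average-leakage variant can be written as a convex program over the release mapping:

```latex
% Illustrative rate-distortion-style formulation, not copied from the paper.
% $X$ is the user's data, $Y$ the released output, $p_{Y|X}$ the
% privacy-preserving mapping, $d$ a distortion (utility) measure, and
% $\Delta$ the utility budget.
\[
\min_{p_{Y|X}} \; I(X;Y)
\quad \text{subject to} \quad
\mathbb{E}\!\left[ d(X,Y) \right] \le \Delta .
\]
% Mutual information is convex in $p_{Y|X}$ for fixed $p_X$, and the expected
% distortion constraint is linear in $p_{Y|X}$, so the optimization is a
% convex program over the transition probabilities.
```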