
    Modulus Computational Entropy

    The so-called {\em leakage-chain rule} is an important tool used in many security proofs. It gives an upper bound on the entropy loss of a random variable $X$ when an adversary who has already learned some random variables $Z_{1},\ldots,Z_{\ell}$ correlated with $X$ obtains some further information $Z_{\ell+1}$ about $X$. Analogously to the information-theoretic case, one might expect that for the \emph{computational} variants of entropy the loss also depends only on the actual leakage, i.e. on $Z_{\ell+1}$. Surprisingly, Krenn et al.\ have recently shown that for the most commonly used definitions of computational entropy this holds only if the computational quality of the entropy deteriorates exponentially in $|(Z_{1},\ldots,Z_{\ell})|$. This means that the current standard definitions of computational entropy do not fully capture leakage that occurred "in the past", which severely limits the applicability of this notion. As a remedy for this problem we propose a slightly stronger definition of computational entropy, which we call the \emph{modulus computational entropy}, and use it as a technical tool to prove a chain rule that depends only on the actual leakage and not on its history. Moreover, we show that the modulus computational entropy unifies other, sometimes seemingly unrelated, notions already studied in the literature in the context of information leakage and chain rules. Our results indicate that the modulus entropy is, up to now, the weakest restriction that guarantees that the chain rule for computational entropy works. As an example of application we demonstrate a few interesting cases where our restricted definition is fulfilled and the chain rule holds.
    Comment: Accepted at ICTS 201
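    For context, here is a minimal sketch of the information-theoretic chain rule the abstract alludes to, stated for average conditional min-entropy $\widetilde{H}_{\infty}$; this bound is a standard fact, and the notation is our assumption rather than taken from the paper. The loss is bounded by the bit length of the new leakage alone:
    \[
      \widetilde{H}_{\infty}\bigl(X \mid Z_{1},\ldots,Z_{\ell},Z_{\ell+1}\bigr)
      \;\geq\;
      \widetilde{H}_{\infty}\bigl(X \mid Z_{1},\ldots,Z_{\ell}\bigr) \;-\; |Z_{\ell+1}|,
    \]
    where $|Z_{\ell+1}|$ denotes the bit length of $Z_{\ell+1}$. The negative result of Krenn et al.\ cited above says that for the standard computational analogues the corresponding bound instead degrades with $|(Z_{1},\ldots,Z_{\ell})|$, i.e. with the entire leakage history, which is exactly the dependence the modulus computational entropy is designed to remove.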

    Information Processing, Computation and Cognition

    Computation and information processing are among the most fundamental notions in cognitive science. They are also among the most imprecisely discussed. Many cognitive scientists take it for granted that cognition involves computation, information processing, or both, although others disagree vehemently. Yet different cognitive scientists use ‘computation’ and ‘information processing’ to mean different things, sometimes without realizing that they do. In addition, computation and information processing are surrounded by several myths; first and foremost, that they are the same thing. In this paper, we address this unsatisfactory state of affairs by presenting a general and theory-neutral account of computation and information processing. We also apply our framework by analyzing the relations between computation and information processing on one hand and classicism and connectionism/computational neuroscience on the other. We defend the relevance to cognitive science of both computation, at least in a generic sense, and information processing, in three important senses of the term. Our account advances several foundational debates in cognitive science by untangling some of their conceptual knots in a theory-neutral way. By leveling the playing field, we pave the way for the future resolution of the debates’ empirical aspects.