
    Optimal sequential fingerprinting: Wald vs. Tardos

    We study sequential collusion-resistant fingerprinting, where the fingerprinting code is generated in advance but accusations may be made between rounds, and show that in this setting both the dynamic Tardos scheme and schemes building upon Wald's sequential probability ratio test (SPRT) are asymptotically optimal. We further compare these two approaches to sequential fingerprinting, highlighting differences between the two schemes. Based on these differences, we argue that Wald's scheme should in general be preferred over the dynamic Tardos scheme, even though both schemes have their merits. As a side result, we derive an optimal sequential group testing method for the classical model, which can easily be generalized to different group testing models.
    Comment: 12 pages, 10 figures
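    A minimal sketch of the Wald-style accusation step described above: each user accumulates a log-likelihood ratio over the rounds and is accused or cleared once Wald's approximate SPRT thresholds are crossed. The per-round likelihoods (p1, p0), the error rates alpha/beta, and the function names are illustrative assumptions, not the paper's actual scheme.

```python
import math

def sprt_thresholds(alpha, beta):
    """Wald's approximate decision boundaries.

    alpha: tolerated probability of accusing an innocent user,
    beta:  tolerated probability of never catching a colluder.
    """
    return math.log(beta / (1.0 - alpha)), math.log((1.0 - beta) / alpha)

def sprt_fingerprint(rounds, alpha=1e-3, beta=1e-2):
    """Run a per-user SPRT over fingerprinting rounds.

    `rounds` yields pairs (p1, p0): the probability of the observed
    pirate symbol under 'user is a colluder' and under 'user is
    innocent'.  These likelihoods come from the assumed collusion
    model (e.g. via the marking assumption); they are inputs here.
    Returns 'accuse', 'clear', or 'undecided'.
    """
    lower, upper = sprt_thresholds(alpha, beta)
    llr = 0.0
    for p1, p0 in rounds:
        llr += math.log(p1 / p0)      # accumulate evidence after each round
        if llr >= upper:
            return "accuse"           # strong evidence the user colluded
        if llr <= lower:
            return "clear"            # strong evidence the user is innocent
    return "undecided"                # code exhausted before a decision
```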

    The equivalence of information-theoretic and likelihood-based methods for neural dimensionality reduction

    Stimulus dimensionality-reduction methods in neuroscience seek to identify a low-dimensional space of stimulus features that affect a neuron's probability of spiking. One popular method, known as maximally informative dimensions (MID), uses an information-theoretic quantity known as "single-spike information" to identify this space. Here we examine MID from a model-based perspective. We show that MID is a maximum-likelihood estimator for the parameters of a linear-nonlinear-Poisson (LNP) model, and that the empirical single-spike information corresponds to the normalized log-likelihood under a Poisson model. This equivalence implies that MID does not necessarily find maximally informative stimulus dimensions when spiking is not well described as Poisson. We provide several examples to illustrate this shortcoming, and derive a lower bound on the information lost when spiking is Bernoulli in discrete time bins. To overcome this limitation, we introduce model-based dimensionality reduction methods for neurons with non-Poisson firing statistics, and show that they can be framed equivalently in likelihood-based or information-theoretic terms. Finally, we show how to overcome practical limitations on the number of stimulus dimensions that MID can estimate by constraining the form of the non-parametric nonlinearity in an LNP model. We illustrate these methods with simulations and data from primate visual cortex.
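    As a rough illustration of the equivalence discussed above, the sketch below evaluates the normalized Poisson log-likelihood of an LNP model with a histogram-based (non-parametric) nonlinearity; maximizing this quantity over the filter w corresponds, per the paper's result, to maximizing the empirical single-spike information. The histogram estimator, bin count, and variable names are assumptions for illustration, not the authors' code.

```python
import numpy as np

def lnp_poisson_loglik(w, X, y, n_bins=25):
    """Normalized Poisson log-likelihood of an LNP model.

    w : candidate stimulus filter, shape (d,)
    X : stimuli, shape (n, d);  y : spike counts per time bin, shape (n,)
    The nonlinearity is estimated non-parametrically as a histogram of
    mean spike counts along the 1-D projection X @ w.
    """
    proj = X @ w                                        # 1-D projection of each stimulus
    edges = np.quantile(proj, np.linspace(0, 1, n_bins + 1))
    idx = np.clip(np.searchsorted(edges, proj) - 1, 0, n_bins - 1)
    # Plug-in estimate of the firing rate in each histogram bin
    rate = np.array([y[idx == b].mean() if np.any(idx == b) else y.mean()
                     for b in range(n_bins)])
    lam = np.maximum(rate[idx], 1e-12)                  # predicted rate per sample
    # Poisson log-likelihood (dropping the y!-term), normalized per spike
    return np.sum(y * np.log(lam) - lam) / y.sum()
```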

    Algebraic and algorithmic frameworks for optimized quantum measurements

    Von Neumann projections are the main operations by which information can be extracted from the quantum to the classical realm. They are, however, static processes that do not adapt to the states they measure. Advances in the field of adaptive measurement have shown that this limitation can be overcome by "wrapping" the von Neumann projectors in a higher-dimensional circuit which exploits the interplay between measurement outcomes and measurement settings. Unfortunately, the design of adaptive measurements has often been ad hoc and setup-specific. Here we develop a unified framework for designing optimized measurements. Our approach is two-fold: the first part is algebraic and formulates the problem of measurement as a simple matrix diagonalization problem; the second is algorithmic and models the optimal interaction between measurement outcomes and measurement settings as a cascaded network of conditional probabilities. Finally, we demonstrate that several figures of merit, such as Bell factors, can be improved by optimized measurements. This leads us to the promising observation that measurement detectors which, taken individually, have a low quantum efficiency can be arranged into circuits where, collectively, the limitations of inefficiency are compensated for.
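    A small sketch of the algebraic viewpoint mentioned above: diagonalizing a Hermitian observable yields the von Neumann projectors that an adaptive scheme would then choose between, conditioned on earlier outcomes. The NumPy code and the Pauli-X example below are illustrative assumptions, not the authors' framework.

```python
import numpy as np

def von_neumann_projectors(observable):
    """Spectral decomposition of a Hermitian observable.

    Returns the eigenvalues and one rank-1 projector per eigenvector
    (a full treatment would group projectors of degenerate eigenvalues).
    """
    vals, vecs = np.linalg.eigh(observable)             # diagonalize the observable
    projectors = [np.outer(v, v.conj()) for v in vecs.T]
    return vals, projectors

# Example: projectors of the Pauli-X observable
X = np.array([[0, 1], [1, 0]], dtype=complex)
eigenvalues, projs = von_neumann_projectors(X)

# Outcome probabilities for the state |0>:  p_k = <0| P_k |0>
psi = np.array([1, 0], dtype=complex)
probs = [float(np.real(psi.conj() @ P @ psi)) for P in projs]   # [0.5, 0.5]
```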