
    One-shot rates for entanglement manipulation under non-entangling maps

    We obtain expressions for the optimal rates of one-shot entanglement manipulation under operations which generate a negligible amount of entanglement. As the optimal rates for entanglement distillation and dilution in this paradigm, we obtain the max- and min-relative entropies of entanglement, the two logarithmic robustnesses of entanglement, and smoothed versions thereof. This gives a new operational meaning to these entanglement measures. Moreover, by considering the limit of many identical copies of the shared entangled state, we partially recover the recently found reversibility of entanglement manipulation under the class of operations which asymptotically do not generate entanglement.
    Comment: 7 pages; no figures
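    For orientation, the quantities named in this abstract have standard definitions; the sketch below uses generic notation (not necessarily the paper's conventions), with $\mathcal{S}$ the set of separable states and $\Pi_\rho$ the projector onto the support of $\rho$.

```latex
% Standard definitions, in generic notation (an orientation sketch, not the paper's exact statement).
\[
  D_{\max}(\rho\|\sigma) = \log\min\{\lambda \ge 0 : \rho \le \lambda\sigma\},
  \qquad
  D_{\min}(\rho\|\sigma) = -\log\operatorname{Tr}\!\bigl(\Pi_\rho\,\sigma\bigr),
\]
\[
  E_{\max}(\rho) = \min_{\sigma\in\mathcal{S}} D_{\max}(\rho\|\sigma),
  \qquad
  E_{\min}(\rho) = \min_{\sigma\in\mathcal{S}} D_{\min}(\rho\|\sigma),
\]
\[
  LR_{g}(\rho) = \log\bigl(1 + R_{g}(\rho)\bigr),
  \quad
  R_{g}(\rho) = \min\Bigl\{ s \ge 0 : \tfrac{\rho + s\,\omega}{1+s} \in \mathcal{S}
  \ \text{for some state } \omega \Bigr\}.
\]
% The (logarithmic) robustness LR(rho) is defined analogously with omega restricted to S;
% E_max coincides with the global log-robustness LR_g.
```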

    Convergence of Smoothed Empirical Measures with Applications to Entropy Estimation

    This paper studies convergence of empirical measures smoothed by a Gaussian kernel. Specifically, consider approximating $P \ast \mathcal{N}_\sigma$, for $\mathcal{N}_\sigma \triangleq \mathcal{N}(0, \sigma^2 \mathrm{I}_d)$, by $\hat{P}_n \ast \mathcal{N}_\sigma$, where $\hat{P}_n$ is the empirical measure, under different statistical distances. The convergence is examined in terms of the Wasserstein distance, total variation (TV), Kullback-Leibler (KL) divergence, and $\chi^2$-divergence. We show that the approximation error under the TV distance and 1-Wasserstein distance ($\mathsf{W}_1$) converges at rate $e^{O(d)} n^{-\frac{1}{2}}$, in remarkable contrast to a typical $n^{-\frac{1}{d}}$ rate for unsmoothed $\mathsf{W}_1$ (and $d \ge 3$). For the KL divergence, squared 2-Wasserstein distance ($\mathsf{W}_2^2$), and $\chi^2$-divergence, the convergence rate is $e^{O(d)} n^{-1}$, but only if $P$ achieves finite input-output $\chi^2$ mutual information across the additive white Gaussian noise channel. If the latter condition is not met, the rate changes to $\omega(n^{-1})$ for the KL divergence and $\mathsf{W}_2^2$, while the $\chi^2$-divergence becomes infinite - a curious dichotomy. As a main application we consider estimating the differential entropy $h(P \ast \mathcal{N}_\sigma)$ in the high-dimensional regime. The distribution $P$ is unknown but $n$ i.i.d. samples from it are available. We first show that any good estimator of $h(P \ast \mathcal{N}_\sigma)$ must have sample complexity that is exponential in $d$. Using the empirical approximation results we then show that the absolute-error risk of the plug-in estimator converges at the parametric rate $e^{O(d)} n^{-\frac{1}{2}}$, thus establishing the minimax rate-optimality of the plug-in. Numerical results that demonstrate a significant empirical superiority of the plug-in approach to general-purpose differential entropy estimators are provided.
    Comment: arXiv admin note: substantial text overlap with arXiv:1810.1158
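    As a rough illustration of the plug-in idea (not the paper's code): $\hat{P}_n \ast \mathcal{N}_\sigma$ is a uniform mixture of Gaussians centred at the samples, and its differential entropy can be approximated by Monte Carlo. The sketch below assumes NumPy/SciPy and placeholder sizes.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.special import logsumexp

def plug_in_smoothed_entropy(samples, sigma, n_mc=10_000, seed=None):
    """Monte Carlo estimate of h(P_hat_n * N(0, sigma^2 I_d)).

    The smoothed empirical measure is a uniform mixture of n Gaussians
    centred at the observed samples; its differential entropy is estimated
    as -E[log density] under fresh draws from that same mixture.
    """
    rng = np.random.default_rng(seed)
    n, d = samples.shape

    # Draw from the mixture: pick a random centre, then add Gaussian noise.
    centres = samples[rng.integers(0, n, size=n_mc)]
    y = centres + sigma * rng.standard_normal((n_mc, d))

    # Log-density of the mixture at each Monte Carlo point.
    log_kernel = (-cdist(y, samples, "sqeuclidean") / (2 * sigma**2)
                  - 0.5 * d * np.log(2 * np.pi * sigma**2))
    log_density = logsumexp(log_kernel, axis=1) - np.log(n)

    return -np.mean(log_density)

# Hypothetical usage: n i.i.d. samples from an unknown P in d dimensions.
X = np.random.default_rng(0).standard_normal((500, 5))   # stand-in for samples from P
print(plug_in_smoothed_entropy(X, sigma=1.0, seed=1))
```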

    Smoothing and filtering with a class of outer measures

    Filtering and smoothing with a generalised representation of uncertainty is considered. Here, uncertainty is represented using a class of outer measures. It is shown how this representation of uncertainty can be propagated using outer-measure-type versions of Markov kernels and generalised Bayesian-like update equations. This leads to a system of generalised smoothing and filtering equations in which integrals are replaced by suprema and probability density functions are replaced by positive functions with supremum equal to one. Interestingly, these equations retain most of the structure found in the classical Bayesian filtering framework. It is additionally shown that the Kalman filter recursion can be recovered under weaker assumptions about the available information on the corresponding hidden Markov model.
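    A minimal grid-based sketch of such a recursion, with the supremum approximated by a maximum over grid points and functions renormalised so their supremum is one; the transition and likelihood functions below are illustrative assumptions, not the paper's model.

```python
import numpy as np

def possibilistic_filter(grid, f0, transition, likelihood, observations):
    """Filtering recursion with suprema (here, grid maxima) in place of integrals.

    grid         : (m, ...) array of state values
    f0           : initial function on the grid, renormalised so its max is 1
    transition   : (m, m) array, transition[i, j] ~ g(x_i | x_j)
    likelihood   : callable, likelihood(y, grid) -> (m,) array
    observations : iterable of measurements y_1, ..., y_T
    """
    f = f0 / f0.max()
    history = []
    for y in observations:
        # Prediction: the integral is replaced by a supremum over the previous state.
        f_pred = (transition * f[None, :]).max(axis=1)
        # Update: Bayes-like product, renormalised so the supremum equals one.
        f = likelihood(y, grid) * f_pred
        f = f / f.max()
        history.append(f)
    return history

# Illustrative 1-D example with Gaussian-shaped functions (an assumption for the demo).
grid = np.linspace(-5, 5, 201)
f0 = np.exp(-0.5 * grid**2)
transition = np.exp(-0.5 * ((grid[:, None] - grid[None, :]) / 0.5) ** 2)
likelihood = lambda y, g: np.exp(-0.5 * ((y - g) / 1.0) ** 2)
posteriors = possibilistic_filter(grid, f0, transition, likelihood, [0.3, 0.1, -0.2])
```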

    Regularized brain reading with shrinkage and smoothing

    Functional neuroimaging measures how the brain responds to complex stimuli. However, sample sizes are modest, noise is substantial, and stimuli are high dimensional. Hence, direct estimates are inherently imprecise and call for regularization. We compare a suite of approaches which regularize via shrinkage: ridge regression, the elastic net (a generalization of ridge regression and the lasso), and a hierarchical Bayesian model based on small area estimation (SAE). We contrast regularization with spatial smoothing and combinations of smoothing and shrinkage. All methods are tested on functional magnetic resonance imaging (fMRI) data from multiple subjects participating in two different experiments related to reading, for both predicting neural response to stimuli and decoding stimuli from responses. Interestingly, when the regularization parameters are chosen by cross-validation independently for every voxel, low/high regularization is chosen in voxels where the classification accuracy is high/low, indicating that the regularization intensity is a good tool for identification of relevant voxels for the cognitive task. Surprisingly, all the regularization methods work about equally well, suggesting that beating basic smoothing and shrinkage will take not only clever methods, but also careful modeling.
    Comment: Published at http://dx.doi.org/10.1214/15-AOAS837 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org)
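    To make the per-voxel shrinkage idea concrete, here is a hedged scikit-learn sketch of encoding models with cross-validated ridge and elastic-net penalties on synthetic data; the data shapes and regularization grid are placeholders, not those of the fMRI experiments.

```python
import numpy as np
from sklearn.linear_model import RidgeCV, ElasticNetCV

rng = np.random.default_rng(0)
n_trials, n_features, n_voxels = 200, 50, 10        # placeholder sizes, not the study's
X = rng.standard_normal((n_trials, n_features))     # stimulus features
Y = (X @ rng.standard_normal((n_features, n_voxels))
     + rng.standard_normal((n_trials, n_voxels)))   # synthetic voxel responses

alphas = np.logspace(-2, 4, 13)                     # assumed regularization grid
for v in range(n_voxels):
    # Regularization strength is chosen by cross-validation independently for each voxel.
    ridge = RidgeCV(alphas=alphas).fit(X, Y[:, v])
    enet = ElasticNetCV(l1_ratio=0.5, alphas=alphas, cv=5, max_iter=5000).fit(X, Y[:, v])
    print(f"voxel {v}: ridge alpha={ridge.alpha_:.3g}, elastic net alpha={enet.alpha_:.3g}")
```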