
    Model selection in High-Dimensions: A Quadratic-risk based approach

    In this article we propose a general class of risk measures for the data-based evaluation of parametric models. The loss function is defined as a generalized quadratic distance between the true density and the proposed model. These distances are characterized by a simple quadratic-form structure that is adaptable through the choice of a nonnegative definite kernel and a bandwidth parameter. Using asymptotic results for quadratic distances, we build a quick-to-compute approximation to the risk function. Its derivation is analogous to that of the Akaike Information Criterion (AIC), but unlike AIC, the quadratic risk is a global comparison tool. The method does not require resampling, a great advantage when point estimators are expensive to compute. The method is illustrated on the problem of selecting the number of components in a mixture model, where it is shown that, with an appropriate kernel, the method is computationally straightforward in arbitrarily high data dimensions. In this same context, the method is shown to have clear advantages over AIC and BIC.
    Comment: Updated with reviewer suggestions
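
    To make the construction concrete, here is a minimal numerical sketch of the empirical quadratic distance for one-dimensional Gaussian mixtures, assuming a Gaussian kernel with bandwidth h. The function names, the bandwidth value, and the model-selection loop are ours, and the sketch omits the degrees-of-freedom penalty that turns the distance into the paper's risk estimate. With a Gaussian kernel, every integral against a Gaussian mixture reduces to a Gaussian density evaluation, which is what keeps the computation tractable as the dimension grows.

        import numpy as np
        from scipy.stats import norm
        from sklearn.mixture import GaussianMixture

        def quadratic_distance(x, gmm, h):
            # Empirical quadratic distance d = t1 - 2*t2 + t3 between the data x
            # and a fitted 1-D Gaussian mixture, for a Gaussian kernel K_h.
            w = gmm.weights_
            mu = gmm.means_.ravel()
            s2 = gmm.covariances_.ravel()
            # t1: double sum of the kernel over the data
            t1 = norm.pdf(x[:, None] - x[None, :], scale=h).mean()
            # t2: kernel convolved with the model density, averaged over the data
            # (convolving N(0, h^2) with N(mu, s2) gives N(mu, h^2 + s2))
            t2 = sum(wk * norm.pdf(x, loc=mk, scale=np.sqrt(h**2 + sk)).mean()
                     for wk, mk, sk in zip(w, mu, s2))
            # t3: kernel integrated against the model density twice
            t3 = sum(wj * wk * norm.pdf(0.0, loc=mj - mk,
                                        scale=np.sqrt(h**2 + sj + sk))
                     for wj, mj, sj in zip(w, mu, s2)
                     for wk, mk, sk in zip(w, mu, s2))
            return t1 - 2.0 * t2 + t3

        rng = np.random.default_rng(0)
        x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(2, 1, 300)])
        for k in range(1, 5):  # compare candidate numbers of components
            gmm = GaussianMixture(k, random_state=0).fit(x.reshape(-1, 1))
            print(k, quadratic_distance(x, gmm, h=0.5))

    The three terms are the empirical analogues of the double integral of the kernel against (F - G) twice; for non-Gaussian kernels or models, t2 and t3 would require numerical integration.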

    Synchronization and Noise: A Mechanism for Regularization in Neural Systems

    To learn and reason in the presence of uncertainty, the brain must be capable of imposing some form of regularization. Here we suggest, through theoretical and computational arguments, that the combination of noise with synchronization provides a plausible mechanism for regularization in the nervous system. The functional role of regularization is considered in a general setting in which coupled computational systems receive inputs corrupted by correlated noise. Noise on the inputs is shown to impose regularization, and when upstream synchronization induces time-varying correlations across the noise variables, the degree of regularization can be calibrated over time. The proposed mechanism is explored first in the context of a simple associative learning problem, and then in the context of a hierarchical sensory coding task. The resulting qualitative behavior is consistent with experimental data from visual cortex.
    Comment: 32 pages, 7 figures. Under review
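
    The claim that input noise imposes regularization has a familiar concrete instance in the linear associative setting: averaging a least-squares loss over isotropic Gaussian input noise yields a ridge penalty whose strength is set by the noise variance, and noise with covariance Sigma yields the structured penalty n * w^T Sigma w instead, so time-varying noise correlations translate into a time-varying penalty. The sketch below is that textbook correspondence on assumed toy data, not the paper's neural model.

        import numpy as np

        rng = np.random.default_rng(1)
        n, d = 200, 10
        X = rng.normal(size=(n, d))
        y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

        sigma2 = 0.5  # input-noise variance (toy value)

        # E ||(X + E) w - y||^2 = ||X w - y||^2 + n * sigma2 * ||w||^2
        # for E with i.i.d. N(0, sigma2) entries, so the expected-loss
        # minimizer is the ridge solution with lambda = n * sigma2.
        w_ridge = np.linalg.solve(X.T @ X + n * sigma2 * np.eye(d), X.T @ y)

        # Empirical check: accumulate normal equations over noisy inputs;
        # E[Xn.T @ Xn] = X.T @ X + n * sigma2 * I, so the two solutions agree.
        A, b = np.zeros((d, d)), np.zeros(d)
        for _ in range(5000):
            Xn = X + np.sqrt(sigma2) * rng.normal(size=X.shape)
            A += Xn.T @ Xn
            b += Xn.T @ y
        print(np.linalg.norm(np.linalg.solve(A, b) - w_ridge))  # close to zero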

    Subsampling Algorithms for Semidefinite Programming

    We derive a stochastic gradient algorithm for semidefinite optimization using randomization techniques. The algorithm uses subsampling to reduce the computational cost of each iteration, and the subsampling ratio explicitly controls granularity, i.e., the tradeoff between the cost per iteration and the total number of iterations. Furthermore, the total computational cost is directly proportional to the complexity (i.e., the rank) of the solution. We study numerical performance on some large-scale problems arising in statistical learning.
    Comment: Final version, to appear in Stochastic Systems
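
    As a rough illustration of the cost-per-iteration tradeoff (a generic sketch under our own setup, not the authors' algorithm): run projected stochastic gradient ascent for max <C, X> over the spectahedron {X >= 0, tr X = 1}, whose optimum is the top eigenvalue of C, and build each step's gradient from an unbiased random sparsification of C. The ratio argument plays the role of the subsampling ratio, touching only that fraction of the entries per iteration.

        import numpy as np

        def sparsify(C, ratio, rng):
            # Keep each entry of C independently with probability `ratio`,
            # rescaled by 1/ratio so the estimate is unbiased, then symmetrized.
            S = np.where(rng.random(C.shape) < ratio, C / ratio, 0.0)
            return (S + S.T) / 2

        def project_spectahedron(X):
            # Euclidean projection onto {X >= 0, tr X = 1}: project the
            # eigenvalues of X onto the probability simplex.
            lam, V = np.linalg.eigh((X + X.T) / 2)
            u = np.sort(lam)[::-1]
            css = np.cumsum(u) - 1.0
            rho = np.nonzero(u > css / np.arange(1, len(u) + 1))[0][-1]
            theta = css[rho] / (rho + 1)
            return (V * np.maximum(lam - theta, 0.0)) @ V.T

        rng = np.random.default_rng(2)
        d = 50
        C = rng.normal(size=(d, d)); C = (C + C.T) / 2
        X = np.eye(d) / d
        for t in range(1, 201):
            G = sparsify(C, ratio=0.1, rng=rng)      # cheap, unbiased gradient
            X = project_spectahedron(X + G / np.sqrt(t))  # decaying step size
        print(np.trace(C @ X), np.linalg.eigvalsh(C)[-1])  # objective vs. optimum

    Lowering ratio makes each iteration cheaper but noisier, so more iterations are needed, which is the granularity tradeoff the abstract describes.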

    Optimal asymptotic cloning machines

    We ask whether the asymptotic equivalence between quantum cloning and quantum state estimation, valid at the single-clone level, still holds when all clones are examined globally. We conjecture that the answer is affirmative and present a large body of supporting evidence, developing techniques to derive optimal asymptotic cloners and proving their equivalence with estimation in virtually all scenarios considered in the literature. Our analysis covers the case of arbitrary finite sets of states, arbitrary families of coherent states, arbitrary phase- and multiphase-covariant sets of states, and two-qubit maximally entangled states. In all these examples we observe that the optimal asymptotic fidelity enjoys a universality property: its scaling does not depend on the specific details of the set of input states, but only on the number of parameters needed to specify them.
    Comment: 27 + 9 pages; corrected one observation about the cloning of maximally entangled states
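
    The universality property can be phrased schematically as follows (the notation is ours, not the paper's): writing F_est for the optimal estimation fidelity, N for the number of clones, and p for the number of real parameters specifying the input family, the conjectured equivalence and scaling read

        \lim_{N \to \infty} F^{(N)}_{\mathrm{clone}} = F_{\mathrm{est}},
        \qquad
        F_{\mathrm{est}} - F^{(N)}_{\mathrm{clone}} \sim \frac{c(p)}{N},

    where the constant c(p) depends only on p and not on the detailed structure of the state set; the 1/N rate is the one familiar from single-clone results such as universal qubit cloning.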