
    Mixture of easy trials enables transient and sustained perceptual improvements through priming and perceptual learning.

    The sense of vision allows us to discriminate fine details across a wide range of tasks. How to improve this perceptual skill, particularly within a short training session, is of substantial interest. Emerging evidence suggests that mixing in easy trials can quickly improve performance on hard trials, but it remains equivocal whether the improvement is short-lived or long-lasting, and what accounts for it. Here, by tracking objective performance (accuracy) and subjective experience (ratings of target visibility and choice confidence) over time and in a large sample of participants, we demonstrate the coexistence of transient and sustained effects of mixing easy trials, which differ markedly in their timescales, in their effects on subjective awareness, and in individual differences. In particular, whereas the transient effect was ubiquitous and manifested similarly across objective and subjective measures, the sustained effect was limited to a subset of participants, with weak convergence between objective and subjective measures. These results indicate that a mixture of easy trials enables two distinct, co-existing forms of rapid perceptual improvement on hard trials, mediated by robust priming and fragile learning. Beyond placing constraints on theories of brain plasticity, this finding may also have implications for alleviating visual deficits.

    Progressive Label Distillation: Learning Input-Efficient Deep Neural Networks

    Much of the focus in the area of knowledge distillation has been on distilling knowledge from a larger teacher network to a smaller student network. However, there has been little research on how the concept of distillation can be leveraged to distill the knowledge encapsulated in the training data itself into a reduced form. In this study, we explore the concept of progressive label distillation, where we leverage a series of teacher-student network pairs to progressively generate distilled training data for learning deep neural networks with greatly reduced input dimensions. To investigate the efficacy of the proposed progressive label distillation approach, we experimented with learning a deep limited-vocabulary speech recognition network on 500ms input utterances distilled progressively from 1000ms source training data, and demonstrated a significant increase in test accuracy, to almost 78%, compared to direct learning.
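    The core idea of one teacher-student stage can be illustrated with a minimal sketch: a teacher model trained on full-length inputs produces soft labels that supervise a student operating on truncated inputs. This is a toy illustration with a logistic-regression "network" and synthetic data, not the paper's deep speech architecture; all names and dimensions here are assumptions for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy data: "long" inputs of 8 features; the informative signal lies in the
    # first 4 features, mimicking 1000ms utterances whose content fits in 500ms.
    X_long = rng.normal(size=(200, 8))
    y = (X_long[:, :4].sum(axis=1) > 0).astype(float)

    def train_linear(X, targets, lr=0.1, steps=500):
        """Minimal logistic-regression 'network' trained by gradient descent.

        `targets` may be hard 0/1 labels (teacher stage) or soft labels in
        [0, 1] (student stage) -- the gradient is identical in both cases.
        """
        w = np.zeros(X.shape[1])
        for _ in range(steps):
            p = 1.0 / (1.0 + np.exp(-X @ w))
            w -= lr * X.T @ (p - targets) / len(targets)
        return w

    # Stage 1: teacher learns on full-length inputs with the true labels.
    w_teacher = train_linear(X_long, y)

    # Stage 2: the teacher's soft predictions become distilled labels for the
    # truncated inputs; the student sees only the first 4 features.
    soft_labels = 1.0 / (1.0 + np.exp(-X_long @ w_teacher))
    X_short = X_long[:, :4]
    w_student = train_linear(X_short, soft_labels)

    # Evaluate the input-reduced student against the true labels.
    student_acc = ((1.0 / (1.0 + np.exp(-X_short @ w_student)) > 0.5) == y).mean()
    print(f"student accuracy on truncated inputs: {student_acc:.2f}")
    ```

    In the paper the process is progressive, i.e. this stage is repeated with each student becoming the next teacher, shrinking the input a step at a time rather than in one jump.
    
    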

    Quantum measurement in two-dimensional conformal field theories: Application to quantum energy teleportation

    We construct a set of quasi-local measurement operators in 2D CFT, and then use them to carry out the quantum energy teleportation (QET) protocol and show that it is viable. These measurement operators are built from the projectors obtained from shadow operators, further acting on the product of two spatially separated primary fields. They are equivalent to the OPE blocks in the large central charge limit up to a UV-cutoff-dependent normalization, but the associated outcome probabilities are UV-cutoff independent. We then adopt these quantum measurement operators to show that the QET protocol is viable in general. We also check the CHSH inequality à la OPE blocks.
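    For reference, the CHSH test mentioned above is the standard one: with dichotomic observables $A_1, A_2$ and $B_1, B_2$ acting on the two spatially separated regions (here realized via the OPE-block construction), one evaluates the correlator combination

    $$\langle \mathcal{C} \rangle = \langle A_1 B_1 \rangle + \langle A_1 B_2 \rangle + \langle A_2 B_1 \rangle - \langle A_2 B_2 \rangle,$$

    where local hidden-variable models obey $|\langle \mathcal{C} \rangle| \le 2$, while quantum mechanics allows violation up to the Tsirelson bound $|\langle \mathcal{C} \rangle| \le 2\sqrt{2}$. The specific CFT realization of the observables is the paper's contribution; the inequality itself is standard.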

    Basis Expansions for Functional Snippets

    Estimation of mean and covariance functions is fundamental for functional data analysis. While this topic has been studied extensively in the literature, a key assumption is that there are enough data in the domain of interest to estimate both the mean and covariance functions. In this paper, we investigate mean and covariance estimation for functional snippets, in which observations from a subject are available only on an interval of length strictly (and often much) shorter than the whole interval of interest. Under such a sampling plan, no data are available for direct estimation of the off-diagonal region of the covariance function. We tackle this challenge via a basis representation of the covariance function. The proposed approach allows one to consistently estimate an infinite-rank covariance function from functional snippets. We establish convergence rates for the proposed estimators and illustrate their finite-sample performance via simulation studies and two data applications.
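    The key mechanism, writing the covariance as a finite basis expansion $C(s,t) \approx \sum_{j,k} B_{jk}\,\phi_j(s)\,\phi_k(t)$ and fitting the coefficients from within-snippet products, can be sketched as follows. This is a simplified illustration with an assumed two-function Fourier-type basis and a rank-2 truth, not the paper's estimator or choice of basis.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def phi(t):
        # Assumed Fourier-type basis on [0, 1]; the paper's basis may differ.
        return np.stack([np.ones_like(t), np.sqrt(2) * np.cos(np.pi * t)], axis=-1)

    lam = np.array([1.0, 0.5])  # true eigenvalues: C(s,t) = phi(s) @ diag(lam) @ phi(t)

    # Simulate snippets: each subject is observed only on a random
    # sub-interval of length 0.3, so |s - t| < 0.3 within every snippet.
    n_subj, n_obs = 300, 5
    rows = []
    for _ in range(n_subj):
        start = rng.uniform(0.0, 0.7)
        t = np.sort(rng.uniform(start, start + 0.3, size=n_obs))
        scores = rng.normal(scale=np.sqrt(lam))   # Karhunen-Loeve scores
        x = phi(t) @ scores                       # zero-mean trajectory values
        # Raw products x(s)x(t) are unbiased for C(s,t); keep s != t pairs.
        for i in range(n_obs):
            for j in range(n_obs):
                if i != j:
                    rows.append((t[i], t[j], x[i] * x[j]))

    s, t, c = map(np.array, zip(*rows))

    # Fit the coefficient matrix B by least squares on within-snippet products:
    # C(s,t) ~ sum_jk B[j,k] phi_j(s) phi_k(t).
    design = (phi(s)[:, :, None] * phi(t)[:, None, :]).reshape(len(c), -1)
    B = np.linalg.lstsq(design, c, rcond=None)[0].reshape(2, 2)

    # The fitted expansion extrapolates to an off-diagonal point (0.1, 0.9)
    # that no snippet ever observes jointly.
    C_hat = (phi(np.array([0.1])) @ B @ phi(np.array([0.9])).T).item()
    C_true = (phi(np.array([0.1])) @ np.diag(lam) @ phi(np.array([0.9])).T).item()
    print(f"estimated {C_hat:.3f} vs true {C_true:.3f}")
    ```

    The point of the design is visible in the last step: the basis couples the observed near-diagonal band to the unobserved off-diagonal region, which is exactly what direct smoothing of the raw covariances cannot do for snippets.
    
    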