
    Automatic variance control and variance estimation loops

    A closed-loop servo approach is applied to the problem of controlling and estimating variance in nonstationary signals. The new circuit closely resembles, but is not identical to, the automatic gain control (AGC) circuits common in radio and other applications. The closed-loop nature of the solution makes the approach highly accurate, and it can be applied recursively in real time.
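    The AGC analogy can be sketched in code. The following is an illustrative toy, not the circuit from the paper: a recursive loop scales the input so that output power tracks a target, and the settled gain then implies an estimate of the input variance. All names and constants here are assumptions.

```python
import math
import random

def variance_agc(signal, target_var=1.0, mu=0.01):
    """Closed-loop (AGC-style) variance control: scale the input so the
    output power tracks target_var; target_var / gain^2 then estimates
    the input variance. Constants are purely illustrative."""
    gain = 1.0
    estimates = []
    for x in signal:
        y = gain * x                                # controlled output
        err = (y * y - target_var) / target_var     # loop error signal
        gain *= math.exp(-0.5 * mu * err)           # integrate error into gain
        estimates.append(target_var / gain ** 2)    # implied input variance
    # average over the second half of the run, after the loop has settled
    tail = estimates[len(estimates) // 2:]
    return sum(tail) / len(tail)

random.seed(0)
sig = [random.gauss(0.0, 3.0) for _ in range(20000)]
var_est = variance_agc(sig)   # true variance is 9.0
```

    Because the loop keeps re-centering itself on the incoming data, the same estimate tracks slow changes in the signal's variance, which is the recursive real-time property the abstract highlights.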

    Experimental Design Modulates Variance in BOLD Activation: The Variance Design General Linear Model

    Typical fMRI studies have focused on either the mean trend in the blood-oxygen-level-dependent (BOLD) time course or functional connectivity (FC). However, other statistics of the neuroimaging data may contain important information. Despite studies showing links between the variance in the BOLD time series (BV) and age and cognitive performance, a formal framework for testing these effects has not yet been developed. We introduce the Variance Design General Linear Model (VDGLM), a novel framework that facilitates the detection of variance effects. We designed the framework for general use in any fMRI study by modeling both mean and variance in BOLD activation as a function of experimental design. The flexibility of this approach allows the VDGLM to i) simultaneously make inferences about a mean or variance effect while controlling for the other and ii) test for variance effects that could be associated with multiple conditions and/or noise regressors. We demonstrate the use of the VDGLM in a working memory application and show that engagement in a working memory task is associated with whole-brain decreases in BOLD variance. (18 pages, 7 figures)
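    The core idea of modeling mean and variance jointly as functions of the design can be illustrated with a toy maximum-likelihood fit. This is a sketch under assumptions, not the authors' VDGLM code: a single binary task regressor `d`, mean `b0 + b1*d`, and log-variance `g0 + g1*d`, fit by gradient descent on the Gaussian negative log-likelihood.

```python
import math
import random

# Toy VDGLM-style model (illustrative): signal y with mean b0 + b1*d and
# log-variance g0 + g1*d, where d is a binary task regressor.
random.seed(1)
n = 1200
design = [t % 2 for t in range(n)]                  # task off/on (toy blocks)
y = [0.5 * d + random.gauss(0.0, math.exp(0.5 * (-0.8 * d)))
     for d in design]                               # task lowers the variance

# joint maximum-likelihood fit of mean (b) and log-variance (g) coefficients
b0 = b1 = g0 = g1 = 0.0
lr = 0.3
for _ in range(800):
    db0 = db1 = dg0 = dg1 = 0.0
    for d, yt in zip(design, y):
        r = yt - (b0 + b1 * d)                      # residual under mean model
        w = math.exp(-(g0 + g1 * d))                # inverse variance
        db0 += -r * w                               # d(NLL)/d(mean params)
        db1 += -r * w * d
        dg0 += 0.5 * (1.0 - r * r * w)              # d(NLL)/d(log-var params)
        dg1 += 0.5 * (1.0 - r * r * w) * d
    b0 -= lr * db0 / n
    b1 -= lr * db1 / n
    g0 -= lr * dg0 / n
    g1 -= lr * dg1 / n
# g1 comes out negative: variance decreases during the task, even while
# the mean effect b1 is estimated (and controlled for) at the same time
```

    The point of the joint fit is exactly the abstract's property i): the variance effect `g1` is inferred while the mean effect `b1` is controlled for, and vice versa.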

    Variance

    Variance continues my long-standing interest in the Victorian polymath Francis Galton, whom I first researched as part of my (2000) publication Death’s Witness and the associated MPhil. This research formed the basis of my (2005) artist’s film Vanitas: Seed-Head, based on Galton’s composite photographic portraits and his proto-genetic inheritance studies. Variance is likewise influenced by Galton’s studies of inheritance (all six photographs are of my extended family) but extends this research into an exploration of his pioneering work on statistics and biometrics. According to Elizabeth Edwards (1997), Francis Galton’s composite photographs constituted “…lived concepts – embodied or concrete ideas to render the unseen or non-existent empirically: in other words, a taxonomic essence within a dialectic of the visible and invisible.” Variance plays on this tension between the seen and unseen, the known and unknown, to comment on the impossibility of ever constructing human typologies in the way Galton attempted. It incorporates scanning electron microscopy images of brain activity to create a series of ‘thought portraits’, which call into question contemporary neuro-biological imaging technologies and interpretations that allegedly allow neuroscientists to ‘see’ and ‘measure’ our thoughts and emotions. Variance raises awareness of the hidden nuances of scientific interpretation and meaning that lurk just below the surface of the posited reality of neuroscience.

    Invariances in variance estimates

    We provide variants and improvements of the Brascamp-Lieb variance inequality that take into account the invariance properties of the underlying measure. This is applied to spectral gap estimates for log-concave measures with many symmetries and to non-interacting conservative spin systems.
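    For context, the classical Brascamp-Lieb variance inequality being refined here states that for a probability measure $d\mu = e^{-V}\,dx$ on $\mathbb{R}^n$ with $V$ strictly convex and any sufficiently smooth $f$,

```latex
\operatorname{Var}_{\mu}(f) \;\le\; \int_{\mathbb{R}^n} \big\langle (\nabla^2 V)^{-1} \nabla f,\, \nabla f \big\rangle \, d\mu .
```

    The Hessian $\nabla^2 V$ quantifies the convexity of the potential; the invariance-aware variants announced in the abstract sharpen this bound when $\mu$ has symmetries.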

    The Parabolic variance (PVAR), a wavelet variance based on least-square fit

    This article introduces the Parabolic Variance (PVAR), a wavelet variance similar to the Allan variance, based on the linear regression (LR) of phase data. The companion article arXiv:1506.05009 [physics.ins-det] details the Ω frequency counter, which implements the LR estimate. The PVAR combines the advantages of AVAR and MVAR. PVAR is good for long-term analysis because the wavelet spans over 2τ, the same as the AVAR wavelet, and good for short-term analysis because the response to white and flicker PM is 1/τ³ and 1/τ², the same as the MVAR. After setting out the theoretical framework, we study the degrees of freedom and the confidence interval for the most common noise types. Then we focus on the detection of a weak noise process at the transition, or corner, where a faster process rolls off. This new perspective raises the question of which variance detects the weak process with the shortest data record. Our simulations show that PVAR is a fortunate tradeoff: it is superior to MVAR in all cases, exhibits the best ability to discriminate between fast noise phenomena (up to flicker FM), and is almost as good as AVAR for the detection of random walk and drift.
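    For readers unfamiliar with the family of variances being compared, the baseline AVAR can be sketched from phase (time-error) data. This is a minimal illustration of the standard overlapping Allan variance estimator, not of PVAR itself; names and constants are assumptions.

```python
import random

def avar(phase, m, tau0=1.0):
    """Overlapping Allan variance at averaging time tau = m * tau0,
    computed from phase (time-error) samples via second differences."""
    tau = m * tau0
    n = len(phase)
    acc = sum((phase[i + 2 * m] - 2 * phase[i + m] + phase[i]) ** 2
              for i in range(n - 2 * m))
    return acc / (2.0 * tau ** 2 * (n - 2 * m))

# white frequency noise: y_i ~ N(0, 1); phase is the running sum of y
random.seed(2)
phase = [0.0]
for _ in range(20000):
    phase.append(phase[-1] + random.gauss(0.0, 1.0))

# for white FM, AVAR(m * tau0) = sigma_y^2 / m, so it halves as m doubles
a1 = avar(phase, 1)   # expected near 1.0
a2 = avar(phase, 2)   # expected near 0.5
```

    PVAR replaces the flat averaging implicit in these second differences with a least-squares (linear-regression) phase fit, which is what improves the short-term response to white and flicker PM.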

    Getting Around Cosmic Variance

    Cosmic microwave background (CMB) anisotropies probe the primordial density field at the edge of the observable Universe. There is a limiting precision ("cosmic variance") with which anisotropies can determine the amplitude of primordial mass fluctuations. This arises because the surface of last scatter (SLS) probes only a finite two-dimensional slice of the Universe. Probing other SLSs observed from different locations in the Universe would reduce the cosmic variance. In particular, the polarization of CMB photons scattered by the electron gas in a cluster of galaxies provides a measurement of the CMB quadrupole moment seen by the cluster. Therefore, CMB polarization measurements toward many clusters would probe the anisotropy on a variety of SLSs within the observable Universe, and hence reduce the cosmic-variance uncertainty. (6 pages, RevTeX, two PostScript figures)
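    The cosmic-variance floor mentioned here has a standard form: on a single last-scattering surface, each multipole $\ell$ offers only $2\ell + 1$ independent $m$-modes, so the fractional uncertainty on its power spectrum $C_\ell$ is

```latex
\frac{\Delta C_\ell}{C_\ell} \;=\; \sqrt{\frac{2}{2\ell + 1}} .
```

    For the quadrupole ($\ell = 2$) this is about 63%, which is why gaining additional, independent lines of sight from cluster polarization measurements is so valuable at low $\ell$.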

    Reducing Reparameterization Gradient Variance

    Optimization with noisy gradients has become ubiquitous in statistics and machine learning. Reparameterization gradients, or gradient estimates computed via the "reparameterization trick," represent a class of noisy gradients often used in Monte Carlo variational inference (MCVI). However, when these gradient estimators are too noisy, the optimization procedure can be slow or fail to converge. One way to reduce noise is to use more samples for the gradient estimate, but this can be computationally expensive. Instead, we view the noisy gradient as a random variable, and form an inexpensive approximation of the generating procedure for the gradient sample. This approximation has high correlation with the noisy gradient by construction, making it a useful control variate for variance reduction. We demonstrate our approach on non-conjugate multi-level hierarchical models and a Bayesian neural net, where we observed gradient variance reductions of multiple orders of magnitude (20-2,000x).
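    The control-variate mechanism can be shown on a one-dimensional toy. This is an assumption-laden sketch, not the paper's approximation: we estimate the reparameterization gradient of E[z^4] for z ~ N(mu, 1), using a cheap linearization of the same generating procedure, whose mean is known in closed form, as the control variate.

```python
import random

random.seed(3)
mu, N = 1.0, 4000

# reparameterization: z = mu + eps, eps ~ N(0, 1)
# noisy gradient sample of E[z^4] w.r.t. mu: d/dmu (mu + eps)^4
eps = [random.gauss(0.0, 1.0) for _ in range(N)]
g = [4.0 * (mu + e) ** 3 for e in eps]

# cheap approximation of the same generating procedure (linear in eps);
# its expectation 4*mu^3 is known exactly, so it can serve as a control variate
c = [4.0 * mu ** 3 + 12.0 * mu ** 2 * e for e in eps]
c_mean = 4.0 * mu ** 3

# optimal coefficient a = Cov(g, c) / Var(c), estimated from the samples
mg = sum(g) / N
mc = sum(c) / N
cov = sum((gi - mg) * (ci - mc) for gi, ci in zip(g, c)) / (N - 1)
var_c = sum((ci - mc) ** 2 for ci in c) / (N - 1)
a = cov / var_c

# corrected samples: same expectation, lower variance
g_cv = [gi - a * (ci - c_mean) for gi, ci in zip(g, c)]

var_plain = sum((gi - mg) ** 2 for gi in g) / (N - 1)
m_cv = sum(g_cv) / N
var_cv = sum((x - m_cv) ** 2 for x in g_cv) / (N - 1)
# true gradient is 4*mu^3 + 12*mu = 16; var_cv is well below var_plain
```

    Because the control variate is built from the gradient's own generating procedure, it is highly correlated with the noisy sample by construction, which is what makes the variance reduction nearly free.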