
    Open Quantum Symmetric Simple Exclusion Process

    We introduce and solve a model of fermions hopping between neighbouring sites on a line with random Brownian amplitudes and open boundary conditions driving the system out of equilibrium. The average dynamics reduces to that of the symmetric simple exclusion process, but the full distribution encodes a richer behaviour, entailing fluctuating quantum coherences which survive in the steady limit. We determine the steady-state distribution of the system exactly. We show that these out-of-equilibrium quantum fluctuations satisfy a large deviation principle, and we present a method to compute the large deviation function exactly and recursively. Along the way, our approach yields a solution of the classical symmetric simple exclusion process using fermion technology. Our results open the route towards the extension of the macroscopic fluctuation theory to many-body quantum systems. Comment: 5 pages + SM, 2 figures
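
    For orientation, here is a minimal sketch of the kind of noisy hopping dynamics the abstract describes (the notation and normalization are assumptions on my part, not taken from the paper): the many-body evolution is generated by a stochastic Hamiltonian increment built from complex Brownian motions $W^j_t$ attached to the bonds,

        dH_t = \sum_j \left( c^\dagger_{j+1} c_j \, dW^j_t + c^\dagger_j c_{j+1} \, d\overline{W}^j_t \right),

    so that averaging over the noise produces a Lindblad-type evolution whose density sector reproduces the classical symmetric simple exclusion process, while the off-diagonal sector carries the fluctuating coherences mentioned above.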

    Estimating the Null and the Proportion of Non-Null Effects in Large-Scale Multiple Comparisons

    An important issue raised by Efron in the context of large-scale multiple comparisons is that, in many applications, the usual assumption that the null distribution is known is incorrect, and seemingly negligible differences in the null may result in large differences in subsequent studies. This suggests that a careful study of the estimation of the null is indispensable. In this paper, we consider the problem of estimating a null normal distribution, and a closely related problem, estimation of the proportion of non-null effects. We develop an approach based on the empirical characteristic function and Fourier analysis. The estimators are shown to be uniformly consistent over a wide class of parameters. Numerical performance of the estimators is investigated using both simulated and real data. In particular, we apply our procedure to the analysis of breast cancer and HIV microarray data sets. The estimators perform favorably in comparison to existing methods. Comment: 42 pages, 6 figures
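
    As a hedged illustration of the idea (a toy sketch, not the paper's estimator; the mixture parameters and the fixed frequency below are my own assumptions): under a mixture dominated by a normal null $N(\mu_0, \sigma_0^2)$, the empirical characteristic function is close to $(1-\epsilon)\exp(i\mu_0 t - \sigma_0^2 t^2/2)$ at moderate frequencies, so its modulus recovers $\sigma_0$ and its phase recovers $\mu_0$.

        import numpy as np

        rng = np.random.default_rng(0)

        # Sparse normal mixture: most observations from the null N(mu0, sigma0^2),
        # a small fraction of non-null effects centred away from the null.
        n, eps = 50_000, 0.05
        mu0, sigma0 = 0.5, 1.2
        is_null = rng.random(n) > eps
        x = np.where(is_null,
                     rng.normal(mu0, sigma0, n),
                     rng.normal(4.0, 1.0, n))

        def ecf(x, t):
            """Empirical characteristic function at frequency t."""
            return np.mean(np.exp(1j * t * x))

        # At a moderate frequency the null dominates the ECF, so
        # |phi(t)| ~ (1 - eps) * exp(-sigma0^2 t^2 / 2) and arg phi(t) ~ mu0 * t.
        t = 1.0
        phi = ecf(x, t)
        sigma0_hat = np.sqrt(-2.0 * np.log(np.abs(phi)) / t**2)  # mildly biased upward by eps
        mu0_hat = np.angle(phi) / t

        print(f"mu0_hat    = {mu0_hat:.3f}   (true {mu0})")
        print(f"sigma0_hat = {sigma0_hat:.3f}   (true {sigma0})")

    The estimators in the paper go further, choosing the frequency adaptively and controlling the contamination from the non-null component uniformly over a class of parameters; the snippet only shows why the empirical characteristic function isolates the null at suitable frequencies.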

    Optimal rates of convergence for estimating the null density and proportion of nonnull effects in large-scale multiple testing

    An important estimation problem that is closely related to large-scale multiple testing is that of estimating the null density and the proportion of nonnull effects. A few estimators have been introduced in the literature; however, several important problems, including the evaluation of the minimax rate of convergence and the construction of rate-optimal estimators, remain open. In this paper, we consider optimal estimation of the null density and the proportion of nonnull effects. Both minimax lower and upper bounds are derived. The lower bound is established by a two-point testing argument, at the core of which is the novel construction of two least favorable marginal densities $f_1$ and $f_2$. The density $f_1$ is heavy-tailed both in the spatial and frequency domains, and $f_2$ is a perturbation of $f_1$ such that the characteristic functions associated with $f_1$ and $f_2$ match each other at low frequencies. The minimax upper bound is obtained by constructing estimators which rely on the empirical characteristic function and Fourier analysis. The estimator is shown to be minimax rate optimal. Compared to existing methods in the literature, the proposed procedure not only provides more precise estimates of the null density and the proportion of the nonnull effects, but also yields more accurate results when used inside multiple testing procedures which aim at controlling the False Discovery Rate (FDR). The procedure is easy to implement and numerical results are given. Comment: Published in at http://dx.doi.org/10.1214/09-AOS696 the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
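
    The two-point argument behind such lower bounds can be sketched in the standard Le Cam form (the display below is the generic reduction, not the paper's precise bound): writing $\theta(f)$ for the functional of interest and $P_i^n$ for the $n$-fold product of $f_i$,

        \inf_{\hat\theta} \max_{i \in \{1,2\}} \mathbb{E}_{P_i^n} \big| \hat\theta - \theta(f_i) \big| \;\ge\; \frac{|\theta(f_1) - \theta(f_2)|}{4} \, \big( 1 - \mathrm{TV}(P_1^n, P_2^n) \big),

    so one wants $f_1$ and $f_2$ whose functionals are far apart while their $n$-fold products stay statistically close; matching the characteristic functions at low frequencies is exactly what keeps the distance between $P_1^n$ and $P_2^n$ small while the heavy tails push the functionals apart.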

    Equilibration of quantum cat states

    We study the equilibration properties of isolated ergodic quantum systems initially prepared in a cat state, i.e., a macroscopic quantum superposition of states. Our main result consists in showing that, even though decoherence is at work on average, a remnant of the initial quantum coherences survives in the strength of the fluctuations of the steady state. We back up our analysis with numerical results obtained on the XXX spin chain with a random field along the z-axis in the ergodic regime, and find good qualitative and quantitative agreement with the theory. We also present and discuss a framework where equilibrium quantities can be computed from general statistical ensembles without relying on microscopic details of the initial state, akin to the eigenstate thermalization hypothesis. Comment: 18 pages, 3 figures
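
    To make the "remnant coherence in the fluctuations" statement concrete, here is a toy exact-diagonalization sketch (the chain length, disorder strength, observable, and initial states are illustrative assumptions, not the paper's setup). For a nondegenerate spectrum, the long-time mean of an observable is the diagonal-ensemble value $\sum_n |c_n|^2 A_{nn}$, while the temporal variance of $\langle A(t) \rangle$ is $\sum_{m \neq n} |c_m|^2 |c_n|^2 |A_{mn}|^2$, which retains off-diagonal terms sensitive to the coherences of the initial superposition.

        import numpy as np

        rng = np.random.default_rng(1)

        # Spin-1/2 operators and a small XXX (Heisenberg) chain with random
        # z-fields, a toy stand-in for the ergodic model mentioned above.
        sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
        sy = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2
        sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2

        def site_op(op, j, L):
            """Embed a single-site operator at site j of an L-site chain."""
            out = np.array([[1.0 + 0j]])
            for k in range(L):
                out = np.kron(out, op if k == j else np.eye(2, dtype=complex))
            return out

        L = 8
        H = sum(site_op(s, j, L) @ site_op(s, j + 1, L)
                for j in range(L - 1) for s in (sx, sy, sz))
        H = H + sum(rng.uniform(-1, 1) * site_op(sz, j, L) for j in range(L))
        E, V = np.linalg.eigh(H)

        def product_state(ups):
            """Product state with spin up (True) or down (False) on each site."""
            v = np.array([1.0 + 0j])
            for up in ups:
                v = np.kron(v, np.array([1, 0], complex) if up else np.array([0, 1], complex))
            return v

        # Cat state: superposition of the two Neel states, which are
        # macroscopically distinct in their staggered magnetization.
        cat = (product_state([j % 2 == 0 for j in range(L)])
               + product_state([j % 2 == 1 for j in range(L)])) / np.sqrt(2)

        # Staggered magnetization distinguishes the two branches of the cat.
        M = sum((-1) ** j * site_op(sz, j, L) for j in range(L)) / L

        c = V.conj().T @ cat            # overlaps with the energy eigenstates
        A = V.conj().T @ M @ V          # observable in the energy eigenbasis
        p = np.abs(c) ** 2

        mean_inf = float(p @ np.diag(A).real)    # diagonal-ensemble mean
        Cmat = np.outer(c.conj(), c) * A         # c_m^* c_n A_mn
        fluct2 = float(np.sum(np.abs(Cmat) ** 2) - np.sum(np.abs(np.diag(Cmat)) ** 2))

        print("long-time mean of M:        ", mean_inf)
        print("temporal variance of <M(t)>:", fluct2)

    Comparing the fluctuation strength for the cat state with that of a single Neel branch (replace cat by one product_state) is the kind of diagnostic through which the surviving coherences would show up.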

    Estimation and confidence sets for sparse normal mixtures

    For high-dimensional statistical models, researchers have begun to focus on situations which can be described as having relatively few moderately large coefficients. Such situations lead to some very subtle statistical problems. In particular, Ingster, and subsequently Donoho and Jin, considered a sparse normal means testing problem in which they described the precise demarcation, or detection boundary. Meinshausen and Rice showed that it is even possible to estimate consistently the fraction of nonzero coordinates on a subset of the detectable region, but left unanswered the question of exactly where in the detectable region consistent estimation is possible. In the present paper we develop a new approach for estimating the fraction of nonzero means for problems where the nonzero means are moderately large. We show that the detection region described by Ingster and by Donoho and Jin turns out to be precisely the region where it is possible to consistently estimate the expected fraction of nonzero coordinates. This theory is developed further and minimax rates of convergence are derived. A procedure is constructed which attains the optimal rate of convergence in this setting. Furthermore, the procedure also provides an honest lower bound for confidence intervals while minimizing the expected length of such an interval. Simulations are used to enable comparison with the work of Meinshausen and Rice, where a procedure is given but rates of convergence are not discussed. Extensions to more general Gaussian mixture models are also given. Comment: Published in at http://dx.doi.org/10.1214/009053607000000334 the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
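
    For context, the calibration that is standard in this detection literature (stated here as background rather than quoted from the paper) takes observations from the sparse mixture $(1-\epsilon_n)N(0,1) + \epsilon_n N(\mu_n,1)$ with $\epsilon_n = n^{-\beta}$, $1/2 < \beta < 1$, and $\mu_n = \sqrt{2 r \log n}$. The detection boundary is then the curve $r = \rho^*(\beta)$,

        \rho^*(\beta) =
        \begin{cases}
          \beta - \tfrac{1}{2}, & \tfrac{1}{2} < \beta \le \tfrac{3}{4}, \\
          \big(1 - \sqrt{1-\beta}\,\big)^2, & \tfrac{3}{4} < \beta < 1,
        \end{cases}

    above which reliable detection is possible and below which it is not; the result summarized above identifies this same detectable region $r > \rho^*(\beta)$ as the region where the expected fraction of nonzero coordinates can be estimated consistently.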

    Solution to the Quantum Symmetric Simple Exclusion Process: the Continuous Case

    The Quantum Symmetric Simple Exclusion Process (Q-SSEP) is a model for the quantum stochastic dynamics of fermions hopping along the edges of a graph with Brownian noisy amplitudes, driven out of equilibrium by injection-extraction processes at a few vertices. We present a solution for the invariant probability measure of the one-dimensional Q-SSEP in the infinite-size limit by constructing the steady correlation functions of the system density matrix and of quantum expectation values. These correlation functions encode a rich structure of fluctuating quantum correlations and coherences. Although our construction does not rely on the standard techniques from the theory of integrable systems, it is based on a remarkable interplay between permutation groups and polynomials. We incidentally point out a possible combinatorial interpretation of the Q-SSEP correlation functions via a surprising connection with geometric combinatorics and the associahedron polytopes. Comment: 46 pages, 3 figures