
    Adaptive estimation of the density matrix in quantum homodyne tomography with noisy data

    In the framework of noisy quantum homodyne tomography with efficiency parameter $1/2 < \eta \leq 1$, we propose a novel estimator of a quantum state whose density matrix elements $\rho_{m,n}$ decrease like $C e^{-B(m+n)^{r/2}}$, for fixed $C \geq 1$, $B > 0$ and $0 < r \leq 2$. In contrast to previous works, we focus on the case where $r$, $C$ and $B$ are unknown. The procedure estimates the matrix coefficients by a projection method on the pattern functions, and then soft-thresholds the estimated coefficients. We prove that under the $\mathbb{L}_2$-loss our procedure is adaptive rate-optimal, in the sense that it achieves the same rate of convergence as the best possible procedure relying on the knowledge of $(r, B, C)$. The finite-sample behaviour of our adaptive procedure is explored through numerical experiments.
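    As a rough illustration of the thresholding step described above, the sketch below (in Python, with hypothetical names and an arbitrary threshold) soft-thresholds the entries of a preliminary coefficient estimate; the pattern-function projection that produces that estimate is not reproduced here.

        import numpy as np

        def soft_threshold(rho_hat, tau):
            """Soft-threshold estimated density-matrix coefficients: entries with
            modulus below tau are set to zero, larger ones are shrunk by tau."""
            mod = np.abs(rho_hat)
            shrunk = np.maximum(mod - tau, 0.0)
            # keep each coefficient's sign/phase, shrink its modulus
            phase = np.where(mod > 0, rho_hat / np.where(mod > 0, mod, 1.0), 0.0)
            return phase * shrunk

        # Toy usage: a noisy estimate of coefficients decaying like C*exp(-B*(m+n)^(r/2)).
        rng = np.random.default_rng(0)
        m = np.arange(8)
        true = np.exp(-0.5 * np.sqrt(m[:, None] + m[None, :]))
        noisy = true + 0.05 * rng.standard_normal(true.shape)
        print(soft_threshold(noisy, tau=0.1))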

    Rank penalized estimation of a quantum system

    We introduce a new method to reconstruct the density matrix $\rho$ of a system of $n$ qubits and estimate its rank $d$ from data obtained by quantum state tomography measurements repeated $m$ times. The procedure consists in minimizing the risk of a linear estimator $\hat{\rho}$ of $\rho$ penalized by a given rank (from 1 to $2^n$), where $\hat{\rho}$ is previously obtained by the moment method. We obtain simultaneously an estimator of the rank and the resulting density matrix associated with this rank. We establish an upper bound for the error of the penalized estimator, evaluated with the Frobenius norm, which is of order $dn(4/3)^n/m$, and consistency for the estimator of the rank. The proposed methodology is computationally efficient and is illustrated with some example states and real experimental data sets.
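    The rank-penalization idea can be sketched as follows (illustrative Python, not the paper's exact estimator or penalty constant): truncate the eigendecomposition of a preliminary Hermitian estimate at each candidate rank and keep the rank that minimizes the Frobenius risk plus a penalty proportional to the rank.

        import numpy as np

        def rank_penalized(rho_hat, penalty):
            """Return (d, rho_d) minimizing ||rho_hat - rho_d||_F^2 + penalty * d,
            where rho_d is the best rank-d approximation of the Hermitian rho_hat."""
            w, v = np.linalg.eigh(rho_hat)
            order = np.argsort(w)[::-1]           # eigenvalues, largest first
            w, v = w[order], v[:, order]
            best = None
            for d in range(1, len(w) + 1):
                rho_d = (v[:, :d] * w[:d]) @ v[:, :d].conj().T
                risk = np.linalg.norm(rho_hat - rho_d, "fro") ** 2 + penalty * d
                if best is None or risk < best[0]:
                    best = (risk, d, rho_d)
            return best[1], best[2]

        # Toy usage: a rank-2 state of dimension 4 plus small symmetric noise.
        rng = np.random.default_rng(1)
        psi = rng.standard_normal((4, 2))
        rho = psi @ psi.T / np.trace(psi @ psi.T)
        noise = rng.standard_normal((4, 4))
        d_hat, rho_est = rank_penalized(rho + 0.01 * (noise + noise.T) / 2, penalty=0.02)
        print("selected rank:", d_hat)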

    Time series prediction via aggregation: an oracle bound including numerical cost

    We address the problem of forecasting a time series satisfying the Causal Bernoulli Shift model, using a parametric set of predictors. The aggregation technique provides a predictor with well-established and quite satisfying theoretical properties, expressed by an oracle inequality for the prediction risk. The numerical computation of the aggregated predictor usually relies on a Markov chain Monte Carlo method whose convergence should be evaluated. In particular, it is crucial to bound the number of simulations needed to achieve a numerical precision of the same order as the prediction risk. In this direction we present a fairly general result which can be seen as an oracle inequality including the numerical cost of the predictor computation. The numerical cost appears by letting the oracle inequality depend on the number of simulations required in the Monte Carlo approximation. Some numerical experiments are then carried out to support our findings.
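    To make the aggregation step concrete, here is a generic exponential-weights sketch over a finite family of predictors (the temperature eta, the candidate predictors and the squared loss are illustrative choices; the Monte Carlo computation discussed above is not reproduced).

        import numpy as np

        def exponential_weights(predictions, y, eta):
            """Aggregate a finite family of predictors with exponential weights.
            predictions: array (k, T), one row per candidate predictor.
            y: observed series of length T.  eta: temperature (to be tuned).
            Returns the weighted-average prediction path."""
            losses = ((predictions - y) ** 2).sum(axis=1)     # cumulative squared loss
            weights = np.exp(-eta * (losses - losses.min()))  # stabilised exponentials
            weights /= weights.sum()
            return weights @ predictions

        # Toy usage: three crude one-step predictors of a noisy AR(1)-like series.
        rng = np.random.default_rng(2)
        T = 200
        y = np.zeros(T)
        for t in range(1, T):
            y[t] = 0.7 * y[t - 1] + 0.1 * rng.standard_normal()
        cands = np.vstack([0.5 * np.roll(y, 1), 0.7 * np.roll(y, 1), 0.9 * np.roll(y, 1)])
        agg = exponential_weights(cands, y, eta=1.0)
        print("aggregated mean squared loss:", ((agg - y) ** 2).mean())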

    Noisy Monte Carlo: Convergence of Markov chains with approximate transition kernels

    Monte Carlo algorithms often aim to draw from a distribution $\pi$ by simulating a Markov chain with transition kernel $P$ such that $\pi$ is invariant under $P$. However, there are many situations for which it is impractical or impossible to draw from the transition kernel $P$. For instance, this is the case with massive datasets, where it is prohibitively expensive to calculate the likelihood, and for intractable likelihood models arising from, for example, Gibbs random fields, such as those found in spatial statistics and network analysis. A natural approach in these cases is to replace $P$ by an approximation $\hat{P}$. Using theory on the stability of Markov chains, we explore a variety of situations where it is possible to quantify how 'close' the chain given by the transition kernel $\hat{P}$ is to the chain given by $P$. We apply these results to several examples from spatial statistics and network analysis. Comment: this version extends the results to non-uniformly ergodic Markov chains.
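    The substitution of $P$ by $\hat{P}$ can be illustrated with a random-walk Metropolis chain whose log-likelihood is replaced by a cheap subsampled approximation; the Gaussian target, proposal scale and subsample size below are assumptions made only for this sketch.

        import numpy as np

        rng = np.random.default_rng(3)

        def exact_loglik(theta, data):
            """Exact Gaussian log-likelihood (unit variance) -- the 'P' chain."""
            return -0.5 * np.sum((data - theta) ** 2)

        def approx_loglik(theta, data, m=50):
            """Subsampled log-likelihood rescaled to the full sample size --
            an illustrative approximation defining the 'P_hat' chain."""
            idx = rng.choice(len(data), size=m, replace=False)
            return len(data) / m * -0.5 * np.sum((data[idx] - theta) ** 2)

        def metropolis(loglik, data, n_iter=2000, step=0.1):
            """Random-walk Metropolis with the supplied log-likelihood, so the
            same code runs the exact and the approximate chain."""
            theta, chain = 0.0, np.empty(n_iter)
            ll = loglik(theta, data)
            for i in range(n_iter):
                prop = theta + step * rng.standard_normal()
                ll_prop = loglik(prop, data)
                if np.log(rng.random()) < ll_prop - ll:
                    theta, ll = prop, ll_prop
                chain[i] = theta
            return chain

        data = 1.5 + rng.standard_normal(5000)
        print("exact  posterior mean:", metropolis(exact_loglik, data)[1000:].mean())
        print("approx posterior mean:", metropolis(approx_loglik, data)[1000:].mean())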

    Rank-based model selection for multiple ions quantum tomography

    The statistical analysis of measurement data has become a key component of many quantum engineering experiments. As standard full state tomography becomes unfeasible for large dimensional quantum systems, one needs to exploit prior information and the "sparsity" properties of the experimental state in order to reduce the dimensionality of the estimation problem. In this paper we propose model selection as a general principle for finding the simplest, or most parsimonious, explanation of the data, by fitting different models and choosing the estimator with the best trade-off between likelihood fit and model complexity. We apply two well established model selection methods -- the Akaike information criterion (AIC) and the Bayesian information criterion (BIC) -- to models consisting of states of fixed rank and datasets such as are currently produced in multiple ions experiments. We test the performance of AIC and BIC on randomly chosen low rank states of 4 ions, and study the dependence of the selected rank on the number of measurement repetitions for one ion states. We then apply the methods to real data from a 4 ions experiment aimed at creating a Smolin state of rank 4. The two methods indicate that the optimal model for describing the data lies between ranks 6 and 9, and the Pearson $\chi^{2}$ test is applied to validate this conclusion. Additionally, we find that the mean square error of the maximum likelihood estimator for pure states is close to that of the optimal over all possible measurements. Comment: 24 pages, 6 figures, 3 tables.
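    A schematic of the AIC/BIC comparison across candidate ranks might look as follows (Python; the log-likelihoods are made up, and the parameter count d(2D - d) - 1 for a rank-d state of dimension D is used here as an assumption rather than taken from the paper).

        import numpy as np

        def aic(loglik, k):
            """Akaike information criterion: 2k - 2*loglik (lower is better)."""
            return 2 * k - 2 * loglik

        def bic(loglik, k, n):
            """Bayesian information criterion: k*log(n) - 2*loglik (lower is better)."""
            return k * np.log(n) - 2 * loglik

        def select_rank(logliks, dim, n_obs):
            """logliks maps candidate rank d to the maximised log-likelihood of
            the rank-d model; dim is the Hilbert-space dimension (e.g. 2**4 for
            4 ions).  Returns the ranks preferred by AIC and by BIC."""
            scores = {
                d: (aic(ll, d * (2 * dim - d) - 1),
                    bic(ll, d * (2 * dim - d) - 1, n_obs))
                for d, ll in logliks.items()
            }
            best_aic = min(scores, key=lambda d: scores[d][0])
            best_bic = min(scores, key=lambda d: scores[d][1])
            return best_aic, best_bic

        # Toy usage with fabricated log-likelihoods that flatten out after rank 4.
        fake = {d: -1000.0 + 60 * min(d, 4) + 0.5 * d for d in range(1, 9)}
        print(select_rank(fake, dim=16, n_obs=5000))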

    Enhanced Hypothalamic Glucose Sensing in Obesity: Alteration of Redox Signaling

    Objective: Recent data demonstrate that glucose sensing in different tissues is initiated by an intracellular redox-signaling pathway in physiological conditions. However, the relevance of such a mechanism in metabolic disease is not known. The aim of the present study was to determine whether the brain glucose hypersensitivity present in the obese Zucker rat is related to an alteration in redox signaling. Research Design and Methods: Alteration of brain glucose sensing was investigated in vivo through the evaluation of electrical activity in the arcuate nucleus, changes in ROS levels, and hypothalamic glucose-induced insulin secretion. In basal conditions, modifications of redox state and mitochondrial function were assessed through oxidized glutathione, glutathione peroxidase, manganese superoxide dismutase and aconitase activities, and mitochondrial respiration. Results: Hypothalamic hypersensitivity to glucose was characterized by enhanced electrical activity of the arcuate nucleus and increased insulin secretion at a low glucose concentration, which does not produce such an effect in normal rats. It was associated with 1) increased ROS levels in response to this low glucose load, 2) a constitutively oxidized environment coupled with lower antioxidant enzyme activity at both the cellular and mitochondrial level, and 3) over-expression of several mitochondrial subunits of the respiratory chain coupled with a global dysfunction in mitochondrial activity. Moreover, pharmacological restoration of the hypothalamic glutathione redox state by reduced-glutathione infusion in the third ventricle fully reversed the cerebral hypersensitivity to glucose. Conclusions: Altogether, these data demonstrate that the impaired hypothalamic glucose sensing of obese Zucker rats is linked to abnormal redox signaling, which originates from mitochondrial dysfunction.

    Revisiting clustering as matrix factorisation on the Stiefel manifold

    This paper studies clustering for possibly high dimensional data (e.g. images, time series, gene expression data, and many other settings), and rephrases it as low rank matrix estimation in the PAC-Bayesian framework. Our approach leverages the well-known Burer-Monteiro factorisation strategy from large scale optimisation, in the context of low rank estimation. Moreover, our Burer-Monteiro factors are shown to lie on a Stiefel manifold. We propose a new generalized Bayesian estimator for this problem and prove novel prediction bounds for clustering. We also devise a componentwise Langevin sampler on the Stiefel manifold to compute this estimator.
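    A rough sketch of the two ingredients mentioned above: a Burer-Monteiro-style factor kept on the Stiefel manifold by a QR retraction and updated with a Langevin-type step. The loss, step size and retraction choice are illustrative assumptions, not the paper's exact componentwise sampler.

        import numpy as np

        rng = np.random.default_rng(4)

        def qr_retract(U):
            """Map U back onto the Stiefel manifold (orthonormal columns) via QR,
            with a sign fix on the diagonal of R to make the factor unique."""
            Q, R = np.linalg.qr(U)
            signs = np.sign(np.diag(R))
            signs[signs == 0] = 1.0
            return Q * signs

        def langevin_step(U, X, step):
            """One Langevin-type update of U for the loss -tr(U^T X X^T U)
            (how much data energy the columns of U capture), then retract."""
            grad = -2.0 * X @ (X.T @ U)                      # Euclidean gradient
            noise = np.sqrt(2.0 * step) * rng.standard_normal(U.shape)
            return qr_retract(U - step * grad + noise)

        # Toy usage: 50 points in R^10 spread around 3 directions (3 "clusters").
        X = np.repeat(np.eye(10)[:, :3], [20, 15, 15], axis=1)
        X += 0.1 * rng.standard_normal(X.shape)
        U = qr_retract(rng.standard_normal((10, 3)))
        for _ in range(500):
            U = langevin_step(U, X, step=1e-4)
        print("unexplained energy:", np.linalg.norm(X - U @ (U.T @ X)))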