19 research outputs found

    Quantum SDP Solvers: Large Speed-Ups, Optimality, and Applications to Quantum Learning

    We give two new quantum algorithms for solving semidefinite programs (SDPs) that provide quantum speed-ups. We consider SDP instances with m constraint matrices, each of dimension n, rank at most r, and sparsity s. The first algorithm assumes an input model in which one is given oracle access to the entries of the matrices at unit cost. We show that it has run time O~(s^2 (sqrt{m} epsilon^{-10} + sqrt{n} epsilon^{-12})), with epsilon the error of the solution. This gives an optimal dependence on m and n, and a quadratic improvement over previous quantum algorithms (when m ~~ n). The second algorithm assumes a fully quantum input model in which the input matrices are given as quantum states. We show that its run time is O~(sqrt{m} + poly(r)) * poly(log m, log n, B, epsilon^{-1}), with B an upper bound on the trace norm of all input matrices. In particular, the complexity depends only poly-logarithmically on n and polynomially on r. We apply the second SDP solver to learn a good description of a quantum state with respect to a set of measurements: given m measurements and a supply of copies of an unknown state rho with rank at most r, we show we can find, in time sqrt{m} * poly(log m, log n, r, epsilon^{-1}), a description of the state as a quantum circuit preparing a density matrix that has the same expectation values as rho on the m measurements, up to error epsilon. The density matrix obtained is an approximation to the maximum-entropy state consistent with the measurement data, as considered in Jaynes' principle from statistical mechanics. As in previous work, we obtain our algorithms by "quantizing" classical SDP solvers based on the matrix multiplicative weight update method. One of our main technical contributions is a quantum Gibbs state sampler for low-rank Hamiltonians, given quantum states encoding these Hamiltonians, with a poly-logarithmic dependence on their dimension, based on ideas developed in quantum principal component analysis. We also develop a "fast" quantum OR lemma with a quadratic improvement in gate complexity over the construction of Harrow et al. [Harrow et al., 2017]. We believe both techniques may be of independent interest.
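The matrix multiplicative weight update method mentioned above maintains a density matrix proportional to the matrix exponential of the accumulated (negated) loss matrices. The following is a minimal illustrative sketch of that update rule, not code from the paper; the function name, step size, and toy losses are hypothetical.

```python
import numpy as np
from scipy.linalg import expm  # matrix exponential

def mmw_density(loss_matrices, eta=0.1):
    """Matrix multiplicative weights: at each round play the density
    matrix W_t / tr(W_t), where W_t = exp(-eta * sum of past losses)."""
    n = loss_matrices[0].shape[0]
    cumulative = np.zeros((n, n))
    states = []
    for M in loss_matrices:
        W = expm(-eta * cumulative)
        rho = W / np.trace(W)      # normalise to unit trace (a density matrix)
        states.append(rho)
        cumulative += M            # accumulate losses for the next round
    return states

# toy run: the same 2x2 loss matrix observed twice
losses = [np.diag([1.0, 0.0]), np.diag([1.0, 0.0])]
rhos = mmw_density(losses)
assert abs(rhos[0][0, 0] - 0.5) < 1e-9       # round 1: maximally mixed state
assert rhos[1][0, 0] < rhos[1][1, 1]         # weight shifts to the low-loss eigenvector
```

The quantum speed-up in the paper comes from implementing this exponential-of-Hamiltonians update via Gibbs state preparation rather than explicit matrix exponentiation.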

    Potential of the International Monitoring System radionuclide network for inverse modelling

    The International Monitoring System (IMS) radionuclide network enforces the Comprehensive Nuclear-Test-Ban Treaty, which bans nuclear explosions. We have evaluated the potential of the IMS radionuclide network for inverse modelling of the source, whereas it is usually assessed by its detection capability. To do so, we have chosen the degrees of freedom for the signal (DFS), a well-established criterion in remote sensing, to assess the performance of an inverse modelling system. Using a recent multiscale data assimilation technique, we have computed optimal adaptive grids of the source parameter space by maximising the DFS. This optimisation takes into account the monitoring network, the meteorology over one year (2009), and the relationship between the source parameters and the observations derived from the FLEXPART Lagrangian transport model. Areas of the domain where the grid-cells of the optimal adaptive grid are large emphasise zones where the retrieval is more uncertain, whereas areas where the grid-cells are smaller and denser stress regions where more source variables can be resolved. The observability of the globe through inverse modelling is studied in strong, realistic, and small model error cases. The strong and realistic error cases yield heterogeneous adaptive grids, indicating that information does not propagate far from the monitoring stations, whereas in the small error case the grid is much more homogeneous. In all cases, several specific continental regions remain poorly observed, such as Africa and the tropics, because of the trade winds. The northern hemisphere is better observed through inverse modelling (more than 60% of the total DFS), mostly because it contains more IMS stations. This imbalance leads to a better performance of inverse modelling in the northern hemisphere winter. The methodology is also applied to the subnetwork composed of the stations of the IMS network which measure noble gases.
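In the linear-Gaussian setting, the DFS criterion used above can be computed as the trace of the influence matrix H K, where H is the observation operator, B the prior (background) error covariance, and K the Kalman gain. The sketch below is a toy illustration of that formula under these standard assumptions, not the paper's multiscale implementation; the matrices are hypothetical.

```python
import numpy as np

def dfs(H, B, R):
    """Degrees of freedom for the signal: DFS = tr(H K),
    with K = B H^T (H B H^T + R)^{-1} the Kalman gain."""
    S = H @ B @ H.T + R
    K = B @ H.T @ np.linalg.inv(S)
    return np.trace(H @ K)

# toy setup: 3 source grid-cells observed by 2 stations
H = np.array([[1.0, 0.5, 0.0],
              [0.0, 0.5, 1.0]])   # observation operator (e.g. from a transport model)
B = np.eye(3)                     # prior error covariance of the source parameters
R_small = 0.01 * np.eye(2)        # small model/observation error
R_large = 100.0 * np.eye(2)       # large error: observations barely constrain the source

# smaller errors resolve more source variables, up to the number of observations
assert dfs(H, B, R_small) > dfs(H, B, R_large)
assert dfs(H, B, R_small) <= 2.0 + 1e-9
```

This mirrors the behaviour described in the abstract: with small model error the observations carry more resolvable information about the source, while with strong error the DFS collapses toward zero.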

    Loss of Ubiquitin Carboxy-Terminal Hydrolase L1 Impairs Long-Term Differentiation Competence and Metabolic Regulation in Murine Spermatogonial Stem Cells

    Spermatogonia are stem and progenitor cells responsible for maintaining mammalian spermatogenesis. Preserving the balance between self-renewal of spermatogonial stem cells (SSCs) and differentiation is critical for spermatogenesis and fertility. Ubiquitin carboxy-terminal hydrolase-L1 (UCH-L1) is highly expressed in spermatogonia of many species; however, its functional role has not been identified. Here, we aimed to understand the role of UCH-L1 in murine spermatogonia using a Uch-l1−/− mouse model. We confirmed that UCH-L1 is expressed in undifferentiated and early-differentiating spermatogonia in the post-natal mammalian testis. The Uch-l1−/− mice showed reduced testis weight and progressive degeneration of seminiferous tubules. Single-cell transcriptome analysis detected a dysregulated metabolic profile in spermatogonia of Uch-l1−/− mice compared to wild-type mice. Furthermore, cultured Uch-l1−/− SSCs had a decreased capacity to regenerate full spermatogenesis after transplantation in vivo and accelerated oxidative phosphorylation (OXPHOS) during maintenance in vitro. Together, these results indicate that the absence of UCH-L1 impairs the maintenance of SSC homeostasis and metabolism and compromises differentiation competence. Metabolic perturbations associated with loss of UCH-L1 appear to underlie a reduced capacity for supporting spermatogenesis and fertility with age. This work is a further step toward understanding the complex regulatory circuits underlying SSC function.

    Bedside estimates of dead space using end-tidal CO2 are independently associated with mortality in ARDS

    Abstract
    Purpose: In acute respiratory distress syndrome (ARDS), dead space fraction has been independently associated with mortality. We hypothesized that early measurement of the difference between arterial and end-tidal CO2 (arterial-ET difference), a surrogate for dead space fraction, would predict mortality in mechanically ventilated patients with ARDS.
    Methods: We performed two separate exploratory analyses. We first used publicly available databases from the ALTA, EDEN, and OMEGA ARDS Network trials (N = 124) as a derivation cohort to test our hypothesis. We then performed a separate retrospective analysis of patients with ARDS using University of Chicago patients (N = 302) as a validation cohort.
    Results: The ARDS Network derivation cohort demonstrated arterial-ET difference, vasopressor requirement, age, and APACHE III to be associated with mortality by univariable analysis. By multivariable analysis, only the arterial-ET difference remained significant (P = 0.047). In a separate analysis, the modified Enghoff equation ((PaCO2 − PETCO2)/PaCO2) was used in place of the arterial-ET difference and did not alter the results. The University of Chicago cohort found arterial-ET difference, age, ventilator mode, vasopressor requirement, and APACHE II to be associated with mortality in univariable analysis. By multivariable analysis, the arterial-ET difference continued to be predictive of mortality (P = 0.031). In the validation cohort, substitution of the arterial-ET difference for the modified Enghoff equation showed similar results.
    Conclusion: Arterial to end-tidal CO2 (ETCO2) difference is an independent predictor of mortality in patients with ARDS.
    http://deepblue.lib.umich.edu/bitstream/2027.42/173926/1/13054_2021_Article_3751.pd
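The modified Enghoff equation quoted in the abstract is a simple ratio, so the bedside calculation can be illustrated directly. This is a sketch with hypothetical input values, not data from either cohort.

```python
def enghoff_dead_space(paco2, petco2):
    """Modified Enghoff estimate of dead-space fraction:
    (PaCO2 - PETCO2) / PaCO2, using end-tidal CO2 as a bedside surrogate."""
    return (paco2 - petco2) / paco2

# hypothetical values in mmHg: a widening arterial-to-end-tidal gap
# raises the estimated dead-space fraction
assert enghoff_dead_space(40.0, 35.0) == 0.125   # 5 mmHg gap
assert abs(enghoff_dead_space(50.0, 30.0) - 0.4) < 1e-12   # 20 mmHg gap
```

The arterial-ET difference studied in the paper is just the numerator (PaCO2 − PETCO2); the Enghoff form normalizes it by PaCO2, which is why the two gave similar results in both cohorts.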