258 research outputs found
Propagation and smoothing of shocks in alternative social security systems
Even with well-developed capital markets, there is no private market mechanism for trading between current and future generations. This creates a potential role for public old-age pension systems to spread economic and demographic shocks across generations. This paper evaluates how different systems smooth and propagate shocks to productivity, fertility, mortality and migration in a realistic OLG model. Performance is measured by the reduction in the variance of wealth equivalents. We start from the existing U.S. system as a unifying framework, in which we vary how much taxes and benefits adjust, and then compare it with the existing German and Swedish systems. We find that system design and shock type are key factors. The German system and the benefit-adjustment-only U.S. system best smooth productivity shocks, which are by far the most important shocks. Overall, the German system performs best, while the Swedish system, which includes a buffer stock to relax annual budget constraints, performs rather poorly. Focusing on the U.S. system, relying solely on tax adjustment fares best for mortality and migration shocks, while relying equally on tax and benefit adjustments is best for fertility shocks.
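As an illustration of the smoothing metric described in the abstract, the following minimal Python sketch computes the reduction in the cross-scenario variance of generational wealth equivalents under one pension rule relative to another. The array layout, the baseline rule, and the simulated numbers are assumptions for illustration only, not the paper's model or data.

```python
import numpy as np

def variance_reduction(wealth_eq_policy, wealth_eq_baseline):
    """Share of the cross-scenario variance of generational wealth equivalents
    removed by a pension rule, relative to a baseline rule.

    Both inputs have the assumed shape (n_scenarios, n_generations)."""
    var_policy = np.var(wealth_eq_policy, axis=0).mean()    # average over generations
    var_baseline = np.var(wealth_eq_baseline, axis=0).mean()
    return 1.0 - var_policy / var_baseline

# Hypothetical wealth equivalents under 1,000 simulated shock paths.
rng = np.random.default_rng(0)
baseline = 1.0 + 0.10 * rng.standard_normal((1000, 5))   # rule that adjusts benefits only
smoother = 1.0 + 0.06 * rng.standard_normal((1000, 5))   # rule that pools shocks more broadly
print(f"variance reduction: {variance_reduction(smoother, baseline):.2f}")
```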
Near-optimal quantum tomography: Estimators and bounds
We give bounds on the average fidelity achievable by any quantum state estimator; average fidelity is arguably the most prominently used figure of merit in quantum state tomography. Moreover, these bounds can be computed online, that is, while the experiment is running. We show numerically that these bounds are quite tight for relevant distributions of density matrices. We also show that the Bayesian mean estimator is ideal in the sense that it performs close to the bound without requiring optimization. Our results hold for all finite-dimensional quantum systems.
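To make the figure of merit and the estimator concrete, here is a minimal Python sketch: it forms a Bayesian mean estimate from prior samples weighted by the measurement likelihood and then Monte Carlo averages the fidelity over Haar-random true states. The single-qubit setting, the fixed Z-basis measurement (which is informationally incomplete and chosen only for brevity), and all numbers are assumptions for illustration, not the paper's protocol.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_pure_qubit(n):
    """Haar-random pure qubit states as (n, 2) complex amplitude vectors."""
    v = rng.standard_normal((n, 2)) + 1j * rng.standard_normal((n, 2))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def bayesian_mean_estimate(counts_up, shots, prior_states):
    """Bayesian mean estimator sketch: weight prior samples by the binomial
    likelihood of the observed Z-basis counts, then average the density matrices."""
    p_up = np.abs(prior_states[:, 0]) ** 2                  # Born probability of outcome +1
    log_like = counts_up * np.log(p_up + 1e-12) + (shots - counts_up) * np.log(1 - p_up + 1e-12)
    w = np.exp(log_like - log_like.max())
    w /= w.sum()
    rhos = np.einsum('ni,nj->nij', prior_states, prior_states.conj())
    return np.einsum('n,nij->ij', w, rhos)

# Monte Carlo estimate of the average fidelity achieved by the estimator.
shots, n_prior, n_trials = 50, 4000, 200
fids = []
for psi in random_pure_qubit(n_trials):
    counts_up = rng.binomial(shots, np.abs(psi[0]) ** 2)
    rho_hat = bayesian_mean_estimate(counts_up, shots, random_pure_qubit(n_prior))
    fids.append(np.real(psi.conj() @ rho_hat @ psi))        # fidelity with a pure true state
print(f"average fidelity over Haar-random pure states: {np.mean(fids):.3f}")
```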
Recovering Quantum Gates from Few Average Gate Fidelities
Characterizing quantum processes is a key task in the development of quantum technologies, especially at the noisy intermediate scale of today's devices. One method for characterizing processes is randomized benchmarking, which is robust against state preparation and measurement errors and can be used to benchmark Clifford gates. Compressed sensing techniques achieve full tomography of quantum channels essentially at optimal resource efficiency. In this Letter, we show that the favorable features of both approaches can be combined. For characterizing multiqubit unitary gates, we provide a rigorously guaranteed and practical reconstruction method that works with an essentially optimal number of average gate fidelities measured with respect to random Clifford unitaries. Moreover, for general unital quantum channels, we provide an explicit expansion into a unitary 2-design, allowing for a practical and guaranteed reconstruction also in that case. As a side result, we obtain a new statistical interpretation of the unitarity, a figure of merit characterizing the coherence of a process.
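As context for the quantity the reconstruction consumes, the following Python sketch estimates the average gate fidelity of a noisy implementation against a target unitary by Monte Carlo over Haar-random input states. The single-qubit Hadamard target and the depolarizing noise model are illustrative assumptions; this is not the Letter's compressed-sensing reconstruction itself.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 2                                                      # single-qubit example

def haar_state(d):
    v = rng.standard_normal(d) + 1j * rng.standard_normal(d)
    return v / np.linalg.norm(v)

def depolarized_gate(rho, U, p):
    """Hypothetical noisy implementation of the target gate U: apply U,
    then depolarize with probability p (stand-in for the unknown channel)."""
    return (1 - p) * U @ rho @ U.conj().T + p * np.eye(d) / d

# Average gate fidelity F_avg = \int dpsi <psi| U^dag Lambda(|psi><psi|) U |psi>,
# estimated by averaging over Haar-random input states.
U = np.array([[1, 1], [1, -1]]) / np.sqrt(2)               # target: Hadamard
p = 0.1
samples = []
for _ in range(20000):
    psi = haar_state(d)
    rho_out = depolarized_gate(np.outer(psi, psi.conj()), U, p)
    ideal = U @ psi
    samples.append(np.real(ideal.conj() @ rho_out @ ideal))
print(f"Monte Carlo F_avg: {np.mean(samples):.4f}  (analytic: {1 - p * (1 - 1/d):.4f})")
```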
Investigation of Different Library Preparation and Tissue of Origin Deconvolution Methods for Urine and Plasma cfDNA Methylome Analysis.
Methylation sequencing is a promising approach to infer the tissue of origin of cell-free DNA (cfDNA). In this study, a single- and a double-stranded library preparation approach were evaluated with respect to their technical biases when applied to cfDNA from plasma and urine. Additionally, tissue of origin (TOO) proportions were evaluated using two deconvolution methods. Sequencing cfDNA from urine using the double-stranded method resulted in a substantial within-read methylation bias and a lower global methylation (56.0% vs. 75.8%, p ≤ 0.0001) compared to plasma cfDNA, neither of which was observed with the single-stranded approach. Individual CpG site-based TOO deconvolution resulted in a significantly increased proportion of undetermined TOO with the double-stranded method (urine: 32.3% vs. 1.9%; plasma: 5.9% vs. 0.04%; p ≤ 0.0001), but no major differences in proportions of individual cell types. In contrast, fragment-level deconvolution led to multiple cell types with significantly different TOO proportions between the two methods. This study thus outlines potential limitations of double-stranded library preparation for methylation analysis of cfDNA, especially urinary cfDNA. While the double-stranded method allows jagged end analysis in addition to TOO analysis, it leads to significant methylation bias in urinary cfDNA, which single-stranded methods can overcome.
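For orientation, a common way to frame CpG-site-based tissue-of-origin deconvolution is as a non-negative regression of the sample's methylation profile against a reference atlas. The Python sketch below uses non-negative least squares; the function name, toy atlas, and the way the "undetermined" remainder is reported are illustrative assumptions and not necessarily the methods evaluated in the study.

```python
import numpy as np
from scipy.optimize import nnls

def deconvolve_tissue_of_origin(cfdna_methylation, reference_atlas):
    """Sketch of CpG-site-based TOO deconvolution via non-negative least squares.

    cfdna_methylation : (n_cpgs,) average methylation per CpG site in the sample
    reference_atlas   : (n_cpgs, n_tissues) methylation of each CpG in each tissue
    Returns tissue proportions and an 'undetermined' remainder."""
    proportions, _ = nnls(reference_atlas, cfdna_methylation)
    if proportions.sum() > 1:
        proportions /= proportions.sum()
    return proportions, max(0.0, 1.0 - proportions.sum())

# Hypothetical toy atlas: 6 CpG sites, 3 tissues; sample = 70% tissue A + 30% tissue C.
atlas = np.array([[0.9, 0.1, 0.2], [0.8, 0.2, 0.1], [0.1, 0.9, 0.8],
                  [0.2, 0.8, 0.9], [0.5, 0.5, 0.1], [0.1, 0.1, 0.9]])
sample = atlas @ np.array([0.7, 0.0, 0.3])
proportions, undetermined = deconvolve_tissue_of_origin(sample, atlas)
print(proportions, undetermined)
```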
Comparison of methods for donor-derived cell-free DNA quantification in plasma and urine from solid organ transplant recipients.
In allograft monitoring of solid organ transplant recipients, liquid biopsy has emerged as a novel approach using quantification of donor-derived cell-free DNA (dd-cfDNA) in plasma. Despite early clinical implementation and analytical validation of techniques, direct comparisons of dd-cfDNA quantification methods are lacking. Furthermore, data on dd-cfDNA in urine are scarce, and high-throughput sequencing-based methods so far have not leveraged unique molecular identifiers (UMIs) for absolute dd-cfDNA quantification. Different dd-cfDNA quantification approaches were compared in urine and plasma of kidney and liver recipients: A) droplet digital PCR (ddPCR) using allele-specific detection of seven common HLA-DRB1 alleles and the Y chromosome; B) high-throughput sequencing (HTS) using a custom QIAseq DNA panel targeting 121 common polymorphisms; and C) a commercial dd-cfDNA quantification method (AlloSeq® cfDNA, CareDx). Dd-cfDNA was quantified as %dd-cfDNA, and for ddPCR and HTS using UMIs additionally as donor copies. In addition, relative and absolute dd-cfDNA levels in urine and plasma were compared in clinically stable recipients. The HTS method presented here showed a strong correlation of the %dd-cfDNA with ddPCR (R² = 0.98) and AlloSeq® cfDNA (R² = 0.99), displaying only minimal to no proportional bias. Absolute dd-cfDNA copies also correlated strongly (τ = 0.78) between HTS with UMIs and ddPCR, albeit with substantial proportional bias (slope: 0.25; 95%-CI: 0.19–0.26). Among 30 stable kidney transplant recipients, the median %dd-cfDNA in urine was 39.5% (interquartile range, IQR: 21.8–58.5%) with 36.6 copies/μmol urinary creatinine (IQR: 18.4–109), and 0.19% (IQR: 0.01–0.43%) with 5.0 copies/ml (IQR: 1.8–12.9) in plasma, without any correlation between body fluids. The median %dd-cfDNA in plasma from eight stable liver recipients was 2.2% (IQR: 0.72–4.1%) with 120 copies/ml (IQR: 85.0–138), while the median dd-cfDNA copies/ml was below 0.1 in urine. This first head-to-head comparison of methods for absolute and relative quantification of dd-cfDNA in urine and plasma supports a method-independent %dd-cfDNA cutoff and indicates the suitability of the presented HTS method for absolute dd-cfDNA quantification using UMIs. To evaluate the utility of dd-cfDNA in urine for allograft surveillance, absolute levels instead of relative amounts will most likely be required, given the extensive variability of %dd-cfDNA in stable kidney recipients.
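To illustrate the difference between the relative and the absolute quantities compared in the abstract, here is a minimal Python sketch: %dd-cfDNA from donor- versus recipient-informative counts, and absolute copies per ml using UMI consensus families as molecule counts. The function names, parameters, and example numbers are assumptions for illustration, not the validated pipelines from the study.

```python
def dd_cfdna_fraction(donor_reads, recipient_reads):
    """Relative quantification: %dd-cfDNA from read (or droplet) counts at
    markers that distinguish donor from recipient."""
    return 100.0 * donor_reads / (donor_reads + recipient_reads)

def dd_cfdna_copies_per_ml(donor_umi_families, input_volume_ml, analyzed_fraction=1.0):
    """Absolute quantification sketch: with UMIs, each consensus family counts one
    original donor cfDNA molecule; scale by the fraction of the extract analyzed
    and normalize to the input volume of body fluid."""
    total_copies = donor_umi_families / analyzed_fraction
    return total_copies / input_volume_ml

# Hypothetical example roughly in the range reported for stable kidney recipients:
print(f"{dd_cfdna_fraction(donor_reads=19, recipient_reads=9981):.2f} %dd-cfDNA")
print(f"{dd_cfdna_copies_per_ml(donor_umi_families=25, input_volume_ml=5):.1f} copies/ml")
```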
Randomizing multi-product formulas for Hamiltonian simulation
Quantum simulation, the simulation of quantum processes on quantum computers, suggests a path forward for the efficient simulation of problems in condensed-matter physics, quantum chemistry, and materials science. While the majority of quantum simulation algorithms are deterministic, a recent surge of ideas has shown that randomization can greatly benefit algorithmic performance. In this work, we introduce a scheme for quantum simulation that unites the advantages of randomized compiling on the one hand and higher-order multi-product formulas, as used for example in linear-combination-of-unitaries (LCU) algorithms or quantum error mitigation, on the other hand. In doing so, we propose a framework of randomized sampling that is expected to be useful for programmable quantum simulators and present two new multi-product formula algorithms tailored to it. Our framework reduces the circuit depth by circumventing the need for the oblivious amplitude amplification required when multi-product formulas are implemented with standard LCU methods, rendering it especially useful for early quantum computers used to estimate the dynamics of quantum systems rather than to perform full-fledged quantum phase estimation. Our algorithms achieve a simulation error that shrinks exponentially with the circuit depth. To corroborate their functioning, we prove rigorous performance bounds as well as the concentration of the randomized sampling procedure. We demonstrate the approach for several physically meaningful examples of Hamiltonians, including fermionic systems and the Sachdev-Ye-Kitaev model, for which the method provides a favorable scaling in the effort.
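To make the multi-product idea and its randomized-sampling view concrete, the following Python sketch compares a second-order Trotter formula, a two-term multi-product (Richardson-type) combination, and a signed importance-sampled average of the two branches, on a small random Hamiltonian. The toy Hamiltonian, the two-term combination, and the matrix-level averaging are illustrative assumptions, not the algorithms proposed in the paper.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)

def rand_herm(d):
    M = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    return (M + M.conj().T) / 2

# Toy Hamiltonian H = A + B on a 4-dimensional space (stand-in for a lattice model).
d, t = 4, 0.5
A, B = rand_herm(d), rand_herm(d)
exact = expm(-1j * (A + B) * t)

def trotter2(tau):
    """Second-order (symmetric) Trotter step e^{-iA tau/2} e^{-iB tau} e^{-iA tau/2}."""
    return expm(-1j * A * tau / 2) @ expm(-1j * B * tau) @ expm(-1j * A * tau / 2)

def trotter2_steps(m):
    return np.linalg.matrix_power(trotter2(t / m), m)

# Two-term multi-product formula: U_MPF = -1/3 * S2(t)^1 + 4/3 * S2(t/2)^2.
coeffs, steps = np.array([-1/3, 4/3]), [1, 2]
branches = [trotter2_steps(m) for m in steps]
U_mpf = sum(c * Uk for c, Uk in zip(coeffs, branches))

# Randomized-sampling view: draw branch k with probability |c_k| / ||c||_1; the
# signed, rescaled average reproduces U_mpf in expectation.
probs = np.abs(coeffs) / np.abs(coeffs).sum()
draws = rng.choice(len(coeffs), size=2000, p=probs)
U_rand = np.abs(coeffs).sum() * np.mean(
    [np.sign(coeffs[k]) * branches[k] for k in draws], axis=0)

err = lambda U: np.linalg.norm(U - exact, 2)
print(f"Trotter S2(t/2)^2 error : {err(branches[1]):.2e}")
print(f"MPF error               : {err(U_mpf):.2e}")
print(f"randomized MPF error    : {err(U_rand):.2e}")
```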
Quantum advantage in learning from experiments
Quantum technology has the potential to revolutionize how we acquire and process experimental data to learn about the physical world. An experimental setup that transduces data from a physical system to a stable quantum memory, and processes that data using a quantum computer, could have significant advantages over conventional experiments in which the physical system is measured and the outcomes are processed using a classical computer. We prove that, in various tasks, quantum machines can learn from exponentially fewer experiments than those required in conventional experiments. The exponential advantage holds in predicting properties of physical systems, performing quantum principal component analysis on noisy states, and learning approximate models of physical dynamics. In some tasks, the quantum processing needed to achieve the exponential advantage can be modest; for example, one can simultaneously learn about many noncommuting observables by processing only two copies of the system. Conducting experiments with up to 40 superconducting qubits and 1300 quantum gates, we demonstrate that a substantial quantum advantage can be realized using today's relatively noisy quantum processors. Our results highlight how quantum technology can enable powerful new strategies to learn about nature.
Comment: 6 pages, 17 figures + 46 page appendix; open-source code available at https://github.com/quantumlib/ReCirq/tree/master/recirq/qml_lf
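As a small illustration of the "two copies" remark, the Python sketch below simulates a Bell-basis measurement on two copies of a single-qubit state: the four outcome probabilities determine all three squared Pauli expectations at once, whereas a single-copy strategy must pick one basis per shot. The single-qubit setting and the numpy simulation are assumptions for illustration; this is not the paper's 40-qubit experiment (see the linked repository for that code).

```python
import numpy as np

rng = np.random.default_rng(4)

# Pauli matrices and the four Bell-state vectors on two qubits.
I = np.eye(2); X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]]); Z = np.diag([1.0, -1.0]).astype(complex)
bell = np.array([[1, 0, 0, 1], [1, 0, 0, -1],
                 [0, 1, 1, 0], [0, 1, -1, 0]], dtype=complex) / np.sqrt(2)
projectors = [np.outer(b, b.conj()) for b in bell]

# On rho (x) rho, p(bell_j) = (1 + s_jx <X>^2 + s_jy <Y>^2 + s_jz <Z>^2) / 4.
signs = np.array([[+1, -1, +1],    # |Phi+>
                  [-1, +1, +1],    # |Phi->
                  [+1, +1, -1],    # |Psi+>
                  [-1, -1, -1]])   # |Psi->

# A single-qubit state with noncommuting expectation values to learn.
r = np.array([0.5, -0.3, 0.6])                      # Bloch vector
rho = (I + r[0] * X + r[1] * Y + r[2] * Z) / 2

# Exact Bell-basis outcome distribution on two copies, then finite sampling.
p = np.real([np.trace(P @ np.kron(rho, rho)) for P in projectors])
counts = rng.multinomial(2000, p)
p_hat = counts / counts.sum()

# Invert the linear relation to recover all three squared expectations at once.
A = np.hstack([np.ones((4, 1)) / 4, signs / 4])
est = np.linalg.lstsq(A, p_hat, rcond=None)[0][1:]
print("estimated (<X>^2, <Y>^2, <Z>^2):", np.round(est, 3))
print("true      (<X>^2, <Y>^2, <Z>^2):", np.round(r**2, 3))
```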
Bevacizumab continuation versus no continuation after first-line chemotherapy plus bevacizumab in patients with metastatic colorectal cancer: a randomized phase III non-inferiority trial (SAKK 41/06)
In this trial, stopping bevacizumab after completion of induction chemotherapy was associated with a shorter time to progression, but no statistically significant difference in overall survival, compared with the bevacizumab continuation strategy. Non-inferiority could not be demonstrated. Treatment costs are substantially higher for continuous bevacizumab treatment.
What Shall I Do Next? Intention Mining for Flexible Process Enactment
Besides the benefits of flexible processes, practical implementations of process-aware information systems have also revealed difficulties encountered by process participants during enactment. Several support and guidance solutions based on process mining have been proposed, but they lack a suitable semantics for human reasoning and decision making, as they mainly rely on low-level activities. Applying design science, we created FlexPAISSeer, an intention-mining-oriented approach, with its component artifacts: 1) IntentMiner, which discovers the intentional model of the executable process in an unsupervised manner; 2) IntentRecommender, which generates recommendations as intentions and confidence factors, based on the mined intentional process model and probabilistic calculus. The artifacts were evaluated in a case study with a Netherlands software company, using a Childcare system that allows flexible data-driven process enactment.
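To give a flavor of the recommendation idea, the Python sketch below builds a first-order transition model over intentions from enactment traces and returns candidate next intentions with confidence factors. The Markov assumption, the given intention labels, and the toy childcare traces are illustrative assumptions; the actual IntentMiner works unsupervised from low-level activities and IntentRecommender uses a richer probabilistic calculus.

```python
from collections import Counter, defaultdict

def mine_intention_transitions(traces):
    """Toy stand-in for a mined intentional model: count transitions between
    (already labelled) intentions observed in enactment traces."""
    counts = defaultdict(Counter)
    for trace in traces:
        for current, nxt in zip(trace, trace[1:]):
            counts[current][nxt] += 1
    return counts

def recommend_next(counts, current_intention, top_k=2):
    """IntentRecommender-style output sketch: candidate next intentions with a
    confidence factor (here simply the empirical transition probability)."""
    total = sum(counts[current_intention].values())
    ranked = counts[current_intention].most_common(top_k)
    return [(intention, round(c / total, 2)) for intention, c in ranked]

# Hypothetical childcare-process traces expressed at the intention level.
traces = [
    ["register child", "assess needs", "plan placement", "confirm placement"],
    ["register child", "assess needs", "request documents", "plan placement"],
    ["register child", "request documents", "assess needs", "plan placement"],
]
print(recommend_next(mine_intention_transitions(traces), "assess needs"))
```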