60 research outputs found
The Bethe Ansatz as a Quantum Circuit
The Bethe ansatz represents an analytical method enabling the exact solution
of numerous models in condensed matter physics and statistical mechanics. When
a global symmetry is present, the trial wavefunctions of the Bethe ansatz
consist of plane-wave superpositions. It has previously been shown that the
Bethe ansatz can be recast as a deterministic quantum circuit; however, an
analytical derivation of the quantum gates that form the circuit was lacking. Here
we present a comprehensive study of the transformation that brings the Bethe
ansatz into a quantum circuit, which leads us to determine the analytical
expression of the circuit gates. As a crucial step of the derivation, we
present a simple set of diagrammatic rules that define a novel Matrix Product
State network building Bethe wavefunctions. Remarkably, this provides a new
perspective on the equivalence between the coordinate and algebraic versions of
the Bethe ansatz
Resource frugal optimizer for quantum machine learning
Quantum-enhanced data science, also known as quantum machine learning (QML),
is of growing interest as an application of near-term quantum computers.
Variational QML algorithms have the potential to solve practical problems on
real hardware, particularly when involving quantum data. However, training
these algorithms can be challenging and calls for tailored optimization
procedures. Specifically, QML applications can require a large shot-count
overhead due to the large datasets involved. In this work, we advocate for
simultaneous random sampling over both the dataset as well as the measurement
operators that define the loss function. We consider a highly general loss
function that encompasses many QML applications, and we show how to construct
an unbiased estimator of its gradient. This allows us to propose a shot-frugal
gradient descent optimizer called Refoqus (REsource Frugal Optimizer for
QUantum Stochastic gradient descent). Our numerics indicate that Refoqus can
save several orders of magnitude in shot cost, even relative to optimizers that
sample over measurement operators alone.
Comment: 22 pages, 6 figures; extra quantum autoencoder results added
Unifying and benchmarking state-of-the-art quantum error mitigation techniques
Error mitigation is an essential component of achieving practical quantum
advantage in the near term, and a number of different approaches have been
proposed. In this work, we recognize that many state-of-the-art error
mitigation methods share a common feature: they are data-driven, employing
classical data obtained from runs of different quantum circuits. For example,
zero-noise extrapolation (ZNE) uses variable-noise data and Clifford-data
regression (CDR) uses data from near-Clifford circuits. We show that Virtual
Distillation (VD) can be viewed in a similar manner by considering classical
data produced from different numbers of state preparations. Observing this fact
allows us to unify these three methods under a general data-driven error
mitigation framework that we call UNIfied Technique for Error mitigation with
Data (UNITED). In certain situations, we find that our UNITED method can
outperform the individual methods (i.e., the whole is better than the
individual parts). Specifically, we employ a realistic noise model obtained
from a trapped ion quantum computer to benchmark UNITED, as well as
state-of-the-art methods, for problems with various numbers of qubits, circuit
depths and total numbers of shots. We find that different techniques are
optimal for different shot budgets. Namely, ZNE is the best performer for small
shot budgets, Clifford-based approaches are optimal for larger shot budgets,
and for the largest shot budget considered, UNITED gives the most accurate
correction. Hence, our work represents a benchmarking of current error
mitigation methods, and provides a guide for the regimes in which certain
methods are most useful.
Comment: 13 pages, 4 figures
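Of the three data-driven methods unified above, ZNE is the simplest to sketch: measure an observable at several amplified noise levels, fit the trend, and extrapolate to the zero-noise limit. The snippet below is a minimal illustration on synthetic data, assuming (for the demo only) a linear decay of the expectation value with the noise scale; it is not the UNITED framework itself.

```python
import numpy as np

def zne_extrapolate(scale_factors, expectations, degree=1):
    """Fit a polynomial to expectation values measured at amplified noise
    levels and evaluate it at zero noise (Richardson-style extrapolation)."""
    coeffs = np.polyfit(scale_factors, expectations, deg=degree)
    return np.polyval(coeffs, 0.0)

# Synthetic experiment: assume the noisy expectation decays linearly in the
# noise scale lam, <O>(lam) = O_ideal - a * lam, plus small shot noise.
rng = np.random.default_rng(1)
o_ideal, slope = 0.8, 0.25
lams = np.array([1.0, 1.5, 2.0, 3.0])      # noise amplification factors
meas = o_ideal - slope * lams + rng.normal(0, 1e-3, lams.size)

o_zne = zne_extrapolate(lams, meas)
print(o_zne)  # close to the ideal value 0.8
```

CDR and VD fit the same template with different classical data: near-Clifford circuit outputs and multi-copy state preparations, respectively, which is what makes the unified treatment possible.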
The battle of clean and dirty qubits in the era of partial error correction
When error correction becomes possible it will be necessary to dedicate a
large number of physical qubits to each logical qubit. Error correction allows
for deeper circuits to be run, but each additional physical qubit can
potentially contribute an exponential increase in computational space, so there
is a trade-off between using qubits for error correction or using them as noisy
qubits. In this work we look at the effects of using noisy qubits in
conjunction with noiseless qubits (an idealized model for error-corrected
qubits), which we call the "clean and dirty" setup. We employ analytical models
and numerical simulations to characterize this setup. Numerically we show the
appearance of Noise-Induced Barren Plateaus (NIBPs), i.e., an exponential
concentration of observables caused by noise, in an Ising model Hamiltonian
variational ansatz circuit. We observe this even if only a single qubit is
noisy and given a deep enough circuit, suggesting that NIBPs cannot be fully
overcome simply by error-correcting a subset of the qubits. On the positive
side, we find that for every noiseless qubit in the circuit, there is an
exponential suppression in concentration of gradient observables, showing the
benefit of partial error correction. Finally, our analytical models corroborate
these findings by showing that observables concentrate with a scaling in the
exponent related to the ratio of dirty to total qubits.
Comment: 27 pages, 15 figures; (v2) minor changes
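The exponential concentration driving NIBPs can be seen in a deliberately stripped-down cartoon (not the paper's Hamiltonian-variational-ansatz simulations): applying a depolarizing channel to a single "dirty" qubit once per layer shrinks its observable signal geometrically with circuit depth.

```python
import numpy as np

def depolarize(rho, p):
    """Single-qubit depolarizing channel: with probability p, replace the
    state by the maximally mixed state."""
    return (1 - p) * rho + p * np.eye(2) / 2

Z = np.diag([1.0, -1.0])
rho = np.array([[1.0, 0.0], [0.0, 0.0]])   # |0><0|, so <Z> = 1 initially

p, depth = 0.1, 30
signals = []
for layer in range(depth):
    rho = depolarize(rho, p)               # one noisy layer on the dirty qubit
    signals.append(np.trace(rho @ Z).real)

# <Z> after L layers is exactly (1 - p)**L: exponential decay in depth.
print(signals[4], (1 - p) ** 5)
```

In the clean-and-dirty setup the decay rate is set by the noisy qubits only, which is consistent with the abstract's finding that each noiseless qubit exponentially suppresses the concentration.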
Inference-Based Quantum Sensing
In a standard Quantum Sensing (QS) task one aims at estimating an unknown
parameter, encoded into a multi-qubit probe state, via measurements of the
system. The success of this task hinges on the ability to correlate changes
in the parameter to changes in the system response (i.e., changes in the
measurement outcomes). For simple cases the form of the response function is
known, but the same cannot be said for realistic scenarios, as no general
closed-form expression exists. In this work we present an inference-based
scheme for QS. We show that, for a general class of unitary families of
encoding, the response function can be fully characterized by measuring the
system response at only a small number of parameter values. In turn, this
allows us to infer the value of an unknown parameter given the measured
response, as well as to determine the sensitivity of the sensing scheme,
which characterizes its overall performance. We show that the inference error
is, with high probability, smaller than a prescribed tolerance, given a
number of shots that scales favorably with the problem size. Furthermore, the
framework presented can be broadly applied, as it remains valid for arbitrary
probe states and measurement schemes, and even holds in the presence of
quantum noise. We also discuss how to extend our results beyond unitary
families. Finally, to showcase our method we implement it for a QS task on
real quantum hardware, and in numerical simulations.
Comment: 5+10 pages, 3+5 figures
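For the simplest unitary family, a single parameter entering through one rotation generator with two eigenvalues, the response is known to be a sinusoid, R(t) = A sin(t + phi) + C, so three measured points pin it down completely. The sketch below illustrates this special case only (it is not the paper's general construction): it characterizes a hidden response from measurements at three angles and then predicts it anywhere.

```python
import numpy as np

def characterize_response(f):
    """Reconstruct a sinusoidal response R(t) = A*sin(t + phi) + C from
    measurements at just three angles: -pi/2, 0, +pi/2."""
    f_m, f_0, f_p = f(-np.pi / 2), f(0.0), f(np.pi / 2)
    c = (f_p + f_m) / 2                     # offset C
    a_sin = f_0 - c                         # A * sin(phi)
    a_cos = (f_p - f_m) / 2                 # A * cos(phi)
    # R(t) = a_cos*sin(t) + a_sin*cos(t) + c by the angle-addition formula
    return lambda t: a_cos * np.sin(t) + a_sin * np.cos(t) + c

# Hidden "true" response of the sensor, unknown to the inference scheme.
def true_response(t):
    return 0.7 * np.sin(t + 0.3) - 0.1

model = characterize_response(true_response)
print(model(1.234), true_response(1.234))  # reconstructed ≈ true everywhere
```

Once the response curve is characterized, an unknown parameter can be inferred by inverting the curve on a monotonic branch, and its slope directly gives the scheme's sensitivity at that operating point.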
Late-Stage Metastatic Melanoma Emerges through a Diversity of Evolutionary Pathways
Understanding the evolutionary pathways to metastasis and resistance to immune-checkpoint inhibitors (ICI) in melanoma is critical for improving outcomes. Here, we present the most comprehensive intrapatient metastatic melanoma dataset assembled to date as part of the Posthumous Evaluation of Advanced Cancer Environment (PEACE) research autopsy program, including 222 exome-sequenced, 493 panel-sequenced, 161 RNA-sequenced, and 22 single-cell whole-genome-sequenced samples from 14 ICI-treated patients. We observed frequent whole-genome doubling and widespread loss of heterozygosity, often involving antigen-presentation machinery. We found that KIT extrachromosomal DNA may have contributed to the lack of response to KIT inhibitors of a KIT-driven melanoma. At the lesion level, MYC amplifications were enriched in ICI nonresponders. Single-cell sequencing revealed polyclonal seeding of metastases originating from clones with different ploidy in one patient. Finally, we observed that brain metastases that diverged early in molecular evolution emerge late in disease. Overall, our study illustrates the diverse evolutionary landscape of advanced melanoma. SIGNIFICANCE: Despite treatment advances, melanoma remains a deadly disease at stage IV. Through research autopsy and dense sampling of metastases combined with extensive multiomic profiling, our study elucidates the many mechanisms that melanomas use to evade treatment and the immune system, whether through mutations, widespread copy-number alterations, or extrachromosomal DNA. See related commentary by Shain, p. 1294. This article is highlighted in the In This Issue feature, p. 1275.
Basic science 232. Certolizumab pegol prevents pro-inflammatory alterations in endothelial cell function
Background: Cardiovascular disease is a major comorbidity of rheumatoid arthritis (RA) and a leading cause of death. Chronic systemic inflammation involving tumour necrosis factor alpha (TNF) could contribute to endothelial activation and atherogenesis. A number of anti-TNF therapies are in current use for the treatment of RA, including certolizumab pegol (CZP) (Cimzia®; UCB, Belgium). Anti-TNF therapy has been associated with reduced clinical cardiovascular disease risk and ameliorated vascular function in RA patients. However, the specific effects of TNF inhibitors on endothelial cell function are largely unknown. Our aim was to investigate the mechanisms underpinning CZP effects on TNF-activated human endothelial cells. Methods: Human aortic endothelial cells (HAoECs) were cultured in vitro and exposed to a) TNF alone, b) TNF plus CZP, or c) neither agent. Microarray analysis was used to examine the transcriptional profile of cells treated for 6 hrs, and quantitative polymerase chain reaction (qPCR) analysed gene expression at 1, 3, 6 and 24 hrs. NF-κB localization and IκB degradation were investigated using immunocytochemistry, high content analysis and western blotting. Flow cytometry was conducted to detect microparticle release from HAoECs. Results: Transcriptional profiling revealed that while TNF alone had strong effects on endothelial gene expression, TNF and CZP in combination produced a global gene expression pattern similar to untreated control. The two most highly up-regulated genes in response to TNF treatment were the adhesion molecules E-selectin and VCAM-1 (q < 0.2 compared to control; p > 0.05 compared to TNF alone). The NF-κB pathway was confirmed as a downstream target of TNF-induced HAoEC activation, via nuclear translocation of NF-κB and degradation of IκB, effects which were abolished by treatment with CZP.
In addition, flow cytometry detected an increased production of endothelial microparticles in TNF-activated HAoECs, which was prevented by treatment with CZP. Conclusions: We have found at a cellular level that a clinically available TNF inhibitor, CZP, reduces adhesion molecule expression and prevents TNF-induced activation of the NF-κB pathway. Furthermore, CZP prevents the production of microparticles by activated endothelial cells. This could be central to the prevention of inflammatory environments underlying these conditions, and measurement of microparticles has potential as a novel prognostic marker for future cardiovascular events in this patient group. Disclosure statement: Y.A. received a research grant from UCB. I.B. received a research grant from UCB. S.H. received a research grant from UCB. All other authors have declared no conflicts of interest.
Search for dark matter produced in association with bottom or top quarks in √s = 13 TeV pp collisions with the ATLAS detector
A search for weakly interacting massive particle dark matter produced in association with bottom or top quarks is presented. Final states containing third-generation quarks and missing transverse momentum are considered. The analysis uses 36.1 fb⁻¹ of proton–proton collision data recorded by the ATLAS experiment at √s = 13 TeV in 2015 and 2016. No significant excess of events above the estimated backgrounds is observed. The results are interpreted in the framework of simplified models of spin-0 dark-matter mediators. For colour-neutral spin-0 mediators produced in association with top quarks and decaying into a pair of dark-matter particles, mediator masses below 50 GeV are excluded assuming a dark-matter candidate mass of 1 GeV and unitary couplings. For scalar and pseudoscalar mediators produced in association with bottom quarks, the search sets limits on the production cross-section of 300 times the predicted rate for mediators with masses between 10 and 50 GeV and assuming a dark-matter mass of 1 GeV and unitary coupling. Constraints on colour-charged scalar simplified models are also presented. Assuming a dark-matter particle mass of 35 GeV, mediator particles with mass below 1.1 TeV are excluded for couplings yielding a dark-matter relic density consistent with measurements.
Correction to: Cluster identification, selection, and description in Cluster randomized crossover trials: the PREP-IT trials
An amendment to this paper has been published and can be accessed via the original article.
- …