47 research outputs found
Gradients and frequency profiles of quantum re-uploading models
Quantum re-uploading models have been extensively investigated as a form of machine learning within the context of variational quantum algorithms. Their trainability and expressivity are not yet fully understood and are critical to their performance. In this work, we address trainability through the lens of the magnitude of the gradients of the cost function. We prove bounds on the differences between the gradients of the better-studied data-less parameterized quantum circuits and those of re-uploading models, and we coin the concept of the absorption witness to quantify this difference. Regarding expressivity, we prove that quantum re-uploading models output functions with vanishing high-frequency components and upper-bounded derivatives with respect to the data. As a consequence, such functions present limited sensitivity to fine details, which protects against overfitting. We performed numerical experiments extending the theoretical results to more relaxed and realistic conditions. Overall, future designs of quantum re-uploading models will benefit from the strengthened understanding delivered by the uncovering of absorption witnesses and vanishing high frequencies.
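As an illustration of the frequency profile discussed above, the following numpy-only sketch simulates a minimal single-qubit re-uploading model (a toy construction assumed for illustration, not the circuits analyzed in the paper) and inspects the Fourier spectrum of its output:

import numpy as np

# Single-qubit observable and rotation gates
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def ry(a):
    return np.array([[np.cos(a / 2), -np.sin(a / 2)],
                     [np.sin(a / 2),  np.cos(a / 2)]], dtype=complex)

def rz(a):
    return np.array([[np.exp(-1j * a / 2), 0],
                     [0, np.exp(1j * a / 2)]], dtype=complex)

def model(x, thetas):
    # f(x) = <0| U(x)^dag Z U(x) |0>, with one re-upload of the datum x per layer
    state = np.array([1, 0], dtype=complex)
    for t1, t2 in thetas:
        state = rz(t2) @ ry(t1 + x) @ state
    return float(np.real(state.conj() @ Z @ state))

rng = np.random.default_rng(0)
L = 4                                           # number of re-uploading layers
thetas = rng.uniform(0, 2 * np.pi, size=(L, 2))

xs = np.linspace(0, 2 * np.pi, 256, endpoint=False)
fx = np.array([model(x, thetas) for x in xs])
spectrum = np.abs(np.fft.rfft(fx)) / len(xs)    # frequency profile of the model output

# Coefficients beyond frequency L are numerically zero: the toy model is band-limited,
# mirroring the vanishing high-frequency components discussed in the abstract.
print(np.round(spectrum[:9], 4))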
Analyzing variational quantum landscapes with information content
The parameters of the quantum circuit in a variational quantum algorithm
induce a landscape that contains the relevant information regarding its
optimization hardness. In this work we investigate such landscapes through the
lens of information content, a measure of the variability between points in
parameter space. Our major contribution connects the information content to the
average norm of the gradient, for which we provide robust analytical bounds on
its estimators. This result holds for any (classical or quantum) variational
landscape. We validate the analytical understanding by numerically studying the scaling of the gradient in an instance of the barren plateau problem, in which we are able to estimate the scaling pre-factors of the gradient. Our
work provides a new way to analyze variational quantum algorithms in a
data-driven fashion well-suited for near-term quantum computers.
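For intuition, the sketch below evaluates the information content on a toy classical landscape using the standard fitness-landscape definition: the base-6 entropy of distinct consecutive "slope symbols" observed along a random walk. The estimator and the bounds relating it to the average gradient norm in the paper may differ in their details; the names and parameters here are illustrative assumptions only:

import numpy as np

def information_content(costs, eps):
    # Base-6 entropy of pairs of distinct consecutive slope symbols in {-1, 0, +1},
    # computed from cost values sampled along a walk in parameter space.
    diffs = np.diff(costs)
    symbols = np.where(np.abs(diffs) <= eps, 0, np.sign(diffs)).astype(int)
    n = len(symbols) - 1                      # number of consecutive symbol pairs
    if n <= 0:
        return 0.0
    counts = {}
    for a, b in zip(symbols[:-1], symbols[1:]):
        if a != b:                            # only transitions carry information
            counts[(a, b)] = counts.get((a, b), 0) + 1
    if not counts:
        return 0.0
    probs = np.array(list(counts.values())) / n
    return float(-np.sum(probs * np.log(probs) / np.log(6)))

# Toy classical landscape: a random quadratic cost evaluated along a random walk
rng = np.random.default_rng(1)
d = 20
A = rng.standard_normal((d, d))
A = A @ A.T / d                               # positive semi-definite cost matrix
theta = rng.standard_normal(d)
costs = []
for _ in range(2000):
    theta = theta + 0.05 * rng.standard_normal(d)
    costs.append(theta @ A @ theta)
costs = np.array(costs)

for eps in (1e-3, 1e-2, 1e-1, 1.0):
    print(f"eps = {eps:g}: H(eps) = {information_content(costs, eps):.3f}")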
Measuring the tangle of three-qubit states
We present a quantum circuit that transforms an unknown three-qubit state into its canonical form, up to relative phases, given many copies of the original state. The circuit is made of three single-qubit parametrized quantum gates, and the optimal values of the parameters are learned in a variational fashion. Once this transformation is achieved, direct measurement of the outcome probabilities in the computational basis provides an estimate of the tangle, which quantifies genuine tripartite entanglement. We perform simulations on a set of random states under different noise conditions to assess the validity of the method.
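The tangle of a pure three-qubit state also has a closed-form expression in the state amplitudes (the Coffman-Kundu-Wootters residual tangle), which can serve as a classical reference value when validating the variational estimate in simulation. A minimal numpy sketch, independent of the circuit described above:

import numpy as np

def three_tangle(psi):
    # 3-tangle of a pure three-qubit state from its amplitudes a[i, j, k],
    # via the Coffman-Kundu-Wootters hyperdeterminant expression.
    a = np.asarray(psi, dtype=complex).reshape(2, 2, 2)
    d1 = (a[0,0,0]**2 * a[1,1,1]**2 + a[0,0,1]**2 * a[1,1,0]**2
          + a[0,1,0]**2 * a[1,0,1]**2 + a[1,0,0]**2 * a[0,1,1]**2)
    d2 = (a[0,0,0] * a[1,1,1] * (a[0,1,1] * a[1,0,0] + a[1,0,1] * a[0,1,0] + a[1,1,0] * a[0,0,1])
          + a[0,1,1] * a[1,0,0] * (a[1,0,1] * a[0,1,0] + a[1,1,0] * a[0,0,1])
          + a[1,0,1] * a[0,1,0] * a[1,1,0] * a[0,0,1])
    d3 = (a[0,0,0] * a[1,1,0] * a[1,0,1] * a[0,1,1]
          + a[1,1,1] * a[0,0,1] * a[0,1,0] * a[1,0,0])
    return float(4 * np.abs(d1 - 2 * d2 + 4 * d3))

# GHZ state: maximal tangle 1; W state: tangle 0.
ghz = np.zeros(8)
ghz[0] = ghz[7] = 1 / np.sqrt(2)
w = np.zeros(8)
w[1] = w[2] = w[4] = 1 / np.sqrt(3)
print(three_tangle(ghz), three_tangle(w))   # -> 1.0 0.0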
Reducing effect of chloride on the dissolution of black copper
Oxidized black copper ores are known for the difficulty of dissolving their components of interest through conventional methods, owing to their non-crystalline, amorphous structure. Among these minerals, copper pitch and copper wad are of great interest because of their considerable concentrations of copper and manganese. Currently, these minerals are either not incorporated into the extraction circuits or are left untreated, whether in stockpiles, leach pads, or waste. To recover their main elements of interest (Cu and Mn), it is necessary to use reducing agents that dissolve the MnO2 present while allowing the recovery of Cu. In this research, results for the dissolution of Mn and Cu from a black copper mineral are presented, evaluating the reducing effect of NaCl on MnO2 through an agglomeration and curing pre-treatment, followed by leaching under standard conditions with the use of a reducing agent (Fe2+). High chloride concentrations in the agglomeration process and prolonged curing times would favor the reduction of MnO2, increasing the dissolution of Mn, whereas the addition of NaCl did not benefit Cu extraction. Under standard conditions, low Mn extractions were obtained, while in an acid-reducing medium a significant dissolution of MnO2 was achieved, which supports the removal of Cu.
C-reactive protein, an inflammatory marker associated with ANCA in pulmonary tuberculosis
Background: C-reactive protein is one of the inflammatory markers known as acute-phase reactants, produced in the liver in response to infectious or inflammatory processes. The formation of antineutrophil cytoplasmic antibodies (ANCA) has been described in patients with tuberculosis.
Objective: to determine the concentration of C-reactive protein, evaluate its behavior as a marker of the inflammatory response, and analyze its correlation with ANCA in patients with pulmonary tuberculosis, before and after starting antituberculosis treatment. Patients: patients with suspected pulmonary tuberculosis were selected. Once the diagnosis was confirmed, serum samples were obtained to analyze the clinical and laboratory data. ANCA were determined with commercial immunofluorescence kits and C-reactive protein with ELISA, before and after starting antituberculosis treatment. Results: 50 serum samples were obtained from patients with pulmonary tuberculosis. In the first (94%) and second (90%) serum collections, a C-reactive protein value below 5 mg/L was recorded. The mean C-reactive protein value was 3.05 ± 8.27 mg/L in the first sample and 4.49 ± 11.2 mg/L in the second (p = 0.46). ANCA-positive patients had higher C-reactive protein values in their second sample (p = 0.001).
Discussion: there is an association between C-reactive protein and the production of antineutrophil cytoplasmic antibodies in a subgroup of patients with pulmonary tuberculosis. Its significance is uncertain, but these antibodies may play some pathogenic role in the pulmonary inflammatory response.
Quantum unary approach to option pricing
We present a quantum algorithm for European option pricing in finance, where
the key idea is to work in the unary representation of the asset value. The
algorithm needs novel circuitry and is divided into three parts: first, the
amplitude distribution corresponding to the asset value at maturity is
generated using a low-depth circuit; second, the expected return is computed with simple controlled gates; and third, standard Amplitude
Estimation is used to gain quantum advantage. On the positive side, unary
representation remarkably simplifies the structure and depth of the quantum
circuit. The amplitude distribution uses quantum superposition to bypass the role
of classical Monte Carlo simulation. The unary representation also provides a
post-selection consistency check that allows for substantial mitigation of the error in the computation. On the negative side, the unary representation
requires linearly many qubits to represent a target probability distribution,
as compared to the logarithmic scaling of binary algorithms. We compare the
performance of the unary and binary option pricing algorithms using error
maps, and find that unary representation may bring a relevant advantage in
practice for near-term devices.
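For reference, the following numpy sketch computes the classical quantities that the unary circuit encodes: a discretized distribution of the asset value at maturity, one probability per unary bin (one qubit per bin), and the resulting expected payoff of a European call. The parameter values and the discretization choice are illustrative assumptions, not the paper's exact construction:

import numpy as np

# Illustrative market parameters (assumptions): spot, strike, rate, volatility, maturity
S0, K, r, sigma, T = 2.0, 1.9, 0.05, 0.4, 0.1
n_bins = 8                                   # one qubit per bin in the unary encoding

# Discretize the asset value at maturity around the spot price
std = sigma * np.sqrt(T)
mu = np.log(S0) + (r - 0.5 * sigma**2) * T   # lognormal parameters of S_T
S = np.linspace(S0 * (1 - 3 * std), S0 * (1 + 3 * std), n_bins)

# Bin probabilities from the lognormal density, then normalized
p = np.exp(-(np.log(S) - mu)**2 / (2 * std**2)) / (S * std * np.sqrt(2 * np.pi))
p /= p.sum()

# The unary algorithm prepares bin i (the basis state with a single 1 on qubit i)
# with amplitude sqrt(p_i) using a low-depth circuit.
amplitudes = np.sqrt(p)

# European call payoff per bin and its discounted expectation
payoff = np.maximum(S - K, 0.0)
price = np.exp(-r * T) * np.sum(p * payoff)
print(f"discretized option price with {n_bins} unary bins: {price:.4f}")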
A RISC-V simulator and benchmark suite for designing and evaluating vector architectures
Vector architectures lack tools for research. Consider the gem5 simulator, which is possibly the leading platform for computer-system architecture research. Unfortunately, gem5 does not have an available distribution that includes a flexible and customizable vector architecture model. As a consequence, researchers have to develop their own simulation platforms to test their ideas, which consumes much research time. However, once the base simulator platform is developed, another question arises: which applications should be used to perform the experiments? The lack of vectorized benchmark suites is another limitation. To face these problems, this work presents a set of tools for designing and evaluating vector architectures. First, the gem5 simulator was extended to support the execution of RISC-V vector instructions by adding a parameterizable vector architecture model that lets designers evaluate different approaches according to the target they pursue. Second, a novel Vectorized Benchmark Suite is presented: a collection of seven data-parallel applications from different domains that can be classified according to the modules they stress in the vector architecture. Finally, a study of the Vectorized Benchmark Suite running on the gem5-based vector architecture model is highlighted. This suite is the first in its category that covers the different usage scenarios that may occur in different vector architecture designs, such as embedded systems, mainly focused on short vectors, or high-performance computing (HPC), usually designed for large vectors. This work is partially supported by CONACyT Mexico under Grant No. 472106 and the DRAC project, which is co-financed by the European Union Regional Development Fund within the framework of the ERDF Operational Program of Catalonia 2014-2020 with a grant of 50% of the total eligible cost.