    Transfer learning in hybrid classical-quantum neural networks

    We extend the concept of transfer learning, widely applied in modern machine learning algorithms, to the emerging context of hybrid neural networks composed of classical and quantum elements. We propose different implementations of hybrid transfer learning, but we focus mainly on the paradigm in which a pre-trained classical network is modified and augmented by a final variational quantum circuit. This approach is particularly attractive in the current era of intermediate-scale quantum technology, since it allows one to optimally pre-process high-dimensional data (e.g., images) with any state-of-the-art classical network and to embed a select set of highly informative features into a quantum processor. We present several proof-of-concept examples of the convenient application of quantum transfer learning for image recognition and quantum state classification. We use the cross-platform software library PennyLane to experimentally test a high-resolution image classifier with two different quantum computers, provided by IBM and Rigetti, respectively.
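
    A minimal sketch of the classical-to-quantum transfer-learning pattern described above, assuming a PyTorch/PennyLane stack: a frozen, pre-trained classical network (ResNet18 here, an illustrative choice) compresses each image into a handful of features, which a small variational quantum circuit then classifies. The qubit count, circuit depth, and two-class output head are placeholder settings, not the values used in the paper.

```python
import torch
import torchvision.models as models
import pennylane as qml

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev, interface="torch")
def quantum_head(inputs, weights):
    # Encode the classical features as rotation angles, then apply a
    # trainable entangling circuit and read out Pauli-Z expectations.
    qml.AngleEmbedding(inputs, wires=range(n_qubits))
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

weight_shapes = {"weights": (3, n_qubits, 3)}  # 3 variational layers

# Frozen pre-trained feature extractor; its final layer is replaced so that it
# emits n_qubits features (torchvision >= 0.13 weights API assumed).
backbone = models.resnet18(weights="IMAGENET1K_V1")
for p in backbone.parameters():
    p.requires_grad = False
backbone.fc = torch.nn.Linear(backbone.fc.in_features, n_qubits)  # trainable reducer

model = torch.nn.Sequential(
    backbone,
    torch.nn.Tanh(),                                  # keep embedding angles bounded
    qml.qnn.TorchLayer(quantum_head, weight_shapes),  # variational quantum head
    torch.nn.Linear(n_qubits, 2),                     # illustrative 2-class output
)
```

    Because the quantum layer exposes its circuit parameters as ordinary torch parameters, the whole model can be trained end-to-end with standard PyTorch optimizers while the backbone stays frozen.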

    Machine learning and the physical sciences

    Machine learning encompasses a broad range of algorithms and modeling tools used for a vast array of data-processing tasks, and it has entered most scientific disciplines in recent years. We review, in a selective way, recent research on the interface between machine learning and the physical sciences. This includes conceptual developments in machine learning (ML) motivated by physical insights, applications of machine learning techniques to several domains in physics, and cross-fertilization between the two fields. After giving a basic notion of machine learning methods and principles, we describe examples of how statistical physics is used to understand methods in ML. We then move on to describe applications of ML methods in particle physics and cosmology, quantum many-body physics, quantum computing, and chemical and materials physics. We also highlight research and development into novel computing architectures aimed at accelerating ML. In each of these sections we describe recent successes as well as domain-specific methodology and challenges.

    Solving a Higgs optimization problem with quantum annealing for machine learning

    The discovery of Higgs-boson decays in a background of standard-model processes was assisted by machine learning methods. The classifiers used to separate signals such as these from background are trained using highly realistic but imperfect simulations of the physical processes involved, often resulting in incorrect labelling of background processes or signals (label noise) and systematic errors. Here we use quantum and classical annealing (probabilistic techniques for approximating the global maximum or minimum of a given function) to solve a Higgs-signal-versus-background machine learning optimization problem, mapped to the problem of finding the ground state of a corresponding Ising spin model. We build a set of weak classifiers based on the kinematic observables of the Higgs decay photons, which we then use to construct a strong classifier. This strong classifier is highly resilient against overtraining and against errors in the correlations of the physical observables in the training data. We show that the resulting quantum and classical annealing-based classifier systems perform comparably to the state-of-the-art machine learning methods currently used in particle physics. However, in contrast to these methods, the annealing-based classifiers are simple functions of directly interpretable experimental parameters with clear physical meaning. The annealer-trained classifiers use the excited states in the vicinity of the ground state and demonstrate some advantage over traditional machine learning methods for small training datasets. Given the relative simplicity of the algorithm and its robustness to error, this technique may find application in other areas of experimental particle physics, such as real-time decision making in event-selection problems and classification in neutrino physics.
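
    A toy sketch of the weak-to-strong classifier construction outlined above, assuming the usual quadratic (QUBO/Ising) encoding: binary weights over a pool of weak classifiers are chosen to minimize a squared training error plus a regularization term. The data below are synthetic, and the ground state is found by exhaustive search, which stands in for the quantum or classical annealer at this toy size.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n_weak, n_events = 8, 500
y = rng.choice([-1.0, 1.0], size=n_events)                 # signal/background labels
c = np.sign(rng.normal(y, 2.0, size=(n_weak, n_events)))   # noisy +/-1 weak classifiers

# Expanding sum_t (sum_i w_i c_i(x_t) - y_t)^2 + lam * sum_i w_i over binary w
# gives a QUBO with couplings Q_ij and linear terms h_i (constants dropped).
lam = 0.5
Q = c @ c.T
h = -2.0 * (c @ y) + lam

def qubo_energy(w):
    return w @ Q @ w + h @ w

# Exhaustive ground-state search is feasible at this toy size; a quantum or
# simulated annealer plays this role for realistic numbers of weak classifiers.
best_w = min((np.array(bits) for bits in itertools.product([0, 1], repeat=n_weak)),
             key=qubo_energy)

strong = np.sign(best_w @ c)          # strong classifier: vote of the selected weak ones
print("selected weights:", best_w)
print("training accuracy:", float(np.mean(strong == y)))
```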

    Quantum circuits with many photons on a programmable nanophotonic chip

    Growing interest in quantum computing for practical applications has led to a surge in the availability of programmable machines for executing quantum algorithms. Present-day photonic quantum computers have been limited either to non-deterministic operation, low photon numbers and rates, or fixed random gate sequences. Here we introduce a full-stack hardware-software system for executing many-photon quantum circuits using integrated nanophotonics: a programmable chip, operating at room temperature and interfaced with a fully automated control system. It enables remote users to execute quantum algorithms requiring up to eight modes of strongly squeezed vacuum initialized as two-mode squeezed states in single temporal modes, a fully general and programmable four-mode interferometer, and genuine photon-number-resolving readout on all outputs. Multi-photon detection events, with photon numbers and rates exceeding any previous quantum-optical demonstration on a programmable device, are made possible by strong squeezing and high sampling rates. We verify the non-classicality of the device output, and use the platform to carry out proof-of-principle demonstrations of three quantum algorithms: Gaussian boson sampling, molecular vibronic spectra, and graph similarity.
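
    A hedged sketch of an eight-mode Gaussian boson sampling program of the kind described above (two-mode squeezed inputs, a programmable four-mode interferometer, photon-number readout), written with the Strawberry Fields library and run on its local Gaussian simulator rather than the photonic chip itself. The squeezing value and the random interferometer are illustrative stand-ins for the device settings reported in the paper.

```python
import strawberryfields as sf
from strawberryfields import ops
from strawberryfields.utils import random_interferometer

modes = 8
prog = sf.Program(modes)

# A random 4x4 unitary standing in for the programmable interferometer.
U = random_interferometer(4)

with prog.context as q:
    # Two-mode squeezed vacuum between mode i and mode i+4, as on the 8-mode device.
    for i in range(4):
        ops.S2gate(0.6) | (q[i], q[i + 4])
    # The same interferometer applied to the signal and idler halves.
    ops.Interferometer(U) | q[:4]
    ops.Interferometer(U) | q[4:]
    # Photon-number-resolving measurement on all outputs.
    ops.MeasureFock() | q

eng = sf.Engine("gaussian")
samples = eng.run(prog, shots=10).samples
print(samples)  # one row of photon-number outcomes per shot
```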

    Convex optimization of programmable quantum computers

    A fundamental model of quantum computation is the programmable quantum gate array. This is a quantum processor fed by a program state that induces a corresponding quantum operation on input states. Despite being programmable, any finite-dimensional design of this model is known to be non-universal, meaning that the processor cannot perfectly simulate an arbitrary quantum channel over the input. Characterizing how close the simulation is and finding the optimal program state have been open questions for the past 20 years. Here, we answer these questions by showing that the search for the optimal program state is a convex optimization problem that can be solved via semidefinite programming and the gradient-based methods commonly employed for machine learning. We apply this general result to different types of processors, from a shallow design based on quantum teleportation to deeper schemes relying on port-based teleportation and parametric quantum circuits.
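
    A minimal sketch of the convex structure described above, assuming a CVXPY formulation: the program state is a density matrix (Hermitian, positive semidefinite, unit trace), and a placeholder linear cost tr(C pi) stands in for the paper's simulation-error functional, which is more involved but has the same semidefinite-programming form.

```python
import numpy as np
import cvxpy as cp

d = 4                                   # dimension of the program register (toy choice)
rng = np.random.default_rng(1)
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
C = A @ A.conj().T                      # placeholder Hermitian, PSD cost operator

pi = cp.Variable((d, d), hermitian=True)
constraints = [pi >> 0, cp.trace(pi) == 1]          # pi must be a valid density matrix
objective = cp.Minimize(cp.real(cp.trace(C @ pi)))  # stand-in simulation-error cost

problem = cp.Problem(objective, constraints)
problem.solve(solver=cp.SCS)

print("optimal cost:", problem.value)
print("optimal program state:\n", np.round(pi.value, 3))
```

    For this toy linear cost the optimum is simply the projector onto the lowest eigenvector of C; the point of the sketch is only the shape of the problem: a convex objective over the set of density matrices, amenable to standard SDP solvers or gradient methods.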

    Global transpiration data from sap flow measurements: The SAPFLUXNET database

    Plant transpiration links physiological responses of vegetation to water supply and demand with hydrological, energy, and carbon budgets at the land-atmosphere interface. However, despite being the main land evaporative flux at the global scale, transpiration and its response to environmental drivers are currently not well constrained by observations. Here we introduce the first global compilation of whole-plant transpiration data from sap flow measurements (SAPFLUXNET, https://sapfluxnet.creaf.cat/, last access: 8 June 2021). We harmonized and quality-controlled individual datasets supplied by contributors worldwide in a semi-automatic data workflow implemented in the R programming language. Datasets include sub-daily time series of sap flow and hydrometeorological drivers for one or more growing seasons, as well as metadata on the stand characteristics, plant attributes, and technical details of the measurements. SAPFLUXNET contains 202 globally distributed datasets with sap flow time series for 2714 plants, mostly trees, of 174 species. SAPFLUXNET has a broad bioclimatic coverage, with woodland/shrubland and temperate forest biomes especially well represented (80% of the datasets). The measurements cover a wide variety of stand structural characteristics and plant sizes. The datasets encompass the period between 1995 and 2018, with 50% of the datasets being at least 3 years long. Accompanying radiation and vapour pressure deficit data are available for most of the datasets, while on-site soil water content is available for 56% of the datasets. Many datasets contain data for species that make up 90% or more of the total stand basal area, allowing the estimation of stand transpiration in diverse ecological settings. SAPFLUXNET adds to existing plant trait datasets, ecosystem flux networks, and remote sensing products to help increase our understanding of plant water use, plant responses to drought, and ecohydrological processes. SAPFLUXNET version 0.1.5 is freely available from the Zenodo repository (10.5281/zenodo.3971689; Poyatos et al., 2020a). The "sapfluxnetr" R package, designed to access, visualize, and process SAPFLUXNET data, is available from CRAN.

    © 2021 Rafael Poyatos et al. This research was supported by the Ministerio de Economía y Competitividad (grant no. CGL2014-55883-JIN), the Ministerio de Ciencia e Innovación (grant no. RTI2018-095297-J-I00), the Ministerio de Ciencia e Innovación (grant no. CAS16/00207), the Agència de Gestió d’Ajuts Universitaris i de Recerca (grant no. SGR1001), the Alexander von Humboldt-Stiftung (Humboldt Research Fellowship for Experienced Researchers (RP)), and the Institució Catalana de Recerca i Estudis Avançats (Academia Award (JMV)). Víctor Flo was supported by the doctoral fellowship FPU15/03939 (MECD, Spain).
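
    A hedged sketch of programmatic access to the SAPFLUXNET archive cited above, using Zenodo's public REST API from Python rather than the "sapfluxnetr" R package; the record ID comes from the DOI 10.5281/zenodo.3971689, and the exact JSON layout of the response is an assumption about Zenodo's current API.

```python
import requests

record_id = "3971689"  # from DOI 10.5281/zenodo.3971689
resp = requests.get(f"https://zenodo.org/api/records/{record_id}", timeout=30)
resp.raise_for_status()
record = resp.json()

print(record["metadata"]["title"])
for f in record.get("files", []):
    # Each entry is expected to carry a file name, size, and download link.
    print(f.get("key"), f.get("size"), f.get("links", {}).get("self"))
```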

    The DUNE far detector vertical drift technology. Technical design report

    DUNE is an international experiment dedicated to addressing some of the questions at the forefront of particle physics and astrophysics, including the mystifying preponderance of matter over antimatter in the early universe. The dual-site experiment will employ an intense neutrino beam focused on a near and a far detector as it aims to determine the neutrino mass hierarchy and to make high-precision measurements of the PMNS matrix parameters, including the CP-violating phase. It will also stand ready to observe supernova neutrino bursts and will seek to observe nucleon decay as a signature of a grand unified theory underlying the standard model. The DUNE far detector implements liquid argon time-projection chamber (LArTPC) technology, and combines the many tens-of-kiloton fiducial mass necessary for rare event searches with the sub-centimeter spatial resolution required to image those events with high precision. The addition of a photon detection system enhances physics capabilities for all DUNE physics drivers and opens prospects for further physics explorations. Given its size, the far detector will be implemented as a set of modules, with LArTPC designs that differ from one another as newer technologies arise. In the vertical drift LArTPC design, a horizontal cathode bisects the detector, creating two stacked drift volumes in which ionization charges drift towards anodes at either the top or bottom. The anodes are composed of perforated PCB layers with conductive strips, enabling reconstruction in 3D. Light-trap-style photon detection modules are placed both on the cryostat's side walls and on the central cathode, where they are optically powered. This Technical Design Report describes in detail the technical implementations of each subsystem of this LArTPC that, together with the other far detector modules and the near detector, will enable DUNE to achieve its physics goals.