
    Coherent control of quantum systems as a resource theory

    Control at the interface between the classical and the quantum world is fundamental in quantum physics. In particular, how classical control is enhanced by coherence effects is an important question both from a theoretical and a technological point of view. In this work, we establish a resource theory describing this setting and explore relations to the theory of coherence, entanglement and information processing. Specifically, for the coherent control of quantum systems the relevant resources of entanglement and coherence are found to be equivalent and closely related to a measure of discord. The results are then applied to the DQC1 protocol and the precision of the final measurement is expressed in terms of the available resources. Comment: 9 pages, 4 figures, final version. Discussions were improved and some points were clarified. The title was slightly changed to agree with the published version.
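
    The DQC1 ("one clean qubit") protocol mentioned above has a compact description: a control qubit in |+⟩ and a maximally mixed register pass through a controlled-U gate, after which the control qubit's Pauli expectations encode the normalized trace of U. The following sketch (plain NumPy, an illustration not taken from the paper) verifies this identity numerically:

    ```python
    import numpy as np

    def dqc1_trace_estimate(U):
        """Simulate the DQC1 circuit and recover the normalized trace
        of U from the control-qubit Pauli expectations."""
        d = U.shape[0]
        # Control qubit in |+><+| after a Hadamard; register maximally mixed.
        plus = np.full((2, 2), 0.5)
        rho = np.kron(plus, np.eye(d) / d)
        # Controlled-U = |0><0| (x) I + |1><1| (x) U.
        CU = (np.kron(np.diag([1.0, 0.0]), np.eye(d))
              + np.kron(np.diag([0.0, 1.0]), U))
        rho = CU @ rho @ CU.conj().T
        # Partial trace over the register leaves the 2x2 control state.
        rho_c = np.trace(rho.reshape(2, d, 2, d), axis1=1, axis2=3)
        sx = np.array([[0, 1], [1, 0]])
        sy = np.array([[0, -1j], [1j, 0]])
        # <sx> = Re Tr(U)/d and <sy> = Im Tr(U)/d.
        return np.trace(rho_c @ sx).real + 1j * np.trace(rho_c @ sy).real

    # Check against a random 2-qubit unitary (Q from a QR decomposition).
    rng = np.random.default_rng(0)
    Q, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))
    print(np.allclose(dqc1_trace_estimate(Q), np.trace(Q) / 4))  # True
    ```

    Estimating Tr(U) this way uses only one pure qubit, which is why DQC1 is a natural testbed for resources such as discord rather than entanglement.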

    Strong quantitative benchmarking of quantum optical devices

    Quantum communication devices, such as quantum repeaters, quantum memories, or quantum channels, are unavoidably exposed to imperfections. However, the presence of imperfections can be tolerated, as long as we can verify that such devices retain their quantum advantages. Benchmarks based on witnessing entanglement have proven useful for verifying the true quantum nature of these devices. The next challenge is to characterize how strongly a device is within the quantum domain. We present a method, based on entanglement measures and rigorous state truncation, which allows us to characterize the degree of quantumness of optical devices. This method serves as a quantitative extension to a large class of previously-known quantum benchmarks, requiring no additional information beyond what is already used for the non-quantitative benchmarks. Comment: 11 pages, 7 figures. Comments are welcome. ver 2: Improved figures, no changes to main text.
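
    The entanglement-witness benchmarks this abstract builds on rest on a simple linear-algebra test. As a minimal sketch (a textbook two-qubit witness, not the paper's construction): the operator W = I/2 − |Φ+⟩⟨Φ+| satisfies Tr(Wρ) ≥ 0 for every separable state, so a negative value certifies entanglement.

    ```python
    import numpy as np

    # Witness W = I/2 - |Phi+><Phi+|; Tr(W rho) >= 0 for all separable
    # two-qubit states, so Tr(W rho) < 0 certifies entanglement.
    phi = np.zeros(4)
    phi[0] = phi[3] = 1 / np.sqrt(2)          # Bell state |Phi+>
    W = np.eye(4) / 2 - np.outer(phi, phi)

    def witness_value(rho):
        return np.trace(W @ rho).real

    bell = np.outer(phi, phi)                 # maximally entangled
    print(witness_value(bell))                # -0.5: detected as entangled
    print(witness_value(np.eye(4) / 4))       # 0.25: separable, not detected
    ```

    A witness gives only a yes/no answer; the paper's contribution is to go beyond this and quantify *how far* inside the quantum domain a device operates.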

    Enzyme Immobilization Strategies and Electropolymerization Conditions to Control Sensitivity and Selectivity Parameters of a Polymer-Enzyme Composite Glucose Biosensor

    In an ongoing programme to develop characterization strategies relevant to biosensors for in-vivo monitoring, glucose biosensors were fabricated by immobilizing the enzyme glucose oxidase (GOx) on 125 μm diameter Pt cylinder wire electrodes (PtC), using three different methods: before, after or during the amperometric electrosynthesis of poly(ortho-phenylenediamine), PoPD, which also served as a permselective membrane. These electrodes were calibrated with H2O2 (the biosensor enzyme signal molecule), glucose, and the archetypal interference compound ascorbic acid (AA) to determine the relevant polymer permeabilities and the apparent Michaelis-Menten parameters for glucose. A number of selectivity parameters were used to identify the most successful design in terms of the balance between substrate sensitivity and interference blocking. For biosensors electrosynthesized in neutral buffer under the present conditions, entrapment of the GOx within the PoPD layer produced the design (PtC/PoPD-GOx) with the highest linear sensitivity to glucose (5.0 ± 0.4 μA cm−2 mM−1), good linear range (KM = 16 ± 2 mM) and response time (< 2 s), and the greatest AA blocking (99.8% for 1 mM AA). Further optimization showed that fabrication of PtC/PoPD-GOx in the absence of added background electrolyte (i.e., electropolymerization in unbuffered enzyme-monomer solution) enhanced glucose selectivity 3-fold for this one-pot fabrication protocol which provided AA-rejection levels at least equal to recent multi-step polymer bilayer biosensor designs. Interestingly, the presence of enzyme protein in the polymer layer had opposite effects on permselectivity for low and high concentrations of AA, emphasizing the value of studying the concentration dependence of interference effects, which is rarely reported in the literature.
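
    The reported calibration parameters fit the standard Michaelis-Menten response J(S) = Jmax·S/(KM + S), whose low-concentration slope Jmax/KM is the "linear sensitivity" quoted above. A minimal sketch using the abstract's figures (the implied Jmax is a derived value, not stated in the paper):

    ```python
    # Michaelis-Menten response of an enzyme biosensor:
    #   J(S) = Jmax * S / (KM + S)
    # For S << KM this reduces to the linear calibration J ~ (Jmax/KM)*S,
    # whose slope is the reported linear sensitivity.
    KM = 16.0            # mM, apparent Michaelis constant (from the abstract)
    slope = 5.0          # uA cm^-2 mM^-1, reported linear sensitivity
    Jmax = slope * KM    # uA cm^-2, implied saturation current density

    def current_density(S_mM):
        return Jmax * S_mM / (KM + S_mM)

    # Deviation from linearity grows with glucose concentration:
    for S in (1.0, 5.0, 16.0):
        print(f"S = {S:4.1f} mM: MM = {current_density(S):5.1f}, "
              f"linear = {slope * S:5.1f} uA/cm^2")
    ```

    At S = KM the response is exactly half of Jmax, which is why a larger apparent KM translates into a wider usable linear range for in-vivo glucose levels.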

    Molecular and Clinical Analyses of Greig Cephalopolysyndactyly and Pallister-Hall Syndromes: Robust Phenotype Prediction from the Type and Position of GLI3 Mutations

    Mutations in the GLI3 zinc-finger transcription factor gene cause Greig cephalopolysyndactyly syndrome (GCPS) and Pallister-Hall syndrome (PHS), which are variable but distinct clinical entities. We hypothesized that GLI3 mutations that predict a truncated functional repressor protein cause PHS and that functional haploinsufficiency of GLI3 causes GCPS. To test these hypotheses, we screened patients with PHS and GCPS for GLI3 mutations. The patient group consisted of 135 individuals: 89 patients with GCPS and 46 patients with PHS. We detected 47 pathological mutations (among 60 probands); when these were combined with previously published mutations, two genotype-phenotype correlations were evident. First, GCPS was caused by many types of alterations, including translocations, large deletions, exonic deletions and duplications, small in-frame deletions, and missense, frameshift/nonsense, and splicing mutations. In contrast, PHS was caused only by frameshift/nonsense and splicing mutations. Second, among the frameshift/nonsense mutations, there was a clear genotype-phenotype correlation. Mutations in the first third of the gene (from open reading frame [ORF] nucleotides [nt] 1-1997) caused GCPS, and mutations in the second third of the gene (from ORF nt 1998-3481) caused primarily PHS. Surprisingly, there were 12 mutations in patients with GCPS in the 3′ third of the gene (after ORF nt 3481), and no patients with PHS had mutations in this region. These results demonstrate a robust correlation of genotype and phenotype for GLI3 mutations and strongly support the hypothesis that these two allelic disorders have distinct modes of pathogenesis.
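
    The positional correlation for frameshift/nonsense mutations amounts to a simple lookup on ORF nucleotide position, which can be sketched directly from the boundaries the abstract reports (this rule applies only to truncating mutations; GCPS also arises from many other mutation types):

    ```python
    # Genotype-phenotype rule from the abstract for GLI3 frameshift/nonsense
    # mutations, keyed by 1-based ORF nucleotide position:
    #   nt 1-1997    -> GCPS
    #   nt 1998-3481 -> primarily PHS
    #   nt > 3481    -> GCPS (no PHS mutations observed in this region)
    def predict_phenotype(orf_nt):
        if orf_nt < 1:
            raise ValueError("ORF positions are 1-based")
        if orf_nt <= 1997:
            return "GCPS"
        if orf_nt <= 3481:
            return "PHS (primarily)"
        return "GCPS"

    print(predict_phenotype(1000))  # GCPS
    print(predict_phenotype(2500))  # PHS (primarily)
    print(predict_phenotype(4000))  # GCPS
    ```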

    Transfer learning in hybrid classical-quantum neural networks

    We extend the concept of transfer learning, widely applied in modern machine learning algorithms, to the emerging context of hybrid neural networks composed of classical and quantum elements. We propose different implementations of hybrid transfer learning, but we focus mainly on the paradigm in which a pre-trained classical network is modified and augmented by a final variational quantum circuit. This approach is particularly attractive in the current era of intermediate-scale quantum technology, since it allows one to optimally pre-process high-dimensional data (e.g., images) with any state-of-the-art classical network and to embed a select set of highly informative features into a quantum processor. We present several proof-of-concept examples of the convenient application of quantum transfer learning for image recognition and quantum state classification. We use the cross-platform software library PennyLane to experimentally test a high-resolution image classifier with two different quantum computers, provided by IBM and Rigetti, respectively.
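
    The core pipeline can be illustrated without any quantum hardware. In this plain-NumPy sketch (an illustration, not the paper's PennyLane code; the linear projection stands in for a frozen classical backbone), each feature x_i is angle-embedded on its own qubit via RY(x_i), followed by a trainable RY(theta_i); measuring ⟨Z⟩ on that qubit gives exactly cos(x_i + theta_i):

    ```python
    import numpy as np

    def quantum_layer(features, theta):
        # For a single qubit, RY(x) followed by RY(t) applied to |0> gives
        # <Z> = cos(x + t), so the product-state layer is computed exactly.
        return np.cos(np.asarray(features) + np.asarray(theta))

    # Hypothetical frozen classical extractor: a fixed linear projection
    # standing in for features taken from a pre-trained CNN backbone.
    rng = np.random.default_rng(1)
    W = rng.normal(size=(3, 8))      # projects an 8-dim input onto 3 "qubits"
    x = rng.normal(size=8)
    theta = np.zeros(3)              # variational parameters to be trained
    print(quantum_layer(np.tanh(W @ x), theta))
    ```

    In the paper's setting the quantum layer would be a genuine entangling circuit evaluated on hardware; this product-state version only conveys the classical-preprocessing-then-quantum-embedding structure.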

    Contextuality and inductive bias in quantum machine learning

    Generalisation in machine learning often relies on the ability to encode structures present in data into an inductive bias of the model class. To understand the power of quantum machine learning, it is therefore crucial to identify the types of data structures that lend themselves naturally to quantum models. In this work we look to quantum contextuality -- a form of nonclassicality with links to computational advantage -- for answers to this question. We introduce a framework for studying contextuality in machine learning, which leads us to a definition of what it means for a learning model to be contextual. From this, we connect a central concept of contextuality, called operational equivalence, to the ability of a model to encode a linearly conserved quantity in its label space. A consequence of this connection is that contextuality is tied to expressivity: contextual model classes that encode the inductive bias are generally more expressive than their noncontextual counterparts. To demonstrate this, we construct an explicit toy learning problem -- based on learning the payoff behaviour of a zero-sum game -- for which this is the case. By leveraging tools from geometric quantum machine learning, we then describe how to construct quantum learning models with the associated inductive bias, and show through our toy problem that they outperform their corresponding classical surrogate models. This suggests that understanding learning problems of this form may lead to useful insights about the power of quantum machine learning. Comment: comments welcome.

    Quantum benchmarking with realistic states of light

    The goal of quantum benchmarking is to certify that imperfect quantum communication devices (e.g., quantum channels, quantum memories, quantum key distribution systems) can still be used for meaningful quantum communication. However, the test states used in quantum benchmarking experiments may be imperfect as well. Many quantum benchmarks are only valid for states which match some ideal form, such as pure states or Gaussian states. We outline how to perform quantum benchmarking using arbitrary states of light. We demonstrate these results using real data taken from a continuous-variable quantum memory. Comment: 14 pages, 3 figures. Updated to more closely match the published version.