
    Notes on entropic characteristics of quantum channels

    One of the most important issues in quantum information theory concerns the transmission of information through noisy quantum channels. We discuss a few channel characteristics expressed by means of generalized entropies. Such characteristics can often be treated in line with the more usual treatment based on von Neumann entropies. For any channel, we show that the q-average output entropy of degree q >= 1 is bounded from above by the q-entropy of the input density matrix. Concavity properties of the (q,s)-entropy exchange are considered. Fano-type quantum bounds on the (q,s)-entropy exchange are derived. We also give upper bounds on the map (q,s)-entropies in terms of the output entropy corresponding to the completely mixed input.
    Comment: 10 pages, no figures. The statement of Proposition 1 is explicitly illustrated with the depolarizing channel. The bibliography is extended and updated. More explanations. To be published in Cent. Eur. J. Phys.
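    As a concrete illustration (our own sketch, not code from the paper): the Tsallis q-entropy S_q(rho) = (1 - Tr[rho^q]) / (q - 1) is one of the generalized entropies the abstract refers to, and the depolarizing channel mentioned in its comment raises it, consistent with noise increasing uncertainty. The function names here are our own.

```python
import numpy as np

def tsallis_entropy(rho, q):
    """Tsallis q-entropy S_q(rho) = (1 - Tr[rho^q]) / (q - 1), for q != 1."""
    eigvals = np.linalg.eigvalsh(rho)
    eigvals = eigvals[eigvals > 1e-12]  # drop numerical zeros
    return (1.0 - np.sum(eigvals ** q)) / (q - 1.0)

def depolarizing(rho, p):
    """Depolarizing channel: rho -> (1 - p) * rho + p * I/d."""
    d = rho.shape[0]
    return (1 - p) * rho + p * np.eye(d) / d

# Pure qubit input |0><0|: zero q-entropy; the channel output is mixed.
rho_in = np.array([[1.0, 0.0], [0.0, 0.0]])
rho_out = depolarizing(rho_in, 0.3)

q = 2.0
print(tsallis_entropy(rho_in, q))   # ~0: pure states have zero q-entropy
print(tsallis_entropy(rho_out, q))  # positive: the noise raises the entropy
```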

    Sensitivity to Z-prime and non-standard neutrino interactions from ultra-low threshold neutrino-nucleus coherent scattering

    We discuss prospects for probing Z-prime and non-standard neutrino interactions using neutrino-nucleus coherent scattering with ultra-low energy (~10 eV) threshold Si and Ge detectors. The analysis is performed in the context of a specific and contemporary reactor-based experimental proposal, developed in cooperation with the Nuclear Science Center at Texas A&M University, and referencing available technology based upon economical and scalable detector arrays. For expected exposures, we show that sensitivity to the Z-prime mass is on the order of several TeV, and is complementary to the LHC search with low mass detectors in the near term. This technology is also shown to provide sensitivity to the neutrino magnetic moment, at a level that surpasses terrestrial limits, and is competitive with more stringent astrophysical bounds. We demonstrate the benefits of combining silicon and germanium detectors for distinguishing between classes of models of new physics, and for suppressing correlated systematic uncertainties.
    Comment: As published in PRD; 13 pages, 7 figures
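    To see why such low thresholds are needed (our own kinematic estimate, not from the paper): for elastic scattering off a nucleus at rest, the maximum recoil energy is T_max = 2 E_nu^2 / (M + 2 E_nu), which for reactor antineutrinos of a few MeV on Si or Ge nuclei is only of order 100 eV, with most of the spectrum well below that.

```python
# Kinematic maximum nuclear recoil energy in coherent elastic
# neutrino-nucleus scattering: T_max = 2 E_nu^2 / (M + 2 E_nu).
# Nuclear masses approximated as A * atomic mass unit; illustrative only.

AMU_EV = 931.494e6  # atomic mass unit in eV/c^2

def t_max(e_nu_ev, mass_number):
    """Maximum recoil energy (eV) for a neutrino of energy e_nu_ev (eV)."""
    m = mass_number * AMU_EV
    return 2.0 * e_nu_ev**2 / (m + 2.0 * e_nu_ev)

e_nu = 2e6  # a typical reactor antineutrino energy, ~2 MeV
print(t_max(e_nu, 28))  # silicon-28: a few hundred eV at most
print(t_max(e_nu, 72))  # germanium-72: even smaller (heavier nucleus)
```

    The heavier germanium nucleus gives smaller recoils but a larger coherent cross section, which is one reason combining the two targets is useful.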

    Lower bounds on the non-Clifford resources for quantum computations

    We establish lower bounds on the number of resource states, also known as magic states, needed to perform various quantum computing tasks, treating stabilizer operations as free. Our bounds apply to adaptive computations using measurements and an arbitrary number of stabilizer ancillas. We consider (1) resource state conversion, (2) single-qubit unitary synthesis, and (3) computational tasks. To prove our resource conversion bounds we introduce two new monotones, the stabilizer nullity and the dyadic monotone, and make use of the already-known stabilizer extent. We consider conversions that borrow resource states, known as catalyst states, and return them at the end of the algorithm. We show that catalysis is necessary for many conversions and introduce new catalytic conversions, some of which are close to optimal. By finding a canonical form for post-selected stabilizer computations, we show that approximating a single-qubit unitary to within diamond-norm precision ε requires at least (1/7) * log2(1/ε) - 4/3 T-states on average. This is the first lower bound that applies to synthesis protocols using fall-back, mixing techniques, and where the number of ancillas used can depend on ε. Up to multiplicative factors, we optimally lower bound the number of T or CCZ states needed to implement the ubiquitous modular adder and multiply-controlled-Z operations. When the probability of Pauli measurement outcomes is 1/2, some of our bounds become tight to within a small additive constant.
    Comment: 62 pages
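    Evaluating the stated average-case bound for a few target precisions makes its logarithmic growth concrete (a direct transcription of the formula quoted in the abstract):

```python
import math

def t_count_lower_bound(eps):
    """Lower bound from the abstract: approximating a single-qubit unitary
    to diamond-norm precision eps needs at least
    (1/7) * log2(1/eps) - 4/3 T-states on average."""
    return (1.0 / 7.0) * math.log2(1.0 / eps) - 4.0 / 3.0

for eps in (1e-3, 1e-6, 1e-10):
    print(eps, t_count_lower_bound(eps))
```

    Even at precision 1e-10 the bound is only a handful of T-states, but it grows without limit as eps shrinks and, unlike earlier results, applies to fall-back and mixing protocols.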

    Optimal Information Retrieval with Complex Utility Functions

    Existing retrieval models all attempt to optimize one single utility function, which is often based on the topical relevance of a document with respect to a query. In real applications, retrieval involves more complex utility functions that may involve preferences on several different dimensions. In this paper, we present a general optimization framework for retrieval with complex utility functions. A query language is designed according to this framework to enable users to submit complex queries. We propose an efficient algorithm for retrieval with complex utility functions based on the Apriori algorithm. As a case study, we apply our algorithm to a complex utility retrieval problem in distributed IR. Experimental results show that our algorithm allows for a flexible tradeoff between multiple retrieval criteria. Finally, we study the efficiency of our algorithm on simulated data.
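    A toy sketch of the multi-dimensional-utility idea (the documents, dimensions, and weights here are invented for illustration; the paper's actual framework, query language, and Apriori-based algorithm are more general):

```python
# Hypothetical multi-criteria retrieval: each document is scored on several
# preference dimensions and ranked by a weighted combination of them.

docs = {
    "d1": {"relevance": 0.9, "freshness": 0.2, "authority": 0.7},
    "d2": {"relevance": 0.6, "freshness": 0.9, "authority": 0.5},
    "d3": {"relevance": 0.8, "freshness": 0.5, "authority": 0.9},
}

def utility(scores, weights):
    """Linear utility over preference dimensions."""
    return sum(weights[dim] * val for dim, val in scores.items())

weights = {"relevance": 0.6, "freshness": 0.1, "authority": 0.3}
ranking = sorted(docs, key=lambda d: utility(docs[d], weights), reverse=True)
print(ranking)  # changing the weights trades off the criteria
```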

    Heavy Color-Octet Particles at the LHC

    Many new-physics models, especially those with a color-triplet top-quark partner, contain a heavy color-octet state. The "naturalness" argument for a light Higgs boson requires that the color-octet state be not much heavier than a TeV, and thus it can be pair-produced with large cross sections at high-energy hadron colliders. It may decay preferentially to a top quark plus a top-partner, which subsequently decays to a top quark plus a color-singlet state. This singlet can serve as a WIMP dark-matter candidate. Such decay chains lead to a spectacular signal of four top quarks plus missing energy. We pursue a general categorization of the color-octet states and their decay products according to their spin and gauge quantum numbers. We review the current bounds on the new states at the LHC and study the expected discovery reach at the 8-TeV and 14-TeV runs. We also present the production rates at a future 100-TeV hadron collider, where the cross sections will be many orders of magnitude greater than at the 14-TeV LHC. Furthermore, we explore the extent to which one can determine the color octet's mass, spin, and chiral couplings. Finally, we propose a test to determine whether the fermionic color octet is a Majorana particle.
    Comment: 20 pages, 9 figures; journal version

    Multi-qubit Randomized Benchmarking Using Few Samples

    Randomized benchmarking (RB) is an efficient and robust method to characterize gate errors in quantum circuits. Averaging over random sequences of gates leads to estimates of gate errors in terms of the average fidelity. These estimates are isolated from the state preparation and measurement errors that plague other methods like channel tomography and direct fidelity estimation. A decisive factor in the feasibility of randomized benchmarking is the number of sampled sequences required to obtain rigorous confidence intervals. Previous bounds were either prohibitively loose or required the number of sampled sequences to scale exponentially with the number of qubits in order to obtain a fixed confidence interval at a fixed error rate. Here we show that, with a small adaptation to the randomized benchmarking procedure, the number of sampled sequences required for a fixed confidence interval is dramatically smaller than could previously be justified. In particular, we show that the number of sampled sequences required is essentially independent of the number of qubits and scales favorably with the average error rate of the system under investigation. We also show that the number of samples required for long sequence lengths can be made substantially smaller than previous rigorous results (even for single qubits) as long as the noise process under investigation is not unitary. Our results bring rigorous randomized benchmarking on systems with many qubits into the realm of experimental feasibility.
    Comment: v3: Added discussion of the impact of variance heteroskedasticity on the RB fitting procedure. Close to published version
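    For context, the standard RB analysis fits the averaged survival probability to an exponential decay F(m) = A * p^m + B in the sequence length m and extracts the average gate error r = (1 - p)(d - 1)/d. The sketch below fits synthetic single-qubit data (our own toy example; the paper's contribution is how few random sequences per length m suffice for rigorous confidence intervals on this fit):

```python
import numpy as np
from scipy.optimize import curve_fit

def rb_model(m, A, p, B):
    """Standard RB decay: survival probability vs. sequence length m."""
    return A * p**m + B

# Synthetic data from a known decay, with small statistical noise.
rng = np.random.default_rng(0)
lengths = np.array([1, 5, 10, 20, 50, 100])
true_A, true_p, true_B = 0.5, 0.99, 0.5
data = rb_model(lengths, true_A, true_p, true_B) + rng.normal(0, 0.002, lengths.size)

(A, p, B), _ = curve_fit(rb_model, lengths, data, p0=[0.5, 0.95, 0.5])
d = 2  # single qubit
r = (1 - p) * (d - 1) / d
print(p, r)  # recovered decay parameter and average gate error
```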

    Lambda's, V's and optimal cloning with stimulated emission

    We show that optimal universal cloning of the polarization state of photons can be achieved via stimulated emission in three-level systems, both of the Lambda and the V type. We establish the equivalence of our systems with coupled harmonic oscillators, which permits us to analyze the structure of the cloning transformations realized. These transformations are shown to be equivalent to the optimal cloning transformations for qubits discovered by Buzek and Hillery, and Gisin and Massar. The down-conversion cloner discovered previously by some of the authors is obtained as a limiting case. We demonstrate an interesting equivalence between systems of Lambda atoms and systems of pairwise entangled V atoms. Finally we discuss the physical differences between our photon cloners and the qubit cloners considered previously and prove that the bounds on the fidelity of the clones derived for qubits also apply in our situation.
    Comment: 10 pages
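    The qubit fidelity bound referred to is the Gisin-Massar optimal universal N -> M cloning fidelity, F = (MN + M + N) / (M(N + 2)); for N = 1, M = 2 it gives the familiar Buzek-Hillery value of 5/6 (a direct transcription of the known formula, not code from the paper):

```python
from fractions import Fraction

def cloning_fidelity(N, M):
    """Optimal universal N -> M cloning fidelity for qubits (Gisin-Massar):
    F = (M*N + M + N) / (M * (N + 2))."""
    return Fraction(M * N + M + N, M * (N + 2))

print(cloning_fidelity(1, 2))        # 5/6, the Buzek-Hillery 1 -> 2 cloner
print(float(cloning_fidelity(1, 10**6)))  # approaches 2/3, the measurement limit
```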

    Plantinga-Vegter algorithm takes average polynomial time

    We exhibit a condition-based analysis of the adaptive subdivision algorithm due to Plantinga and Vegter. The first complexity analysis of the PV Algorithm is due to Burr, Gao and Tsigaridas, who proved a O(2^{τ d^4 log d}) worst-case cost bound for degree d plane curves with maximum coefficient bit-size τ. This exponential bound, it was observed, is in stark contrast with the good performance of the algorithm in practice. More in line with this performance, we show that, with respect to a broad family of measures, the expected time complexity of the PV Algorithm is bounded by O(d^7) for real, degree d, plane curves. We also exhibit a smoothed analysis of the PV Algorithm that yields similar complexity estimates. To obtain these results we combine robust probabilistic techniques coming from geometric functional analysis with condition numbers and the continuous amortization paradigm introduced by Burr, Krahmer and Yap. We hope this will motivate a fruitful exchange of ideas between the different approaches to numerical computation.
    Comment: 8 pages, correction of typos
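    To illustrate the subdivide-and-certify pattern behind such algorithms (a toy 1-D analogue of our own; the real Plantinga-Vegter algorithm works on plane curves with interval arithmetic and a gradient condition): recursively split an interval, discarding pieces where a derivative bound certifies the polynomial has no zero.

```python
# Toy 1-D adaptive subdivision: keep only boxes that might contain a root.

def has_no_zero(coeffs, lo, hi):
    """Exclusion test: |f(mid)| > sup|f'| * width/2 certifies f != 0 on [lo, hi]."""
    f = lambda x: sum(c * x**i for i, c in enumerate(coeffs))
    big = max(abs(lo), abs(hi))
    df_bound = sum(abs(i * c) * big**(i - 1)
                   for i, c in enumerate(coeffs) if i > 0)
    mid = (lo + hi) / 2
    return abs(f(mid)) > df_bound * (hi - lo) / 2

def subdivide(coeffs, lo, hi, depth=0, max_depth=20):
    if has_no_zero(coeffs, lo, hi):
        return []                      # certified: no root in this box
    if depth == max_depth or hi - lo < 1e-6:
        return [(lo, hi)]              # small box possibly containing a root
    mid = (lo + hi) / 2
    return (subdivide(coeffs, lo, mid, depth + 1, max_depth)
            + subdivide(coeffs, mid, hi, depth + 1, max_depth))

# f(x) = x^2 - 2 on [0, 2]: surviving boxes cluster around sqrt(2) ~ 1.414
boxes = subdivide([-2.0, 0.0, 1.0], 0.0, 2.0)
print(boxes[0][0], boxes[-1][1])
```

    The adaptivity is what makes worst-case bounds pessimistic: far from the curve the exclusion test fires immediately, and only boxes near the zero set get refined, which is the behavior the condition-based average-case analysis captures.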