
    Readout methods and devices for Josephson-junction-based solid-state qubits

    We discuss the current situation concerning measurement and readout of Josephson-junction-based qubits. In particular, we focus attention on dispersive low-dissipation techniques involving reflection of radiation from an oscillator circuit coupled to a qubit, allowing single-shot determination of the state of the qubit. We develop a formalism describing a charge qubit read out by measuring its effective (quantum) capacitance and, to exemplify, we also give explicit formulas for the readout time. Comment: 20 pages, 7 figures. To be published in J. Phys.: Condensed Matter 18 (2006), special issue on quantum computing.

    Graphical Structures for Design and Verification of Quantum Error Correction

    We introduce a high-level graphical framework for designing and analysing quantum error correcting codes, centred on what we term the coherent parity check (CPC). The graphical formulation is based on the diagrammatic tools of the zx-calculus of quantum observables. The resulting framework leads to a construction for stabilizer codes that allows us to design and verify a broad range of quantum codes based on classical ones, and that gives a means of discovering large classes of codes using both analytical and numerical methods. We focus in particular on the smaller codes that will be the first used by near-term devices. We show how CSS codes form a subset of CPC codes and, more generally, how to compute stabilizers for a CPC code. As an explicit example of this framework, we give a method for turning almost any pair of classical [n,k,3] codes into a [[2n - k + 2, k, 3]] CPC code. Further, we give a simple technique for machine search which yields thousands of potential codes, and demonstrate its operation for distance 3 and 5 codes. Finally, we use the graphical tools to demonstrate how Clifford computation can be performed within CPC codes. As our framework gives a new tool for constructing small- to medium-sized codes with relatively high code rates, it provides a new source for codes that could be suitable for emerging devices, while its zx-calculus foundations enable natural integration of error correction with graphical compiler toolchains. It also provides a powerful framework for reasoning about all stabilizer quantum error correction codes of any size. Comment: Computer code associated with this paper may be found at https://doi.org/10.15128/r1bn999672
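    The parameter count of the [[2n - k + 2, k, 3]] construction quoted above can be sketched in a few lines (a minimal illustration of the stated formula only; the helper name is ours, not from the paper's published code):

```python
# Sketch: parameters of the CPC construction described in the abstract.
# A pair of classical [n, k, 3] codes yields a [[2n - k + 2, k, 3]]
# quantum CPC code. This helper only computes those parameters.

def cpc_parameters(n: int, k: int) -> tuple:
    """Return (N, K, D) for the CPC code built from a pair of [n, k, 3] codes."""
    return (2 * n - k + 2, k, 3)

# e.g. a pair of classical [7, 4, 3] Hamming codes gives a [[12, 4, 3]] code
print(cpc_parameters(7, 4))  # -> (12, 4, 3)
```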

    A Method for Finding Structured Sparse Solutions to Non-negative Least Squares Problems with Applications

    Demixing problems in many areas such as hyperspectral imaging and differential optical absorption spectroscopy (DOAS) often require finding sparse nonnegative linear combinations of dictionary elements that match observed data. We show how aspects of these problems, such as misalignment of DOAS references and uncertainty in hyperspectral endmembers, can be modeled by expanding the dictionary with grouped elements and imposing a structured sparsity assumption that the combinations within each group should be sparse or even 1-sparse. If the dictionary is highly coherent, it is difficult to obtain good solutions using convex or greedy methods, such as non-negative least squares (NNLS) or orthogonal matching pursuit. We use penalties related to the Hoyer measure, which is the ratio of the l1 and l2 norms, as sparsity penalties to be added to the objective in NNLS-type models. For solving the resulting nonconvex models, we propose a scaled gradient projection algorithm that requires solving a sequence of strongly convex quadratic programs. We discuss its close connections to convex splitting methods and difference of convex programming. We also present promising numerical results for example DOAS analysis and hyperspectral demixing problems. Comment: 38 pages, 14 figures
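    The l1/l2 ratio penalty mentioned in the abstract can be sketched as follows (an illustrative objective only, not the paper's implementation; the function names and the penalty weight `lam` are our assumptions):

```python
import numpy as np

def l1_over_l2(x, eps=1e-12):
    """Ratio of the l1 and l2 norms (related to the Hoyer measure).
    Small for sparse nonnegative x: 1 for a 1-sparse vector,
    sqrt(m) for a uniform vector of length m."""
    return np.sum(np.abs(x)) / (np.linalg.norm(x) + eps)

def penalized_objective(A, x, b, lam):
    """NNLS-type objective with an l1/l2 sparsity penalty:
    0.5 * ||A x - b||^2 + lam * (||x||_1 / ||x||_2), with x >= 0 assumed."""
    residual = A @ x - b
    return 0.5 * residual @ residual + lam * l1_over_l2(x)
```

    For intuition: a 1-sparse vector like [1, 0, 0] has penalty 1, while a uniform length-4 vector has penalty 2, so minimizing the ratio pushes solutions toward sparsity without shrinking their scale.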

    A convex selective segmentation model based on a piece-wise constant metric guided edge detector function

    The challenge of segmenting noisy images, especially those with bright backgrounds, still exists in many advanced state-of-the-art segmentation models, and such images remain significantly difficult to segment. In this article, we provide a novel variational model for the simultaneous restoration and segmentation of noisy images that exhibit intensity inhomogeneity and high-contrast background illumination. The suggested concept combines multi-phase segmentation technology with a statistical approach based on local region knowledge and details of circular regions centred at every pixel to enable inhomogeneous image restoration. The suggested model is expressed as a fuzzy set and is solved using the alternating direction method of multipliers. Through several tests and numerical simulations under plausible assumptions, we have evaluated the accuracy and resilience of the proposed approach on various kinds of real and synthesized images in the presence of intensity inhomogeneity and background light. Additionally, the findings are contrasted with those from cutting-edge two-phase and multi-phase methods, demonstrating the superiority of our proposed approach for images with noise, background light, and inhomogeneity.

    Freely Scalable Quantum Technologies using Cells of 5-to-50 Qubits with Very Lossy and Noisy Photonic Links

    Exquisite quantum control has now been achieved in small ion traps, in nitrogen-vacancy centres and in superconducting qubit clusters. We can regard such a system as a universal cell with diverse technological uses from communication to large-scale computing, provided that the cell is able to network with others and overcome any noise in the interlinks. Here we show that loss-tolerant entanglement purification makes quantum computing feasible with the noisy and lossy links that are realistic today: with a modestly complex cell design, and using a surface code protocol with a network noise threshold of 13.3%, we find that interlinks which attempt entanglement at a rate of 2 MHz but suffer 98% photon loss can result in kilohertz computer clock speeds (i.e. the rate of high-fidelity stabilizer measurements). Improved links would dramatically increase the clock speed. Our simulations employed local gates of a fidelity already achieved in ion trap devices. Comment: corrected typos, additional references, additional figures
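    The link budget quoted in the abstract can be checked with back-of-envelope arithmetic (a sketch under a simple success model; the purification overhead value below is a hypothetical illustration, not a figure from the paper):

```python
# 2 MHz entanglement attempts with 98% photon loss: only the 2% of
# attempts that are not lost yield raw entangled pairs.
attempt_rate_hz = 2e6
photon_loss = 0.98
raw_pair_rate_hz = attempt_rate_hz * (1 - photon_loss)  # 40 kHz of raw pairs

# Purification consumes several raw pairs per high-fidelity pair;
# the overhead of 10 here is an assumed, illustrative value.
pairs_per_purified = 10
purified_rate_hz = raw_pair_rate_hz / pairs_per_purified  # ~4 kHz

print(raw_pair_rate_hz, purified_rate_hz)
```

    Under this toy model the surviving pair rate lands in the kilohertz range, consistent with the abstract's kilohertz-scale stabilizer-measurement clock speed.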