72 research outputs found

    Superconducting Circuit Architectures Based on Waveguide Quantum Electrodynamics

    Quantum science and technology provides new possibilities in processing information, simulating novel materials, and answering fundamental questions beyond the reach of classical methods. Realizing these goals relies on the advancement of physical platforms, among which superconducting circuits have been one of the leading candidates, offering complete control and read-out of individual qubits and the potential to scale up. However, most circuit-based multi-qubit architectures only include nearest-neighbor (NN) coupling between qubits, which limits the efficient implementation of low-overhead quantum error correction and access to a wide range of physical models using analog quantum simulation. This challenge can be overcome by introducing non-local degrees of freedom. For example, photons in a shared channel between qubits can mediate long-range qubit-qubit coupling arising from light-matter interaction. In addition, constructing a scalable architecture requires this channel to be intrinsically extensible, in which case a one-dimensional waveguide is an ideal structure providing the extensible direction as well as strong light-matter interaction. In this thesis, we explore superconducting circuit architectures based on light-matter interactions in waveguide quantum electrodynamics (QED) systems. These architectures in turn allow us to study light-matter interaction, demonstrating strong coupling in the open environment of a waveguide by employing sub-radiant states resulting from collective effects. We further engineer the waveguide dispersion to enter the topological photonics regime, exploring interactions between qubits that are mediated by photons with topological properties. Finally, towards the goals of quantum information processing and simulation, we settle on a multi-qubit architecture where the photon-mediated interaction between qubits exhibits tunable range and strength.
We use this multi-qubit architecture to construct a lattice with tunable connectivity for strongly interacting microwave photons, synthesizing a quantum many-body model to explore chaotic dynamics. The architectures in this thesis introduce scalable beyond-NN coupling between superconducting qubits, opening the door to the exploration of many-body physics with long-range coupling and efficient implementation of quantum information processing protocols.

    Algorithmic and Coding-theoretic Methods for Group Testing and Private Information Retrieval

    In the first part of this dissertation, we consider the Group Testing (GT) problem and its two variants, the Quantitative GT (QGT) problem and the Coin Weighing (CW) problem. An instance of the GT problem includes a ground set of items that contains a small subset of defective items. The GT procedure consists of a number of tests, such that each test indicates whether or not a given subset of items includes one or more defective items. The goal of the GT procedure is to identify the subset of defective items with the minimum number of tests. Motivated by practical scenarios where the outcome of the tests can be affected by noise, we focus on the noisy GT setting, in which the outcome of a test can be flipped with some probability. In the noisy GT setting, the goal is to identify the set of defective items with high probability. We investigate the performance of two variants of the Belief Propagation (BP) algorithm for decoding noisy non-adaptive GT under the combinatorial model for defective items. Through extensive simulations, we show that the proposed algorithms achieve higher success probability and lower false-negative and false-positive rates when compared to the traditional BP algorithm. We also consider a variation of the probabilistic GT model in which the prior probability of each item being defective is not uniform and in which a certain amount of side information on the distribution of the defective items is available to the GT algorithm. This dissertation focuses on leveraging that side information to improve the performance of decoding algorithms for noisy GT. First, we propose a probabilistic model, referred to as an interaction model, that captures the side information about the probability distribution of the defective items. Next, we present a decoding scheme, based on BP, that leverages the interaction model to improve the decoding accuracy.
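The noisy non-adaptive GT setup described above can be simulated in a few lines. The sketch below uses a Bernoulli pooling design with flipped outcomes and decodes with the simple COMP rule (a standard baseline, not the BP variants studied in the dissertation); all sizes and probabilities are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, T = 20, 2, 50              # items, defectives, tests (illustrative)

defective = set(rng.choice(n, size=k, replace=False).tolist())
x = np.zeros(n, dtype=bool)
x[list(defective)] = True

# Bernoulli pooling design: item j joins test t with probability 0.3
A = rng.random((T, n)) < 0.3

# A test is positive iff its pool contains at least one defective item
y = (A & x).any(axis=1)

# Noisy GT: each outcome is flipped independently with probability p
p = 0.05
y_noisy = y ^ (rng.random(T) < p)

def comp_decode(A, y):
    """COMP rule: clear every item that appears in some negative test."""
    cleared = A[~y].any(axis=0)
    return set(np.flatnonzero(~cleared).tolist())

est = comp_decode(A, y)          # noiseless decoding: no false negatives
est_noisy = comp_decode(A, y_noisy)
```

In the noiseless case COMP never produces false negatives (a defective item cannot appear in a negative test), which is why the noisy setting calls for the probabilistic decoders the dissertation develops.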
Our results indicate that the proposed algorithm achieves higher success probability and lower false-negative and false-positive rates when compared to the traditional BP, especially in the high-noise regime. In the QGT problem, the result of a test reveals the number of defective items in the tested group. This is in contrast to standard GT, where the result of each test is either 1 or 0 depending on whether or not the tested group contains any defective items. In this dissertation, we study the QGT problem for the combinatorial and probabilistic models of defective items. We propose non-adaptive QGT algorithms using sparse graph codes over bi-regular and irregular bipartite graphs, and binary t-error-correcting BCH codes. The proposed schemes provide exact recovery with a probabilistic guarantee, i.e., they recover all the defective items with high probability. The proposed schemes outperform existing non-adaptive QGT schemes in the sub-linear regime in terms of the number of tests required to identify all defective items with high probability. The CW problem lies at the intersection of the GT and compressed sensing problems. Given a collection of coins and the total weight of the coins, where the weight of each coin is an unknown integer, the problem is to determine the weight of each coin by weighing subsets of coins on a spring scale. The goal is to minimize the average number of weighings over all possible weight configurations. Toward this goal, we propose and analyze a simple and effective adaptive weighing strategy, which yields the first non-trivial upper bound on the minimum expected number of weighings. In the second part of this dissertation, we focus on the private information retrieval problem. In many practical settings, the user needs to retrieve information messages from a server in a periodic manner, over multiple rounds of communication.
The messages are retrieved one at a time, and the identities of future requests are not known to the server. We study private information retrieval protocols that ensure that the identities of all the messages retrieved from the server are protected. This scenario can occur in practical settings such as periodic content download from text and multimedia repositories. We refer to this problem of minimizing the rate of data download as the online private information retrieval problem. Following the previous line of work by Kadhe et al., we assume that the user knows a subset of messages in the database as side information; the identities of these messages are initially unknown to the server. Focusing on scalar-linear settings, we characterize the per-round capacity, i.e., the maximum achievable download rate at each round. The key idea of our achievability scheme is to combine the data downloaded during the current and previous rounds with the original side information messages and use the resulting data as side information for the subsequent rounds.
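The core mechanics of side-information-assisted retrieval can be sketched in a toy single-server setting with XOR combinations: the user asks for the sum of a set that mixes the desired message with a side-information message, then cancels what it already knows. This is only the decoding step; the privacy-preserving choice of the request set (the heart of the scalar-linear schemes of Kadhe et al.) is not modelled here, and all indices and sizes are hypothetical.

```python
import secrets
from functools import reduce

# Toy database: K messages of msg_len bytes each
K, msg_len = 6, 8
db = [secrets.token_bytes(msg_len) for _ in range(K)]

def xor(a, b):
    return bytes(u ^ v for u, v in zip(a, b))

def server_answer(indices):
    """Server returns the XOR of the requested message subset."""
    return reduce(xor, (db[i] for i in indices))

# The user wants message 2 and privately holds message 5 as side information
want, side = 2, 5
side_msg = db[side]                     # known in advance, never re-downloaded

answer = server_answer({want, side})    # one message-length of download
recovered = xor(answer, side_msg)       # cancel the known message
```

One message-length of download recovers one message; downloading the recovered message back into the side-information pool is what drives the round-by-round rate analysis described above.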

    Spherical and Hyperbolic Toric Topology-Based Codes On Graph Embedding for Ising MRF Models: Classical and Quantum Topology Machine Learning

    The paper introduces the application of information geometry to describe the ground states of Ising models by utilizing parity-check matrices of cyclic and quasi-cyclic codes on toric and spherical topologies. The approach establishes a connection between machine learning and error-correcting coding. The proposed approach has implications for the development of new embedding methods based on trapping sets. Statistical physics and number geometry are applied to optimize error-correcting codes, leading to these embedding and sparse-factorization methods. The paper establishes a direct connection between DNN architecture and error-correcting coding by demonstrating how state-of-the-art architectures (ChordMixer, Mega, Mega-chunk, CDIL, ...) from the long-range arena can be equivalent to block and convolutional LDPC codes (Cage-graph, Repeat Accumulate). QC codes correspond to certain types of chemical elements, with the carbon element being represented by the mixed automorphism Shu-Lin-Fossorier QC-LDPC code. The connections between Belief Propagation and the Permanent, Bethe-Permanent, Nishimori Temperature, and Bethe-Hessian Matrix are elaborated upon in detail. The Quantum Approximate Optimization Algorithm (QAOA) used in the Sherrington-Kirkpatrick Ising model can be seen as analogous to the back-propagation loss-function landscape in training DNNs. This similarity creates a comparable problem with TS pseudo-codewords, resembling the belief propagation method. Additionally, the layer depth in QAOA correlates with the number of decoding belief propagation iterations in the Wiberg decoding tree. Overall, this work has the potential to advance multiple fields, from Information Theory, DNN architecture design (sparse and structured prior graph topology), and efficient hardware design for Quantum and Classical DPU/TPU (graph, quantize and shift-register architectures) to Materials Science and beyond.
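The code-to-Ising correspondence at the heart of the paper can be made concrete in a few lines: build an Ising energy from a parity-check matrix so that the spin configurations satisfying every check are exactly the ground states. The sketch below uses a toy 3-bit repetition code, far smaller than the cyclic and quasi-cyclic codes the paper discusses.

```python
import itertools
from math import prod

# Parity checks of the 3-bit repetition code (codewords: 000 and 111)
H = [[1, 1, 0],
     [0, 1, 1]]

def energy(spins):
    """Ising MRF energy: each check contributes -prod of its spins,
    so a satisfied check (spin product +1) lowers the energy."""
    return -sum(prod(s for s, h in zip(spins, row) if h) for row in H)

configs = list(itertools.product((-1, 1), repeat=3))
ground = min(energy(s) for s in configs)
ground_states = [s for s in configs if energy(s) == ground]
```

Under the usual 0 → +1, 1 → -1 spin mapping, the two ground states recovered here are the codewords 000 and 111, illustrating how codewords become the ground states of the Ising model.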

    Algorithms for Near-Term and Noisy Quantum Devices

    Quantum computing promises to revolutionise many fields, including chemical simulations and machine learning. At present, those promises have not been realised, owing to the large resource requirements of fault-tolerant quantum computers, as well as the scientific and engineering challenges of building one. Instead, we currently have access to quantum devices that are both limited in qubit number and noisy. This thesis deals with the challenges that these devices present, by investigating applications in quantum simulation for molecules and solid-state systems and in quantum machine learning, and by presenting a detailed simulation of a real ion-trap device. We first build on a previous algorithm for state discrimination using a quantum machine learning model, and show how to adapt the algorithm to work on a noisy device. When run on a noisy device, this algorithm outperforms the analytically optimal POVM. We then discuss how to build a quantum perceptron - the building block of a quantum neural network. We also present an algorithm for simulating two-site Dynamical Mean Field Theory (DMFT) using a quantum device. We discuss some of the difficulties found in scaling up that system, and present an algorithm for building the DMFT ansatz using the quantum device, along with modifications to the algorithm that make it more ‘device-aware’. Finally, we present a pulse-level simulation of the noise in an ion-trap device, designed to match the specifications of a device at the National Physical Laboratory (NPL), which we can use to direct future experimental focus. Each of these sections is preceded by a review of the relevant literature.
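For context, the "analytically optimal POVM" benchmark for discriminating two equiprobable pure states is the Helstrom measurement, whose success probability has a closed form. A minimal sketch (the two states below are purely illustrative):

```python
import numpy as np

def helstrom_success(psi0, psi1):
    """Helstrom bound for two equiprobable pure states:
    P = 1/2 * (1 + sqrt(1 - |<psi0|psi1>|^2))."""
    overlap = abs(np.vdot(psi0, psi1))
    return 0.5 * (1.0 + np.sqrt(1.0 - overlap**2))

# Two example qubit states separated by angle theta on the Bloch sphere
theta = np.pi / 8
psi0 = np.array([1.0, 0.0])
psi1 = np.array([np.cos(theta), np.sin(theta)])
p = helstrom_success(psi0, psi1)
```

The closer the states (larger overlap), the nearer the success probability sinks toward the 1/2 of random guessing, which is what makes noise-robust discrimination strategies interesting.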

    Advances in Bosonic Quantum Error Correction with Gottesman-Kitaev-Preskill Codes: Theory, Engineering and Applications

    Encoding quantum information into a set of harmonic oscillators is considered a hardware-efficient approach to mitigate noise for reliable quantum information processing. Various codes have been proposed to encode a qubit into an oscillator -- including cat codes, binomial codes and Gottesman-Kitaev-Preskill (GKP) codes. These bosonic codes are among the first to reach a break-even point for quantum error correction. Furthermore, GKP states not only enable close-to-optimal quantum communication rates in bosonic channels, but also allow for error correction of an oscillator into many oscillators. This review focuses on the basic working mechanism, performance characterization, and the many applications of GKP codes, with emphasis on recent experimental progress in superconducting circuit architectures and theoretical progress in multimode GKP qubit codes and oscillators-to-oscillators (O2O) codes. We begin with the preliminary continuous-variable formalism needed for bosonic codes. We then proceed to the quantum engineering involved in physically realizing GKP states. We take a deep dive into GKP stabilization and preparation in superconducting architectures and examine proposals for realizing GKP states in the optical domain (along with a concise review of GKP realization in trapped-ion platforms). Finally, we present multimode GKP qubits and GKP-O2O codes, examine code performance, and discuss applications of GKP codes in quantum information processing tasks such as computing, communication, and sensing.
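The basic working mechanism of an ideal square-lattice GKP qubit can be sketched numerically: a quadrature shift error is measured modulo √π and shifted back, so shifts smaller than √π/2 are removed entirely while larger ones leave a residue on the logical lattice. This is the idealised picture only, ignoring the finite-energy effects treated in the review; the numerical shifts below are illustrative.

```python
import numpy as np

alpha = np.sqrt(np.pi)           # logical shift spacing of the square code

def residual_after_correction(shift):
    """Residual quadrature shift after one round of ideal GKP correction."""
    # Syndrome: the shift folded into [-sqrt(pi)/2, sqrt(pi)/2)
    syndrome = np.mod(shift + alpha / 2, alpha) - alpha / 2
    # Shifting back by the syndrome leaves a multiple of sqrt(pi)
    return shift - syndrome

small = residual_after_correction(0.3)   # below sqrt(pi)/2: fully removed
large = residual_after_correction(1.0)   # above sqrt(pi)/2: logical shift
```

A residual of an odd multiple of √π is a logical error, which is why the correctable-shift radius √π/2 ≈ 0.886 sets the code's error tolerance.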

    Reconciliation for Satellite-Based Quantum Key Distribution

    This thesis reports on reconciliation schemes based on Low-Density Parity-Check (LDPC) codes in Quantum Key Distribution (QKD) protocols. It particularly focuses on a trade-off between the complexity of such reconciliation schemes and the QKD key growth, a trade-off that is critical to QKD system deployments. A key outcome of the thesis is the design of optimised schemes that maximise the QKD key growth based on finite-size keys for a range of QKD protocols. Beyond this design, the other four main contributions of the thesis are summarised as follows. First, I show that standardised short-length LDPC codes can be used for a special Discrete Variable QKD (DV-QKD) protocol and highlight the trade-off between the secret key throughput and the communication latency in space-based implementations. Second, I compare the decoding time and secret key rate performances between typical LDPC-based rate-adaptive and non-adaptive schemes for different channel conditions and show that the design of Mother codes for the rate-adaptive schemes is critical but remains an open question. Third, I demonstrate a novel design strategy that minimises the probability of the reconciliation process being the bottleneck of the overall DV-QKD system whilst achieving a target QKD rate (in bits per second) with a target ceiling on the failure probability with customised LDPC codes. Fourth, in the context of Continuous Variable QKD (CV-QKD), I construct an in-depth optimisation analysis taking both the security and the reconciliation complexity into account. The outcome of the last contribution leads to a reconciliation scheme delivering the highest secret key rate for a given processor speed, which allows for the optimal solution to CV-QKD reconciliation.
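Syndrome-based reconciliation, the mechanism underlying the LDPC schemes above, can be illustrated with a toy (7,4) Hamming code standing in for an LDPC code: Alice discloses the syndrome of her sifted key, and Bob flips the bit that the syndrome difference points to. The single-bit error is a stand-in for the channel errors a real DV-QKD system would accumulate.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code: column i is the binary
# expansion of i+1, so a single-bit error's syndrome names its position.
H = np.array([[(i >> b) & 1 for i in range(1, 8)] for b in range(3)])

def syndrome(v):
    return tuple(H @ v % 2)

# Alice's sifted key and Bob's noisy copy (one flipped bit)
rng = np.random.default_rng(1)
x = rng.integers(0, 2, size=7)
e = np.zeros(7, dtype=int)
e[4] = 1                                  # error at position 4 (0-based)
y = (x + e) % 2

# Reconciliation: Alice discloses syndrome(x); Bob corrects to match it
s_alice = syndrome(x)
s_diff = tuple(a ^ b for a, b in zip(syndrome(y), s_alice))
if any(s_diff):
    pos = sum(bit << b for b, bit in enumerate(s_diff)) - 1
    y[pos] ^= 1                           # flip the implicated bit
```

The three disclosed syndrome bits are leaked information and must be subtracted from the secret key, which is exactly the efficiency-versus-complexity trade-off the thesis optimises with LDPC codes.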

    Finite-Length Scaling Laws for Spatially-Coupled LDPC Codes

    This thesis concerns predicting the finite-length error-correcting performance of spatially-coupled low-density parity-check (SC-LDPC) code ensembles over the binary erasure channel. SC-LDPC codes are a very powerful class of codes; their use in practical communication systems, however, requires the system designer to specify a considerable number of code and decoder parameters, all of which affect both the code’s error-correcting capability and the system’s memory, energy, and latency requirements. Navigating the space of the associated trade-offs is challenging. The aim of the finite-length scaling laws proposed in this thesis is to facilitate code and decoder parameter optimization by providing a way to predict the code’s error-rate performance without resorting to Monte-Carlo simulations for each combination of code/decoder and channel parameters. First, we tackle the problem of predicting the frame, bit, and block error rate of SC-LDPC code ensembles over the binary erasure channel under both belief propagation (BP) decoding and sliding window decoding when the maximum number of decoding iterations is unlimited. The scaling laws we develop provide very accurate predictions of the error rates. Second, we derive a scaling law to accurately predict the bit and block error rate of SC-LDPC code ensembles with doping, a technique relevant for streaming applications for limiting the inherent rate loss of SC-LDPC codes. We then use the derived scaling law for code parameter optimization and show that doping can offer a way to achieve better transmission rates for the same target bit error rate than is possible without doping. Last, we address the most challenging (and most practically relevant) case where the maximum number of decoding iterations is limited, both for BP and sliding window decoding. The resulting predictions are again very accurate. Together, these contributions make finite-length SC-LDPC code and decoder parameter optimization via finite-length scaling laws feasible for the design of practical communication systems.
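The asymptotic starting point for such performance predictions is density evolution; for a (dv, dc)-regular LDPC ensemble on the binary erasure channel it is a one-line recursion on the erasure fraction of variable-to-check messages. The sketch below shows the sharp threshold behaviour (the BP threshold of the (3,6) ensemble is ε* ≈ 0.4294); the finite-length scaling laws of the thesis refine these infinite-length predictions.

```python
def density_evolution(eps, dv=3, dc=6, iters=10000):
    """Erasure fraction of BP messages after `iters` iterations on a BEC
    with erasure probability `eps`, for a (dv, dc)-regular ensemble."""
    x = eps
    for _ in range(iters):
        # A check-to-variable message is erased unless all other inputs
        # are known; a variable-to-check message is erased if the channel
        # and all other incoming check messages are erased.
        x = eps * (1 - (1 - x) ** (dc - 1)) ** (dv - 1)
    return x

below = density_evolution(0.40)   # below threshold: erasures die out
above = density_evolution(0.45)   # above threshold: decoding gets stuck
```

Below the threshold the fixed point is 0 (successful decoding); above it, BP stalls at a nonzero erasure fraction, and finite-length scaling laws quantify how real, finite codes behave around this transition.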

    Physical-Layer Security, Quantum Key Distribution and Post-quantum Cryptography

    The growth of data-driven technologies, 5G, and the Internet places enormous pressure on the underlying information infrastructure. There exist numerous proposals on how to deal with the possible capacity crunch. However, the security of both optical and wireless networks lags behind their reliable and spectrally efficient transmission. Significant achievements have been made recently in the quantum computing arena. Most conventional cryptography systems rely on computational security, which guarantees security against an efficient eavesdropper only for a limited time; with the advancement of quantum computing, this security can be compromised. To solve these problems, various schemes providing perfect/unconditional security have been proposed, including physical-layer security (PLS), quantum key distribution (QKD), and post-quantum cryptography. Unfortunately, it is still not clear how to integrate those different proposals with higher-level cryptography schemes. The purpose of the Special Issue entitled “Physical-Layer Security, Quantum Key Distribution and Post-quantum Cryptography” was therefore to integrate these various approaches and enable the next generation of cryptography systems whose security cannot be broken by quantum computers. This book represents the reprint of the papers accepted for publication in the Special Issue.

    ATHENA Research Book

    The ATHENA European University is an alliance of nine Higher Education Institutions with the mission of fostering excellence in research and innovation by facilitating international cooperation. The ATHENA acronym stands for Advanced Technologies in Higher Education Alliance. The partner institutions are from France, Germany, Greece, Italy, Lithuania, Portugal, and Slovenia: the University of OrlĂ©ans, the University of Siegen, the Hellenic Mediterranean University, the NiccolĂČ Cusano University, the Vilnius Gediminas Technical University, the Polytechnic Institute of Porto, and the University of Maribor. In 2022 institutions from Poland and Spain joined the alliance: the Maria Curie-SkƂodowska University and the University of Vigo. This research book presents a selection of the ATHENA university partners' research activities. It incorporates peer-reviewed original articles, reprints and student contributions. The ATHENA Research Book provides a platform that promotes joint and interdisciplinary research projects of both advanced and early-career researchers