
    Quantum Reverse Shannon Theorem

    Dual to the usual noisy channel coding problem, where a noisy (classical or quantum) channel is used to simulate a noiseless one, reverse Shannon theorems concern the use of noiseless channels to simulate noisy ones, and more generally the use of one noisy channel to simulate another. For channels of nonzero capacity, this simulation is always possible, but for it to be efficient, auxiliary resources of the proper kind and amount are generally required. In the classical case, shared randomness between sender and receiver is a sufficient auxiliary resource, regardless of the nature of the source, but in the quantum case the requisite auxiliary resources for efficient simulation depend on both the channel being simulated, and the source from which the channel inputs are coming. For tensor power sources (the quantum generalization of classical IID sources), entanglement in the form of standard ebits (maximally entangled pairs of qubits) is sufficient, but for general sources, which may be arbitrarily correlated or entangled across channel inputs, additional resources, such as entanglement-embezzling states or backward communication, are generally needed. Combining existing and new results, we establish the amounts of communication and auxiliary resources needed in both the classical and quantum cases, the tradeoffs among them, and the loss of simulation efficiency when auxiliary resources are absent or insufficient. In particular we find a new single-letter expression for the excess forward communication cost of coherent feedback simulations of quantum channels (i.e. simulations in which the sender retains what would escape into the environment in an ordinary simulation), on non-tensor-power sources in the presence of unlimited ebits but no other auxiliary resource. Our results on tensor power sources establish a strong converse to the entanglement-assisted capacity theorem.
    Comment: 35 pages, to appear in IEEE-IT. v2 has a fixed proof of the Clueless Eve result, a new single-letter formula for the "spread deficit", better error scaling, and an improved strong converse. v3 and v4 each make small improvements to the presentation and add references. v5 fixes broken reference
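
    For orientation, the tensor-power part of the result can be stated compactly in standard notation; this is the textbook form of the entanglement-assisted capacity and the channel mutual information, not a formula quoted verbatim from the paper:

```latex
% With unlimited shared ebits, simulating n >> 1 uses of a channel \mathcal{N}
% on a tensor-power source requires forward classical communication at rate
% C_E(\mathcal{N}); conversely, \mathcal{N} can transmit classical bits at that
% rate with entanglement assistance, which is why the simulation side yields a
% strong converse to the entanglement-assisted capacity theorem.
\begin{align}
  C_E(\mathcal{N}) &= \max_{\rho} I(\rho;\mathcal{N}), \\
  I(\rho;\mathcal{N}) &= S(\rho) + S(\mathcal{N}(\rho))
      - S\!\left((\mathcal{N}\otimes\mathrm{id})(\varphi_{\rho})\right),
\end{align}
% where \varphi_\rho is a purification of the input density operator \rho
% and S denotes the von Neumann entropy.
```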

    Toward 6G TKμ Extreme Connectivity: Architecture, Key Technologies and Experiments

    Sixth-generation (6G) networks are evolving towards new features and order-of-magnitude enhancement of systematic performance metrics compared to the current 5G. In particular, the 6G networks are expected to achieve extreme connectivity performance with Tbps-scale data rate, Kbps/Hz-scale spectral efficiency, and μs-scale latency. To this end, an original three-layer 6G network architecture is designed to realise uniform full-spectrum cell-free radio access and provide task-centric agile proximate support for diverse applications. The designed architecture is characterised by the super edge node (SEN), which integrates connectivity, computing, AI, data, etc. On this basis, a technological framework of pervasive multi-level (PML) AI is established in the centralised unit to enable task-centric near-real-time resource allocation and network automation. We then introduce a radio access network (RAN) architecture of full-spectrum uniform cell-free networks, which is among the most attractive RAN candidates for 6G TKμ extreme connectivity. A few of the most promising key technologies, i.e., cell-free massive MIMO, photonics-assisted Terahertz wireless access and spatiotemporal two-dimensional channel coding, are further discussed. A testbed is implemented and extensive trials are conducted to evaluate innovative technologies and methodologies. The proposed 6G network architecture and technological framework demonstrate exciting potential for full-service and full-scenario applications.
    Comment: 15 pages, 12 figures
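
    As a rough sanity check on how such targets decompose, the sketch below multiplies bandwidth, per-stream spectral efficiency and the number of spatial streams; all numbers are illustrative assumptions, not figures from the article:

```python
# Back-of-the-envelope decomposition of a Tbps-scale target (assumed numbers).
def aggregate_rate_bps(bandwidth_hz, se_per_stream, n_streams):
    """Aggregate rate = bandwidth x per-stream spectral efficiency x streams."""
    return bandwidth_hz * se_per_stream * n_streams

# e.g. 10 GHz of (sub-)THz bandwidth, 10 bit/s/Hz per stream and 100 spatial
# streams across a cell-free deployment -> 10 Tbps and 1 kbit/s/Hz aggregate SE.
bw, se, streams = 10e9, 10.0, 100
rate = aggregate_rate_bps(bw, se, streams)
print(f"{rate / 1e12:.1f} Tbps, aggregate SE = {se * streams:.0f} bit/s/Hz")
```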

    Entanglement cost and quantum channel simulation

    This paper proposes a revised definition for the entanglement cost of a quantum channel N. In particular, it is defined here to be the smallest rate at which entanglement is required, in addition to free classical communication, in order to simulate n calls to N, such that the most general discriminator cannot distinguish the n calls to N from the simulation. The most general discriminator is one who tests the channels in a sequential manner, one after the other, and this discriminator is known as a quantum tester [Chiribella et al., Phys. Rev. Lett., 101, 060401 (2008)] or one who is implementing a quantum co-strategy [Gutoski et al., Symp. Th. Comp., 565 (2007)]. As such, the proposed revised definition of entanglement cost of a quantum channel leads to a rate that cannot be smaller than the previous notion of a channel's entanglement cost [Berta et al., IEEE Trans. Inf. Theory, 59, 6779 (2013)], in which the discriminator is limited to distinguishing parallel uses of the channel from the simulation. Under this revised notion, I prove that the entanglement cost of certain teleportation-simulable channels is equal to the entanglement cost of their underlying resource states. Then I find single-letter formulas for the entanglement cost of some fundamental channel models, including dephasing, erasure, three-dimensional Werner-Holevo channels, epolarizing channels (complements of depolarizing channels), as well as single-mode pure-loss and pure-amplifier bosonic Gaussian channels. These examples demonstrate that the resource theory of entanglement for quantum channels is not reversible. Finally, I discuss how to generalize the basic notions to arbitrary resource theories.
    Comment: 28 pages, 7 figures
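
    The structure of the teleportation-simulable case can be summarised in standard notation; the regularised entanglement-of-formation expression is the usual formula for a state's entanglement cost, not something specific to this paper:

```latex
% For the teleportation-simulable channels treated in the abstract, with
% resource state \omega, the channel's entanglement cost reduces to the
% state's entanglement cost:
\begin{align}
  E_C(\mathcal{N}) &= E_C(\omega), \\
  E_C(\omega) &= \lim_{n\to\infty} \frac{1}{n}\, E_F\!\left(\omega^{\otimes n}\right),
\end{align}
% where E_F denotes the entanglement of formation.
```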

    Quantum-classical generative models for machine learning

    The combination of quantum and classical computational resources towards more effective algorithms is one of the most promising research directions in computer science. In such a hybrid framework, existing quantum computers can be used to their fullest extent and for practical applications. Generative modeling is one of the applications that could benefit the most, either by speeding up the underlying sampling methods or by unlocking more general models. In this work, we design a number of hybrid generative models and validate them on real hardware and datasets. The quantum-assisted Boltzmann machine is trained to generate realistic artificial images on quantum annealers. Several challenges in state-of-the-art annealers must be overcome before one can assess their actual performance. We attack some of the most pressing challenges, such as the sparse qubit-to-qubit connectivity, the unknown effective temperature, and the noise on the control parameters. In order to handle datasets of realistic size and complexity, we include latent variables and obtain a more general model called the quantum-assisted Helmholtz machine. In the context of gate-based computers, the quantum circuit Born machine is trained to encode a target probability distribution in the wavefunction of a set of qubits. We implement this model on a trapped-ion computer using low-depth circuits and native gates. We use the generative modeling performance on the canonical Bars-and-Stripes dataset to design a benchmark for hybrid systems. It is reasonable to expect that quantum data, i.e., datasets of wavefunctions, will become available in the future. We derive a quantum generative adversarial network that works with quantum data. Here, two circuits are optimized in tandem: one tries to generate suitable quantum states, the other tries to distinguish between target and generated states.
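
    To make the Born-machine idea concrete, here is a minimal NumPy sketch that fits a small parameterised circuit to the 2x2 Bars-and-Stripes distribution; the ansatz (RY layers with a ring of CZ gates) and the random hill-climbing optimiser are illustrative assumptions, not the hardware-native circuits or training procedure used in the work:

```python
# Minimal quantum circuit Born machine on the 2x2 Bars-and-Stripes dataset
# (illustrative ansatz and optimiser; statevector simulation in plain NumPy).
import numpy as np

N_QUBITS = 4          # one qubit per pixel of a 2x2 image
DIM = 2 ** N_QUBITS

def bas_target():
    """Uniform distribution over the six 2x2 bars-and-stripes patterns."""
    patterns = {0b0000, 0b1111,   # empty / full
                0b1100, 0b0011,   # horizontal stripes
                0b1010, 0b0101}   # vertical bars
    p = np.zeros(DIM)
    for s in patterns:
        p[s] = 1.0 / len(patterns)
    return p

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def single_qubit_layer(thetas):
    """Kronecker product of one RY rotation per qubit (qubit 0 = leftmost)."""
    u = np.array([[1.0]])
    for t in thetas:
        u = np.kron(u, ry(t))
    return u

def cz_ring():
    """Diagonal of CZ gates on the ring (0,1), (1,2), (2,3), (3,0)."""
    diag = np.ones(DIM)
    for a, b in [(0, 1), (1, 2), (2, 3), (3, 0)]:
        for idx in range(DIM):
            bits = [(idx >> (N_QUBITS - 1 - q)) & 1 for q in range(N_QUBITS)]
            if bits[a] == 1 and bits[b] == 1:
                diag[idx] *= -1
    return diag

def born_distribution(params):
    """Output probabilities |<x|U(params)|0...0>|^2 of the RY/CZ/RY circuit."""
    psi = np.zeros(DIM)
    psi[0] = 1.0
    psi = single_qubit_layer(params[:N_QUBITS]) @ psi
    psi = cz_ring() * psi
    psi = single_qubit_layer(params[N_QUBITS:]) @ psi
    return np.abs(psi) ** 2

def kl(p, q, eps=1e-12):
    """KL divergence between target p and model q, clipped to avoid log(0)."""
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

rng = np.random.default_rng(0)
target = bas_target()
params = rng.uniform(0, 2 * np.pi, 2 * N_QUBITS)
best = kl(target, born_distribution(params))
for _ in range(5000):                         # random hill climbing
    trial = params + rng.normal(0, 0.1, params.shape)
    loss = kl(target, born_distribution(trial))
    if loss < best:
        params, best = trial, loss
print("final KL divergence:", best)
```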

    Gaussian Quantum Information

    The science of quantum information has arisen over the last two decades centered on the manipulation of individual quanta of information, known as quantum bits or qubits. Quantum computers, quantum cryptography and quantum teleportation are among the most celebrated ideas that have emerged from this new field. It was realized later on that using continuous-variable quantum information carriers, instead of qubits, constitutes an extremely powerful alternative approach to quantum information processing. This review focuses on continuous-variable quantum information processes that rely on any combination of Gaussian states, Gaussian operations, and Gaussian measurements. Interestingly, such a restriction to the Gaussian realm comes with various benefits, since on the theoretical side, simple analytical tools are available and, on the experimental side, optical components effecting Gaussian processes are readily available in the laboratory. Yet, Gaussian quantum information processing opens the way to a wide variety of tasks and applications, including quantum communication, quantum cryptography, quantum computation, quantum teleportation, and quantum state and channel discrimination. This review reports on the state of the art in this field, ranging from the basic theoretical tools and landmark experimental realizations to the most recent successful developments.
    Comment: 51 pages, 7 figures, submitted to Reviews of Modern Physics
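
    As a small concrete example of the Gaussian toolbox, the sketch below builds the covariance matrix of a two-mode squeezed vacuum state and checks the Gaussian uncertainty (bona fide) condition numerically; the normalisation with vacuum covariance equal to the identity is my choice of convention, not one imposed by the review:

```python
# Two-mode squeezed vacuum covariance matrix and the uncertainty check
# V + i*Omega >= 0, in units where the vacuum covariance is the identity.
import numpy as np

def tmsv_cov(r):
    """Covariance matrix of a two-mode squeezed vacuum with squeezing r,
    quadrature ordering (x1, p1, x2, p2)."""
    c, s = np.cosh(2 * r), np.sinh(2 * r)
    Z = np.diag([1.0, -1.0])
    I = np.eye(2)
    return np.block([[c * I, s * Z],
                     [s * Z, c * I]])

def is_physical(V, tol=1e-9):
    """Check the Gaussian uncertainty relation V + i*Omega >= 0."""
    omega1 = np.array([[0.0, 1.0], [-1.0, 0.0]])
    Omega = np.kron(np.eye(2), omega1)          # symplectic form for two modes
    eigs = np.linalg.eigvalsh(V + 1j * Omega)   # Hermitian, so eigvalsh applies
    return bool(np.all(eigs >= -tol))

V = tmsv_cov(r=1.0)
print(is_physical(V))   # True: the two-mode squeezed vacuum is a valid state
```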

    Signal Processing and Learning for Next Generation Multiple Access in 6G

    Wireless communication systems to date primarily rely on the orthogonality of resources to facilitate the design and implementation, from user access to data transmission. Emerging applications and scenarios in the sixth generation (6G) wireless systems will require massive connectivity and transmission of a deluge of data, which calls for more flexibility in the design concept that goes beyond orthogonality. Furthermore, recent advances in signal processing and learning have attracted considerable attention, as they provide promising approaches to various complex and previously intractable problems of signal processing in many fields. This article provides an overview of research efforts to date in the field of signal processing and learning for next-generation multiple access (NGMA), with an emphasis on massive random access and non-orthogonal multiple access. The promising interplay with new technologies and the challenges in learning-based NGMA are discussed.
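
    As a toy illustration of going beyond orthogonality, the snippet below compares a two-user downlink power-domain NOMA scheme with successive interference cancellation against an orthogonal time-sharing baseline; the channel gains, power split and noise level are assumed for illustration and are not taken from the article:

```python
# Two-user downlink power-domain NOMA vs. orthogonal time sharing
# (all parameters are illustrative assumptions).
import math

def noma_rates(p_tx, g_strong, g_weak, a_weak, noise):
    """Power-domain NOMA with SIC at the strong user.
    a_weak is the fraction of transmit power allocated to the weak user."""
    a_strong = 1.0 - a_weak
    # Weak user decodes its own signal, treating the strong user's as noise.
    r_weak = math.log2(1 + a_weak * p_tx * g_weak / (a_strong * p_tx * g_weak + noise))
    # Strong user cancels the weak user's signal first, then decodes its own.
    r_strong = math.log2(1 + a_strong * p_tx * g_strong / noise)
    return r_strong, r_weak

def oma_rates(p_tx, g_strong, g_weak, noise):
    """Orthogonal baseline: each user gets half the time at full power."""
    return (0.5 * math.log2(1 + p_tx * g_strong / noise),
            0.5 * math.log2(1 + p_tx * g_weak / noise))

print("NOMA (strong, weak):", noma_rates(1.0, 10.0, 1.0, a_weak=0.8, noise=0.1))
print("OMA  (strong, weak):", oma_rates(1.0, 10.0, 1.0, noise=0.1))
```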

    Enabling Technologies for Ultra-Reliable and Low Latency Communications: From PHY and MAC Layer Perspectives

    Future 5th generation networks are expected to enable three key services: enhanced mobile broadband, massive machine-type communications and ultra-reliable and low latency communications (URLLC). As per the 3rd Generation Partnership Project (3GPP) URLLC requirements, it is expected that the reliability of one transmission of a 32-byte packet will be at least 99.999% and the latency will be at most 1 ms. This unprecedented level of reliability and latency will yield various new applications, such as smart grids, industrial automation and intelligent transport systems. In this survey we present potential future URLLC applications, and summarize the corresponding reliability and latency requirements. We provide a comprehensive discussion on physical (PHY) and medium access control (MAC) layer techniques that enable URLLC, addressing both licensed and unlicensed bands. This paper evaluates the relevant PHY and MAC techniques for their ability to improve the reliability and reduce the latency. We identify that enabling long-term evolution to coexist in the unlicensed spectrum is also a potential enabler of URLLC in the unlicensed band, and provide numerical evaluations. Lastly, this paper discusses the potential future research directions and challenges in achieving the URLLC requirements.
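
    Short packets and stringent error targets are exactly the regime where finite-blocklength analysis matters, so the sketch below evaluates the normal approximation (Polyanskiy-Poor-Verdu) to the maximal coding rate of a complex AWGN link at a 1e-5 error target; the SNR and blocklength values are illustrative assumptions rather than numbers from the survey:

```python
# Normal approximation to the maximal coding rate at finite blocklength
# for a complex AWGN channel (assumed SNR and blocklength).
import math
from statistics import NormalDist

def max_coding_rate(snr, n, eps):
    """R ~ C - sqrt(V/n) * Q^{-1}(eps) + log2(n) / (2n), in bit/channel use."""
    cap = math.log2(1 + snr)                                   # capacity
    disp = (1 - 1 / (1 + snr) ** 2) * math.log2(math.e) ** 2   # dispersion
    q_inv = NormalDist().inv_cdf(1 - eps)                      # Gaussian Q^{-1}
    return cap - math.sqrt(disp / n) * q_inv + math.log2(n) / (2 * n)

packet_bits = 32 * 8             # 32-byte URLLC payload
n, snr, eps = 200, 10.0, 1e-5    # blocklength, SNR and error target (assumed)
rate = max_coding_rate(snr, n, eps)
print(f"~{rate:.2f} bit/use -> ~{rate * n:.0f} bits per block "
      f"(payload needs {packet_bits} bits)")
```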

    Reinforcement Learning in Different Phases of Quantum Control

    The ability to prepare a physical system in a desired quantum state is central to many areas of physics such as nuclear magnetic resonance, cold atoms, and quantum computing. Yet, preparing states quickly and with high fidelity remains a formidable challenge. In this work we implement cutting-edge Reinforcement Learning (RL) techniques and show that their performance is comparable to optimal control methods in the task of finding short, high-fidelity driving protocols from an initial to a target state in non-integrable many-body quantum systems of interacting qubits. RL methods learn about the underlying physical system solely through a single scalar reward (the fidelity of the resulting state) calculated from numerical simulations of the physical system. We further show that quantum state manipulation, viewed as an optimization problem, exhibits a spin-glass-like phase transition in the space of protocols as a function of the protocol duration. Our RL-aided approach helps identify variational protocols with nearly optimal fidelity, even in the glassy phase, where optimal state manipulation is exponentially hard. This study highlights the potential usefulness of RL for applications in out-of-equilibrium quantum physics.
    Comment: A legend for the videos referred to in the paper is available on https://mgbukov.github.io/RL_movies
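
    The flavour of the underlying optimisation problem can be reproduced on a single qubit: the sketch below searches over bang-bang protocols for a toy two-level system and scores each protocol by the final-state fidelity, with a plain stochastic single-flip search standing in for the RL agent; the Hamiltonian, field strength, duration and endpoint states are all illustrative assumptions rather than the many-body setup of the paper:

```python
# Toy single-qubit bang-bang state-preparation problem scored by fidelity
# (illustrative parameters; a stochastic single-flip search replaces the RL agent).
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def ground_state(hx):
    """Ground state of H = -SZ - hx*SX."""
    _, vecs = np.linalg.eigh(-SZ - hx * SX)
    return vecs[:, 0]                      # eigh sorts eigenvalues ascending

def step_unitary(hx, dt):
    """exp(-i*dt*H) for H = -SZ - hx*SX, computed analytically (H^2 ∝ I)."""
    a, b = -hx, -1.0                       # coefficients of SX and SZ
    w = np.hypot(a, b)
    return np.cos(w * dt) * np.eye(2) - 1j * np.sin(w * dt) * (a * SX + b * SZ) / w

def fidelity(bangs, psi0, target, dt):
    psi = psi0.copy()
    for s in bangs:                        # s = +-1 selects the field +-H_MAX
        psi = step_unitary(s * H_MAX, dt) @ psi
    return float(np.abs(np.vdot(target, psi)) ** 2)

H_MAX, T, N_STEPS = 4.0, 2.0, 40
dt = T / N_STEPS
psi0, target = ground_state(-2.0), ground_state(+2.0)

rng = np.random.default_rng(1)
bangs = rng.choice([-1, 1], size=N_STEPS)
best = fidelity(bangs, psi0, target, dt)
for _ in range(4000):                      # stochastic single-flip descent
    i = rng.integers(N_STEPS)
    bangs[i] *= -1
    f = fidelity(bangs, psi0, target, dt)
    if f >= best:
        best = f
    else:
        bangs[i] *= -1                     # reject the flip
print(f"best fidelity found: {best:.4f}")
```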