
    The capacity of hybrid quantum memory

    The general stable quantum memory unit is a hybrid consisting of a classical digit with a quantum digit (qudit) assigned to each classical state. The shape of the memory is the vector of sizes of these qudits, which may differ. We determine when N copies of a quantum memory A embed in N(1+o(1)) copies of another quantum memory B. This relationship captures the notion that B is at least as useful as A for all purposes in the bulk limit. We show that the embeddings exist if and only if, for all p >= 1, the p-norm of the shape of A does not exceed the p-norm of the shape of B. The log of the p-norm of the shape of A can be interpreted as the maximum of S(\rho) + H(\rho)/p (quantum entropy plus discounted classical entropy) taken over all mixed states \rho on A. We also establish a noiseless coding theorem that justifies these entropies. The noiseless coding theorem and the bulk embedding theorem together say that either A blindly bulk-encodes into B with perfect fidelity, or A admits a state that does not visibly bulk-encode into B with high fidelity. In conclusion, the utility of a hybrid quantum memory is determined by its simultaneous capacity for classical and quantum entropy, which is not a finite list of numbers, but rather a convex region in the classical-quantum entropy plane.
    Comment: 10 pages, 1 figure. Major revision; the extra material could have been a new paper. Has a much better treatment of noiseless coding and a new Hölder inequality for memory squeezing. To appear in IEEE Trans. Inf. Theory
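
    As a concrete illustration of the embedding criterion (not from the paper; the function names and the sampled grid of p values are our own), the following Python sketch compares the p-norms of two shapes. Sampling p on a grid is only a numerical heuristic, since the theorem quantifies over all p >= 1:

        import numpy as np

        def shape_p_norm(shape, p):
            # p-norm of a memory shape (d_1, ..., d_k): (sum_i d_i**p)**(1/p)
            d = np.asarray(shape, dtype=float)
            return (d ** p).sum() ** (1.0 / p)

        def bulk_embeds(shape_a, shape_b, p_grid=None):
            # Heuristic test of the theorem's condition:
            # for all p >= 1, ||shape(A)||_p <= ||shape(B)||_p.
            if p_grid is None:
                # p = 1 compares total dimension; large p approaches the
                # largest qudit size (kept modest to avoid float overflow).
                p_grid = list(np.linspace(1, 10, 200)) + [20, 50, 100]
            return all(shape_p_norm(shape_a, p) <= shape_p_norm(shape_b, p)
                       for p in p_grid)

        # A = a classical bit with a qubit per classical state, shape (2, 2);
        # B = two qubits, i.e. a single 4-dimensional qudit, shape (4,).
        print(bulk_embeds([2, 2], [4]))  # True:  2 * 2**(1/p) <= 4 for all p >= 1
        print(bulk_embeds([4], [2, 2]))  # False: fails already at p = 2

    Note how the grid's endpoints mirror the entropy interpretation above: at p = 1 the condition compares total dimensions (classical entropy at full weight), while large p approaches comparing the largest qudits (quantum entropy alone).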

    Quantum and Classical Message Identification via Quantum Channels

    We discuss concepts of message identification in the sense of Ahlswede and Dueck via general quantum channels, extending investigations for classical channels, initial work for classical-quantum (cq) channels, and "quantum fingerprinting". We show that the identification capacity of a discrete memoryless quantum channel for classical information can be larger than its transmission capacity; this is in contrast to all previously considered models, where it equals the common randomness capacity (which in our case equals the transmission capacity). In particular, for a noiseless qubit we show the identification capacity to be 2, while the transmission and common randomness capacities are both 1. Then we turn to a natural concept of identification of quantum messages (i.e., a notion of "fingerprint" for quantum states). This is much closer to quantum information transmission than its classical counterpart (for one thing, the code length grows only exponentially, compared to double-exponentially for classical identification). Indeed, we show how the problem exhibits a nice connection to visible quantum coding. Astonishingly, for the noiseless qubit channel this capacity turns out to be 2: in other words, one can compress two qubits into one, and this is optimal. In general, however, we conjecture the quantum identification capacity to be different from the classical identification capacity.
    Comment: 18 pages, requires Rinton-P9x6.cls. On the occasion of Alexander Holevo's 60th birthday. Version 2 has a few theorems knocked off: Y. Steinberg has pointed out a crucial error in my statements on simultaneous ID codes. They are all gone, replaced by a speculative remark. The central results of the paper are unharmed. In v3: proof of Proposition 17 corrected, without change of its statement
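
    To make the scaling separation concrete, here is a back-of-the-envelope Python sketch (ours, not the paper's; it only applies the standard counting conventions, under which transmission at rate R over n channel uses distinguishes about 2^(nR) messages, while classical identification handles about 2^(2^(nR))):

        # Message counts over n uses of the noiseless qubit channel, using
        # the capacities stated above: transmission capacity 1, classical
        # identification capacity 2, quantum identification capacity 2.

        def message_counts(n):
            transmit = 2 ** n                          # log N ~ n * 1
            identify_classical = 2 ** (2 ** (2 * n))   # log log N ~ n * 2
            identify_quantum_dim = 2 ** (2 * n)        # 2n qubits fit into n
            return transmit, identify_classical, identify_quantum_dim

        for n in (1, 2, 3):
            t, ic, iq = message_counts(n)
            print(f"n={n}: transmit {t}, classically identify ~{ic}, "
                  f"quantum message dimension {iq}")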

    Main memory in HPC: do we need more, or could we live with less?

    An important aspect of High-Performance Computing (HPC) system design is the choice of main memory capacity. This choice becomes increasingly important now that 3D-stacked memories are entering the market. Compared with conventional Dual In-line Memory Modules (DIMMs), 3D memory chiplets provide better performance and energy efficiency but lower memory capacities. The adoption of 3D-stacked memories in the HPC domain therefore depends on whether we can find use cases that require much less memory than is available now. This study analyzes the memory capacity requirements of important HPC benchmarks and applications. We find that the High-Performance Conjugate Gradients (HPCG) benchmark could be an important success story for 3D-stacked memories in HPC, but High-Performance Linpack (HPL) is likely to be constrained by 3D memory capacity. The study also emphasizes that the analysis of memory footprints of production HPC applications is complex and requires an understanding of application scalability and target category, i.e., whether the users target capability or capacity computing. The results show that most of the HPC applications under study have per-core memory footprints in the range of hundreds of megabytes, but we also detect applications and use cases that require gigabytes per core. Overall, the study identifies the HPC applications and use cases whose memory footprints could be served by 3D-stacked memory chiplets, taking a first step toward the adoption of this novel technology in the HPC domain.
    This work was supported by the Collaboration Agreement between Samsung Electronics Co., Ltd. and BSC, by the Spanish Government through the Severo Ochoa programme (SEV-2015-0493), by the Spanish Ministry of Science and Technology through the TIN2015-65316-P project, and by the Generalitat de Catalunya (contracts 2014-SGR-1051 and 2014-SGR-1272). This work has also received funding from the European Union's Horizon 2020 research and innovation programme under the ExaNoDe project (grant agreement No 671578). Darko Zivanovic holds the Severo Ochoa grant (SVP-2014-068501) of the Ministry of Economy and Competitiveness of Spain. The authors thank Harald Servat from BSC and Vladimir Marjanović from the High Performance Computing Center Stuttgart for their technical support.
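
    As a rough illustration of the fit question the study poses, here is a toy Python estimate with hypothetical node parameters (64 cores and 16 GB of 3D-stacked memory per node; neither figure comes from the paper, and the per-core footprints below are illustrative, not measured):

        NODE_CORES = 64             # hypothetical core count per node
        STACKED_CAPACITY_GB = 16.0  # hypothetical 3D-stacked capacity per node

        def fits(per_core_footprint_mb):
            # The aggregate footprint must fit in the node's stacked memory.
            total_gb = NODE_CORES * per_core_footprint_mb / 1024.0
            return total_gb <= STACKED_CAPACITY_GB

        # Hundreds of MB/core vs. gigabytes/core, as in the study's findings.
        for name, mb in [("HPCG-like, ~200 MB/core", 200),
                         ("HPL-like, ~2 GB/core", 2048)]:
            verdict = "fits" if fits(mb) else "exceeds"
            print(f"{name}: {verdict} the {STACKED_CAPACITY_GB:.0f} GB node capacity")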