125 research outputs found

    Timely and Massive Communication in 6G: Pragmatics, Learning, and Inference

    Full text link
    5G has expanded the traditional focus of wireless systems to embrace two new connectivity types: ultra-reliable low-latency and massive communication. The technology context at the dawn of 6G differs from that of 5G, primarily due to the growing intelligence at the communicating nodes. This has driven the set of relevant communication problems beyond reliable transmission towards semantic and pragmatic communication. This paper puts the evolution of low-latency and massive communication towards 6G in the perspective of these new developments. First, semantic/pragmatic communication problems are presented by drawing parallels to linguistics. We elaborate upon the relation of semantic communication to the information-theoretic problems of source/channel coding, while generalized real-time communication is put in the context of cyber-physical systems and real-time inference. The evolution of massive access towards massive closed-loop communication is elaborated upon, enabling interactive communication, learning, and cooperation among wireless sensors and actuators. Comment: Submitted for publication to IEEE BITS (revised version preprint).

    Beyond Transmitting Bits: Context, Semantics, and Task-Oriented Communications

    Full text link
    Communication systems to date primarily aim at reliably communicating bit sequences. Such an approach provides efficient engineering designs that are agnostic to the meanings of the messages or to the goal that the message exchange aims to achieve. Next-generation systems, however, can be potentially enriched by folding message semantics and goals of communication into their design. Further, these systems can be made cognizant of the context in which the communication exchange takes place, providing avenues for novel design insights. This tutorial summarizes the efforts to date on semantic-aware and task-oriented communications, starting from the early adaptations and covering the foundations, algorithms, and potential implementations. The focus is on approaches that utilize information theory to provide the foundations, as well as on the significant role of learning in semantic and task-aware communications. Comment: 28 pages, 14 figures.

    A hardware spinal decoder

    Get PDF
    Spinal codes are a recently proposed family of capacity-achieving rateless codes. While hardware encoding of spinal codes is straightforward, the design of an efficient, high-speed hardware decoder poses significant challenges. We present the first such decoder. By relaxing data dependencies inherent in the classic M-algorithm decoder, we obtain area and throughput competitive with 3GPP turbo codes, as well as greatly reduced latency and complexity. The enabling architectural feature is a novel alpha-beta incremental approximate selection algorithm. We also present a method for obtaining hints which anticipate successful or failed decoding, permitting early termination and/or feedback-driven adaptation of the decoding parameters. We have validated our implementation in FPGA with on-air testing. Provisional hardware synthesis suggests that a near-capacity implementation of spinal codes can achieve a throughput of 12.5 Mbps in a 65 nm technology while using substantially less area than competitive 3GPP turbo code implementations. Funding: Irwin Mark Jacobs and Joan Klein Jacobs Presidential Fellowship; Intel Corporation (Fellowship); Claude E. Shannon Research Assistantship.
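
    To make the decoding idea concrete, the sketch below shows M-algorithm (beam) decoding over a toy spinal code in Python: the encoder chains a hash over k-bit message chunks and maps each spine value to channel symbols, and the decoder keeps only the M best candidate prefixes at each stage. The hash, symbol mapping, chunk size, and beam width are placeholder assumptions for illustration; the sketch does not reproduce the paper's hardware architecture or its alpha-beta incremental approximate selection.

# Toy spinal code with M-algorithm (beam search) decoding -- illustrative only.
import hashlib
import random

K = 2              # bits per message chunk (assumption)
M = 8              # beam width: candidate prefixes kept per stage (assumption)
PASS_SYMBOLS = 2   # channel symbols emitted per spine value (assumption)

def next_spine(spine: int, chunk: int) -> int:
    """Chain a hash over the previous spine value and the next k-bit chunk."""
    digest = hashlib.sha256(f"{spine}:{chunk}".encode()).digest()
    return int.from_bytes(digest[:4], "big")

def symbols_from_spine(spine: int, n: int):
    """Derive n channel symbols (floats in [-1, 1]) from a spine value."""
    rng = random.Random(spine)
    return [rng.uniform(-1.0, 1.0) for _ in range(n)]

def encode(chunks):
    spine, out = 0, []
    for c in chunks:
        spine = next_spine(spine, c)
        out.extend(symbols_from_spine(spine, PASS_SYMBOLS))
    return out

def decode(received, n_chunks):
    """Beam search over message prefixes, keeping the M best path metrics."""
    beam = [(0.0, 0, [])]  # (path metric, spine value, decoded chunks)
    for i in range(n_chunks):
        seg = received[i * PASS_SYMBOLS:(i + 1) * PASS_SYMBOLS]
        expanded = []
        for metric, spine, chunks in beam:
            for c in range(2 ** K):            # expand every possible next chunk
                s = next_spine(spine, c)
                ref = symbols_from_spine(s, PASS_SYMBOLS)
                d = sum((r - x) ** 2 for r, x in zip(seg, ref))
                expanded.append((metric + d, s, chunks + [c]))
        expanded.sort(key=lambda t: t[0])      # exact sort; the paper's decoder
        beam = expanded[:M]                    # uses approximate selection instead
    return beam[0][2]

if __name__ == "__main__":
    msg = [3, 1, 0, 2, 1]
    noisy = [s + random.gauss(0.0, 0.05) for s in encode(msg)]
    print(decode(noisy, len(msg)) == msg)   # expected: True at this noise level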

    Validation of the HERA Phase I Epoch of Reionization 21 cm Power Spectrum Software Pipeline

    Get PDF
    We describe the validation of the HERA Phase I software pipeline by a series of modular tests, building up to an end-to-end simulation. The philosophy of this approach is to validate the software and algorithms used in the Phase I upper-limit analysis on wholly synthetic data satisfying the assumptions of that analysis, not addressing whether the actual data meet these assumptions. We discuss the organization of this validation approach, the specific modular tests performed, and the construction of the end-to-end simulations. We explicitly discuss the limitations in scope of the current simulation effort. With mock visibility data generated from a known analytic power spectrum and a wide range of realistic instrumental effects and foregrounds, we demonstrate that the current pipeline produces power spectrum estimates that are consistent with known analytic inputs to within thermal noise levels (at the 2σ level) for k > 0.2 h Mpc⁻¹ for both bands and fields considered. Our input spectrum is intentionally amplified to enable a strong "detection" at k ∼ 0.2 h Mpc⁻¹, at the level of ∼25σ, with foregrounds dominating on larger scales and thermal noise dominating at smaller scales. Our pipeline is able to detect this amplified input signal after suppressing foregrounds with a dynamic range (foreground-to-noise ratio) of ≳10⁷. Our validation test suite uncovered several sources of scale-independent signal loss throughout the pipeline, whose amplitude is well-characterized and accounted for in the final estimates. We conclude with a discussion of the steps required for the next round of data analysis.
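
    As a schematic illustration of the consistency test described above (recovered power spectra compared against the known analytic input within 2σ thermal-noise error bars), a minimal sketch follows. The spectrum shape, noise level, and k-binning are placeholder assumptions, not HERA pipeline outputs.

# Schematic 2-sigma consistency check against a known analytic input spectrum.
import numpy as np

rng = np.random.default_rng(0)
k = np.linspace(0.2, 1.0, 20)            # h Mpc^-1, the range quoted above
p_input = 1e5 * k ** -2.7                # placeholder analytic input spectrum
sigma_noise = 0.05 * p_input             # placeholder thermal-noise level per bin
p_recovered = p_input + rng.normal(0.0, sigma_noise)   # stand-in pipeline output

consistent = np.abs(p_recovered - p_input) <= 2.0 * sigma_noise
print(f"{consistent.mean():.0%} of k-bins consistent with the input at 2 sigma")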

    Ultra-red Galaxies Signpost Candidate Protoclusters at High Redshift

    Get PDF
    We present images obtained with LABOCA of a sample of 22 galaxies selected via their red Herschel SPIRE colors. We aim to see if these luminous, rare, and distant galaxies are signposting dense regions in the early universe. Our 870 μm survey covers an area of ≈1 deg² down to an average rms of 3.9 mJy beam⁻¹, with our five deepest maps going ≈2× deeper still. We catalog 86 dusty star-forming galaxies (DSFGs) around our "signposts," detected above a significance of 3.5σ. This implies a 100 ± 30% overdensity of S₈₇₀ > 8.5 mJy (or L_FIR = 6.7 × 10¹²–2.9 × 10¹³ L⊙) DSFGs, excluding our signposts, when comparing our number counts to those in "blank fields." Thus, we are 99.93% confident that our signposts are pinpointing overdense regions in the universe, and ≈95% [50%] confident that these regions are overdense by a factor of ≥1.5× [2×]. Using template spectral energy distributions (SEDs) and SPIRE/LABOCA photometry, we derive a median photometric redshift of z = 3.2 ± 0.2 for our signposts, with an interquartile range of z = 2.8–3.6, somewhat higher than expected for ~850 μm selected galaxies. We constrain the DSFGs that are likely responsible for this overdensity to within |Δz| ≤ 0.65 of their respective signposts. These "associated" DSFGs are radially distributed within (physical) distances of 1.6 ± 0.5 Mpc from their signposts, have median star formation rates (SFRs) of ≈(1.0 ± 0.2) × 10³ M⊙ yr⁻¹ (for a Salpeter stellar initial mass function) and median gas reservoirs of ∼1.7 × 10¹¹ M⊙. These candidate protoclusters have average total SFRs of at least ≈(2.3 ± 0.5) × 10³ M⊙ yr⁻¹ and space densities of ~9 × 10⁻⁷ Mpc⁻³, consistent with the idea that their constituents may evolve to become massive early-type galaxies in the centers of the rich galaxy clusters we see today.
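
    A back-of-the-envelope sketch of how such an overdensity and its confidence can be quantified is given below: compare the source count around the signposts with the blank-field expectation over the same area, and ask how likely a blank field would be to reach the observed count by chance. All numbers here are placeholders, not the catalog values from the paper.

# Placeholder overdensity and Poisson-confidence estimate -- illustrative only.
from scipy.stats import poisson

n_observed = 30      # placeholder: sources above the flux cut near the signposts
n_expected = 15.0    # placeholder: blank-field expectation over the same area

overdensity = n_observed / n_expected - 1
print(f"overdensity: {overdensity:+.0%}")

# Probability that Poisson-distributed blank-field counts match or exceed the
# observed number, and the corresponding confidence that the fields are overdense.
p_blank = poisson.sf(n_observed - 1, n_expected)
print(f"chance a blank field reaches the observed count: {p_blank:.1e}")
print(f"confidence that the regions are overdense: {1 - p_blank:.2%}")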

    Perceptually weighted spatial resolution

    Get PDF

    Improved Diagnostics & Performance for Quantum Error Correction

    Get PDF
    Building large-scale quantum computers is one of the most exciting ventures being pursued by researchers in the 21st century. However, the presence of noise in quantum systems poses a major hindrance to this ambitious goal. Unlike the developmental history of classical computers, where noise levels were brought under reasonable thresholds early on, the field of quantum computing is still struggling to do the same. Nonetheless, there have been many significant theoretical and experimental advancements in the past decade. Quantum error correction, and fault tolerance in general, is believed to be a reliable long-term strategy to mitigate noise and perform arbitrarily long quantum computations. Optimizing and assessing the quality of components in a fault-tolerance scheme is a crucial task, and we address these tasks in this thesis.

    In the first part of the thesis, we provide a method to efficiently estimate the performance of a large class of codes called concatenated stabilizer codes. We show how to employ noise-tailoring techniques developed for computations at the physical level to circuits protected by quantum error correction to enable this estimation. We also develop a metric called the logical estimator, which is an approximation of the logical infidelity of the code. We show that this metric can be used to guide the selection of the optimal (concatenated stabilizer) code and the optimal (lookup-style) decoder for a given device. Moreover, the metric also aids in estimating the resource requirements for a target logical error rate efficiently and reliably.

    In the second part, we show how a combination of noise-tailoring tools with quantum error correction can improve the performance of concatenated stabilizer codes by several orders of magnitude. These gains in turn bring down the resource overheads for quantum error correction. We explore the gains using the concatenated Steane code under a wide variety of physically motivated error models, including arbitrary rotations and combinations of coherent and stochastic noise. We also study the variation of gains with the number of levels of concatenation. For the simple case of rotations about a Pauli axis, we show that the gain scales doubly exponentially with the number of levels in the code. We analyze and show the presence of threshold rotation angles below which the gains can be arbitrarily magnified by increasing the number of levels in the code.

    The last part of the thesis explores the testing of an important property of error-correcting codes: the minimum distance, often referred to simply as the distance. We operate in the regime of large classical binary linear codes described in terms of their parity check matrices. We are given access to these codes via an oracle which, when supplied an index, returns the single column of the parity check matrix corresponding to that index. We derive lower and upper bounds on the query complexity of finding the minimum distance of a given code. We also ask and (partially) answer the same question in the property testing framework. In particular, we provide a tester which queries a sub-linear number of columns of the parity check matrix and certifies whether a code has high distance or is far from all codes which have high distance. We also provide non-trivial lower bounds for this task. Although this study is done for classical linear codes, it has implications for designing quantum codes which are built using classical codes. This part of the thesis marks the beginning of a significant area of interest encompassing the efficient testing of important properties of classical and quantum codes.
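
    As a small illustration of the property studied in the last part, the sketch below computes the minimum distance of a binary linear code by exhaustively searching for the smallest set of parity-check-matrix columns that sums to zero modulo 2. This brute-force routine is for illustration only; it is not the query-efficient tester developed in the thesis.

# Exhaustive minimum-distance computation from a parity check matrix (GF(2)).
from itertools import combinations
import numpy as np

def minimum_distance(H: np.ndarray) -> int:
    """Smallest number of columns of H that are linearly dependent over GF(2)."""
    n = H.shape[1]
    for d in range(1, n + 1):
        for cols in combinations(range(n), d):
            if not (H[:, list(cols)].sum(axis=1) % 2).any():
                return d
    return n + 1  # trivial code: no nonzero codeword exists

# Example: the [7, 4] Hamming code, whose minimum distance is 3.
H_hamming = np.array([[1, 0, 1, 0, 1, 0, 1],
                      [0, 1, 1, 0, 0, 1, 1],
                      [0, 0, 0, 1, 1, 1, 1]])
print(minimum_distance(H_hamming))  # prints 3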