
    Broadcast system source codes: a new paradigm for data compression

    Broadcast systems play a central role in an enormous variety of network technologies in which one system node must simultaneously send either the same or different information to multiple nodes in the network. Systems incorporating broadcast components include such diverse technologies as wireless communications systems, web servers, distributed computing devices, and video conferencing systems. Currently, the compression algorithms (or source codes) employed in these devices fail to take advantage of the characteristics specific to broadcast systems. Instead, they treat a single node transmitting information to a collection of receivers as a collection of single-transmitter single-receiver communications problems and employ an independent source code on each. This approach is convenient, since it allows direct application of traditional compression techniques in a wide variety of broadcast system applications. Nonetheless, we argue here that the approach is inherently flawed. Our innovation in this paper is to treat the general broadcast system (with an arbitrary number of receivers and both specific and common information) as an inseparable whole and consider the resulting source coding ramifications. The result is a new paradigm for data compression on general broadcast systems. In this work, we describe this broadcast system source coding paradigm and examine the potential gains achievable by moving away from more conventional methods.
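    The waste the abstract describes can be illustrated with a toy rate accounting. In this sketch, the function names and bit counts are illustrative assumptions, not the paper's formulation: we simply compare re-sending common information to every receiver against an idealized broadcast code that transmits it once.

```python
def separate_rate(common_bits, specific_bits):
    """Independent per-receiver codes: the common part is re-sent to
    each receiver alongside that receiver's specific part."""
    return sum(common_bits + s for s in specific_bits)


def broadcast_rate(common_bits, specific_bits):
    """Idealized broadcast code: every receiver hears the channel, so
    the common part need only be transmitted once."""
    return common_bits + sum(specific_bits)


# Two receivers sharing 1000 common bits, plus 200 and 300 specific bits:
# separate coding sends (1000+200) + (1000+300) = 2500 bits,
# a broadcast code sends 1000 + 200 + 300 = 1500 bits.
total_separate = separate_rate(1000, [200, 300])
total_broadcast = broadcast_rate(1000, [200, 300])
```

    The gap grows with the number of receivers and the fraction of common information, which is the regime where the paper's joint treatment pays off.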

    Improved Modeling of the Correlation Between Continuous-Valued Sources in LDPC-Based DSC

    Accurate modeling of the correlation between the sources plays a crucial role in the efficiency of distributed source coding (DSC) systems. This correlation is commonly modeled in the binary domain by using a single binary symmetric channel (BSC), both for binary and continuous-valued sources. We show that "one" BSC cannot accurately capture the correlation between continuous-valued sources; a more accurate model requires "multiple" BSCs, as many as the number of bits used to represent each sample. We incorporate this new model into the DSC system that uses low-density parity-check (LDPC) codes for compression. The standard Slepian-Wolf LDPC decoder requires a slight modification so that the parameters of all BSCs are integrated in the log-likelihood ratios (LLRs). Further, using an interleaver the data belonging to different bit-planes are shuffled to introduce randomness in the binary domain. The new system has the same complexity and delay as the standard one. Simulation results prove the effectiveness of the proposed model and system.
    Comment: 5 pages, 4 figures; presented at the Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA, November 201
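    The per-bit-plane LLR computation the abstract describes can be sketched as follows. The function names and the example crossover probabilities are illustrative assumptions, not the paper's exact decoder: the point is that each bit-plane gets its own BSC parameter rather than one shared value.

```python
import math


def bsc_llr(side_info_bit, crossover_p):
    """LLR of a source bit x given the correlated side-information bit y,
    under a BSC with crossover probability p = P(x != y):
        LLR = log P(x=0 | y) / P(x=1 | y) = (1 - 2y) * log((1 - p) / p)
    """
    return (1 - 2 * side_info_bit) * math.log((1 - crossover_p) / crossover_p)


def sample_llrs(side_info_bits, crossover_ps):
    """One BSC per bit-plane: bit i of each quantized sample uses its own
    crossover probability, instead of a single p for all planes."""
    assert len(side_info_bits) == len(crossover_ps)
    return [bsc_llr(y, p) for y, p in zip(side_info_bits, crossover_ps)]


# Example: 4-bit samples. For continuous-valued sources the MSB plane is
# typically far more correlated (smaller p) than the LSB plane.
llrs = sample_llrs([1, 0, 1, 1], [0.01, 0.05, 0.2, 0.4])
```

    These per-plane LLRs are what the (slightly modified) Slepian-Wolf LDPC decoder would consume in place of a single-BSC LLR.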

    LDMIC: Learning-based Distributed Multi-view Image Coding

    Multi-view image compression plays a critical role in 3D-related applications. Existing methods adopt a predictive coding architecture, which requires joint encoding to compress the corresponding disparity as well as residual information. This demands collaboration among cameras and enforces the epipolar geometric constraint between different views, which makes it challenging to deploy these methods in distributed camera systems with randomly overlapping fields of view. Meanwhile, distributed source coding theory indicates that efficient data compression of correlated sources can be achieved by independent encoding and joint decoding, which motivates us to design a learning-based distributed multi-view image coding (LDMIC) framework. With independent encoders, LDMIC introduces a simple yet effective joint context transfer module based on the cross-attention mechanism at the decoder to effectively capture the global inter-view correlations, which is insensitive to the geometric relationships between images. Experimental results show that LDMIC significantly outperforms both traditional and learning-based MIC methods while enjoying fast encoding speed. Code will be released at https://github.com/Xinjie-Q/LDMIC.
    Comment: Accepted by ICLR 202
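    The cross-attention mechanism underlying the joint context transfer module can be sketched in miniature. This is a generic single-head cross-attention, not LDMIC's actual module; the token counts, feature width, and function names are illustrative assumptions. Note that nothing in the computation encodes an epipolar constraint: each query token can attend to any token of the other view, which is why such a module is insensitive to the geometric relationship between images.

```python
import numpy as np


def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)


def cross_attention(query_feats, context_feats):
    """Single-head cross-attention: features of the view being decoded
    (queries) attend to features of another view (keys = values here).
    Correlation is captured globally, with no geometric prior."""
    d_k = query_feats.shape[-1]
    scores = query_feats @ context_feats.T / np.sqrt(d_k)   # (Nq, Nc)
    weights = softmax(scores, axis=-1)                      # rows sum to 1
    return weights @ context_feats                          # (Nq, d)


rng = np.random.default_rng(0)
q = rng.normal(size=(16, 32))   # tokens of the view being decoded
c = rng.normal(size=(24, 32))   # tokens of another view
out = cross_attention(q, c)     # context transferred into the query view
```

    In a full model the queries, keys, and values would each pass through learned projections; they are omitted here to keep the attention pattern itself visible.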

    Neural Mechanisms for Information Compression by Multiple Alignment, Unification and Search

    This article describes how an abstract framework for perception and cognition may be realised in terms of neural mechanisms and neural processing. This framework — called information compression by multiple alignment, unification and search (ICMAUS) — has been developed in previous research as a generalized model of any system for processing information, either natural or artificial. It has a range of applications including the analysis and production of natural language, unsupervised inductive learning, recognition of objects and patterns, probabilistic reasoning, and others. The proposals in this article may be seen as an extension and development of Hebb’s (1949) concept of a ‘cell assembly’. The article describes how the concept of ‘pattern’ in the ICMAUS framework may be mapped onto a version of the cell assembly concept and the way in which neural mechanisms may achieve the effect of ‘multiple alignment’ in the ICMAUS framework. By contrast with the Hebbian concept of a cell assembly, it is proposed here that any one neuron can belong to one and only one assembly. A key feature of the present proposals, which is not part of the Hebbian concept, is that any cell assembly may contain ‘references’ or ‘codes’ that serve to identify one or more other cell assemblies. This mechanism allows information to be stored in a compressed form, it provides a robust mechanism by which assemblies may be connected to form hierarchies and other kinds of structure, it means that assemblies can express abstract concepts, and it provides solutions to some of the other problems associated with cell assemblies. Drawing on insights derived from the ICMAUS framework, the article also describes how learning may be achieved with neural mechanisms. This concept of learning is significantly different from the Hebbian concept and appears to provide a better account of what we know about human learning.

    EC-CENTRIC: An Energy- and Context-Centric Perspective on IoT Systems and Protocol Design

    The radio transceiver of an IoT device is often where most of the energy is consumed. For this reason, most research so far has focused on low-power circuit and energy-efficient physical layer designs, with the goal of reducing the average energy per information bit required for communication. While these efforts are valuable per se, their actual effectiveness can be partially neutralized by ill-designed network, processing and resource management solutions, which can become a primary factor of performance degradation, in terms of throughput, responsiveness and energy efficiency. The objective of this paper is to describe an energy-centric and context-aware optimization framework that accounts for the energy impact of the fundamental functionalities of an IoT system and that proceeds along three main technical thrusts: 1) balancing signal-dependent processing techniques (compression and feature extraction) and communication tasks; 2) jointly designing channel access and routing protocols to maximize the network lifetime; 3) providing self-adaptability to different operating conditions through the adoption of suitable learning architectures and of flexible/reconfigurable algorithms and protocols. After discussing this framework, we present some preliminary results that validate the effectiveness of our proposed line of action, and show how the use of adaptive signal processing and channel access techniques allows an IoT network to dynamically trade lifetime for signal distortion, according to the requirements dictated by the application.

    Yielding and irreversible deformation below the microscale: Surface effects and non-mean-field plastic avalanches

    Nanoindentation techniques recently developed to measure the mechanical response of crystals under external loading conditions reveal new phenomena upon decreasing sample size below the microscale. At small length scales, material resistance to irreversible deformation depends on sample morphology. Here we study the mechanisms of yield and plastic flow in inherently small crystals under uniaxial compression. Discrete structural rearrangements emerge as series of abrupt discontinuities in stress-strain curves. We obtain the theoretical dependence of the yield stress on system size and geometry and elucidate the statistical properties of plastic deformation at such scales. Our results show that the absence of dislocation storage leads to crucial effects on the statistics of plastic events, ultimately affecting the universal scaling behavior observed at larger scales.
    Comment: Supporting videos available at http://dx.plos.org/10.1371/journal.pone.002041