
    Error threshold in optimal coding, numerical criteria and classes of universalities for complexity

    The free energy of the Random Energy Model at the transition point between the ferromagnetic and spin glass phases is calculated. At this point, equivalent to the decoding error threshold in optimal codes, the free energy has finite-size corrections proportional to the square root of the number of degrees of freedom. The response of the magnetization to the ferromagnetic couplings is maximal when the magnetization equals one half. We give several criteria of complexity and define different universality classes. According to our classification, the lowest class of complexity contains random graphs, Markov models, and hidden Markov models. At the next level is the Sherrington-Kirkpatrick spin glass, connected with neural-network models. At a higher level are critical theories, the spin glass phase of the Random Energy Model, percolation, and self-organized criticality (SOC). The top-level class involves HOT design, the error threshold in optimal coding, language, and perhaps financial markets. Living systems are also related to this last class. A concept of anti-resonance is suggested for complex systems.
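    For orientation, the freezing transition referred to above is already present in the standard Random Energy Model without ferromagnetic couplings; the sketch below states its textbook free energy per degree of freedom, assuming 2^N independent Gaussian energy levels of variance N J^2/2 (a reference formula, not the modified model analyzed in the paper).

```latex
% Textbook free energy per degree of freedom of the standard REM,
% shown only as a reference point for the transition discussed above;
% J sets the energy scale.
\[
  f(T) =
  \begin{cases}
    -T\ln 2 - \dfrac{J^{2}}{4T}, & T \ge T_{c} \quad (\text{paramagnetic phase}),\\[1ex]
    -J\sqrt{\ln 2},              & T \le T_{c} \quad (\text{frozen phase}),
  \end{cases}
  \qquad
  T_{c} = \frac{J}{2\sqrt{\ln 2}} .
\]
```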

    Compressive sampling for accelerometer signals in structural health monitoring

    In structural health monitoring (SHM) of civil structures, data compression is often needed to reduce the cost of data transfer and storage, because of the large volumes of sensor data generated by the monitoring system. The traditional framework for data compression is to first sample the full signal and then compress it. Recently, a new data compression method named compressive sampling (CS), which can acquire the data directly in compressed form by using special sensors, has been presented. In this article, the potential of CS for compressing vibration data is investigated by simulating the CS sensor algorithm. For reconstruction of the signal, both wavelet and Fourier orthogonal bases are examined. Acceleration data collected from the SHM system of the Shandong Binzhou Yellow River Highway Bridge are used to analyze the data compression ability of CS. For comparison, both wavelet-based and Huffman coding methods are employed to compress the data. The results show that the compression ratios achieved using CS are not high, because the vibration data used in SHM of civil structures are not naturally sparse in the chosen bases.
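    As a rough illustration of the simulation framework described above (not the authors' code), the sketch below compresses a synthetic acceleration-like signal with a random Gaussian measurement matrix and reconstructs it by orthogonal matching pursuit in a DCT basis; the signal length, number of measurements, and sparsity budget are illustrative assumptions.

```python
import numpy as np
from scipy.fft import idct
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)

# Synthetic stand-in for an accelerometer record: a few dominant tones plus noise.
n = 512                              # samples in the full signal
t = np.arange(n)
x = (np.sin(2 * np.pi * 0.02 * t) + 0.5 * np.sin(2 * np.pi * 0.11 * t)
     + 0.05 * rng.standard_normal(n))

# Compressive sampling: m << n random projections acquired "at the sensor".
m = 128                              # number of compressive measurements
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ x

# Sparsifying basis: inverse DCT columns (x = Psi @ alpha, with alpha sparse).
Psi = idct(np.eye(n), axis=0, norm="ortho")
A = Phi @ Psi

# Reconstruction by orthogonal matching pursuit with an assumed sparsity budget.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=20, fit_intercept=False).fit(A, y)
x_hat = Psi @ omp.coef_

cr = n / m                                           # compression ratio
err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)  # relative reconstruction error
print(f"compression ratio {cr:.1f}, relative error {err:.3f}")
```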

    Hyperspectral image compression: adapting SPIHT and EZW to anisotropic 3-D wavelet coding

    Hyperspectral images present some specific characteristics that should be exploited by an efficient compression system. In compression, wavelets have shown good adaptability to a wide range of data while remaining of reasonable complexity. Wavelet-based compression algorithms have already been used successfully on several hyperspectral space missions. This paper focuses on the optimization of a full wavelet compression system for hyperspectral images. Each step of the compression algorithm is studied and optimized. First, an algorithm to find the optimal 3-D wavelet decomposition in a rate-distortion sense is defined. Then, it is shown that a specific fixed decomposition has almost the same performance while being preferable in terms of complexity. This decomposition is shown to improve significantly on the classical isotropic decomposition. One of the most useful properties of this fixed decomposition is that it allows the use of zerotree algorithms. Various tree structures, creating relationships between coefficients, are compared. Two efficient compression methods based on zerotree coding (EZW and SPIHT) are adapted to this near-optimal decomposition with the best tree structure found. Performance is compared with an adaptation of JPEG 2000 for hyperspectral images on six different areas exhibiting different statistical properties.
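    The anisotropic (fully separable) 3-D decomposition discussed above can be sketched with PyWavelets by transforming the spectral axis with more levels than the two spatial axes; the cube shape, wavelet, and level counts below are illustrative assumptions, not the paper's optimized choices.

```python
import numpy as np
import pywt

def separable_dwt(cube, wavelet, levels_per_axis):
    """Fully separable (anisotropic) 3-D DWT: apply a 1-D multilevel
    transform independently along each axis, with its own level count."""
    coeffs = cube
    for axis, levels in enumerate(levels_per_axis):
        if levels == 0:
            continue
        # pywt.wavedec returns [approx, detail_L, ..., detail_1]; stacking them
        # back along the transformed axis keeps a single coefficient cube.
        parts = pywt.wavedec(coeffs, wavelet, level=levels, axis=axis)
        coeffs = np.concatenate(parts, axis=axis)
    return coeffs

# Illustrative smooth hyperspectral-like cube: (bands, rows, cols).
b, r, c = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 32),
                      np.linspace(0, 1, 32), indexing="ij")
cube = np.cos(4 * np.pi * b) * np.exp(-((r - 0.5) ** 2 + (c - 0.5) ** 2) / 0.1)
cube += 0.01 * np.random.default_rng(1).standard_normal(cube.shape)

# Anisotropic choice: deeper decomposition along the (highly correlated)
# spectral axis than along the spatial axes.
coeffs = separable_dwt(cube, "db2", levels_per_axis=(4, 2, 2))

# Crude sparsity check: fraction of energy held by the largest 5% of coefficients.
mags = np.sort(np.abs(coeffs).ravel())[::-1]
k = int(0.05 * mags.size)
print("energy in top 5% of coefficients:",
      float((mags[:k] ** 2).sum() / (mags ** 2).sum()))
```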

    Bilayer Low-Density Parity-Check Codes for Decode-and-Forward in Relay Channels

    This paper describes an efficient implementation of binning for the relay channel using low-density parity-check (LDPC) codes. We devise bilayer LDPC codes to approach the theoretically promised rate of the decode-and-forward relaying strategy by incorporating relay-generated information bits in specially designed bilayer graphical code structures. While conventional LDPC codes are sensitively tuned to operate efficiently at a single channel parameter, the proposed bilayer LDPC codes are capable of working at two different channel parameters and two different rates: one at the relay and one at the destination. To analyze the performance of bilayer LDPC codes, bilayer density evolution is devised as an extension of the standard density evolution algorithm. Based on bilayer density evolution, a design methodology is developed in which the degree distribution is iteratively improved using linear programming. Further, in order to approach the theoretical decode-and-forward rate over a wide range of channel parameters, this paper proposes two different forms of bilayer codes: bilayer-expurgated and bilayer-lengthened codes. It is demonstrated that a properly designed bilayer LDPC code can achieve an asymptotic infinite-length threshold within a 0.24 dB gap to the Shannon limits of two different channels simultaneously over a wide range of channel parameters. By practical code construction, finite-length bilayer codes are shown to approach within a 0.6 dB gap to the theoretical decode-and-forward rate of the relay channel at a block length of 10^5 and a bit-error rate (BER) of 10^{-4}. Finally, it is demonstrated that a generalized version of the proposed bilayer code construction is applicable to relay networks with multiple relays.
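    As a simplified stand-in for the bilayer density evolution used in the paper, the sketch below runs standard (single-layer) density evolution for an LDPC ensemble over a binary erasure channel; the degree distributions and erasure probabilities are illustrative assumptions.

```python
import numpy as np

def bec_density_evolution(eps, lam, rho, iters=2000, tol=1e-12):
    """Standard density evolution for an LDPC ensemble on the BEC.

    eps : channel erasure probability
    lam : edge-perspective variable-node degree distribution, lam[i] = coeff of x^i
    rho : edge-perspective check-node degree distribution, rho[i] = coeff of x^i
    Returns the fixed-point erasure probability of a variable-to-check message.
    """
    lam = np.asarray(lam, dtype=float)
    rho = np.asarray(rho, dtype=float)
    x = eps
    for _ in range(iters):
        # check-to-variable erasure probability: 1 - rho(1 - x)
        y = 1.0 - np.polynomial.polynomial.polyval(1.0 - x, rho)
        # variable-to-check erasure probability: eps * lam(y)
        x_new = eps * np.polynomial.polynomial.polyval(y, lam)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Illustrative (3,6)-regular ensemble: lambda(x) = x^2, rho(x) = x^5.
lam = [0, 0, 1.0]
rho = [0, 0, 0, 0, 0, 1.0]

# Sweep the erasure probability to locate the decoding threshold numerically.
for eps in np.arange(0.40, 0.46, 0.01):
    residual = bec_density_evolution(eps, lam, rho)
    print(f"eps = {eps:.2f} -> residual erasure {residual:.2e}")
```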

    Closed-loop estimation of retinal network sensitivity reveals signature of efficient coding

    According to the theory of efficient coding, sensory systems are adapted to represent natural scenes with high fidelity and at minimal metabolic cost. Testing this hypothesis for sensory structures performing non-linear computations on high-dimensional stimuli is still an open challenge. Here we develop a method to characterize the sensitivity of the retinal network to perturbations of a stimulus. Using closed-loop experiments, we explore selectively the space of possible perturbations around a given stimulus. We then show that the response of the retinal population to these small perturbations can be described by a local linear model. Using this model, we compute the sensitivity of the neural response to arbitrary temporal perturbations of the stimulus and find a peak in the sensitivity as a function of the frequency of the perturbations. Based on a minimal theory of sensory processing, we argue that this peak is set to maximize information transmission. Our approach is relevant to testing the efficient coding hypothesis locally in any context where no reliable encoding model is known.
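    A minimal sketch of the local-linear-model step described above (not the authors' closed-loop pipeline): given recorded responses to small stimulus perturbations, fit the linear filter by least squares and read off the sensitivity to a sinusoidal perturbation at each frequency; the dimensions and synthetic data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

T, n_trials, n_cells = 40, 300, 20         # perturbation length (time bins), trials, neurons

# Synthetic data standing in for closed-loop recordings:
# small perturbations delta_s and noisy responses r = r0 + L_true @ delta_s + noise.
L_true = rng.standard_normal((n_cells, T)) * np.hanning(T)   # arbitrary "ground truth" filter
delta_s = 0.1 * rng.standard_normal((n_trials, T))
r0 = rng.standard_normal(n_cells)
responses = r0 + delta_s @ L_true.T + 0.05 * rng.standard_normal((n_trials, n_cells))

# Fit the local linear model by least squares: responses - mean ~ delta_s @ L.T
L_hat, *_ = np.linalg.lstsq(delta_s, responses - responses.mean(axis=0), rcond=None)
L_hat = L_hat.T                             # shape (n_cells, T)

# Sensitivity to a sinusoidal perturbation at each frequency: norm of the
# population response predicted for a unit-amplitude sine at that frequency.
dt = 0.02                                   # bin size in seconds (assumed)
freqs = np.fft.rfftfreq(T, d=dt)
sens = [np.linalg.norm(L_hat @ np.sin(2 * np.pi * f * np.arange(T) * dt)) for f in freqs]

peak = freqs[int(np.argmax(sens))]
print(f"sensitivity peaks near {peak:.1f} Hz for this synthetic example")
```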