39 research outputs found

    Multiterminal source coding: sum-rate loss, code designs, and applications to video sensor networks

    Get PDF
    Driven by a host of emerging applications (e.g., sensor networks and wireless video), distributed source coding (i.e., Slepian-Wolf coding, Wyner-Ziv coding, and various other forms of multiterminal source coding) has recently become a very active research area. This dissertation focuses on the multiterminal (MT) source coding problem and consists of three parts. The first part studies the sum-rate loss of an important special case of quadratic Gaussian MT source coding, where all sources are positively symmetric and all target distortions are equal. We first give the minimum sum-rate for joint encoding of Gaussian sources in the symmetric case, and then show that the supremum of the sum-rate loss due to distributed encoding in this case is (1/2) log2(5/4) ≈ 0.161 b/s when L = 2 and increases on the order of (√L/2) log2 e b/s as the number of terminals L goes to infinity. The supremum sum-rate loss of 0.161 b/s in the symmetric case equals that in general quadratic Gaussian two-terminal source coding without the symmetry assumption. It is conjectured that this equality holds for any number of terminals. In the second part, we present two practical MT coding schemes under the framework of Slepian-Wolf coded quantization (SWCQ) for both direct and indirect MT problems. The first, asymmetric SWCQ scheme relies on quantization and Wyner-Ziv coding, and it is implemented via source splitting to achieve any point on the sum-rate bound. In the second, conceptually simpler scheme, symmetric SWCQ, the two quantized sources are compressed using symmetric Slepian-Wolf coding via a channel code partitioning technique that is capable of achieving any point on the Slepian-Wolf sum-rate bound. Our practical designs employ trellis-coded quantization and turbo/LDPC codes for both asymmetric and symmetric Slepian-Wolf coding. Simulation results show a gap of only 0.139-0.194 bit per sample from the sum-rate bound for both direct and indirect MT coding problems. The third part applies the above two MT coding schemes to a pair of practical sources, namely stereo video sequences, to save sum rate over independent coding of both sequences. Experiments with both schemes on stereo video sequences, using H.264, LDPC codes for Slepian-Wolf coding of the motion vectors, and scalar quantization in conjunction with LDPC codes for Wyner-Ziv coding of the residual coefficients, give a slightly smaller sum rate than separate H.264 coding of both sequences at the same video quality.
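
    A quick numerical check of the two-terminal figure quoted in this abstract (a minimal sketch; only the closed form (1/2) log2(5/4) is taken from the text above, the rest is illustrative):

```python
# Numerical check of the quoted two-terminal supremum sum-rate loss,
# (1/2) * log2(5/4) bits per sample (b/s).
import math

loss_two_terminal = 0.5 * math.log2(5 / 4)
print(f"supremum sum-rate loss for L = 2: {loss_two_terminal:.3f} b/s")  # ~0.161
```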

    Metrics to evaluate compression algorithms for RAW SAR data

    Get PDF
    Modern synthetic aperture radar (SAR) systems have size, weight, power and cost (SWAP-C) limitations since platforms are becoming smaller, while SAR operating modes are becoming more complex. Due to the computational complexity of the SAR processing required for modern SAR systems, performing the processing on board the platform is not a feasible option. Thus, SAR systems are producing an ever-increasing volume of data that needs to be transmitted to a ground station for processing. Compression algorithms are utilised to reduce the data volume of the raw data. However, these algorithms can cause degradation and losses that may degrade the effectiveness of the SAR mission. This study addresses the lack of standardised quantitative performance metrics to objectively quantify the performance of SAR data-compression algorithms. Therefore, metrics were established in two different domains, namely the data domain and the image domain. The data-domain metrics are used to determine the performance of the quantisation and the associated losses or errors it induces in the raw data samples. The image-domain metrics evaluate the quality of the SAR image after SAR processing has been performed. In this study three well-known SAR compression algorithms were implemented and applied to three real SAR data sets that were obtained from a prototype airborne SAR system. The performance of these algorithms was evaluated using the proposed metrics. Important metrics in the data domain were found to be the compression ratio, the entropy, statistical parameters like the skewness and kurtosis to measure the deviation from the original distributions of the uncompressed data, and the dynamic range. The data histograms are an important visual representation of the effects of the compression algorithm on the data. An important error measure in the data domain is the signal-to-quantisation-noise ratio (SQNR), and the phase error for applications where phase information is required to produce the output. Important metrics in the image domain include the dynamic range, the impulse response function, the image contrast, as well as the error measure, signal-to-distortion-noise ratio (SDNR). The metrics suggested that all three algorithms performed well and are thus well suited for the compression of raw SAR data. The fast Fourier transform block adaptive quantiser (FFT-BAQ) algorithm had the overall best performance, but the analysis of the computational complexity of its compression steps indicated that it has the highest level of complexity compared to the other two algorithms. Since different levels of degradation are acceptable for different SAR applications, a trade-off can be made between the data reduction and the degradation caused by the algorithm. Due to SWAP-C limitations, there also remains a trade-off between the performance and the computational complexity of the compression algorithm. Dissertation (MEng)--University of Pretoria, 2019. Electrical, Electronic and Computer Engineering.
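
    For concreteness, the following is a minimal sketch of some of the data-domain metrics listed above (entropy, SQNR, skewness, kurtosis), computed for a synthetic block of raw samples and its quantised version; the Gaussian stand-in data, the 4-bit uniform quantiser, and the function names are illustrative assumptions, not the algorithms or data sets evaluated in the dissertation:

```python
# Illustrative data-domain metrics for a block of raw samples and its quantised
# version. The Gaussian stand-in data and the 4-bit uniform quantiser are
# assumptions for demonstration only.
import numpy as np

def entropy_bits(samples, bins=256):
    """Empirical entropy of the sample histogram, in bits per sample."""
    hist, _ = np.histogram(samples, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

def sqnr_db(original, quantised):
    """Signal-to-quantisation-noise ratio in dB."""
    noise = original - quantised
    return float(10 * np.log10(np.mean(original**2) / np.mean(noise**2)))

rng = np.random.default_rng(0)
raw = rng.standard_normal(100_000)           # stand-in for raw SAR I/Q samples
step = (raw.max() - raw.min()) / 2**4        # 4-bit uniform quantiser (assumed)
quant = np.round(raw / step) * step

print("entropy  :", entropy_bits(raw), "->", entropy_bits(quant), "bits/sample")
print("SQNR     :", sqnr_db(raw, quant), "dB")
print("skewness :", float(((quant - quant.mean()) ** 3).mean() / quant.std() ** 3))
print("kurtosis :", float(((quant - quant.mean()) ** 4).mean() / quant.std() ** 4))
```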

    Temporal Lossy In-Situ Compression for Computational Fluid Dynamics Simulations

    Get PDF
    During CFD simulations of metal melts within SFB920, very large volumes of data accumulate on the Taurus HPC cluster in Dresden, and handling them severely slows down the scientific workflow. On the one hand, transferring the data into visualization systems is possible only at great expense of time. On the other hand, interactive analysis of time-dependent processes is nearly impossible because of the storage bottleneck. For these reasons, this dissertation deals with the development of so-called temporal in-situ compression for scientific data directly within CFD simulations. Using new quantization methods, the data are compressed to roughly 10% of their original size, while the decompressed data exhibit a maximum error of 1%. In contrast to non-temporal compression, temporal compression encodes the difference between time steps in order to increase the compression ratio. Because the data volume is many times smaller, storage and transfer costs are reduced. Since compression, transfer, and decompression run up to 4 times faster than the transfer of uncompressed data, the scientific workflow is accelerated.
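
    A minimal sketch of the temporal idea described above, assuming a closed-loop (DPCM-style) scheme with a uniform quantiser and an absolute per-value error budget; the 1% figure is interpreted here as an error bound purely for illustration, and none of this is the compressor implemented in the dissertation:

```python
# Closed-loop temporal compression sketch: quantise the difference between the
# current time step and the previous *reconstruction* so the per-value error
# stays bounded by max_err. Uniform quantiser and error budget are assumptions.
import numpy as np

def compress_series(frames, max_err):
    """DPCM-style temporal compression: returns integer residuals per time step."""
    step = 2 * max_err                      # uniform quantiser meeting |error| <= max_err
    prev_rec = np.zeros_like(frames[0])
    codes = []
    for frame in frames:
        q = np.round((frame - prev_rec) / step).astype(np.int32)
        codes.append(q)                     # entropy-code these in a real pipeline
        prev_rec = prev_rec + q * step      # decoder-side reconstruction
    return codes

def decompress_series(codes, max_err):
    step = 2 * max_err
    rec, out = None, []
    for q in codes:
        rec = (np.zeros(q.shape) if rec is None else rec) + q * step
        out.append(rec)
    return out

frames = [np.sin(np.linspace(0, 2 * np.pi, 1000) + 0.01 * t) for t in range(50)]
codes = compress_series(frames, max_err=0.01)
recon = decompress_series(codes, max_err=0.01)
print("max abs error:", max(float(np.abs(f - r).max()) for f, r in zip(frames, recon)))
```

    Quantising against the previous reconstruction (rather than the previous raw frame) is what keeps the per-value error bounded instead of accumulating across time steps.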

    ADAPTIVE CHANNEL AND SOURCE CODING USING APPROXIMATE INFERENCE

    Get PDF
    Channel coding and source coding are two important problems in communications. Although both channel coding and source coding (especially distributed source coding (DSC)) can achieve their ultimate performance given perfect knowledge of the channel noise and the source correlation, respectively, such information may not always be available at the decoder side, for example because of the time-varying nature of some communication systems and of the sources themselves. In this dissertation, I mainly focus on online channel-noise estimation and correlation estimation using both stochastic and deterministic approximate inference on factor graphs.

    In channel coding, belief propagation (BP) is a powerful algorithm for decoding low-density parity-check (LDPC) codes over additive white Gaussian noise (AWGN) channels. However, the traditional BP algorithm cannot adapt efficiently to statistical changes of the SNR in an AWGN channel. To solve this problem, two common workarounds in approximate inference are stochastic methods (e.g., particle filtering (PF)) and deterministic methods (e.g., expectation propagation (EP)). Generally, deterministic methods are much faster than stochastic methods; in contrast, stochastic methods are more flexible and suitable for any distribution. In this dissertation, I propose two adaptive LDPC decoding schemes that perform online estimation of time-varying channel state information (in particular the signal-to-noise ratio (SNR)) at the bit level by incorporating the PF and EP algorithms. Experimental results comparing the proposed PF-based and EP-based approaches show that the EP-based approach obtains comparable estimation accuracy with less computational complexity than the PF-based method for both stationary and time-varying SNR, while simultaneously enhancing BP decoding performance. Moreover, the EP estimator shows a very fast convergence speed, and the additional computational overhead of the proposed decoder is less than 10% of the standard BP decoder.

    Given the close relationship between source coding and channel coding, the proposed ideas are then extended to source correlation estimation. First, I study the correlation estimation problem in the lossless DSC setup, considering both asymmetric and non-asymmetric Slepian-Wolf (SW) coding of two correlated binary sources. The PF-based and EP-based approaches are extended to handle the correlation between the two binary sources, which is modeled as a virtual binary symmetric channel (BSC) with a time-varying crossover probability. In addition, to handle the correlation estimation problem of Wyner-Ziv (WZ) coding, a lossy DSC setup, I design a joint bit-plane model that allows the PF-based approach to track the correlation between non-binary sources. Experimental results show that the proposed correlation estimation approaches significantly improve the compression performance of DSC.

    Finally, owing to its ultra-low encoding complexity, DSC is a promising technique for many tasks in which the encoder has only limited computing and communication power, e.g., space imaging systems. In this dissertation, I consider a real-world application of the proposed correlation estimation scheme to onboard low-complexity compression of solar stereo images, since such solutions are essential to reduce onboard storage, processing, and communication resources. I propose an adaptive distributed compression solution using PF that tracks the correlation, as well as performs disparity estimation, at the decoder side. The proposed algorithm is tested on stereo solar images captured by the twin-satellite system of NASA’s STEREO project. The experimental results show a significant PSNR improvement over traditional separate bit-plane decoding without dynamic correlation and disparity estimation.
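
    The following is a minimal, self-contained sketch of the kind of online stochastic estimation referred to above: a particle filter tracking a slowly varying AWGN noise level from BPSK channel outputs with the transmitted bits unknown. The random-walk model, particle count, and resampling rule are illustrative assumptions, and this stand-alone estimator is not the decoder-integrated scheme proposed in the dissertation:

```python
# Particle-filter sketch: track a time-varying AWGN noise standard deviation from
# BPSK channel outputs, treating the transmitted bits as unknown (the likelihood
# is a two-component Gaussian mixture over x = +1 and x = -1).
import numpy as np

rng = np.random.default_rng(1)

def track_noise_std(y, n_particles=500, walk=0.01):
    """Return a per-sample estimate of the AWGN standard deviation."""
    particles = rng.uniform(0.1, 2.0, n_particles)      # candidate noise std values
    weights = np.full(n_particles, 1.0 / n_particles)
    estimates = []
    for obs in y:
        particles = np.abs(particles + walk * rng.standard_normal(n_particles))
        # Likelihood of obs under x = +1 or x = -1, averaged because bits are unknown.
        lik = 0.5 * (np.exp(-(obs - 1) ** 2 / (2 * particles**2))
                     + np.exp(-(obs + 1) ** 2 / (2 * particles**2))) / particles
        weights *= lik
        weights /= weights.sum()
        estimates.append(float(np.sum(weights * particles)))
        # Resample when the effective sample size collapses.
        if 1.0 / np.sum(weights**2) < n_particles / 2:
            idx = rng.choice(n_particles, n_particles, p=weights)
            particles = particles[idx]
            weights = np.full(n_particles, 1.0 / n_particles)
    return estimates

bits = rng.integers(0, 2, 2000) * 2 - 1
true_std = np.linspace(0.4, 0.8, 2000)                   # slowly varying channel
y = bits + true_std * rng.standard_normal(2000)
est = track_noise_std(y)
print("final estimate:", est[-1], "true:", true_std[-1])
```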

    Dynamic information and constraints in source and channel coding

    Get PDF
    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. 237-251). This thesis explores dynamics in source coding and channel coding. We begin by introducing the idea of distortion side information, which does not directly depend on the source but instead affects the distortion measure. Such distortion side information is not only useful at the encoder; under certain conditions, knowing it at the encoder is optimal and knowing it at the decoder is useless. Thus distortion side information is a natural complement to Wyner-Ziv side information and may be useful in exploiting properties of the human perceptual system as well as in sensor or control applications. In addition to developing the theoretical limits of source coding with distortion side information, we also construct practical quantizers based on lattices and codes on graphs. Our use of codes on graphs is also of independent interest, since it highlights some issues in translating the success of turbo and LDPC codes into the realm of source coding. Finally, to explore the dynamics of side information correlated with the source, we consider fixed-lag side information at the decoder. We focus on the special case of perfect side information with unit lag, corresponding to source coding with feedforward (the dual of channel coding with feedback). Using duality, we develop a linear-complexity algorithm which exploits the feedforward information to achieve the rate-distortion bound.

    The second part of the thesis focuses on channel dynamics in communication by introducing a new system model to study delay in streaming applications. We first consider an adversarial channel model where at any time the channel may suffer a burst of degraded performance (e.g., due to signal fading, interference, or congestion) and prove a coding theorem for the minimum decoding delay required to recover from such a burst. Our coding theorem illustrates the relationship between the structure of a code, the dynamics of the channel, and the resulting decoding delay. We also consider more general channel dynamics. Specifically, we prove a coding theorem establishing that, for certain collections of channel ensembles, delay-universal codes exist that simultaneously achieve the best delay for any channel in the collection. Practical constructions with low encoding and decoding complexity are described for both cases. Finally, we also consider architectures consisting of both source and channel coding which deal with channel dynamics by spreading information over space, frequency, multiple antennas, or alternate transmission paths in a network to avoid coding delays. Specifically, we explore whether the inherent diversity in such parallel channels should be exploited at the application layer via multiple description source coding, at the physical layer via parallel channel coding, or through some combination of joint source-channel coding. For on-off channel models, application-layer diversity architectures achieve better performance, while for channels with a continuous range of reception quality (e.g., additive Gaussian noise channels with Rayleigh fading) the reverse is true. Joint source-channel coding achieves the best of both by performing as well as application-layer diversity for on-off channels and as well as physical-layer diversity for continuous channels. by Emin Martinian. Ph.D.
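
    As a toy illustration of the distortion-side-information idea introduced above (my own sketch, not a construction from the thesis): when a per-sample weight q scales the squared error and the encoder knows q, it can spend its bits where errors are costly, whereas a decoder that only learns q afterwards cannot undo a coarse encoding. The two-valued weights and the step sizes below are assumptions, chosen so both encoders use roughly the same rate under a high-resolution approximation:

```python
# Toy comparison: encoder that knows the distortion weights vs. one that does not.
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(100_000)
q = rng.choice([0.1, 10.0], size=x.size)     # distortion side information (assumed weights)

def quantize(v, step):
    return np.round(v / step) * step

# Encoder ignorant of q: one step size everywhere.
x_blind = quantize(x, 0.5)

# Encoder that knows q: fine steps where errors are costly, coarse elsewhere.
# Steps of 0.25 / 1.0 give roughly the same average rate as 0.5 everywhere
# under a high-resolution approximation.
x_aware = np.where(q > 1.0, quantize(x, 0.25), quantize(x, 1.0))

for name, xhat in (("blind", x_blind), ("q-aware", x_aware)):
    print(name, "weighted MSE:", float(np.mean(q * (x - xhat) ** 2)))
```

    On this toy example the q-aware encoder achieves a noticeably smaller weighted MSE at a comparable rate, which is the sense in which distortion side information helps most at the encoder.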

    Design techniques for graph-based error-correcting codes and their applications

    Get PDF
    In Shannon's seminal paper, "A Mathematical Theory of Communication", he defined "Channel Capacity", which predicted the ultimate performance that transmission systems can achieve and suggested that capacity is achievable by error-correcting (channel) coding. The main idea of error-correcting codes is to add redundancy to the information to be transmitted so that the receiver can exploit the correlation between the transmitted information and the redundancy and correct or detect errors caused by the channel. The discovery of turbo codes and the rediscovery of low-density parity-check (LDPC) codes have revived research in channel coding with novel ideas and techniques on code concatenation, iterative decoding, graph-based construction, and design based on density evolution. This dissertation focuses on the design of graph-based channel codes such as LDPC and irregular repeat-accumulate (IRA) codes via density evolution, and uses density evolution to design IRA codes for scalable image/video communication and LDPC codes for distributed source coding, which can be considered a channel coding problem. The first part of the dissertation covers the design and analysis of rate-compatible IRA codes for scalable image transmission systems; using density evolution, it analyzes the effect of puncturing applied to IRA codes and the asymptotic performance of the resulting systems. In the second part of the dissertation, we consider designing source-optimized IRA codes. The idea is to exploit the unequal error protection (UEP) capability that IRA codes offer against errors because of their irregularity. In video and image transmission systems, performance is measured by peak signal-to-noise ratio (PSNR), and we propose an approach to design IRA codes optimized for this criterion. In the third part of the dissertation, we investigate the Slepian-Wolf coding problem using LDPC codes. The problems addressed include coding of multiple sources and non-binary sources, and coding using multi-level and non-binary codes.
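
    As a small illustration of the density-evolution analysis referred to above, the sketch below runs the standard erasure-channel recursion x ← ε·λ(1 − ρ(1 − x)) for a regular (3,6) ensemble; the ensemble and the channel are textbook examples, not the IRA/LDPC designs of the dissertation:

```python
# Density evolution over the binary erasure channel for an LDPC ensemble.
def density_evolution_bec(eps, lam, rho, iters=2000, tol=1e-9):
    """Track the erasure probability x of variable-to-check messages.

    lam(x) and rho(x) are the edge-perspective variable/check degree polynomials;
    the recursion is x <- eps * lam(1 - rho(1 - x)).
    """
    x = eps
    for _ in range(iters):
        x_next = eps * lam(1 - rho(1 - x))
        if abs(x_next - x) < tol:
            break
        x = x_next
    return x

# Regular (3,6) ensemble: lambda(x) = x^2, rho(x) = x^5.
lam = lambda x: x**2
rho = lambda x: x**5

for eps in (0.40, 0.42, 0.43, 0.45):
    residual = density_evolution_bec(eps, lam, rho)
    print(f"channel erasure {eps:.2f}: residual erasure -> {residual:.2e}")
```

    Sweeping ε this way locates the ensemble's decoding threshold (about 0.429 for the (3,6) example), which is the quantity a density-evolution-based design optimizes over the degree distributions.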

    Joint source and channel coding

    Get PDF

    Quantization in acquisition and computation networks

    Get PDF
    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2013. Cataloged from PDF version of thesis. Includes bibliographical references (p. 151-165). In modern systems, it is often desirable to extract relevant information from large amounts of data collected at different spatial locations. Applications include sensor networks, wearable health-monitoring devices and a variety of other systems for inference. Several existing source coding techniques, such as Slepian-Wolf and Wyner-Ziv coding, achieve asymptotic compression optimality in distributed systems. However, these techniques are rarely used in sensor networks because of decoding complexity and prohibitively long code length. Moreover, the fundamental limits that arise from existing techniques are intractable to describe for a complicated network topology or when the objective of the system is to perform some computation on the data rather than to reproduce the data. This thesis bridges the technological gap between the needs of real-world systems and the optimistic bounds derived from asymptotic analysis. Specifically, we characterize fundamental trade-offs when the desired computation is incorporated into the compression design and the code length is one. To obtain both performance guarantees and achievable schemes, we use high-resolution quantization theory, which is complementary to the Shannon-theoretic analyses previously used to study distributed systems. We account for varied network topologies, such as those where sensors are allowed to collaborate or the communication links are heterogeneous. In these settings, a small amount of intersensor communication can provide a significant improvement in compression performance. As a result, this work suggests new compression principles and network design for modern distributed systems. Although the ideas in the thesis are motivated by current and future sensor network implementations, the framework applies to a wide range of signal processing questions. We draw connections between the fidelity criteria studied in the thesis and distortion measures used in perceptual coding. As a consequence, we determine the optimal quantizer for expected relative error (ERE), a measure that is widely useful but is often neglected in the source coding community. We further demonstrate that applying the ERE criterion to psychophysical models can explain the Weber-Fechner law, a longstanding hypothesis of how humans perceive the external world. Our results are consistent with the hypothesis that human perception is Bayesian optimal for information acquisition conditioned on limited cognitive resources, thereby supporting the notion that the brain is efficient at acquisition and adaptation. by John Z. Sun. Ph.D.
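
    As a concrete illustration of matching a quantizer to a fidelity criterion such as expected relative error, here is a toy Lloyd-style design for the distortion ((x − c)/x)^2 on a positive-valued source; the lognormal source, the codebook size, and the update rule are my own assumptions, not the constructions or optimality results of the thesis:

```python
# Toy Lloyd-style quantizer for the relative-error distortion d(x, c) = ((x - c)/x)^2.
import numpy as np

rng = np.random.default_rng(3)
samples = rng.lognormal(mean=0.0, sigma=1.0, size=50_000)   # positive-valued source (assumed)

def lloyd_relative_error(x, levels=8, iters=100):
    codebook = np.quantile(x, np.linspace(0.05, 0.95, levels))
    for _ in range(iters):
        # For this distortion the nearest codeword is still the one closest in |x - c|.
        cells = np.argmin(np.abs(x[:, None] - codebook[None, :]), axis=1)
        for k in range(levels):
            xk = x[cells == k]
            if xk.size:
                # Centroid under relative error: c = E[1/X] / E[1/X^2] within the cell.
                codebook[k] = np.mean(1 / xk) / np.mean(1 / xk**2)
    return np.sort(codebook)

codebook = lloyd_relative_error(samples)
cells = np.argmin(np.abs(samples[:, None] - codebook[None, :]), axis=1)
ere = float(np.mean(((samples - codebook[cells]) / samples) ** 2))
print("codebook:", np.round(codebook, 3))
print("expected relative error:", ere)
```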

    Sensor Data Integrity Verification for Real-time and Resource Constrained Systems

    Full text link
    Sensors are used in multiple applications that touch our lives and have become an integral part of modern life. They are used to build intelligent control systems in industries such as healthcare, transportation, consumer electronics, and the military. Many mission-critical applications require sensor data to be secure and authentic. Sensor data security can be achieved using traditional solutions like cryptography and digital signatures, but these techniques are computationally intensive and cannot easily be applied to resource-constrained systems. In contrast, low-complexity data-hiding techniques are easy to implement and do not need substantial processing power or memory. In this applied research, we use and configure established low-complexity data-hiding techniques from the multimedia forensics domain to secure sensor data transmissions in resource-constrained, real-time environments such as an autonomous vehicle. We identify the areas in an autonomous vehicle that require sensor data integrity, propose suitable watermarking techniques to verify the integrity of the data, and evaluate the performance of the proposed method against different attack vectors. In our proposed method, sensor data is embedded with application-specific metadata, a process that introduces some distortion. We analyze this embedding-induced distortion and its impact on the overall sensor data quality, and conclude that watermarking techniques, when properly configured, can solve sensor data integrity verification problems in an autonomous vehicle. Ph.D. College of Engineering & Computer Science, University of Michigan-Dearborn. http://deepblue.lib.umich.edu/bitstream/2027.42/167387/3/Raghavendar Changalvala Final Dissertation.pdf
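
    A minimal sketch in the spirit of the low-complexity data-hiding approach described above: a fragile LSB watermark that hides a per-frame checksum inside integer sensor samples and flags any frame whose checksum no longer matches. The frame size, CRC-32 checksum, and single-LSB embedding are illustrative assumptions rather than the configuration proposed in the dissertation:

```python
# Fragile LSB watermark for integrity checking of integer sensor samples.
import numpy as np
import zlib

FRAME = 64  # samples per frame (assumed)

def embed(frame, meta: int):
    """Clear the LSBs, then write a CRC of the MSB payload plus metadata into them."""
    carrier = frame & ~np.uint16(1)
    digest = zlib.crc32(carrier.tobytes() + meta.to_bytes(4, "big"))
    bits = np.array([(digest >> i) & 1 for i in range(FRAME)], dtype=np.uint16)
    return carrier | bits          # embedding distorts each sample by at most 1 LSB

def verify(frame, meta: int) -> bool:
    carrier = frame & ~np.uint16(1)
    digest = zlib.crc32(carrier.tobytes() + meta.to_bytes(4, "big"))
    bits = np.array([(digest >> i) & 1 for i in range(FRAME)], dtype=np.uint16)
    return bool(np.array_equal(frame & np.uint16(1), bits))

sensor = np.random.default_rng(4).integers(0, 4096, FRAME).astype(np.uint16)  # 12-bit samples
marked = embed(sensor, meta=42)
print("intact frame verifies:  ", verify(marked, meta=42))

tampered = marked.copy()
tampered[10] += 16                 # attacker alters one sample above the LSB
print("tampered frame verifies:", verify(tampered, meta=42))
```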