
    Low complexity lossless compression of underwater sound recordings

    Author Posting. © Acoustical Society of America, 2013. This article is posted here by permission of Acoustical Society of America for personal use, not for redistribution. The definitive version was published in Journal of the Acoustical Society of America 133 (2013): 1387-1398, doi:10.1121/1.4776206.

    Autonomous listening devices are increasingly used to study vocal aquatic animals, and there is a constant need to record longer or with greater bandwidth, requiring efficient use of memory and battery power. Real-time compression of sound has the potential to extend recording durations and bandwidths at the expense of increased processing operations and therefore power consumption. Whereas lossy methods such as MP3 introduce undesirable artifacts, lossless compression algorithms (e.g., FLAC) guarantee exact data recovery. But these algorithms are relatively complex due to the wide variety of signals they are designed to compress. A simpler lossless algorithm is shown here to provide compression factors of three or more for underwater sound recordings over a range of noise environments. The compressor was evaluated using samples from drifting and animal-borne sound recorders with sampling rates of 16–240 kHz. It achieves >87% of the compression of more-complex methods but requires about 1/10 of the processing operations, resulting in less than 1 mW power consumption at a sampling rate of 192 kHz on a low-power microprocessor. The potential to triple recording duration with a minor increase in power consumption and no loss in sound quality may be especially valuable for battery-limited tags and robotic vehicles.

    Algorithm development was supported by SERDP, ONR, US Navy (N45) and NOPP. M.J. was supported by the Marine Alliance for Science and Technology Scotland (MASTS).
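    The abstract does not spell out the algorithm, but the general recipe for low-complexity lossless audio compression it alludes to (a fixed low-order linear predictor followed by Rice coding of the prediction residuals) can be sketched as below. The function names, the choice of a second-order predictor, and the per-block Rice parameter search are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of a low-complexity lossless audio compressor: a fixed
# second-order predictor plus Rice coding of the residuals. All names and
# parameter choices are illustrative, not the paper's code.
import numpy as np

def predict_order2(x):
    """Fixed second-order predictor: x_hat[n] = 2*x[n-1] - x[n-2]."""
    xp = np.zeros_like(x)
    xp[1] = x[0]
    xp[2:] = 2 * x[1:-1] - x[:-2]
    return x - xp                      # prediction residuals

def zigzag(r):
    """Map signed residuals to unsigned integers for Rice coding."""
    return np.where(r >= 0, 2 * r, -2 * r - 1).astype(np.uint64)

def rice_bits(u, k):
    """Number of bits needed to Rice-code u with parameter k."""
    return int(np.sum((u >> k) + 1 + k))

def compress_block_size(x, k_max=15):
    """Return the best Rice parameter and coded size (in bits) for one block."""
    u = zigzag(predict_order2(np.asarray(x, dtype=np.int64)))
    sizes = {k: rice_bits(u, k) for k in range(k_max + 1)}
    k_best = min(sizes, key=sizes.get)
    return k_best, sizes[k_best]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    tone = (3000 * np.sin(2 * np.pi * 440 * np.arange(4096) / 192000)).astype(np.int64)
    noisy = tone + rng.integers(-20, 20, size=tone.size)
    k, bits = compress_block_size(noisy)
    print(f"Rice parameter {k}, {bits / (16 * noisy.size):.2f} of the original 16-bit size")
```

    Choosing a single Rice parameter per block keeps both computation and memory footprint small, which is the property targeted for battery-limited recorders; the figures printed by this toy example are not the paper's results.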

    High-Performance Lossless Compression of Hyperspectral Remote Sensing Scenes Based on Spectral Decorrelation

    The capacity of the downlink channel is a major bottleneck for applications based on remote sensing hyperspectral imagery (HSI). Data compression is an essential tool to maximize the amount of HSI scenes that can be retrieved on the ground. At the same time, energy and hardware constraints of spaceborne devices impose limitations on the complexity of practical compression algorithms. To avoid any distortion in the analysis of the HSI data, only lossless compression is considered in this study. This work aims at finding the most advantageous compression-complexity trade-off within the state of the art in HSI compression. To do so, a novel comparison of the most competitive spectral decorrelation approaches combined with the best performing low-complexity compressors of the state of the art is presented. Compression performance and execution time results are obtained for a set of 47 HSI scenes produced by 14 different sensors in real remote sensing missions. Assuming only a limited amount of energy is available, the obtained data suggest that the FAPEC algorithm yields the best trade-off. When compared to the CCSDS 123.0-B-2 standard, FAPEC is 5.0 times faster and its compressed data rates are on average within 16% of the CCSDS standard. In scenarios where energy constraints can be relaxed, CCSDS 123.0-B-2 yields the best average compression results of all evaluated methods.
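    As a rough illustration of the spectral decorrelation idea compared in this study, the sketch below predicts each band of a synthetic hyperspectral cube from the previous band and estimates the empirical entropy of the residuals. It is neither FAPEC nor CCSDS 123.0-B-2; the predictor, the data shapes, and the entropy estimate are simplifying assumptions.

```python
# Illustrative spectral decorrelation for a hyperspectral cube: each band is
# predicted from the previous band and only the residual is kept for entropy
# coding. Not the FAPEC or CCSDS 123.0-B-2 implementation.
import numpy as np

def spectral_residuals(cube):
    """cube: (bands, rows, cols) integer array. Returns prediction residuals."""
    res = np.empty_like(cube)
    res[0] = cube[0]                    # first band stored as-is
    res[1:] = cube[1:] - cube[:-1]      # simple previous-band predictor
    return res

def empirical_entropy(a):
    """Bits per sample of the empirical symbol distribution."""
    _, counts = np.unique(a, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    base = rng.integers(0, 4096, size=(1, 64, 64))
    cube = base + np.cumsum(rng.integers(-8, 9, size=(50, 64, 64)), axis=0)
    print("raw entropy      :", empirical_entropy(cube))
    print("residual entropy :", empirical_entropy(spectral_residuals(cube)))
```

    The drop in residual entropy is what a subsequent low-complexity entropy coder exploits; real missions differ in predictor sophistication and coder design.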

    Contributions to Medical Image Segmentation and Signal Analysis Utilizing Model Selection Methods

    This thesis presents contributions to model selection techniques, especially based on information theoretic criteria, with the goal of solving problems appearing in signal analysis and in medical image representation, segmentation, and compression.

    The field of medical image segmentation is wide and is quickly developing to make use of higher available computational power. This thesis concentrates on several applications that allow the utilization of parametric models for image and signal representation. One important application is cell nuclei segmentation from histological images. We model nuclei contours by ellipses, and thus the complicated problem of separating overlapping nuclei can be rephrased as a model selection problem, where the number of nuclei, their shapes, and their locations define one segmentation. In this thesis, we present methods for model selection in this parametric setting, where intuitive algorithms are combined with more principled ones, namely those based on the minimum description length (MDL) principle. The results of the introduced unsupervised segmentation algorithm are compared with human subject segmentations, and are also evaluated with the help of a pathology expert.

    Another medical image application considered is lossless compression. The objective has been to add the task of image segmentation to that of image compression so that the image regions can be transmitted separately, depending on the region of interest for diagnosis. The experiments performed on retinal color images show that our modeling, in which the MDL criterion selects the structure of the linear predictive models, outperforms publicly available image compressors such as the lossless version of JPEG 2000.

    For time series modeling, the thesis presents an algorithm which allows detection of changes in time series signals. The algorithm is based on one of the most recent implementations of the MDL principle, the sequentially normalized maximum likelihood (SNML) models.

    This thesis produces contributions in the form of new methods and algorithms, where the simplicity of information theoretic principles is combined with rather complex and problem-dependent modeling formulations, resulting in both heuristically motivated and principled algorithmic solutions.
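    To make the MDL idea concrete, the following sketch selects the order of a linear predictive model for a 1-D signal by minimizing an approximate two-part code length. The (k/2)·log2(n) parameter cost is a common textbook approximation; the thesis uses more refined criteria (including SNML), and all names below are illustrative.

```python
# Two-part MDL model selection sketch: choose the order of a linear predictive
# model by minimizing data cost given the fit plus an approximate parameter cost.
import numpy as np

def mdl_cost(signal, order):
    """Approximate code length for an AR(order) fit of the signal."""
    x = np.asarray(signal, dtype=float)
    n = len(x) - order
    # Design matrix built from the `order` most recent past samples.
    X = np.column_stack([x[order - i - 1: len(x) - i - 1] for i in range(order)])
    y = x[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ coeffs) ** 2))
    data_bits = 0.5 * n * np.log2(max(rss / n, 1e-12))   # data given the model
    param_bits = 0.5 * order * np.log2(n)                # cost of the parameters
    return data_bits + param_bits

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    t = np.arange(2000)
    sig = np.sin(0.05 * t) + 0.1 * rng.standard_normal(t.size)
    costs = {k: mdl_cost(sig, k) for k in range(1, 9)}
    print("selected model order:", min(costs, key=costs.get))
```

    The same principle, trading residual coding cost against model description cost, carries over to selecting the number of ellipses in nuclei segmentation or the predictor structure in lossless image coding.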

    Compression Methods for Structured Floating-Point Data and their Application in Climate Research

    The use of new technologies, such as GPU boosters, has led to a dramatic increase in the computing power of High-Performance Computing (HPC) centres. This development, coupled with new climate models that can better utilise this computing power thanks to software development and internal design, has moved the bottleneck from solving the differential equations describing Earth's atmospheric interactions to actually storing the variables. The current approach to solving the storage problem is inadequate: either the number of variables to be stored is limited or the temporal resolution of the output is reduced. If it is subsequently determined that another variable is required which has not been saved, the simulation must be run again.

    This thesis deals with the development of novel compression algorithms for structured floating-point data such as climate data, so that they can be stored in full resolution. Compression is performed by decorrelation and subsequent coding of the data. The decorrelation step eliminates redundant information in the data. During coding, the actual compression takes place and the data is written to disk. A lossy compression algorithm additionally has an approximation step to unify the data for better coding. The approximation step reduces the complexity of the data for the subsequent coding, e.g. by using quantization.

    This work makes a new scientific contribution to each of the three steps described above. The thesis presents a novel lossy compression method for time-series data using an Auto Regressive Integrated Moving Average (ARIMA) model to decorrelate the data. In addition, the concept of information spaces and contexts is presented to use information across dimensions for decorrelation. Furthermore, a new coding scheme is described which reduces the weaknesses of the eXclusive-OR (XOR) difference calculation and achieves a better compression factor than current lossless compression methods for floating-point numbers. Finally, a modular framework is introduced that allows the creation of user-defined compression algorithms.

    The experiments presented in this thesis show that it is possible to increase the information content of lossily compressed time-series data by applying an adaptive compression technique which preserves selected data with higher precision. An analysis of lossless compression for these time series showed no success. However, the lossy ARIMA compression model proposed here is able to capture all relevant information. The reconstructed data can reproduce the time series to such an extent that statistically relevant information for the description of climate dynamics is preserved.

    Experiments indicate that there is a significant dependence of the compression factor on the selected traversal sequence and the underlying data model. The influence of these structural dependencies on prediction-based compression methods is investigated in this thesis. For this purpose, the concept of Information Spaces (IS) is introduced. IS improves the predictions of the individual predictors by nearly 10% on average; perhaps more importantly, the standard deviation of the compression results is on average 20% lower, so using IS provides both better predictions and more consistent compression results. Furthermore, it is shown that shifting the prediction and the true value leads to a better compression factor at minimal additional computational cost. This allows the use of more resource-efficient prediction algorithms to achieve the same or a better compression factor, or higher throughput during compression or decompression. The coding scheme proposed here achieves a better compression factor than current state-of-the-art methods.

    Finally, the thesis presents a modular framework for the development of compression algorithms. The framework supports the creation of user-defined predictors and offers functionalities such as the execution of benchmarks, the random subdivision of n-dimensional data, the quality evaluation of predictors, the creation of ensemble predictors, and the execution of validity tests for sequential and parallel compression algorithms.

    This research was initiated because of the needs of climate science, but the application of its contributions is not limited to it. The results of this thesis are of major benefit for developing and improving any compression algorithm for structured floating-point data.
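    A minimal sketch of the XOR-difference idea for floating-point values that the new coding scheme builds on is given below: each value's bit pattern is XOR-ed with a prediction (here simply the previous value) and the cost is estimated from the leading and trailing zero bits of the residual. The thesis's improved coding scheme, ARIMA decorrelation, and Information Spaces machinery are not reproduced; the function names and the header-bit allowance are illustrative assumptions.

```python
# XOR-difference sketch for 64-bit floats: compare each value's bit pattern
# against a prediction and cost the residual by its significant bit span.
import struct

def float_to_bits(x):
    """IEEE-754 double bit pattern as an unsigned 64-bit integer."""
    return struct.unpack(">Q", struct.pack(">d", x))[0]

def xor_residual(value, prediction):
    """XOR of the bit patterns of value and prediction."""
    return float_to_bits(value) ^ float_to_bits(prediction)

def residual_cost_bits(residual):
    """Rough cost: significant bits left after stripping leading/trailing zeros."""
    if residual == 0:
        return 1                           # single flag bit for "identical"
    leading = 64 - residual.bit_length()
    trailing = (residual & -residual).bit_length() - 1
    return 64 - leading - trailing + 12    # + assumed header bits for the counts

if __name__ == "__main__":
    series = [20.125, 20.25, 20.375, 20.5, 24.0]
    prev = series[0]
    total = 64                             # first value stored verbatim
    for v in series[1:]:
        total += residual_cost_bits(xor_residual(v, prev))  # last-value predictor
        prev = v
    print(f"{total} bits instead of {64 * len(series)} bits uncompressed")
```

    A better predictor (e.g. an ARIMA forecast instead of the last value) shrinks the residuals and therefore the coded size, which is precisely where the decorrelation contributions of the thesis come in.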

    Depth-Map Image Compression Based on Region and Contour Modeling

    In this thesis, the problem of depth-map image compression is treated. The compilation of articles included in the thesis provides methodological contributions in the fields of lossless and lossy compression of depth-map images.

    The first group of methods addresses the lossless compression problem. The introduced methods use the approach of representing the depth-map image in terms of regions and contours. In the depth-map image, a segmentation defines the regions, by grouping pixels having similar properties, and separates them using (region) contours. The depth-map image is encoded by the contours and the auxiliary information needed to reconstruct the depth values in each region.

    One way of encoding the contours is to describe them using two matrices of horizontal and vertical contour edges. The matrices are encoded using template context coding where each context tree is optimally pruned. In certain contexts, the contour edges are found deterministically using only the currently available information. Another way of encoding the contours is to describe them as a sequence of contour segments. Each such segment is defined by an anchor (starting) point and a string of contour edges, equivalent to a string of chain-code symbols. Here we propose efficient ways to select and encode the anchor points and to generate contour segments by using a contour crossing point analysis and by imposing rules that help in minimizing the number of anchor points.

    The regions are reconstructed at the decoder using predictive coding or the piecewise constant model representation. In the first approach, the large constant regions are found and one depth value is encoded for each such region. For the rest of the image, suitable regions are generated by constraining the local variation of the depth level from one pixel to another. The nonlinear predictors selected specifically for each region combine the results of several linear predictors, each fitting optimally a subset of pixels belonging to the local neighborhood. In the second approach, the depth value of a given region is encoded using the depth values of the neighboring regions already encoded. The natural smoothness of the depth variation and the mutual exclusiveness of the values in neighboring regions are exploited to efficiently predict and encode the current region's depth value.

    The second group of methods studies the lossy compression problem. In a first contribution, different segmentations are generated by varying the threshold for the depth local variability. A lossy depth-map image is obtained for each segmentation and is encoded based on predictive coding, quantization, and context tree coding. In another contribution, the lossy versions of one image are created either by successively merging the constant regions of the original image, or by iteratively splitting the regions of a template image using horizontal or vertical line segments. Merging and splitting decisions are taken greedily, according to the best slope towards the next point in the rate-distortion curve. An entropy coding algorithm is used to encode each image.

    We also propose a progressive coding method for coding the sequence of lossy versions of a depth-map image. The bitstream is encoded so that any lossy version of the original image can be generated, starting from a very low resolution up to lossless reconstruction. The partitions of the lossy versions into regions are assumed to be nested, so that a higher resolution image is obtained by splitting some regions of a lower resolution image. A current image in the sequence is encoded using a priori information from a previously encoded image: the anchor points are encoded relative to the already encoded contour points; the depth information of the newly resulting regions is recovered using the depth value of the parent region.

    As a final contribution, the dissertation includes a study of the parameterization of planar models. The quantized heights at three pixel locations are used to compute the optimal plane for each region. The three pixel locations are selected so that the distortion due to the approximation of the plane over the region is minimized. The planar model and the piecewise constant model compete in the merging process, where the two regions to be merged are those ensuring the optimal slope in the rate-distortion curve.
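    The region/contour representation used for lossless coding can be illustrated with a short sketch that derives the horizontal and vertical contour-edge matrices from a segmentation label map. The context-tree coding of these matrices is not shown, and the function names and example data are hypothetical.

```python
# From a segmentation label map, derive the two contour-edge matrices: an edge
# exists wherever two neighbouring pixels carry different region labels.
import numpy as np

def contour_edge_matrices(labels):
    """labels: 2-D integer label map. Returns (vertical_edges, horizontal_edges).

    vertical_edges[r, c]   is True when labels[r, c] != labels[r, c + 1]
    horizontal_edges[r, c] is True when labels[r, c] != labels[r + 1, c]
    """
    vertical = labels[:, :-1] != labels[:, 1:]
    horizontal = labels[:-1, :] != labels[1:, :]
    return vertical, horizontal

if __name__ == "__main__":
    depth = np.zeros((6, 8), dtype=int)
    depth[2:5, 3:7] = 1                 # one rectangular foreground region
    v, h = contour_edge_matrices(depth)
    print("vertical edges:", int(v.sum()), " horizontal edges:", int(h.sum()))
```

    In the methods above, these binary matrices are what the template-context coder with optimally pruned context trees would compress; only their construction is shown here.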

    Comparison of lossless compression schemes for high rate electrical grid time series for smart grid monitoring and analysis

    The smart power grid of the future will utilize waveform-level monitoring with sampling rates in the kilohertz range for detailed grid status assessment. To this end, we address the challenge of handling large amounts of raw data with quasi-periodic characteristics via lossless compression. We compare different freely available algorithms and implementations with regard to compression ratio, computation time, and working principle to find the most suitable compression strategy for this type of data. Algorithms from the audio domain (ALAC, ALS, APE, FLAC & TrueAudio) and general archiving schemes (LZMA, Deflate, PPMd, BZip2 & Gzip) are tested against each other. We assemble a dataset from openly available sources (UK-DALE, MIT-REDD, EDR) and establish dataset-independent comparison criteria. This combination is a first detailed open benchmark to support the development of tailored lossless compression schemes and a decision support for researchers facing data-intensive smart grid measurements.
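    A stripped-down version of this kind of benchmark, limited to the general archiving schemes available in the Python standard library (gzip, bzip2, LZMA) and a synthetic quasi-periodic waveform, could look as follows. The audio codecs (ALAC, ALS, APE, FLAC, TrueAudio) require external encoders and are omitted; the ratios and timings printed here are not the paper's results.

```python
# Compare compression ratio and runtime of general-purpose lossless schemes on
# a synthetic quasi-periodic waveform (50 Hz mains-like signal plus noise).
import bz2
import gzip
import lzma
import time

import numpy as np

def benchmark(name, compress, raw):
    t0 = time.perf_counter()
    out = compress(raw)
    dt = time.perf_counter() - t0
    print(f"{name:6s} ratio {len(raw) / len(out):5.2f}  time {dt * 1e3:7.1f} ms")

if __name__ == "__main__":
    fs = 12800                                     # kHz-range sampling, 10 s of data
    t = np.arange(10 * fs) / fs
    wave = 2 ** 14 * np.sin(2 * np.pi * 50 * t)    # quasi-periodic 50 Hz component
    wave += 200 * np.random.default_rng(3).standard_normal(t.size)
    raw = wave.astype(np.int16).tobytes()
    benchmark("gzip", gzip.compress, raw)
    benchmark("bzip2", bz2.compress, raw)
    benchmark("lzma", lzma.compress, raw)
```

    Swapping in real waveform recordings (e.g. from UK-DALE or MIT-REDD) and adding external audio codecs via their command-line tools would reproduce the structure, if not the exact criteria, of the comparison described above.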

    Serial Neuropsychological Assessment Toward a Reliable Concussion Protocol

    With more than 10,000 Sports Related Concussions (SRCs) per year at the collegiate level, interdisciplinary teams are often tasked with determining when an athlete may return to activity (Zuckerman et al., 2015). Due to neurochemical changes following an SRC, athletes are vulnerable to further injury if they suffer another head injury before being given appropriate time to heal (Giza & Hovda, 2014). Cognitive testing is routinely utilized to detect the presence of cognitive dysfunction and aid in individualized treatment planning. Because athletes often demonstrate practice effects when retested, it is difficult to determine whether an athlete is demonstrating cognitive dysfunction. Reliable Change Indices (RCIs) provide a systematic framework for interpreting the change in an individual’s scores over time. The present study sought to develop RCIs with a brief battery of pencil-and-paper tests within the cognitive domains most impacted by SRC. Results indicated significant increases in test scores across various tests due to practice effects. Additionally, reliability coefficients varied significantly across tests, ranging from low to excellent. Reliable Change Indices were calculated and reported. Findings indicate the utility of many of the tests administered and provide context to more accurately interpret follow-up testing scores.
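    For context, a Reliable Change Index in the common Jacobson-Truax formulation, with an optional adjustment for the mean practice effect, can be computed as sketched below. The study's exact RCI variant and reliability estimates may differ; the numbers in the example are hypothetical.

```python
# Reliable Change Index (Jacobson-Truax style) with an optional practice-effect
# adjustment. All example values are made up for illustration.
import math

def reliable_change_index(score_t1, score_t2, sd_baseline, reliability,
                          practice_effect=0.0):
    """RCI = (retest - baseline - practice effect) / standard error of the difference."""
    sem = sd_baseline * math.sqrt(1.0 - reliability)   # standard error of measurement
    s_diff = math.sqrt(2.0 * sem ** 2)                 # SE of the difference score
    return (score_t2 - score_t1 - practice_effect) / s_diff

if __name__ == "__main__":
    # Hypothetical test: baseline 52, retest 58, normative SD 10,
    # test-retest reliability 0.80, mean practice effect of +3 points.
    rci = reliable_change_index(52, 58, sd_baseline=10, reliability=0.80,
                                practice_effect=3.0)
    print(f"RCI = {rci:.2f}  (|RCI| > 1.96 is often taken as reliable change)")
```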

    Técnicas de compresión de imágenes hiperespectrales sobre hardware reconfigurable [Hyperspectral image compression techniques on reconfigurable hardware]

    Thesis of the Universidad Complutense de Madrid, Facultad de Informática, defended on 18-12-2020.

    Sensors are nowadays present in all aspects of human life. When possible, sensors are used remotely. This is less intrusive, avoids interferences in the measuring process, and is more convenient for the scientist. One of the most recurrent concerns in the last decades has been the sustainability of the planet, and how the changes it is facing can be monitored. Remote sensing of the earth has seen an explosion in activity, with satellites now being launched on a weekly basis to perform remote analysis of the earth, and planes surveying vast areas for closer analysis...

    Entropy in Image Analysis II

    Image analysis is a fundamental task for any application where extracting information from images is required. The analysis requires highly sophisticated numerical and analytical methods, particularly for those applications in medicine, security, and other fields where the results of the processing consist of data of vital importance. This fact is evident from all the articles composing the Special Issue "Entropy in Image Analysis II", in which the authors used widely tested methods to verify their results. In reading the present volume, the reader will appreciate the richness of the methods and applications, in particular for medical imaging and image security, and the remarkable cross-fertilization among the proposed research areas.