53 research outputs found

    Remote Sensing Data Compression

    Get PDF
    A huge amount of data is acquired nowadays by different remote sensing systems installed on satellites, aircraft, and UAVs. The acquired data then have to be transferred to image processing centres, stored, and/or delivered to customers. In scenarios where transmission or storage resources are restricted, data compression is strongly desired or even necessary. A wide diversity of coding methods can be used, depending on the requirements and their priority. In addition, the types and properties of images differ a lot; thus, practical implementation aspects have to be taken into account. The Special Issue paper collection taken as the basis of this book touches on all of the aforementioned items to some degree, giving the reader an opportunity to learn about recent developments and research directions in the field of image compression. In particular, lossless and near-lossless compression of multi- and hyperspectral images remains highly relevant, since such images constitute data arrays of extremely large size, rich in information that can be retrieved for various applications. Another important aspect is the impact of lossless compression on image classification and segmentation, where a reasonable compromise between the characteristics of compression and the final tasks of data processing has to be achieved. The problems of data transmission from UAV-based acquisition platforms, as well as the use of FPGAs and neural networks, have become very important. Finally, attempts to apply compressive sensing approaches in remote sensing image processing with positive outcomes are observed. We hope that readers will find our book useful and interesting.

    Técnicas de compresión de imágenes hiperespectrales sobre hardware reconfigurable

    Get PDF
    Doctoral thesis, Universidad Complutense de Madrid, Facultad de Informática, defended on 18-12-2020. Sensors are nowadays in all aspects of human life. When possible, sensors are used remotely: this is less intrusive, avoids interferences in the measuring process, and is more convenient for the scientist. One of the most recurrent concerns in the last decades has been the sustainability of the planet, and how the changes it is facing can be monitored. Remote sensing of the Earth has seen an explosion in activity, with satellites now being launched on a weekly basis to perform remote analysis of the Earth, and planes surveying vast areas for closer analysis...

    Lossless compression of satellite multispectral and hyperspectral images

    Get PDF
    In this thesis, new lossless compression techniques aiming at reducing the storage size of satellite images are presented. Two types of images are considered: multispectral and hyperspectral. For multispectral images, a nonlinear lossless compressor that exploits both intraband and interband correlations is developed. The compressor is based on a wavelet transform that maps integers into integers, applied to non-overlapping tiles of the image. Different models for the statistical dependencies of wavelet detail coefficients are proposed and analyzed. Wavelet coefficients belonging to the fine detail subbands are successfully modelled as an affine combination of neighboring coefficients and the coefficient at the same location in the previous band, as long as all these coefficients belong to the same class. This model is used to predict wavelet coefficients by means of already coded coefficients. Lloyd-Max quantization is used to extract class information, which is used in the prediction and later as a conditioning context to encode prediction errors with an adaptive arithmetic coder. Since the band order affects the accuracy of predictions, a new mechanism is proposed for ordering the bands, based on the wavelet detail coefficients of the two finest levels. The results obtained outperform 2D lossless compressors such as PNG, JPEG-LS, SPIHT and JPEG2000, as well as 3D lossless compressors such as SLSQ-OPT, differential JPEG-LS, JPEG2000 for color images, and 3D-SPIHT. Our method has random access capability and can be applied to the lossless compression of other kinds of volumetric data. For hyperspectral images, the state-of-the-art lossless compression algorithms LUT and LAIS-LUT exploit the high spectral correlation of these images and use lookup tables to perform predictions. However, there are cases where their predictions are not accurate. In this thesis, a modification also based on lookup tables is proposed, giving these tables different degrees of confidence based on the local variations of the scaling factor. Our results are highly satisfactory and outperform both LUT and LAIS-LUT. Two lossless compressors have thus been designed for two different kinds of satellite images with different properties, namely, different spectral resolution, spatial resolution, and bit depth, as well as different spectral and spatial correlations. In each case, the compressor exploits these properties to increase compression ratios. Fil: Acevedo, Daniel. Universidad de Buenos Aires. Facultad de Ciencias Exactas y Naturales; Argentina.
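
    As a rough, self-contained illustration of the integer-to-integer wavelet machinery this thesis builds on, the Python sketch below implements one level of the reversible LeGall 5/3 lifting transform (the classic integer wavelet used by lossless JPEG2000). The thesis's actual tiling, affine prediction models, and entropy coding are not reproduced here, and the function names are ours, not the author's.

    ```python
    import numpy as np

    def fwd_53(x: np.ndarray):
        """One level of the reversible LeGall 5/3 lifting transform (1D).

        Maps integers to integers, so the inverse reconstructs x exactly --
        the property lossless wavelet coders rely on. Assumes even length.
        """
        x = x.astype(np.int64)
        even, odd = x[0::2], x[1::2]
        # Predict step: detail = odd - floor(mean of even neighbours).
        right = np.append(even[1:], even[-1])        # symmetric extension
        d = odd - ((even + right) >> 1)
        # Update step: approximation = even + rounded quarter of details.
        left = np.insert(d[:-1], 0, d[0])            # symmetric extension
        s = even + ((left + d + 2) >> 2)
        return s, d

    def inv_53(s: np.ndarray, d: np.ndarray):
        """Exact inverse of fwd_53 (undo update, then undo predict)."""
        left = np.insert(d[:-1], 0, d[0])
        even = s - ((left + d + 2) >> 2)
        right = np.append(even[1:], even[-1])
        odd = d + ((even + right) >> 1)
        out = np.empty(even.size + odd.size, dtype=np.int64)
        out[0::2], out[1::2] = even, odd
        return out

    x = np.random.randint(0, 4096, size=512)         # e.g. 12-bit band samples
    s, d = fwd_53(x)
    assert np.array_equal(inv_53(x := x, s=s, d=d) if False else inv_53(s, d), x)  # lossless round trip
    ```

    The detail coefficients d are the quantities the thesis models and predicts; because every step uses only integer additions and floor divisions, no rounding error is ever introduced.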

    Diseño, implementación y optimización del sistema de compresión de imágenes sobre el ordenador de a bordo del proyecto de nanosatélite Eye-Sat

    Get PDF
    Eye-Sat is a nanosatellite project led by CNES (Centre National d'Etudes Spatiales) and developed mainly by students from several engineering schools across France. The goal of this small telescope lies not only in the opportunity to demonstrate various technological devices; its mission also includes acquiring photographs of the Milky Way in the colour and infrared bands, as well as studying the intensity and polarization of the Zodiacal light. The mission requirements demand the development of a lossless compression algorithm for the "Color Filter Array" CFA (Bayer) and infrared images acquired by the satellite. As a member of the Consultative Committee for Space Data Systems, CNES has selected the CCSDS-123.0-B standard as the baseline algorithm to meet the mission requirements. Modifications and improvements adapted to the target image types will be added to this algorithm in order to improve its compression performance and complexity. The implementation and optimisation of the algorithm will be carried out on the Xilinx Zynq® All Programmable SoC platform, which includes an FPGA and a dual-core ARM® Cortex™-A9 processor with NEON™ DSP/FPU engine.
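
    Since the mission baselines CCSDS-123.0-B, a minimal Python sketch of the predictive idea behind that family of compressors may help: predict each sample from spectrally adjacent data, then map the small signed residuals to non-negative integers ahead of entropy coding. Everything below (the array shape, the trivial previous-band predictor) is an illustrative assumption, not the standard's actual adaptive predictor.

    ```python
    import numpy as np

    def predict_previous_band(cube: np.ndarray) -> np.ndarray:
        """Very simplified spectral predictor in the spirit of CCSDS-123:
        each sample is predicted by the co-located sample in the previous
        band (band 0 is predicted by zero). The real standard uses an
        adaptive weighted combination of local differences."""
        pred = np.zeros_like(cube)
        pred[1:] = cube[:-1]                    # band axis first: (Z, Y, X)
        return pred

    def map_residuals(residuals: np.ndarray) -> np.ndarray:
        """Map signed prediction residuals to non-negative integers
        (0, -1, 1, -2, 2 -> 0, 1, 2, 3, 4), as done before entropy coding."""
        return np.where(residuals >= 0, 2 * residuals, -2 * residuals - 1)

    cube = np.random.randint(0, 65536, size=(8, 64, 64))  # hypothetical 16-bit cube
    res = cube - predict_previous_band(cube)
    codes = map_residuals(res)                  # small values -> short codewords
    ```

    The better the predictor tracks the spectral correlation, the more the mapped residuals concentrate near zero, which is what lets the entropy coder shorten the bitstream.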

    Techniques of design optimisation for algorithms implemented in software

    Get PDF
    The overarching objective of this thesis was to develop tools for parallelising, optimising, and implementing algorithms on parallel architectures, in particular General Purpose Graphics Processors (GPGPUs). Two projects were chosen from different application areas in which GPGPUs are used: a defence application involving image compression, and a modelling application in bioinformatics (computational immunology). Each project had its own specific objectives, as well as supporting the overall research goal. The defence / image compression project was carried out in collaboration with the Jet Propulsion Laboratory. The specific questions were: to what extent an algorithm designed for bit-serial hardware implementation of the lossless compression of hyperspectral images on board unmanned aerial vehicles (UAVs) could be parallelised, whether GPGPUs could be used to implement that algorithm, and whether a software implementation with or without GPGPU acceleration could match the throughput of a dedicated hardware (FPGA) implementation. The dependencies within the algorithm were analysed, and the algorithm parallelised. The algorithm was implemented in software for GPGPU and optimised. During the optimisation process, profiling revealed less than optimal device utilisation, but no further optimisations resulted in an improvement in speed: the design had hit a local maximum of performance. Analysis of the arithmetic intensity and data flow exposed flaws in kernel occupancy, the standard metric used for GPU optimisation. Redesigning the implementation with revised criteria (fused kernels, lower occupancy, and greater data locality) led to a new implementation with 10x higher throughput. GPGPUs were shown to be viable for on-board implementation of the CCSDS lossless hyperspectral image compression algorithm, exceeding the performance of the hardware reference implementation and providing sufficient throughput for the next generation of image sensors as well. The second project was carried out in collaboration with biologists at the University of Arizona and involved modelling a complex biological system: VDJ recombination, involved in the formation of T-cell receptors (TCRs). Generation of immune receptors (T-cell receptors and antibodies) by VDJ recombination is an enormously complex process, which can theoretically synthesize greater than 10^18 variants. Originally thought to be a random process, the underlying mechanisms clearly have a non-random nature that preferentially creates a small subset of immune receptors in many individuals. Understanding this bias is a longstanding problem in the field of immunology. Modelling the process of VDJ recombination to determine the number of ways each immune receptor can be synthesized, previously thought to be untenable, is a key first step in determining how this special population is made. The computational tools developed in this thesis have allowed immunologists for the first time to comprehensively test and invalidate a longstanding theory (convergent recombination) for how this special population is created, while generating the data needed to develop novel hypotheses.
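
    The redesign insight above (fewer, fused kernels with better data locality can beat occupancy-maximised designs) can be caricatured even off-GPU. The hypothetical Python sketch below contrasts a two-pass pipeline, whose intermediate array makes a full round trip through memory, with a fused single pass that keeps intermediates in registers; on a GPU the analogous trade-off is fused versus chained kernels.

    ```python
    import numpy as np

    def two_pass(x: np.ndarray) -> float:
        """Two 'kernels': the intermediate array y is written out in full
        by the first pass and read back by the second (poor data locality)."""
        y = x * x              # kernel 1: materialises len(x) temporaries
        return float(np.sum(y))  # kernel 2: streams them all back in

    def fused(x: np.ndarray) -> float:
        """One fused 'kernel': each element is squared and accumulated in a
        single traversal, so no intermediate ever touches main memory."""
        return float(np.dot(x, x))

    x = np.random.rand(10_000_000)
    assert np.isclose(two_pass(x), fused(x))   # same result, less traffic
    ```

    The arithmetic is identical; only the memory traffic differs, which is exactly the kind of flaw that occupancy-centred profiling can fail to surface.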

    NASA Tech Briefs, January 2014

    Get PDF
    Topics include: Multi-Source Autonomous Response for Targeting and Monitoring of Volcanic Activity; Software Suite to Support In-Flight Characterization of Remote Sensing Systems; Visual Image Sensor Organ Replacement; Ultra-Wideband, Dual-Polarized, Beam-Steering P-Band Array Antenna; Centering a DDR Strobe in the Middle of a Data Packet; Using a Commercial Ethernet PHY Device in a Radiation Environment; Submerged AUV Charging Station; Habitat Demonstration Unit (HDU) Vertical Cylinder Habitat; Origami-Inspired Folding of Thick, Rigid Panels; A Novel Protocol for Decoating and Permeabilizing Bacterial Spores for Epifluorescent Microscopy; Method and Apparatus for Automated Isolation of Nucleic Acids from Small Cell Samples; Enabling Microliquid Chromatography by Microbead Packing of Microchannels; On-Command Force and Torque Impeding Devices (OC-FTID) Using ERF; Deployable Fresnel Rings; Transition-Edge Hot-Electron Microbolometers for Millimeter and Submillimeter Astrophysics; Spacecraft Trajectory Analysis and Mission Planning Simulation (STAMPS) Software; Cross Support Transfer Service (CSTS) Framework Library; Arbitrary Shape Deformation in CFD Design; Range Safety Flight Elevation Limit Calculation Software; Frequency-Modulated, Continuous-Wave Laser Ranging Using Photon-Counting Detectors; Calculation of Operations Efficiency Factors for Mars Surface Missions; GPU Lossless Hyperspectral Data Compression System; Robust, Optimal Subsonic Airfoil Shapes; Protograph-Based Raptor-Like Codes; Fuzzy Neuron: Method and Hardware Realization; Kalman Filter Input Processor for Boresight Calibration; Organizing Compression of Hyperspectral Imagery to Allow Efficient Parallel Decompression; and Temperature Dependences of Mechanisms Responsible for the Water-Vapor Continuum Absorption

    3D Medical Image Lossless Compressor Using Deep Learning Approaches

    Get PDF
    The ever-increasing importance of accelerated information processing, communication, and storage is a major requirement of the big-data era. With the extensive rise in data availability, handy information acquisition, and growing data rates, a critical challenge emerges in efficient data handling. Even with advanced hardware developments and the availability of multiple Graphics Processing Units (GPUs), there is still strong demand to utilise these technologies effectively. Healthcare systems are one of the domains yielding explosive data growth, especially considering the abilities of modern scanners, which annually produce higher-resolution and more densely sampled medical images with increasing requirements for massive storage capacity. The bottleneck in data transmission and storage can essentially be handled with an effective compression method. Since medical information is critical and plays an influential role in diagnosis accuracy, it is strongly encouraged to guarantee exact reconstruction with no loss in quality, which is the main objective of any lossless compression algorithm. Given the revolutionary impact of Deep Learning (DL) methods in solving many tasks while achieving state-of-the-art results, including in data compression, tremendous opportunities for contributions open up. While considerable efforts have been made to address lossy performance using learning-based approaches, less attention has been paid to lossless compression. This PhD thesis investigates and proposes novel learning-based approaches for compressing 3D medical images losslessly.

    Firstly, we formulate the lossless compression task as a supervised sequential prediction problem, whereby a model learns a projection function to predict a target voxel given a sequence of samples from its spatially surrounding voxels. Using such 3D local sampling information efficiently exploits spatial similarities and redundancies in a volumetric medical context. The proposed NN-based data predictor is trained to minimise the differences with the original data values, while the residual errors are encoded using arithmetic coding to allow lossless reconstruction.

    Following this, we explore the effectiveness of Recurrent Neural Networks (RNNs) as 3D predictors for learning the mapping function from the spatial medical domain (16-bit depth). We analyse the generalisability and robustness of Long Short-Term Memory (LSTM) models in capturing the 3D spatial dependencies of a voxel's neighbourhood while utilising samples taken from various scanning settings. We evaluate our proposed MedZip models in losslessly compressing unseen Computerized Tomography (CT) and Magnetic Resonance Imaging (MRI) modalities, compared to other state-of-the-art lossless compression standards.

    This work then investigates input configurations and sampling schemes for a many-to-one sequence prediction model, specifically for losslessly compressing 3D medical images (16-bit depth). The main objective is to determine the optimal practice for enabling the proposed LSTM model to achieve a high compression ratio and fast encoding-decoding performance. A solution to the problem of non-deterministic environments is also proposed, allowing models to run in parallel without much drop in compression performance. Experimental evaluations against well-known lossless codecs were carried out on datasets acquired by different hospitals, representing different body segments and distinct scanning modalities (i.e. CT and MRI).

    To conclude, we present a novel data-driven sampling scheme utilising weighted gradient scores for training LSTM prediction-based models. The objective is to determine whether some training samples are significantly more informative than others, specifically in medical domains where samples are available on a scale of billions. The effectiveness of models trained with the presented importance sampling scheme was evaluated against alternative strategies such as uniform, Gaussian, and slice-based sampling.
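
    To make the predict-then-arithmetic-code pipeline concrete, here is a minimal PyTorch sketch of a many-to-one LSTM voxel predictor of the kind described above. The context length, hidden size, and intensity scaling are our assumptions rather than the thesis's MedZip configuration, and the residual entropy-coding stage is only indicated in a comment.

    ```python
    import torch
    import torch.nn as nn

    class VoxelLSTM(nn.Module):
        """Many-to-one sequence predictor: given a sequence of neighbouring
        voxel intensities, predict the target voxel. Layer sizes here are
        illustrative guesses, not the published MedZip configuration."""
        def __init__(self, hidden: int = 64):
            super().__init__()
            self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)

        def forward(self, neighbours: torch.Tensor) -> torch.Tensor:
            # neighbours: (batch, seq_len, 1), intensities scaled to [0, 1]
            out, _ = self.lstm(neighbours)
            return self.head(out[:, -1, :])   # prediction for the target voxel

    model = VoxelLSTM()
    ctx = torch.rand(32, 16, 1)               # 16 causal neighbours per target
    pred = model(ctx)
    target = torch.rand(32, 1)
    loss = nn.functional.l1_loss(pred, target)  # residuals would then be
    loss.backward()                             # arithmetic-coded losslessly
    ```

    Because the decoder can rebuild the same causal context and rerun the same model, transmitting only the entropy-coded residuals suffices for exact reconstruction.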

    Digital FPGA Circuits Design for Real-Time Video Processing with Reference to Two Application Scenarios

    Get PDF
    In the present days of the digital revolution, image and/or video processing has become a ubiquitous task: from mobile devices to special environments, the need for a real-time approach is ever more evident. Whatever the reason, either for user experience in recreational or internet-based applications or for safety-related timeliness in hard-real-time scenarios, the exploration of technologies and techniques which allow this requirement to be satisfied is a crucial point. General-purpose CPU or GPU software implementations of these applications are quite simple and widespread, but commonly do not allow high performance because of the heavy layering that separates high-level languages and libraries, which enforce complicated procedures and algorithms, from the base architecture of the CPUs, which offers only limited and basic (although rapidly executed) arithmetic operations. The most practised approach nowadays is based on the use of Very-Large-Scale Integration (VLSI) digital electronic circuits. Field Programmable Gate Arrays (FPGAs) are integrated digital circuits designed to be configured after manufacturing, "on the field". They typically provide lower performance levels when compared to Application Specific Integrated Circuits (ASICs), but at a lower cost, especially when dealing with limited production volumes. Of course, on-the-field programmability itself (and re-programmability, in the vast majority of cases) is also a characteristic feature that makes FPGAs more suitable for applications with changing specifications, where an update of capabilities may be a desirable benefit. Moreover, the time needed to fulfil the design cycle for FPGA-based circuits (including, of course, testing and debug speed) is much reduced when compared to the design flow and time-to-market of ASICs. In this thesis work, we will see (Chapter 1) some common problems and strategies involved in the use of FPGAs and FPGA-based systems for Real-Time Image Processing and Real-Time Video Processing (in the following also indicated interchangeably by the acronym RTVP); we will then focus, in particular, on two applications. Firstly, Chapter 2 will cover the implementation of a novel algorithm for Visual Search, known as CDVS, which has recently been standardised as part of the MPEG-7 standard. Visual search is an emerging field in mobile applications which is rapidly becoming ubiquitous. However, algorithms for this kind of application typically demand high computational power and complex elaboration; as a consequence, implementation efficiency is a crucial point, and this generally results in the need for custom-designed hardware. Chapter 3 will cover the implementation of an algorithm for the compression of hyperspectral images which is bit-true compatible with the CCSDS-123.0 standard algorithm. Hyperspectral images are three-dimensional matrices in which each 2D plane represents the image, as captured by the sensor, in a given spectral band: their size may range from several million pixels up to billions of pixels. Typical scenarios of use of hyperspectral images include airborne and satellite-borne remote sensing. As a consequence, major concerns are the limitedness of both processing power and communication link bandwidth: thus, a proper compression algorithm, as well as the efficiency of its implementation, is crucial. In both cases we will first of all examine the scope of the work with reference to the current state of the art. We will then see the proposed implementations in their main characteristics and, to conclude, we will consider the primary experimental results.

    A New Automatic On-Board Multispectral Image Compression System for LEO Earth Observation Satellites

    Full text link