655 research outputs found

    Data compression techniques applied to high resolution high frame rate video technology

    An investigation is presented of video data compression applied to microgravity space experiments using High Resolution High Frame Rate Video Technology (HHVT). An extensive survey of digital video data compression methods described in the open literature was conducted; its results, including a description of each method and an assessment of its image degradation and video data parameters, are presented. Present and near-term technology for implementing video data compression in high-speed imaging systems is assessed, and the results of that assessment are discussed and summarized. The results of a study of a baseline HHVT video system, and of approaches for implementing video data compression, are presented. Case studies of three microgravity experiments are presented, and specific compression techniques and implementations are recommended.

    Implementation of Vector Quantization for Image Compression - A Survey

    This paper presents a survey of vector quantization (VQ) for image compression. It also describes signal decomposition approaches that exploit inter- and intra-band correlation to obtain more flexible partitions of higher-dimensional vector spaces, allowing the image to be compressed with minimal information loss using artificial neural networks (ANN). Since 1988, a growing body of research has examined the use of VQ for image compression. The paper discusses vector quantization, its principle and examples, its various techniques, and its advantages and applications in image compression. It also provides a comparative table evaluating the various vector quantization methods in terms of simplicity, storage space, robustness, and transfer time.
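
The basic VQ pipeline the survey covers (codebook training, index-based encoding, table-lookup decoding) can be sketched as follows. This is a minimal illustration assuming a plain k-means codebook, whereas the surveyed literature typically uses LBG and its variants; all names are illustrative:

```python
import numpy as np

def train_codebook(blocks, k=16, iters=20, seed=0):
    """Build a VQ codebook with plain k-means over image blocks."""
    rng = np.random.default_rng(seed)
    codebook = blocks[rng.choice(len(blocks), k, replace=False)]
    for _ in range(iters):
        # Assign each block to its nearest codeword (squared error).
        d = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            members = blocks[labels == j]
            if len(members):                 # skip empty clusters
                codebook[j] = members.mean(axis=0)
    return codebook

def encode(blocks, codebook):
    """Replace each block by the index of its nearest codeword."""
    d = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

def decode(indices, codebook):
    """Lossy reconstruction: a simple table lookup."""
    return codebook[indices]

# Toy demo: 4x4 blocks from a random 32x32 "image".
img = np.random.default_rng(1).integers(0, 256, (32, 32)).astype(float)
blocks = img.reshape(8, 4, 8, 4).transpose(0, 2, 1, 3).reshape(64, 16)
cb = train_codebook(blocks, k=8)
idx = encode(blocks, cb)
rec = decode(idx, cb)
```

The compression comes from transmitting only the small indices (here 3 bits per 16-pixel block) plus the codebook.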

    Map online system using internet-based image catalogue

    Digital maps carry geodata, such as coordinates, that is essential in topographic and thematic mapping; this information is particularly meaningful in the military field. Because the maps carry this information, the image files are large: the bigger the image, the more storage is required and the longer the loading time. These conditions make raw maps unsuitable for an image catalogue delivered over the Internet. With compression, the image size can be reduced while the quality of the image is largely preserved. This report focuses on image compression using wavelet technology, which compares favourably with other current image compression techniques. The compressed images were applied to a system called Map Online, which uses an Internet-based image catalogue approach. The system allows users to buy maps online, download the maps they have bought, and search for maps using several meaningful keywords. The system is intended for use by Jabatan Ukur dan Pemetaan Malaysia (JUPEM) in support of the organization's vision.
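
The wavelet idea the report examines can be illustrated with the simplest wavelet, the Haar transform: transform the image, then keep only the largest coefficients. This is a hedged sketch, not the report's actual codec; a practical map system would more likely use biorthogonal wavelets, as in JPEG 2000:

```python
import numpy as np

def haar2d(x):
    """One level of a 2-D Haar transform (averages and differences).
    This variant is not orthonormal; it is kept simple for illustration."""
    # Rows: averages on the left, details on the right.
    a = (x[:, 0::2] + x[:, 1::2]) / 2
    d = (x[:, 0::2] - x[:, 1::2]) / 2
    x = np.hstack([a, d])
    # Columns: averages on top, details on the bottom.
    a = (x[0::2, :] + x[1::2, :]) / 2
    d = (x[0::2, :] - x[1::2, :]) / 2
    return np.vstack([a, d])

def compress(x, keep=0.1):
    """Zero out all but (roughly) the largest `keep` fraction of
    coefficients; an entropy coder would then exploit the zeros."""
    c = haar2d(x)
    thresh = np.quantile(np.abs(c), 1 - keep)
    c[np.abs(c) < thresh] = 0.0
    return c
```

Because most natural-image detail coefficients are near zero, discarding the smallest ones shrinks the file while changing the reconstructed map very little.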

    Post-stack seismic data compression with multidimensional deep autoencoders

    Seismic data are surveys of the Earth's subsurface acquired with the goal of representing the geophysical characteristics of the region where they were obtained so that they can be interpreted. These data can occupy hundreds of gigabytes of storage, motivating their compression. In this work, we approach the problem of compressing three-dimensional post-stack seismic data using models based on deep autoencoders. The deep autoencoder is a neural network that represents most of the information of a seismic volume at a lower cost than its original representation. To the best of our knowledge, this is the first work to deal with seismic compression using deep learning. Four compression methods for post-stack data are proposed: two based on bi-dimensional compression, named 2D-based Seismic Data Compression (2DSC) and 2D-based Seismic Data Compression using Multi-resolution (2DSC-MR), and two based on three-dimensional compression, named 3D-based Seismic Data Compression (3DSC) and 3D-based Seismic Data Compression using Vector Quantization (3DSC-VQ). 2DSC is our simplest method, in which the volume is compressed through its bi-dimensional sections. 2DSC-MR extends it by compressing the data at multiple resolutions. 3DSC extends 2DSC by compressing the three-dimensional volume instead of 2D slices; it exploits the similarity between sections to compress a whole volume at the cost of a single section. 3DSC-VQ uses vector quantization to extract more information from the seismic volumes in the encoding stage. Our main goal is to compress seismic data at low bit rates while attaining high-quality reconstruction.
Experiments show that our methods can compress seismic data yielding PSNR values over 40 dB at bit rates below 1.0 bpv.
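
The quality and rate figures quoted above (PSNR in dB, rate in bits per voxel) follow from standard definitions; a minimal sketch, assuming PSNR is computed against the dynamic range of the original volume:

```python
import numpy as np

def psnr(orig, rec, peak=None):
    """Peak signal-to-noise ratio in dB; `peak` defaults to the
    dynamic range of the original volume."""
    mse = np.mean((orig.astype(float) - rec.astype(float)) ** 2)
    if peak is None:
        peak = orig.max() - orig.min()
    return float('inf') if mse == 0 else 10 * np.log10(peak ** 2 / mse)

def bits_per_voxel(compressed_nbytes, volume_shape):
    """Rate of a compressed representation in bits per voxel (bpv)."""
    return 8 * compressed_nbytes / np.prod(volume_shape)
```

For example, a 40 dB PSNR means the root-mean-square error is 1% of the volume's dynamic range, and 1.0 bpv is a 32x reduction for 32-bit float samples.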

    Paralinguistic Privacy Protection at the Edge

    Voice user interfaces and digital assistants are rapidly entering our lives and becoming singular touch points spanning our devices. These always-on services capture and transmit our audio data to powerful cloud services for further processing and subsequent actions. Our voices and raw audio signals collected through these devices contain a host of sensitive paralinguistic information that is transmitted to service providers regardless of deliberate or false triggers. Because our emotional patterns and sensitive attributes, such as identity, gender, and mental well-being, are easily inferred using deep acoustic models, using these services exposes us to a new generation of privacy risks. One approach to mitigating the risk of paralinguistic privacy breaches is to combine cloud-based processing with privacy-preserving, on-device paralinguistic information learning and filtering before voice data are transmitted. In this paper we introduce EDGY, a configurable, lightweight, disentangled representation learning framework that transforms and filters high-dimensional voice data to identify and contain sensitive attributes at the edge prior to offloading to the cloud. We evaluate EDGY's on-device performance and explore optimization techniques, including model quantization and knowledge distillation, to enable private, accurate, and efficient representation learning on resource-constrained devices. Our results show that EDGY runs in tens of milliseconds with a 0.2% relative improvement in ABX score, or minimal performance penalties, in learning linguistic representations from raw voice signals, using a CPU and a single-core ARM processor without specialized hardware.
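
Of the optimization techniques mentioned, model quantization is the most mechanical to illustrate. The following is a generic sketch of 8-bit affine weight quantization, not EDGY's actual pipeline; the function names are illustrative:

```python
import numpy as np

def quantize_uint8(w):
    """Affine 8-bit quantization of a weight tensor: w ~ scale * (q - zero_point).
    Stores one uint8 per weight instead of a 32-bit float (4x smaller)."""
    lo, hi = float(w.min()), float(w.max())
    scale = (hi - lo) / 255.0
    if scale == 0.0:          # constant tensor: any scale works
        scale = 1.0
    zero_point = round(-lo / scale)
    q = np.clip(np.round(w / scale) + zero_point, 0, 255).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover an approximation of the original weights."""
    return scale * (q.astype(float) - zero_point)
```

Each weight is recovered to within about one quantization step, which is why small models often lose little accuracy while shrinking enough to fit resource-constrained devices.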

    Proceedings of the Scientific Data Compression Workshop

    Continuing advances in space and Earth science require increasing amounts of data to be gathered from spaceborne sensors. NASA expects to launch sensors during the next two decades capable of producing an aggregate of 1500 megabits per second if operated simultaneously. Such high data rates stress all aspects of end-to-end data systems, and technologies and techniques are needed to relieve that stress. Potential solutions to the massive data rate problem are data editing, greater transmission bandwidths, higher-density and faster media, and data compression. Through four subpanels on Science Payload Operations, Multispectral Imaging, Microwave Remote Sensing, and Science Data Management, recommendations were made for research in data compression and scientific data applications for space platforms.
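
For scale, the quoted aggregate rate of 1500 megabits per second implies a very large daily volume; a back-of-envelope check:

```python
# Back-of-envelope check of the aggregate sensor rate quoted above.
rate_mbit_s = 1500            # aggregate rate, megabits per second
seconds_per_day = 86_400
mb_per_day = rate_mbit_s * seconds_per_day / 8      # megabytes per day
tb_per_day = mb_per_day / 1e6                       # decimal terabytes
print(f"{tb_per_day:.1f} TB/day uncompressed")      # prints "16.2 TB/day uncompressed"
```

At roughly 16 terabytes per day, even modest compression ratios translate into substantial savings in downlink bandwidth and archival media.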

    Automatic facial recognition based on facial feature analysis


    Digital image compression


    The 1995 Science Information Management and Data Compression Workshop

    This document is the proceedings of the Science Information Management and Data Compression Workshop, which was held on October 26-27, 1995, at the NASA Goddard Space Flight Center, Greenbelt, Maryland. The Workshop explored promising computational approaches for handling the collection, ingestion, archival, and retrieval of large quantities of data in future Earth and space science missions. It consisted of fourteen presentations covering a range of information management and data compression approaches that are being or have been integrated into actual or prototypical Earth or space science data information systems, or that hold promise for such an application. The Workshop was organized by James C. Tilton and Robert F. Cromp of the NASA Goddard Space Flight Center.