
    Locally Adaptive Resolution (LAR) codec

    The JPEG committee has initiated a study of potential technologies for future-generation image compression systems. The idea is to design a new image compression standard, named JPEG AIC (Advanced Image Coding), together with advanced evaluation methodologies closely matching human visual system characteristics. JPEG AIC thus aims at defining a complete coding system able to address advanced functionalities such as lossy-to-lossless compression, scalability (spatial, temporal, depth, quality, complexity, component, granularity...), robustness, embeddability, and content description for image handling at the object level. The chosen compression method will have to fit the perceptual metrics defined by the JPEG community within the JPEG AIC project. In this context, we propose the Locally Adaptive Resolution (LAR) codec as a contribution to the corresponding call for technologies, aiming to fulfill all of the previous functionalities. This method is a coding solution that simultaneously provides a relevant representation of the image, a property exploited through various complementary coding schemes in order to design a highly scalable encoder.

    The LAR method was initially introduced for lossy image coding. This efficient image compression solution relies on a content-based system driven by a specific quadtree representation, based on the assumption that an image can be represented as layers of basic information and local texture. Multiresolution versions of this codec have shown their efficiency from low bit rates up to lossless compression. An original hierarchical self-extracting region representation has also been elaborated: a segmentation process is performed at both the coder and the decoder, leading to a free segmentation map. The latter can be further exploited for color region encoding and for image handling at the region level.

    Moreover, the inherent structure of the LAR codec can be used for advanced functionalities such as content security. In particular, dedicated Unequal Error Protection systems have been produced and tested for transmission over the Internet or wireless channels, hierarchical selective encryption techniques have been adapted to our coding scheme, and a data hiding system based on the LAR multiresolution description allows efficient content protection. Thanks to the modularity of our coding scheme, the complexity can be adjusted to suit various embedded systems: a basic version of the LAR coder has been implemented on an FPGA platform under real-time constraints, while the pyramidal LAR solution and the hierarchical segmentation process have been prototyped on heterogeneous DSP architectures. This chapter first introduces the JPEG AIC scope and details the associated requirements. We then develop the technical features of the LAR system and show the originality of the proposed scheme, both in terms of functionalities and of services. In particular, we show that the LAR coder remains efficient for natural, medical, and art images.
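    The quadtree representation at the heart of LAR assigns small blocks to contours and large blocks to homogeneous areas. The following minimal sketch shows this kind of content-driven partitioning; the max-min activity criterion, the threshold value, and the function names are illustrative assumptions, not the codec's actual rules.

```python
import numpy as np

def quadtree_partition(img, x, y, size, threshold=20, min_size=2, blocks=None):
    """Recursively split a block while its luminance range exceeds a threshold.

    Returns a list of (x, y, size) leaf blocks: small blocks on contours,
    large blocks in homogeneous areas.
    """
    if blocks is None:
        blocks = []
    block = img[y:y + size, x:x + size]
    # Local activity criterion: dynamic range of the block (illustrative).
    if size > min_size and int(block.max()) - int(block.min()) > threshold:
        half = size // 2
        for dy in (0, half):
            for dx in (0, half):
                quadtree_partition(img, x + dx, y + dy, half,
                                   threshold, min_size, blocks)
    else:
        blocks.append((x, y, size))
    return blocks

# Usage: a flat image yields one block; a contour forces fine subdivision.
img = np.zeros((16, 16), dtype=np.uint8)
img[:, 5:] = 255  # vertical contour inside the blocks
leaves = quadtree_partition(img, 0, 0, 16)
print(len(leaves), "leaf blocks")
```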

    WG1N5315 - Response to Call for AIC evaluation methodologies and compression technologies for medical images: LAR Codec

    This document presents the LAR image codec as a response to the Call for AIC evaluation methodologies and compression technologies for medical images. It describes the IETR response to the specific call for contributions of medical imaging technologies to be considered for AIC. The philosophy behind our coder is not to outperform JPEG2000 in compression; our goal is to propose an open-source, royalty-free alternative image coder with integrated services. While keeping compression performance in the same range as JPEG2000 but with lower complexity, our coder also provides services such as scalability, cryptography, data hiding, lossy-to-lossless compression, region of interest, and free region representation and coding.

    Robust and efficient video/image transmission

    The Internet has become a primary medium for information transmission. However, unreliable channel conditions, limited channel bandwidth, and the explosive growth of transmission requests hinder its further development. Hence, research on the robust and efficient delivery of video/image content is in high demand. Three aspects of this task are investigated in this dissertation: error burst correction, efficient rate allocation, and random error protection. A novel technique, called successive packing, is proposed for combating multi-dimensional (M-D) bursts of errors. A new concept of a basis interleaving array is introduced; by combining different basis arrays, effective M-D interleaving can be realized. It has been shown that this algorithm needs to be implemented only once and yet remains optimal for a set of error bursts of different sizes in a given two-dimensional (2-D) array. To adapt to variable channel conditions, a novel rate allocation technique is proposed for Fine Granular Scalability (FGS) coded video, in which rate-distortion modeling based on real data is developed, a constant-quality constraint is adopted, and a sliding-window approach is proposed to track the varying channel. With the proposed technique, constant quality is achieved across frames by solving a set of linear functions, yielding a significant computational simplification over state-of-the-art techniques while also reducing the overall distortion. To combat random errors during transmission, an unequal error protection (UEP) method and a robust error-concealment strategy are proposed for scalable coded video bitstreams.
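    To make the interleaving idea concrete, below is a minimal sketch of a plain 2-D stride interleaver. It is not the successive-packing construction itself (the strides and array size are illustrative assumptions); it only shows how a permutation disperses a rectangular error burst so that the scattered symbols can be corrected individually.

```python
import numpy as np

def interleave_2d(block, row_step=3, col_step=5):
    """Permute a 2-D array so that neighbouring symbols end up far apart.

    A plain row/column stride permutation (steps must be coprime with the
    array dimensions), used here only to illustrate burst dispersion.
    """
    rows, cols = block.shape
    r = (np.arange(rows) * row_step) % rows
    c = (np.arange(cols) * col_step) % cols
    return block[np.ix_(r, c)]

def deinterleave_2d(block, row_step=3, col_step=5):
    """Invert interleave_2d by scattering symbols back to their origins."""
    rows, cols = block.shape
    r = (np.arange(rows) * row_step) % rows
    c = (np.arange(cols) * col_step) % cols
    out = np.empty_like(block)
    out[np.ix_(r, c)] = block
    return out

data = np.arange(64).reshape(8, 8)
scrambled = interleave_2d(data)
scrambled[0:2, 0:2] = -1          # a 2x2 error burst hits the channel
restored = deinterleave_2d(scrambled)
# After deinterleaving, the four corrupted symbols are scattered across
# the array, so a simple error-correcting code can fix them one by one.
print(np.argwhere(restored == -1))
```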

    Low Complexity Scalable Iterative Algorithms for IEEE 802.11p Receivers

    In this paper, we investigate receivers for Vehicle-to-Vehicle (V2V) and Vehicle-to-Infrastructure (V2I) communications. Vehicular channels are characterized by multiple paths and time variations, which introduce challenges in the design of receivers. We propose an algorithm for IEEE 802.11p-compliant receivers based on Orthogonal Frequency Division Multiplexing (OFDM). We employ iterative structures in the receiver as a way to estimate the channel despite variations within a frame. The channel estimator is based on factor graphs, which allow the design of soft iterative receivers while keeping the computational complexity acceptable. Throughout this work, we focus on designing a receiver offering a good complexity-performance trade-off, and we propose a scalable algorithm so that this trade-off can be tuned to the channel conditions. Our algorithm enables reliable communications while offering a considerable decrease in computational complexity. In particular, numerical results show the trade-off between complexity and performance, measured in computational time, BER, and FER, achieved with the various interpolation lengths used by the estimator, all of which outperform the standard least-squares solution by decades. Furthermore, our adaptive algorithm shows a considerable improvement in computational time and complexity over state-of-the-art and classical receivers while maintaining acceptable BER and FER performance.
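    For reference, the classical least-squares baseline that the proposed receiver is compared against can be sketched as follows. The numerology, pilot positions, and pilot symbols below are illustrative assumptions rather than the exact IEEE 802.11p parameters, and the paper's factor-graph receiver replaces this one-shot estimate with iterative soft estimation.

```python
import numpy as np

# Illustrative OFDM parameters (not the exact 802.11p numerology).
N_SC = 64                                    # subcarriers
PILOT_IDX = np.array([8, 22, 42, 56])        # pilot subcarrier positions
PILOT_SYMS = np.array([1, 1, 1, -1], dtype=complex)

def ls_channel_estimate(rx_symbols):
    """Least-squares estimate at the pilots, linearly interpolated elsewhere.

    H_hat = Y / X at each pilot, then per-component linear interpolation
    across the remaining subcarriers (np.interp is real-valued only).
    """
    h_pilots = rx_symbols[PILOT_IDX] / PILOT_SYMS
    h_real = np.interp(np.arange(N_SC), PILOT_IDX, h_pilots.real)
    h_imag = np.interp(np.arange(N_SC), PILOT_IDX, h_pilots.imag)
    return h_real + 1j * h_imag

# Usage: a toy two-tap channel, noiseless for clarity.
h_true = np.fft.fft(np.array([1.0, 0.5j]), N_SC)
tx = np.ones(N_SC, dtype=complex)
tx[PILOT_IDX] = PILOT_SYMS
rx = h_true * tx
h_hat = ls_channel_estimate(rx)
print(np.max(np.abs(h_hat - h_true)))  # small within the pilot span
```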

    Spiking neural networks trained with backpropagation for low power neuromorphic implementation of voice activity detection

    Recent advances in Voice Activity Detection (VAD) are driven by artificial and Recurrent Neural Networks (RNNs); however, using a VAD system in battery-operated devices requires further power efficiency. This can be achieved by neuromorphic hardware, which enables Spiking Neural Networks (SNNs) to perform inference at very low energy consumption. Spiking networks are characterized by their ability to process information efficiently, in a sparse cascade of binary events in time called spikes. However, a large performance gap separates artificial from spiking networks, mostly due to a lack of powerful SNN training algorithms. To overcome this problem, we exploit an SNN model that can be recast into an RNN-like model and trained with known deep learning techniques. We describe an SNN training procedure that achieves low spiking activity, together with pruning algorithms that remove 85% of the network connections with no performance loss. The model achieves state-of-the-art performance at a fraction of the power consumption of other methods. Comment: 5 pages, 2 figures, 2 tables.
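    One standard way to remove a fixed fraction of connections, consistent with the 85% figure above, is global magnitude pruning. The sketch below illustrates that generic technique under that assumption; it is not the paper's exact pruning schedule.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.85):
    """Zero out the smallest-magnitude fraction of connections.

    Plain global magnitude pruning (an illustrative stand-in for the
    paper's pruning algorithms). Returns the pruned weights and the
    binary mask needed to keep pruned entries at zero during any
    further training.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    threshold = np.partition(flat, k)[k]      # k-th smallest magnitude
    mask = (np.abs(weights) >= threshold).astype(weights.dtype)
    return weights * mask, mask

rng = np.random.default_rng(0)
w = rng.normal(size=(128, 128))
w_pruned, mask = magnitude_prune(w, sparsity=0.85)
print(f"kept {mask.mean():.2%} of connections")
```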

    Advanced Methods for Real-time Metagenomic Analysis of Nanopore Sequencing Data

    Whole shotgun metagenomics sequencing allows researchers to retrieve information about all organisms in a complex sample. This method enables microbiologists to detect pathogens in clinical samples, study the microbial diversity of various environments, and detect abundance differences of certain microbes under different living conditions. The emergence of nanopore sequencing has offered many new possibilities for clinical and environmental microbiologists. In particular, the portability of the small nanopore sequencing devices and the ability to selectively sequence only DNA from organisms of interest are expected to make a significant contribution to the field. However, both options require memory-efficient methods that perform real-time data analysis on commodity hardware such as ordinary laptops. In this thesis, I present new methods for the real-time analysis of nanopore sequencing data in a metagenomic context. These methods are based on optimized algorithmic approaches that query the sequenced data against large sets of reference sequences. The main goal of these contributions is to improve the sequencing and analysis of underrepresented organisms in complex metagenomic samples and to enable this analysis in low-resource settings in the field. First, I introduce ReadBouncer, a new tool for nanopore adaptive sampling that can reject uninteresting DNA molecules during the sequencing process. ReadBouncer improves read classification compared with other adaptive sampling tools and has lower memory requirements, enabling a higher enrichment of underrepresented sequences when performing adaptive sampling in the field. I further show that, besides host sequence removal and the enrichment of low-abundance microbes, adaptive sampling can enrich underrepresented plasmid sequences in bacterial samples. These plasmids play a crucial role in the dissemination of antibiotic resistance genes, but their characterization requires expensive and time-consuming lab protocols. I describe how adaptive sampling can be used as a cheap method for plasmid enrichment, which can make a significant contribution to the point-of-care sequencing of bacterial pathogens. Finally, I introduce a novel memory- and space-efficient algorithm for the real-time taxonomic profiling of nanopore reads, implemented in Taxor. It improves the taxonomic classification of nanopore reads compared with other taxonomic profiling tools and tremendously reduces the memory footprint. The resulting database index for thousands of microbial species is small enough to fit into the memory of a small laptop, enabling real-time metagenomic analysis of nanopore sequencing data with large reference databases in the field.
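    The accept/reject decision at the core of adaptive sampling can be sketched as follows. ReadBouncer's actual index is a compact interleaved Bloom filter, so the plain Python set, the k-mer length, and the hit-fraction threshold used here are illustrative stand-ins, not the tool's defaults.

```python
def kmers(seq, k=15):
    """All k-length substrings of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def build_depletion_index(references, k=15):
    """Collect the k-mers of the unwanted (e.g. host) references.

    A plain Python set stands in for ReadBouncer's interleaved Bloom
    filter to keep the sketch self-contained.
    """
    index = set()
    for ref in references:
        index |= kmers(ref, k)
    return index

def unblock_read(read_prefix, index, k=15, min_fraction=0.3):
    """Decide from the first few hundred bases whether to eject the molecule.

    If enough prefix k-mers hit the depletion index, the read is classified
    as uninteresting and the pore is asked to "unblock" (reject) it.
    The 0.3 threshold is an illustrative choice.
    """
    prefix_kmers = kmers(read_prefix, k)
    hits = sum(km in index for km in prefix_kmers)
    return hits >= min_fraction * max(len(prefix_kmers), 1)

host = ["ACGT" * 200]                         # toy host reference
index = build_depletion_index(host)
print(unblock_read("ACGT" * 60, index))       # host-like prefix -> True (eject)
print(unblock_read("TTAGGCATC" * 30, index))  # non-host -> False (keep sequencing)
```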

    Expanded Parts Model for Semantic Description of Humans in Still Images

    We introduce an Expanded Parts Model (EPM) for recognizing human attributes (e.g. young, short hair, wearing a suit) and actions (e.g. running, jumping) in still images. An EPM is a collection of part templates learnt discriminatively to explain specific scale-space regions in the images (in human-centric coordinates). This is in contrast to current models, which consist of relatively few (i.e. a mixture of) 'average' templates. An EPM uses only a subset of the parts to score an image and scores the image sparsely in space, i.e. it ignores redundant and random background in an image. To learn our model, we propose an algorithm which automatically mines parts and learns the corresponding discriminative templates, together with their respective locations, from a large number of candidate parts. We validate our method on three recent challenging datasets of human attributes and actions, obtaining convincing qualitative and state-of-the-art quantitative results. Comment: Accepted for publication in IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI).
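    A minimal sketch of the sparse scoring idea follows. The dense feature map, exhaustive template sliding, and top-k part selection are simplifying assumptions; the actual EPM also learns where each part may fire and suppresses overlapping parts.

```python
import numpy as np

def epm_score(feature_map, templates, n_use=5):
    """Score an image with a collection of part templates, using only the
    best-matching subset, in the spirit of the Expanded Parts Model.

    feature_map: (H, W, D) dense features in human-centric coordinates.
    templates:   list of (h, w, D) part templates.
    """
    H, W, _ = feature_map.shape
    part_scores = []
    for t in templates:
        h, w, _ = t.shape
        best = -np.inf
        for y in range(H - h + 1):            # slide template over the map
            for x in range(W - w + 1):
                patch = feature_map[y:y + h, x:x + w]
                best = max(best, float(np.sum(patch * t)))
        part_scores.append(best)
    # Sparse scoring: keep only the n_use strongest parts, ignore the rest.
    return float(np.sum(np.sort(part_scores)[-n_use:]))

rng = np.random.default_rng(1)
fmap = rng.normal(size=(16, 16, 8))
parts = [rng.normal(size=(4, 4, 8)) for _ in range(20)]
print(epm_score(fmap, parts, n_use=5))
```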

    An overview of JPEG 2000

    JPEG-2000 is an emerging standard for still image compression. This paper provides a brief history of the JPEG-2000 standardization process, an overview of the standard, and some description of the capabilities it provides. Part I of the JPEG-2000 standard specifies the minimum compliant decoder, while Part II describes optional, value-added extensions. Although the standard specifies only the decoder and bitstream syntax, in this paper we describe JPEG-2000 from the point of view of encoding. We take this approach because we believe it lends itself to a compact description that is more easily understood by most readers.