
    On the Representability of Complete Genomes by Multiple Competing Finite-Context (Markov) Models

    A finite-context (Markov) model of order k yields the probability distribution of the next symbol in a sequence of symbols, given the recent past up to depth k. Markov modeling has long been applied to DNA sequences, for example to find gene-coding regions. With the first studies came the discovery that DNA sequences are non-stationary: distinct regions require distinct model orders. Since then, Markov and hidden Markov models have been extensively used to describe the gene structure of prokaryotes and eukaryotes. However, to our knowledge, a comprehensive study of the potential of Markov models to describe complete genomes is still lacking. We address this gap in this paper. Our approach relies on (i) multiple competing Markov models of different orders, (ii) careful programming techniques that allow orders as large as sixteen, (iii) adequate handling of inverted repeats, and (iv) probability estimates suited to the wide range of context depths used. To measure how well a model fits the data at a particular position in the sequence, we use the negative logarithm of the probability estimate at that position. This measure yields information profiles of the sequence, which are of independent interest. Its average over the entire sequence, which amounts to the average number of bits per base needed to describe the sequence, is used as a global performance measure. Our main conclusion is that, from the probabilistic or information-theoretic point of view and according to this performance measure, multiple competing Markov models explain entire genomes almost as well as, or even better than, state-of-the-art DNA compression methods such as XM, which rely on very different statistical models. This is surprising, because Markov models are local (short-range), in contrast with the statistical models underlying the other methods, which exploit the extensive data repetitions in DNA sequences and therefore have a non-local character.
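
    As a rough illustration of the measure described in the abstract, the sketch below (our assumption, not the authors' implementation) builds a single adaptive finite-context model of order k with additive smoothing, records -log2 of each symbol's probability estimate (the information profile), and averages it to obtain bits per base. The paper itself combines multiple competing orders, handles inverted repeats, and uses depth-dependent estimators; none of that is reproduced here.

```python
# Minimal single-model sketch: information profile of a DNA string under an
# order-k finite-context model with additive (alpha) smoothing.
from collections import defaultdict
from math import log2

def information_profile(seq, k=3, alpha=1.0):
    counts = defaultdict(lambda: defaultdict(int))   # context -> symbol -> count
    totals = defaultdict(int)                        # context -> total count
    alphabet = sorted(set(seq))
    profile = []
    for i in range(len(seq)):
        ctx, sym = seq[max(0, i - k):i], seq[i]
        # probability estimate with additive smoothing over the alphabet
        p = (counts[ctx][sym] + alpha) / (totals[ctx] + alpha * len(alphabet))
        profile.append(-log2(p))                     # information content at position i
        counts[ctx][sym] += 1                        # update the model adaptively
        totals[ctx] += 1
    return profile

profile = information_profile("ACGTACGTACGGTTACGT", k=2)
print(sum(profile) / len(profile), "bits per base")
```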

    Data compression for data archival, browse or quick-look

    Soon after space and Earth science data is collected, it is stored in one or more archival facilities for later retrieval and analysis. Since the purpose of the archival process is to keep an accurate and complete record of the data, any data compression used in an archival system must be lossless and must protect against propagation of errors in the storage media. A browse capability for space and Earth science data is needed to enable scientists to check the appropriateness and quality of particular data sets before obtaining the full data set(s) for detailed analysis. Browse data produced for these purposes could be used to facilitate the retrieval of data from an archival facility. Quick-look data is data obtained directly from the sensor, either for previewing the data or for an application that requires very timely analysis of the space or Earth science data. Two main differences between data compression techniques appropriate to the browse and quick-look cases are that quick-look compression can be more specifically tailored, and that it must be limited in complexity by the relatively limited computational power available on space platforms.

    Improved Sequential MAP estimation of CABAC encoded data with objective adjustment of the complexity/efficiency tradeoff

    This paper presents an efficient MAP estimator for the joint source-channel decoding of data encoded with a context-adaptive binary arithmetic coder (CABAC). The decoding process is compatible with realistic implementations of CABAC in standards like H.264, i.e., it handles adaptive probabilities, context modeling, and integer arithmetic coding. Soft decoding is obtained using an improved sequential decoding technique, which allows various tradeoffs between complexity and efficiency to be obtained. The algorithms are simulated in a context reminiscent of H.264. Error detection is realized by exploiting, on one side, the properties of the binarization scheme and, on the other side, the redundancy left in the code string. As a result, the CABAC compression efficiency is preserved and no additional redundancy is introduced in the bit stream. Simulation results outline the efficiency of the proposed techniques for encoded data sent over AWGN and UMTS-OFDM channels.
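
    A heavily simplified sketch of the sequential soft-decoding idea, not an H.264/CABAC implementation: candidate bit paths are explored best-first, each ranked by the sum of channel log-likelihoods (BPSK over an AWGN channel) and source log-priors. The bit prior, noise level, and node budget below are illustrative assumptions.

```python
# Best-first (stack-style) sequential MAP estimation of a short bit sequence.
import heapq
from math import log

def gaussian_llh(y, bit, sigma=0.8):
    # BPSK over AWGN: bit 0 -> -1.0, bit 1 -> +1.0
    x = 1.0 if bit else -1.0
    return -((y - x) ** 2) / (2 * sigma ** 2)

def sequential_map(received, prior_one=0.3, max_nodes=10_000):
    # path metric = accumulated channel log-likelihood + source log-prior
    heap = [(0.0, ())]            # (negated metric, bits decoded so far)
    expanded = 0
    while heap and expanded < max_nodes:
        neg_metric, bits = heapq.heappop(heap)
        if len(bits) == len(received):
            return list(bits)     # first complete path popped is the best one
        expanded += 1
        y = received[len(bits)]
        for b in (0, 1):
            prior = prior_one if b else 1.0 - prior_one
            metric = -neg_metric + gaussian_llh(y, b) + log(prior)
            heapq.heappush(heap, (-metric, bits + (b,)))
    return None

print(sequential_map([-0.9, 1.1, 0.2, -1.3]))
```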

    Jeff Christensen and Kyle James Fausett v. Gloria Swenson and Burns Security Systems, Inc. : Reply Brief

    APPEAL FROM THE FOURTH JUDICIAL DISTRICT COURT, UTAH COUNTY, THE HONORABLE CULLEN Y. CHRISTENSEN

    Real-time transmission of digital video using variable-length coding

    Huffman coding is a variable-length lossless compression technique in which data with a high probability of occurrence is represented with short codewords, while 'not-so-likely' data is assigned longer codewords. Compression is achieved when the high-probability levels occur so frequently that their benefit outweighs any penalty paid when a less likely input occurs. One instance where Huffman coding is extremely effective occurs when data is highly predictable and differential coding can be applied (as with a digital video signal). For that reason, it is desirable to apply this compression technique to digital video transmission; however, special care must be taken in order to implement a communication protocol utilizing Huffman coding. This paper addresses several of the issues relating to the real-time transmission of Huffman-coded digital video over a constant-rate serial channel. Topics discussed include data rate conversion (from a variable to a fixed rate), efficient data buffering, channel coding, recovery from communication errors, decoder synchronization, and decoder architectures. A description of the hardware developed to execute Huffman coding and serial transmission is also included. Although this paper focuses on matters relating to Huffman-coded digital video, the techniques discussed can easily be generalized for a variety of applications which require transmission of variable-length data.
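
    For illustration only (not the hardware codec described in the paper), a generic Huffman construction over differentially coded samples shows why frequent small differences end up with short codewords.

```python
# Build a Huffman prefix code from symbol frequencies, then encode the
# differences between consecutive samples.
import heapq
from collections import Counter

def huffman_code(freqs):
    # heap items: (weight, tie-breaker, {symbol: codeword-so-far})
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}       # left subtree gets a 0
        merged.update({s: "1" + c for s, c in c2.items()})  # right subtree gets a 1
        heapq.heappush(heap, (w1 + w2, tie, merged))
        tie += 1
    return heap[0][2]

samples = [10, 11, 11, 12, 12, 12, 13, 20]
diffs = [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]
code = huffman_code(Counter(diffs))
bitstream = "".join(code[d] for d in diffs)
print(code, bitstream)
```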

    Real-time demonstration hardware for enhanced DPCM video compression algorithm

    The lack of available wideband digital links, as well as the complexity of implementing bandwidth-efficient digital video CODECs (encoders/decoders), has worked to keep the cost of digital television transmission too high to compete with analog methods. Terrestrial and satellite video service providers, however, are now recognizing the potential gains that digital video compression offers and are proposing to incorporate compression systems to increase the number of available program channels. NASA is similarly recognizing the benefits of and trend toward digital video compression techniques for transmission of high-quality video from space and, therefore, has developed a digital television bandwidth compression algorithm to process standard National Television Systems Committee (NTSC) composite color television signals. The algorithm is based on differential pulse code modulation (DPCM), but additionally utilizes a non-adaptive predictor, a non-uniform quantizer, and a multilevel Huffman coder to reduce the data rate substantially below that achievable with straight DPCM. The non-adaptive predictor and multilevel Huffman coder combine to set this technique apart from other DPCM encoding algorithms. All processing is done on an intra-field basis to prevent motion degradation and minimize hardware complexity. Computer simulations have shown the algorithm will produce broadcast-quality reconstructed video at an average transmission rate of 1.8 bits/pixel. Hardware implementation of the DPCM circuit, non-adaptive predictor, and non-uniform quantizer has been completed, providing real-time demonstration of the image quality at full video rates. Video sampling/reconstruction circuits have also been constructed to accomplish the analog video processing necessary for the real-time demonstration. Performance results for the completed hardware compare favorably with simulation results. Hardware implementation of the multilevel Huffman encoder/decoder is currently under development, along with implementation of a buffer control algorithm to accommodate the variable data rate output of the multilevel Huffman encoder. A video CODEC of this type could be used to compress NTSC color television signals where high-quality reconstruction is desirable (e.g., Space Station video transmission, transmission direct-to-the-home via direct broadcast satellite systems, or cable television distribution to system headends and direct-to-the-home).
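
    The sketch below is a generic DPCM loop with a fixed previous-sample predictor and a coarse non-uniform quantizer; the level table, initial prediction, and clipping are assumptions for illustration, not the NASA algorithm, and the actual system additionally Huffman-codes the quantizer output.

```python
# Generic DPCM encode/decode over one line of 8-bit samples.
LEVELS = (-48, -16, -4, 0, 4, 16, 48)   # assumed non-uniform reconstruction levels

def quantize(err):
    # index of the reconstruction level closest to the prediction error
    return min(range(len(LEVELS)), key=lambda i: abs(LEVELS[i] - err))

def dpcm_encode(pixels):
    indices, prediction = [], 128        # fixed (non-adaptive) previous-sample predictor
    for p in pixels:
        idx = quantize(p - prediction)
        indices.append(idx)
        prediction = max(0, min(255, prediction + LEVELS[idx]))   # mirror the decoder
    return indices

def dpcm_decode(indices):
    out, prediction = [], 128
    for idx in indices:
        prediction = max(0, min(255, prediction + LEVELS[idx]))
        out.append(prediction)
    return out

line = [120, 122, 125, 140, 180, 182, 181, 90]
print(dpcm_decode(dpcm_encode(line)))
```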

    An Adaptive Source-Channel Coding with Feedback for Progressive Transmission of Medical Images

    A novel adaptive source-channel coding scheme with feedback for the progressive transmission of medical images is proposed here. In the source coding part, the transmission starts from the region of interest (RoI). The parity length in the channel code varies with respect to both the proximity of the image subblock to the RoI and the channel noise, which is iteratively estimated in the receiver. The overall transmitted data can be controlled by the user (clinician). In the case of medical data transmission, it is vital to keep the distortion level under control, since in most cases certain clinically important regions have to be transmitted without any visible error. The proposed system significantly reduces the transmission time and error. Moreover, the system is very user friendly, since the selection of the RoI, its size, the overall code rate, and a number of test features such as the noise level can be set by the users at both ends. A MATLAB-based TCP/IP connection has been established to demonstrate the proposed interactive and adaptive progressive transmission system. The proposed system is simulated for both the binary symmetric channel (BSC) and the Rayleigh channel. The experimental results verify the effectiveness of the design.
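
    A hypothetical sketch of the rate-allocation idea only (the parameter names, thresholds, and allocation formula are our assumptions, not the paper's): sub-blocks closer to the RoI, and noisier channel estimates fed back from the receiver, receive more parity bits.

```python
# Toy parity allocation per image sub-block, driven by RoI proximity and an
# iteratively updated bit-error-rate (BER) estimate from the receiver.
def parity_bits(distance_to_roi, est_ber, k_data_bits=512,
                base_rate=0.9, min_rate=0.5):
    # channel term: lower code rate (more parity) as the estimated BER grows
    noise_penalty = min(0.3, est_ber * 40)
    # proximity term: protect the RoI most strongly, relax with distance (in blocks)
    roi_weight = 1.0 / (1.0 + distance_to_roi)
    rate = max(min_rate, base_rate - noise_penalty * roi_weight)
    return int(k_data_bits * (1.0 / rate - 1.0))    # parity bits per k data bits

# feedback loop: each new BER estimate changes the next block's allocation
for dist, ber in [(0, 0.01), (0, 0.001), (3, 0.01), (3, 0.001)]:
    print(dist, ber, parity_bits(dist, ber))
```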

    Space and Earth Science Data Compression Workshop

    The workshop explored opportunities for data compression to enhance the collection and analysis of space and Earth science data. The focus was on scientists' data requirements, as well as constraints imposed by the data collection, transmission, distribution, and archival systems. The workshop consisted of several invited papers: two described information systems for space and Earth science data, four depicted analysis scenarios for extracting information of scientific interest from data collected by Earth-orbiting and deep space platforms, and a final one was a general tutorial on image data compression.

    Salt Lake City Corporation, Petitioner-Plaintiff-Appellee, v. Mark C. Haik, Respondent-Defendant-Appellant.

    An Appeal from the Third Judicial District Court, Case No. 140900915, The Honorable Andrew Stone presiding