
    Java Library for Input and Output of Image Data and Metadata

    A Java-language library supports input and output (I/O) of image data and metadata (label data) in the format of the Video Image Communication and Retrieval (VICAR) image-processing software and in several similar formats, including a subset of the Planetary Data System (PDS) image file format. The library does the following: It provides a low-level, direct-access layer that enables an application subprogram to read and write specific image files, lines, or pixels and to manipulate metadata directly. Two coding/decoding subprograms ("codecs" for short) based on the Java Advanced Imaging (JAI) software provide access to VICAR and PDS images in a file-format-independent manner. The VICAR and PDS codecs enable any program that conforms to the JAI codec specification to use VICAR or PDS images automatically, without specific knowledge of the VICAR or PDS format. The library also includes Image I/O plug-in subprograms for the VICAR and PDS formats. Application programs that conform to the Image I/O specification of Java version 1.4 can use any image format for which such a plug-in subprogram exists, without specific knowledge of the format itself. Like the aforementioned codecs, the VICAR and PDS Image I/O plug-in subprograms support reading and writing of metadata.
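
    For readers unfamiliar with the Image I/O mechanism the library plugs into, the sketch below shows how a VICAR or PDS file could be read through the standard javax.imageio entry points once such a plug-in is registered. It is only an illustration of the generic Java Image I/O API; the file name and the assumption that a suitable plug-in is on the classpath are hypothetical, and the plug-in's registration details are not given in the abstract.

        import java.awt.image.BufferedImage;
        import java.io.File;
        import java.io.IOException;
        import java.util.Iterator;
        import javax.imageio.ImageIO;
        import javax.imageio.ImageReader;
        import javax.imageio.metadata.IIOMetadata;
        import javax.imageio.stream.ImageInputStream;

        public class VicarReadSketch {
            public static void main(String[] args) throws IOException {
                File file = new File("example.img");          // hypothetical VICAR file name

                // Format-independent path: ImageIO delegates to whichever registered
                // plug-in (e.g. a VICAR/PDS plug-in) recognizes the stream.
                BufferedImage image = ImageIO.read(file);     // null if no plug-in recognizes the file
                if (image != null) {
                    System.out.println("Decoded " + image.getWidth() + "x" + image.getHeight());
                }

                // Metadata path: locate a reader explicitly to reach the label data.
                try (ImageInputStream stream = ImageIO.createImageInputStream(file)) {
                    Iterator<ImageReader> readers = ImageIO.getImageReaders(stream);
                    if (readers.hasNext()) {
                        ImageReader reader = readers.next();
                        reader.setInput(stream);
                        IIOMetadata metadata = reader.getImageMetadata(0);  // image label/metadata, if exposed
                        System.out.println("Reader: " + reader.getFormatName()
                                + ", has metadata: " + (metadata != null));
                        reader.dispose();
                    }
                }
            }
        }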

    Fractal block coding techniques in image compression

    Fractal block coding is a relatively new scheme for image compression. In this dissertation, several advanced schemes are proposed based upon Jacquin's fractal block coding scheme. Exploiting self-similarity at different target block size levels is proposed, which allows the self-similarity in the image to be exploited further: smoother areas are coded with larger target block sizes while fine details are coded with smaller target block sizes. The more image parts are coded at a higher coding level, the lower the resulting bit rate. Removal of affine-block-wise self-similarity is proposed, which includes block-wise self-similarity as a special case. With the utilisation of affine-block-wise self-similarity, the library is substantially enriched, which results in a higher probability of coding a target block at a higher coding level. A very fast multi-level fractal block coding scheme exploiting affine-block-wise self-similarities is proposed. In the fast coding scheme, self-similarity in the very local area of the target block to be coded is exploited. By using affine-block-wise self-similarity, local correlations are exploited to a much greater extent. The number of library blocks used for coding a target block is substantially reduced, which results in a very fast coding scheme. The proposed fast coding scheme outperforms previous implementations of the fractal block coding technique. A hybrid fractal block coding and DCT scheme is proposed which codes a subsampled image using fractal block coding techniques. The fractal codes are then decoded by zooming back to the original image size, and the DCT technique is introduced to code the residue image. The proposed scheme is better than the pure fractal block coding scheme. The advanced fractal block coding schemes and the hybrid coder for still images are also applied to video compression, which also gives promising simulation results.
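
    To make the block-matching step concrete, the sketch below fits the usual Jacquin-style affine map s*D + o from a (downsampled) library/domain block D to a target block R by least squares and returns the matching error. It is an illustrative sketch of the general technique, not code from the dissertation; block sizes and data layout are assumptions.

        /** Minimal Jacquin-style block match: fit target block R as s*D + o by least squares. */
        public class FractalMatchSketch {

            /** Result of matching one library (domain) block against one target (range) block. */
            public static final class Match {
                public final double scale, offset, error;
                Match(double scale, double offset, double error) {
                    this.scale = scale; this.offset = offset; this.error = error;
                }
            }

            /** domain and range are flattened blocks of equal length (e.g. an 8x8 block as 64 values). */
            public static Match fit(double[] domain, double[] range) {
                int n = domain.length;
                double sumD = 0, sumR = 0, sumDD = 0, sumDR = 0;
                for (int i = 0; i < n; i++) {
                    sumD += domain[i];
                    sumR += range[i];
                    sumDD += domain[i] * domain[i];
                    sumDR += domain[i] * range[i];
                }
                // Contrast (scale) and brightness (offset) from the least-squares normal equations.
                double varD = sumDD - sumD * sumD / n;
                double s = varD > 1e-12 ? (sumDR - sumD * sumR / n) / varD : 0.0;
                double o = (sumR - s * sumD) / n;
                // Squared error of the affine approximation; the coder keeps the best library block.
                double err = 0;
                for (int i = 0; i < n; i++) {
                    double d = s * domain[i] + o - range[i];
                    err += d * d;
                }
                return new Match(s, o, err);
            }
        }

    A multi-level coder would run this match at several target block sizes, keeping the library block index and the (s, o) pair wherever the error falls below a threshold.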

    Library-based image coding using vector quantization of the prediction space

    Thesis (M.S.) -- Massachusetts Institute of Technology, Program in Media Arts & Sciences, 1993. Includes bibliographical references (leaves 122-126). By Nuno Miguel Borges de Pinho Cruz de Vasconcelos.

    Electronic marking and identification techniques to discourage document copying

    Modern computer networks make it possible to distribute documents quickly and economically by electronic means rather than by conventional paper means. However, the widespread adoption of electronic distribution of copyrighted material is currently impeded by the ease of illicit copying and dissemination. In this paper we propose techniques that discourage illicit distribution by embedding each document with a unique codeword. Our encoding techniques are indiscernible to readers, yet enable us to identify the sanctioned recipient of a document by examining a recovered copy. We propose three coding methods, describe one in detail, and present experimental results showing that our identification techniques are highly reliable, even after documents have been photocopied.
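
    The identification side of the scheme can be pictured as nearest-codeword matching: each sanctioned recipient is assigned a distinct codeword, and a recovered copy is attributed to the recipient whose codeword is closest, in Hamming distance, to the bits extracted from the copy. The sketch below illustrates only that idea; the paper's actual embedding and extraction methods are not modelled, and the recipient names and codewords are made up.

        import java.util.HashMap;
        import java.util.Map;

        /**
         * Identification by nearest codeword: attribute a recovered copy to the
         * recipient whose assigned codeword is closest in Hamming distance to the
         * bits extracted from the copy. The embedding itself is not modelled here.
         */
        public class DocumentMarkIdSketch {

            public static String identify(boolean[] extracted, Map<String, boolean[]> codewords) {
                String best = null;
                int bestDist = Integer.MAX_VALUE;
                for (Map.Entry<String, boolean[]> e : codewords.entrySet()) {
                    int d = 0;
                    for (int i = 0; i < extracted.length; i++) {
                        if (extracted[i] != e.getValue()[i]) d++;
                    }
                    if (d < bestDist) { bestDist = d; best = e.getKey(); }
                }
                return best;   // most likely recipient, even if some bits were corrupted by copying
            }

            public static void main(String[] args) {
                Map<String, boolean[]> codewords = new HashMap<>();     // made-up recipients/codewords
                codewords.put("alice", new boolean[]{true, false, true, true, false, false, true, false});
                codewords.put("bob",   new boolean[]{false, true, false, true, true, false, false, true});
                // Bits extracted from a photocopied document, with one bit flipped by noise.
                boolean[] extracted =  {true, false, true, true, false, false, true, true};
                System.out.println("Recovered copy attributed to: " + identify(extracted, codewords));
            }
        }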

    Interfaces

    In this course, coupled problems with interfaces are considered. Some applications and examples are discussed first. Then, interfaces are defined and classified into three categories. Numerical modeling of interfaces is a central aspect of this presentation. These theoretically oriented parts are followed by numerical simulations using an open-source fluid-structure interaction benchmark code based on the finite element library deal.II. For joint coding, a Docker image was installed on Qarnot and repl.it for cloud computing. Course held at the CSMA Junior section workshop ahead of the 14th WCCM & ECCOMAS Congress 202

    A Perceptually Based Comparison of Image Similarity Metrics

    The assessment of how well one image matches another forms a critical component both of models of human visual processing and of many image analysis systems. Two of the most commonly used norms for quantifying image similarity are L1 and L2, which are specific instances of the Minkowski metric. However, there is often not a principled reason for selecting one norm over the other. One way to address this problem is to examine whether one metric captures the perceptual notion of image similarity better than the other. This can be used to derive inferences regarding the similarity criteria the human visual system uses, as well as to evaluate and design metrics for use in image-analysis applications. With this goal, we examined perceptual preferences for images retrieved on the basis of the L1 versus the L2 norm. These images were either small fragments without recognizable content, or larger patterns with recognizable content created by vector quantization. In both conditions the participants showed a small but consistent preference for images matched with the L1 metric. These results suggest that, in the domain of natural images of the kind we have used, the L1 metric may better capture human notions of image similarity.
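
    For reference, the two norms compared in the study reduce to the pixelwise computations sketched below, together with the nearest-patch retrieval they drive. This is a generic sketch; the patch representation and the absence of any normalization are assumptions.

        /** Minkowski-metric distances used for patch matching: L1 and L2. */
        public class ImageDistanceSketch {

            /** L1: sum of absolute pixel differences. */
            public static double l1(double[] a, double[] b) {
                double sum = 0;
                for (int i = 0; i < a.length; i++) sum += Math.abs(a[i] - b[i]);
                return sum;
            }

            /** L2: square root of the sum of squared pixel differences. */
            public static double l2(double[] a, double[] b) {
                double sum = 0;
                for (int i = 0; i < a.length; i++) {
                    double d = a[i] - b[i];
                    sum += d * d;
                }
                return Math.sqrt(sum);
            }

            /** Index of the library patch closest to the query under the chosen norm. */
            public static int nearest(double[] query, double[][] library, boolean useL1) {
                int best = -1;
                double bestDist = Double.POSITIVE_INFINITY;
                for (int i = 0; i < library.length; i++) {
                    double d = useL1 ? l1(query, library[i]) : l2(query, library[i]);
                    if (d < bestDist) { bestDist = d; best = i; }
                }
                return best;
            }
        }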

    Performance evaluation of the Mojette erasure code for fault-tolerant distributed hot data storage

    Packet erasure codes are today a real alternative to replication in fault-tolerant distributed storage systems. In this paper, we propose the Mojette erasure code, based on the Mojette transform, formerly a tomographic tool. Coding and decoding performance is compared with the Reed-Solomon code implementations of the two open-source reference libraries, namely ISA-L and Jerasure 2.0. Results clearly show better performance for our discrete geometric code compared with the classical algebraic approaches: a gain factor of up to 22 is measured in comparison with Intel's ISA-L. These performance levels make it possible to deploy the Mojette erasure code for hot-data distributed storage and I/O-intensive applications.
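
    For background, the Mojette transform underlying the code is a discrete Radon transform: for a direction (p, q) with p and q coprime, each projection bin sums the pixels lying on one line b = l*p - k*q. The sketch below computes one forward projection under that common convention; it is not the implementation benchmarked in the paper, and sign conventions vary between papers.

        /** Dirac Mojette forward projection for one direction (p, q), with gcd(p, q) = 1. */
        public class MojetteSketch {

            /**
             * image[l][k] holds the pixel at row l, column k.
             * Returns the projection bins for a direction with p >= 0 and q >= 0,
             * where pixel (k, l) is accumulated into bin b = l*p - k*q.
             */
            public static long[] project(int[][] image, int p, int q) {
                int rows = image.length, cols = image[0].length;
                int minB = -(cols - 1) * q;                      // most negative bin index
                int bins = (rows - 1) * p + (cols - 1) * q + 1;  // number of bins for this direction
                long[] proj = new long[bins];
                for (int l = 0; l < rows; l++) {
                    for (int k = 0; k < cols; k++) {
                        proj[l * p - k * q - minB] += image[l][k];
                    }
                }
                return proj;
            }
        }

    In the erasure-code setting, each projection acts, roughly speaking, as one encoded packet, and the data are recoverable from any subset of projections that satisfies the Mojette reconstruction (Katz) criterion.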

    Near-capacity dirty-paper code design: a source-channel coding approach

    This paper examines near-capacity dirty-paper code designs based on source-channel coding. We first point out that the performance loss in signal-to-noise ratio (SNR) in our code designs can be broken into the sum of the packing loss from channel coding and a modulo loss, which is a function of the granular loss from source coding and the target dirty-paper coding rate (or SNR). We then examine practical designs by combining trellis-coded quantization (TCQ) with both systematic and nonsystematic irregular repeat-accumulate (IRA) codes. Like previous approaches, we exploit the extrinsic information transfer (EXIT) chart technique for capacity-approaching IRA code design; but unlike previous approaches, we emphasize the role of strong source coding to achieve as much granular gain as possible using TCQ. Instead of systematic doping, we employ two relatively shifted TCQ codebooks, where the shift is optimized (via tuning the EXIT charts) to facilitate the IRA code design. Our designs synergistically combine TCQ with IRA codes so that they work together as well as they do individually. By bringing together TCQ (the best quantizer from the source coding community) and EXIT chart-based IRA code designs (the best from the channel coding community), we are able to approach the theoretical limit of dirty-paper coding. For example, at 0.25 bit per symbol (b/s), our best code design (with 2048-state TCQ) performs only 0.630 dB away from the Shannon capacity.
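
    The modulo loss mentioned above is easiest to see in the one-dimensional baseline scheme sketched below, where interference known at the encoder is cancelled by modulo precoding. This is not the paper's TCQ + IRA construction; the modulo interval, codebook, and SNR are assumptions chosen only to illustrate the mechanism.

        import java.util.Random;

        /**
         * One-dimensional modulo-lattice dirty-paper sketch (Costa/Tomlinson-Harashima
         * style): the encoder folds (codeword - alpha*interference) back into a modulo
         * cell, so the transmit power stays bounded no matter how strong the known
         * interference is, at the price of a residual "modulo loss".
         */
        public class DirtyPaperScalarSketch {
            static final double DELTA = 8.0;                     // modulo interval width (assumption)

            /** Centered modulo into [-DELTA/2, DELTA/2). */
            static double mod(double x) {
                return x - DELTA * Math.floor(x / DELTA + 0.5);
            }

            public static void main(String[] args) {
                Random rng = new Random(7);
                double snr = 15.0;                               // linear SNR (assumption)
                double alpha = snr / (1.0 + snr);                // MMSE scaling factor
                double noiseStd = Math.sqrt(DELTA * DELTA / 12.0 / snr);
                double[] codebook = {-2.0, 2.0};                 // 1-bit codebook inside the modulo cell

                int errors = 0, trials = 200_000;
                for (int t = 0; t < trials; t++) {
                    int bit = rng.nextInt(2);
                    double v = codebook[bit];
                    double s = 100.0 * (rng.nextDouble() - 0.5); // interference known only to the encoder
                    double x = mod(v - alpha * s);               // transmitted symbol, bounded by the cell
                    double y = x + s + rng.nextGaussian() * noiseStd;
                    double r = mod(alpha * y);                   // receiver: scale, then fold back into the cell
                    int decoded = r >= 0 ? 1 : 0;                // nearest codeword (circular) decision
                    if (decoded != bit) errors++;
                }
                System.out.printf("symbol error rate = %.5f (interference far stronger than the signal)%n",
                        (double) errors / trials);
            }
        }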

    De Bruijn Structured Illumination Studying Within The Task Of Restoring Hands Relief

    In the course of studies on the problem of restoring hand relief using de Bruijn structured illumination, methods for solving this problem are proposed: a method of simple quantitative detection of Hough segments on the skin of the hand, a method of qualitative visual evaluation of the effectiveness of the color palette using the dominant color, and a method based on the weight coefficients of the components of the color palette. The proposed methods make it possible to determine quantitatively the optimal choice of the color scheme for generating the de Bruijn bands when illuminating the hand in order to restore its relief. The work describes the stages of this study, leading from visual observation to a full quantitative calculation of the quality of the calibration illuminations, with the possibility of their optimal choice. In the course of experiments and observations, requirements for the technical support of the research were developed to achieve the best quality of the hand images. The paper also presents a high-speed de Bruijn sequence generation algorithm based on Lyndon words, which avoids the search for Euler chains or Hamiltonian cycles for various kinds of de Bruijn graphs. With its help, structured light patterns with various color schemes were generated for further analysis of their use in 3D reconstruction systems for hands.
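
    The Lyndon-word generation step mentioned above corresponds to the classical FKM (Fredricksen-Kessler-Maiorana) construction: a de Bruijn sequence B(k, n) is the concatenation, in lexicographic order, of all Lyndon words over k symbols whose length divides n, so no Euler-chain or Hamiltonian-cycle search is required. The sketch below illustrates that construction; it is not the paper's generator, and the structured-light colour mapping is only hinted at in a comment.

        import java.util.ArrayList;
        import java.util.List;

        /**
         * FKM construction of a de Bruijn sequence B(k, n): concatenate, in
         * lexicographic order, all Lyndon words over {0, ..., k-1} whose length
         * divides n.
         */
        public class DeBruijnSketch {

            public static List<Integer> deBruijn(int k, int n) {
                List<Integer> sequence = new ArrayList<>();
                int[] a = new int[n + 1];                 // a[1..n] holds the current pre-necklace
                generate(1, 1, k, n, a, sequence);
                return sequence;                          // length k^n; symbols are alphabet indices
            }

            private static void generate(int t, int p, int k, int n, int[] a, List<Integer> out) {
                if (t > n) {
                    if (n % p == 0) {                     // a[1..p] is a Lyndon word whose length divides n
                        for (int i = 1; i <= p; i++) out.add(a[i]);
                    }
                } else {
                    a[t] = a[t - p];
                    generate(t + 1, p, k, n, a, out);
                    for (int j = a[t - p] + 1; j < k; j++) {
                        a[t] = j;
                        generate(t + 1, t, k, n, a, out);
                    }
                }
            }

            public static void main(String[] args) {
                // Example: the binary de Bruijn sequence of order 3 -> 0 0 0 1 0 1 1 1
                // (cyclically, every 3-symbol window appears exactly once).
                System.out.println(deBruijn(2, 3));
                // For structured light, k would be the number of stripe colours, and each
                // window of n consecutive colours uniquely identifies its position.
            }
        }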