6 research outputs found

    Issues in Implementing Block-Based Image Compression Techniques on Parallel MIMD Architectures

    No full text
    Although block-based image compression techniques seem straightforward to implement on parallel MIMD architectures, problems can arise from architectural restrictions on such parallel machines (e.g. memory constraints on distributed-memory architectures). In this paper we discuss possible solutions to such problems as they occur in different image compression techniques. Experimental results are included for adaptive wavelet block coding and fractal compression. Keywords: Image Compression, Parallel Algorithms, MIMD Architectures 1 INTRODUCTION One of the ironies of image compression research is that as data rates come down, the computational complexity of the algorithms increases. This leads to long execution times when compressing an image or image sequence, and it shows the "need for speed" in image and video compression. Unfortunately, many compression techniques demand execution times that are not achievable on a single serial microprocessor. The …
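    The memory constraint mentioned in the abstract arises because a distributed-memory processor element cannot hold the whole image. A minimal sketch of the usual remedy, tiling the image into blocks and dealing them out so each processor stores only its share (function names here are illustrative, not from the paper; a real implementation would move the blocks with MPI message passing):

```python
def tile_blocks(width, height, bs):
    """Enumerate (x, y) origins of bs x bs blocks covering the image."""
    return [(x, y) for y in range(0, height, bs) for x in range(0, width, bs)]

def partition_round_robin(blocks, num_procs):
    """Assign block i to processor i % num_procs (round-robin)."""
    parts = [[] for _ in range(num_procs)]
    for i, b in enumerate(blocks):
        parts[i % num_procs].append(b)
    return parts

blocks = tile_blocks(512, 512, 8)        # 64 x 64 = 4096 blocks
parts = partition_round_robin(blocks, 16)
# Each of the 16 processors now holds only 256 block descriptors,
# which is what keeps per-node memory bounded.
```

    Round-robin assignment is only one possible strategy; contiguous strips reduce communication for techniques whose blocks need neighbouring pixels.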

    Parallel algorithms for fractal image coding on MIMD architectures

    In this paper, parallel algorithms for fractal image coding on MIMD architectures are introduced and discussed. It turns out that the crucial point for the choice of a suitable parallelization strategy is the memory capacity of a processor element. Experimental results show a linear speedup for the proposed algorithms. 1 Introduction Fractal image coding ([3], [10], [5]) has generated much interest in the image compression community as a competitor [4] to well-established compression techniques (e.g. DCT/JPEG [14]) and newly emerging technologies (e.g. wavelets [2]). One of the main drawbacks of conventional fractal image coding is its high encoding complexity (decoding complexity is much lower) compared to, e.g., transform coding. On the other hand, it offers good image quality after reconstruction due to its adaptive structure. For these reasons, the use of general-purpose high-performance computers seems appropriate in order to accelerate the execution speed of fractal image coding …
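    The encoding complexity the abstract refers to comes from searching a large pool of domain blocks for the best match to each range block. A hedged sketch of that compute-heavy kernel, which is what gets distributed across processors (toy data and names, not the paper's actual algorithm; real coders also fit contrast/brightness and try block transforms):

```python
def mse(a, b):
    """Mean squared error between two equally sized pixel lists."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def best_match(range_block, domain_pool):
    """Index of the domain block with the lowest error for this range block."""
    errs = [mse(range_block, d) for d in domain_pool]
    return min(range(len(errs)), key=errs.__getitem__)

def encode_partition(range_blocks, domain_pool):
    """One worker's job: map each of its local range blocks to a domain index."""
    return [best_match(r, domain_pool) for r in range_blocks]

domains = [[0, 0, 0, 0], [10, 10, 10, 10], [0, 5, 10, 15]]
ranges = [[9, 11, 10, 10], [1, 0, 0, 1]]
print(encode_partition(ranges, domains))  # -> [1, 0]
```

    The memory question the abstract raises is visible here: if a processor cannot hold the full domain pool, either the pool must circulate among processors or each worker searches only a local subset.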

    Towards a Standardised Testsuite to Assess Fingerprint Matching Robustness: The StirMark Toolkit – Cross-Feature Type Comparisons

    We propose to establish a standardised tool for fingerprint recognition robustness assessment which is able to simulate a wide class of acquisition conditions, is applicable to any given dataset, and is also of potential interest in forensic analysis. As an example, StirMark image manipulations (developed in the context of watermarking robustness assessment) are applied to fingerprint data to generate test data for robustness evaluations, interpreting certain image manipulations as highly related to realistic fingerprint acquisition conditions. Experimental results involving three types of fingerprint features and matching schemes (i.e. correlation-based, ridge feature-based, and minutiae-based) applied to FVC2004 data underline the need for standardised testing and a corresponding simulation toolset.
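    A toy illustration of the idea of generating perturbed test data from a clean image (this is not the StirMark toolkit itself; additive noise merely stands in for one simulated acquisition condition, and all names are hypothetical):

```python
import random

def add_noise(image, sigma, seed=0):
    """Return a copy of a grayscale image (rows of 0-255 ints) with
    Gaussian noise of standard deviation sigma added per pixel,
    clipped back to the 8-bit range."""
    rng = random.Random(seed)  # fixed seed for reproducible test data
    return [[min(255, max(0, round(p + rng.gauss(0, sigma)))) for p in row]
            for row in image]

clean = [[128] * 4 for _ in range(4)]   # a flat 4 x 4 "fingerprint"
noisy = add_noise(clean, sigma=8.0)     # perturbed copy for robustness tests
```

    A StirMark-style suite chains many such manipulations (rotation, stretching, cropping, noise) at graded strengths, which is what makes the resulting robustness scores comparable across matchers.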

    Experiments on Iris Biometric Template Protection

    No full text