
    Multispectral Palmprint Encoding and Recognition

    Palmprints are emerging as a new entity in multi-modal biometrics for human identification and verification. Multispectral palmprint images captured in the visible and infrared spectrum not only contain the wrinkles and ridge structure of a palm, but also the underlying pattern of veins, making them a highly discriminating biometric identifier. In this paper, we propose a feature encoding scheme for robust and highly accurate representation and matching of multispectral palmprints. To facilitate compact storage of the feature, we design a binary hash table structure that allows for efficient matching in large databases. Comprehensive experiments for both identification and verification scenarios are performed on two public datasets -- one captured with a contact-based sensor (PolyU dataset), and the other with a contact-free sensor (CASIA dataset). Recognition results in various experimental setups show that the proposed method consistently outperforms existing state-of-the-art methods. The error rates achieved by our method (0.003% on PolyU and 0.2% on CASIA) are the lowest reported in the literature on both datasets and clearly indicate the viability of the palmprint as a reliable and promising biometric. All source code is publicly available.
    Comment: A preliminary version of this manuscript was published in ICCV 2011: Z. Khan, A. Mian and Y. Hu, "Contour Code: Robust and Efficient Multispectral Palmprint Encoding for Human Recognition", International Conference on Computer Vision, 2011. MATLAB code available: https://sites.google.com/site/zohaibnet/Home/code
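    The binary encoding and hash-table matching described above can be illustrated with a small sketch. This is not the authors' Contour Code: the thresholding rule, code size, and noise level are illustrative assumptions, and matching is shown as a simple Hamming-distance nearest neighbour rather than the paper's hash table lookup.

```python
# Minimal sketch: binary feature codes matched by Hamming distance.
# All names and parameters here are illustrative, not the paper's.
import numpy as np

def binarize_features(responses):
    """Encode filter responses as a binary code: one bit per pixel,
    set when the response exceeds its median (illustrative rule)."""
    return (responses > np.median(responses)).astype(np.uint8).ravel()

def hamming_distance(code_a, code_b):
    """Fraction of disagreeing bits between two binary codes."""
    return np.count_nonzero(code_a != code_b) / code_a.size

# Identification: compare a probe code against a gallery of enrolled codes.
rng = np.random.default_rng(0)
gallery = [binarize_features(rng.normal(size=(64, 64))) for _ in range(5)]
probe = gallery[2] ^ (rng.random(gallery[2].size) < 0.05)  # 5% bit noise
best = min(range(len(gallery)), key=lambda i: hamming_distance(probe, gallery[i]))
print("best match:", best)  # expected: 2
```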

    On Aliasing Effects in the Contourlet Filter Bank

    Publication in the conference proceedings of EUSIPCO, Florence, Italy, 2006

    Wavelets, approximation, and compression

    Over the last decade or so, wavelets have had a growing impact on signal processing theory and practice, both because of their unifying role and because of their successes in applications. Filter banks, which lie at the heart of wavelet-based algorithms, have become standard signal processing operators, used routinely in applications ranging from compression to modems. The contributions of wavelets have often been in the subtle interplay between discrete-time and continuous-time signal processing. The purpose of this article is to look at wavelet advances from a signal processing perspective. In particular, approximation results are reviewed, and their implications for compression algorithms are discussed. New constructions and open problems are also addressed.
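    The approximation results this article reviews are commonly demonstrated by nonlinear approximation: keeping only the largest-magnitude wavelet coefficients and discarding the rest. A minimal sketch, assuming the PyWavelets package; the wavelet, decomposition level, keep ratio, and the random stand-in image are all illustrative choices.

```python
# Minimal sketch of wavelet nonlinear approximation for compression.
import numpy as np
import pywt

def wavelet_approximate(image, wavelet="db4", level=3, keep_ratio=0.05):
    """Keep the K largest-magnitude coefficients, zero the rest, reconstruct."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    k = int(keep_ratio * arr.size)
    thresh = np.partition(np.abs(arr).ravel(), -k)[-k]  # K-th largest magnitude
    arr[np.abs(arr) < thresh] = 0.0
    coeffs = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
    return pywt.waverec2(coeffs, wavelet)

image = np.random.rand(256, 256)  # stand-in for a natural image
approx = wavelet_approximate(image)[:256, :256]
psnr = 10 * np.log10(1.0 / np.mean((image - approx) ** 2))
print(f"PSNR with 5% of coefficients: {psnr:.1f} dB")
```

    On a natural image, whose wavelet coefficients decay much faster than those of this random stand-in, a small fraction of coefficients typically suffices for high reconstruction quality; that decay rate is exactly what the approximation theory in the article quantifies.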

    Vector quantization for efficient coding of upper subbands

    This paper examines the application of vector quantization (VQ) to exploit both intra-band and inter-band redundancy in subband coding. The focus here is on the exploitation of inter-band dependency. It is shown that VQ is particularly suitable and effective for coding the upper subbands. Three subband decomposition-based VQ coding schemes are proposed here to exploit the inter-band dependency by making full use of the extra flexibility of the VQ approach over scalar quantization. A quadtree-based variable-rate VQ (VRVQ) scheme which takes full advantage of the intra-band and inter-band redundancy is proposed first. Then, a more easily implementable alternative based on an efficient block-based edge estimation technique is employed to overcome the implementational barriers of the first scheme. Finally, a predictive VQ scheme formulated in the context of finite-state VQ is proposed to further exploit the dependency among different subbands. A VRVQ scheme proposed elsewhere is extended to provide an efficient bit allocation procedure. Simulation results show that these three hybrid techniques have advantages, in terms of peak signal-to-noise ratio (PSNR) and complexity, over other existing subband-VQ approaches.
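    As a rough illustration of plain codebook VQ on an upper subband (not the paper's variable-rate or finite-state schemes), one can train a k-means codebook on blocks of coefficients and encode each block as a codeword index. The block size, codebook size, and the Laplacian-distributed stand-in subband are assumptions.

```python
# Minimal sketch: fixed-rate VQ of an upper subband via a k-means codebook.
import numpy as np
from scipy.cluster.vq import kmeans2, vq

def blocks(subband, size=4):
    """Split a 2-D subband into flattened size x size blocks."""
    h, w = subband.shape
    return (subband[:h - h % size, :w - w % size]
            .reshape(h // size, size, w // size, size)
            .swapaxes(1, 2).reshape(-1, size * size))

subband = np.random.laplace(scale=2.0, size=(128, 128))  # upper-subband stand-in
vectors = blocks(subband)
codebook, _ = kmeans2(vectors, k=64, minit="++", seed=0)  # train 64-word codebook
indices, _ = vq(vectors, codebook)                        # encode: one index per block
decoded = codebook[indices]                               # decode: table lookup
rate = np.log2(64) / 16                                   # bits per coefficient
print(f"rate = {rate:.3f} bpp, MSE = {np.mean((vectors - decoded) ** 2):.3f}")
```

    The paper's contribution lies in going beyond this fixed-rate baseline: varying the rate per block (quadtree VRVQ) and conditioning the codebook on neighbouring subbands (predictive/finite-state VQ).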

    A Panorama on Multiscale Geometric Representations, Intertwining Spatial, Directional and Frequency Selectivity

    The richness of natural images makes the quest for optimal representations in image processing and computer vision challenging. This observation has not prevented the design of image representations which trade off between efficiency and complexity, while achieving accurate rendering of smooth regions as well as reproducing faithful contours and textures. The most recent ones, proposed in the past decade, share a hybrid heritage highlighting the multiscale and oriented nature of edges and patterns in images. This paper presents a panorama of the aforementioned literature on decompositions in multiscale, multi-orientation bases or dictionaries. They typically exhibit redundancy to improve sparsity in the transformed domain, and sometimes its invariance with respect to simple geometric deformations (translation, rotation). Oriented multiscale dictionaries extend traditional wavelet processing and may offer rotation invariance. Highly redundant dictionaries require specific algorithms to simplify the search for an efficient (sparse) representation. We also discuss the extension of multiscale geometric decompositions to non-Euclidean domains such as the sphere or arbitrary meshed surfaces. The etymology of panorama suggests an overview, based on a choice of partially overlapping "pictures". We hope that this paper will contribute to the appreciation and apprehension of a stream of current research directions in image understanding.
    Comment: 65 pages, 33 figures, 303 references.
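    One of the greedy algorithms alluded to for finding sparse representations in highly redundant dictionaries is matching pursuit. A minimal sketch over a random overcomplete dictionary, purely illustrative and not tied to any specific dictionary covered by the survey:

```python
# Minimal sketch of matching pursuit over a redundant dictionary.
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms=10):
    """Greedily pick the unit-norm atom most correlated with the residual."""
    residual = signal.copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_atoms):
        correlations = dictionary.T @ residual
        k = np.argmax(np.abs(correlations))
        coeffs[k] += correlations[k]
        residual -= correlations[k] * dictionary[:, k]
    return coeffs, residual

rng = np.random.default_rng(1)
D = rng.normal(size=(64, 256))              # 4x overcomplete dictionary
D /= np.linalg.norm(D, axis=0)              # unit-norm atoms (required above)
x = D[:, [3, 40, 200]] @ np.array([2.0, -1.5, 1.0])  # a 3-sparse signal
coeffs, res = matching_pursuit(x, D, n_atoms=10)
print("residual energy:", np.linalg.norm(res) ** 2)  # near zero
```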

    Reliable and Efficient coding Technique for Compression of Medical Images based on Region of Interest using Directional Filter Banks

    Medical images carry a huge amount of vital information, and it is necessary to compress them without losing that information. The proposed algorithm presents a new coding technique for medical images of different modalities, based on the contourlet transform. Recent reports on natural image compression have shown the superior performance of the contourlet transform, an extension of the wavelet transform to two dimensions using nonseparable and directional filter banks. In medical images, the diagnostically relevant region of interest (ROI) is far more important than the other regions. Those portions are therefore segmented from the whole image using the fuzzy C-means (FCM) clustering technique. The contourlet transform, comprising a Laplacian pyramid (LP) followed by directional filter banks, is then applied to the ROI. The regions of less significance are compressed using the discrete wavelet transform, and finally a modified embedded zerotree wavelet algorithm, which uses six symbols instead of the four used in Shapiro's EZW, is applied to the resultant image, yielding better PSNR and a higher compression ratio.
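    A minimal sketch of the FCM segmentation step on pixel intensities, assuming a two-cluster setup (ROI vs. background); the fuzziness exponent, iteration count, synthetic image, and the rule picking the brighter cluster as ROI are all illustrative assumptions.

```python
# Minimal sketch of fuzzy C-means clustering on 1-D pixel intensities.
import numpy as np

def fuzzy_c_means(values, c=2, m=2.0, n_iter=50):
    """Cluster 1-D values into c fuzzy clusters; returns memberships and centers."""
    rng = np.random.default_rng(0)
    u = rng.random((len(values), c))
    u /= u.sum(axis=1, keepdims=True)                # memberships sum to one
    for _ in range(n_iter):
        w = u ** m                                   # fuzzified memberships
        centers = (w.T @ values) / w.sum(axis=0)     # weighted cluster means
        d = np.abs(values[:, None] - centers[None, :]) + 1e-12
        u = 1.0 / d ** (2 / (m - 1))                 # standard FCM update
        u /= u.sum(axis=1, keepdims=True)
    return u, centers

image = np.random.rand(64, 64)                       # stand-in for a medical image
u, centers = fuzzy_c_means(image.ravel())
roi_mask = (u.argmax(axis=1) == centers.argmax()).reshape(image.shape)
print("ROI pixels:", roi_mask.sum())
```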

    State-of-the-Art and Trends in Scalable Video Compression with Wavelet Based Approaches

    Scalable Video Coding (SVC) differs from traditional single-point approaches mainly because it allows encoding, in a single bit stream, several working points corresponding to different qualities, picture sizes and frame rates. This work describes the current state of the art in SVC, focusing on wavelet-based motion-compensated approaches (WSVC). It reviews individual components that have been designed to address the problem over the years and how such components are typically combined to achieve meaningful WSVC architectures. Coding schemes which differ mainly in the space-time order in which the wavelet transforms operate are compared, discussing the strengths and weaknesses of the resulting implementations. An evaluation of the achievable coding performance is provided, considering the reference architectures studied and developed by ISO/MPEG in its exploration of WSVC. The paper also attempts to draw a list of major differences between wavelet-based solutions and the SVC standard jointly targeted by ITU and ISO/MPEG. Major emphasis is devoted to a promising WSVC solution, named STP-tool, which presents architectural similarities with the SVC standard. The paper ends by drawing some evolution trends for WSVC systems and giving insights on video coding applications which could benefit from a wavelet-based approach.
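    The temporal wavelet filtering at the heart of such WSVC schemes can be sketched with an unnormalized Haar lifting step over frame pairs. Motion compensation is omitted here for brevity, so this illustrates only the "t+2D" ordering, not a full motion-compensated temporal filtering (MCTF) implementation; the frame sequence is synthetic.

```python
# Minimal sketch of a temporal Haar lifting step (MCTF without motion).
import numpy as np

def temporal_haar(frames):
    """Split a frame sequence into temporal low- and high-pass bands.
    In true MCTF, frames would be motion-warped before predict/update."""
    even, odd = frames[0::2], frames[1::2]
    high = odd - even            # predict: detail between frame pairs
    low = even + 0.5 * high      # update: temporally filtered average
    return low, high

frames = np.cumsum(np.random.rand(8, 32, 32), axis=0)  # slowly varying "video"
low, high = temporal_haar(frames)
# The low band can be decomposed again for dyadic temporal scalability,
# and each band is then coded with a spatial wavelet transform.
print("high-band energy per pixel:", np.mean(high ** 2))
```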

    The Wavelet Transform for Image Processing Applications


    Localized temporal decorrelation for video compression

    Many of the current video compression algorithms perform analysis and coding operations in a block-wise manner. Most of them use a motion-compensated DCT algorithm as the basis. Many other codecs, mostly academic and in their infancy, known as second-generation techniques, utilize region-based, contour-based and model-based techniques. Unfortunately, these second-generation methods have not been successful in gaining widespread acceptance in either the standards or the consumer world, as many of them require specialized, computationally intensive software and/or hardware. Due to these shortcomings, current block-based methods have been fine-tuned to get better performance at even very low bit rates (sub 64 kbps). Block-based motion estimation is the principal mechanism used to compensate for motion between frames in an image sequence. Although current algorithms are fast and quite effective, they fail in compensating for uncovered background areas in a frame. Solutions such as hierarchical motion estimation schemes do not work very well, since there is no reference in past, and in some cases future, frames for an uncovered background, resulting in the block being transmitted as an intra block (which requires the most bandwidth among all types of blocks). This thesis introduces an intermediate stage which compensates for these isolated uncovered areas. The intermediate stage uses a localized decorrelation technique to reduce frame-to-frame temporal redundancies. The algorithm can be easily incorporated into existing systems to achieve an even better performance, and can be easily extended as a scalable video coding architecture. Experimental results show that the algorithm, used in conjunction with motion estimation, is quite effective in reducing temporal redundancies.
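    A minimal sketch of the full-search block matching underlying such motion estimation (the thesis's localized decorrelation stage itself is not reproduced here); the block size, search range, and synthetic frames are illustrative assumptions.

```python
# Minimal sketch of full-search block matching by sum of absolute differences.
import numpy as np

def motion_vector(ref, cur, y, x, block=8, search=4):
    """Best (dy, dx) for the current-frame block at (y, x), searched in ref."""
    target = cur[y:y + block, x:x + block]
    best_sad, best_mv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if 0 <= yy and 0 <= xx and yy + block <= ref.shape[0] \
                    and xx + block <= ref.shape[1]:
                sad = np.abs(ref[yy:yy + block, xx:xx + block] - target).sum()
                if sad < best_sad:
                    best_sad, best_mv = sad, (dy, dx)
    return best_mv, best_sad

rng = np.random.default_rng(2)
ref = rng.random((64, 64))
cur = np.roll(ref, shift=(2, -1), axis=(0, 1))  # frame shifted down 2, left 1
mv, sad = motion_vector(ref, cur, y=24, x=24)
print("motion vector:", mv)  # expect (-2, 1): match sits 2 rows up, 1 right in ref
```

    A block in an uncovered background area finds no good match under any (dy, dx), which is exactly the failure case the thesis's intermediate decorrelation stage targets.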