
    Multispectral Palmprint Encoding and Recognition

    Palmprints are emerging as a new modality in multi-modal biometrics for human identification and verification. Multispectral palmprint images captured in the visible and infrared spectrum contain not only the wrinkles and ridge structure of a palm but also the underlying pattern of veins, making them a highly discriminating biometric identifier. In this paper, we propose a feature encoding scheme for robust and highly accurate representation and matching of multispectral palmprints. To facilitate compact storage of the feature, we design a binary hash table structure that allows efficient matching in large databases. Comprehensive experiments for both identification and verification scenarios are performed on two public datasets -- one captured with a contact-based sensor (PolyU dataset) and the other with a contact-free sensor (CASIA dataset). Recognition results in various experimental setups show that the proposed method consistently outperforms existing state-of-the-art methods. The error rates achieved by our method (0.003% on PolyU and 0.2% on CASIA) are the lowest reported in the literature on both datasets and clearly indicate the viability of palmprints as a reliable and promising biometric. All source code is publicly available.
    Comment: A preliminary version of this manuscript was published in ICCV 2011: Z. Khan, A. Mian and Y. Hu, "Contour Code: Robust and Efficient Multispectral Palmprint Encoding for Human Recognition", International Conference on Computer Vision, 2011. MATLAB code available: https://sites.google.com/site/zohaibnet/Home/code
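
    The Contour Code features and the exact layout of the binary hash table are not detailed in this abstract. As a rough sketch of the general idea (bucketing binary codes by a short key prefix and ranking the candidates within a bucket by Hamming distance), a minimal Python illustration could look as follows; the function names, the 16-bit key and the 256-bit code length are illustrative assumptions, not the authors' implementation.

        import numpy as np

        def hamming_distance(a, b):
            """Number of differing bits between two binary code vectors."""
            return int(np.count_nonzero(a != b))

        def build_hash_table(codes, key_bits=16):
            """Bucket gallery codes by the integer value of their first key_bits bits,
            so a probe is compared only against codes that share its key."""
            table = {}
            for idx, code in enumerate(codes):
                key = int("".join(map(str, code[:key_bits])), 2)
                table.setdefault(key, []).append(idx)
            return table

        def match(probe, codes, table, key_bits=16):
            """Return the gallery index in the probe's bucket with the smallest Hamming distance."""
            key = int("".join(map(str, probe[:key_bits])), 2)
            candidates = table.get(key, range(len(codes)))  # empty bucket: fall back to full search
            return min(candidates, key=lambda i: hamming_distance(probe, codes[i]))

        # Toy gallery of 1000 random 256-bit codes; a probe taken from the gallery matches itself.
        rng = np.random.default_rng(0)
        gallery = rng.integers(0, 2, size=(1000, 256), dtype=np.uint8)
        table = build_hash_table(gallery)
        print(match(gallery[42], gallery, table))  # 42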

    Open source tool for DSMs generation from high resolution optical satellite imagery. Development and testing of an OSSIM plug-in

    The fully automatic generation of digital surface models (DSMs) is still an open research issue. In recent years, computer vision algorithms have been introduced into photogrammetry to exploit their capabilities and efficiency in three-dimensional modelling. In this article, a new tool for fully automatic DSM generation from high-resolution satellite optical imagery is presented. In particular, a new iterative approach for obtaining quasi-epipolar images from the original stereo pairs has been defined and deployed. This approach is implemented in a new Free and Open Source Software (FOSS) package named Digital Automatic Terrain Extractor (DATE), developed at the Geodesy and Geomatics Division, University of Rome ‘La Sapienza’, and conceived as an Open Source Software Image Map (OSSIM) plug-in. DATE's key features include: achieving epipolarity in object space, thanks to the ground projection of the images (Ground quasi-Epipolar Imagery, GrEI) and the coarse-to-fine pyramidal scheme adopted; the use of computer vision algorithms to improve processing efficiency and make DSM generation fully automatic; and the free and open source nature of the developed code. The implemented plug-in was validated on two optical datasets, GeoEye-1 and the newer Pléiades high-resolution (HR) imagery, over the Trento (Northern Italy) test site. The DSMs, generated on the basis of the metadata rational polynomial coefficients only, without any ground control points, are compared to a reference lidar dataset in areas with different land use/land cover and morphology. The results obtained with the developed workflow are good in terms of statistical parameters (root mean square error of around 5 m for GeoEye-1 and around 4 m for Pléiades-HR imagery) and comparable with results obtained by other authors using different software on the same test site, while in terms of efficiency DATE outperforms most available commercial software. These first achievements indicate good potential for the developed plug-in, which in the near future will also be upgraded to process synthetic aperture radar and tri-stereo optical imagery.
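
    The abstract reports accuracy as the root mean square error of the generated DSMs against a reference lidar dataset. A minimal sketch of how such elevation-difference statistics can be computed, assuming the generated and reference surfaces are co-registered on the same grid, is given below; the helper name dsm_error_stats, the nodata convention and the synthetic data are hypothetical and do not reproduce DATE's actual validation procedure.

        import numpy as np

        def dsm_error_stats(dsm, reference, nodata=-9999.0):
            """Elevation-difference statistics between a generated DSM and a reference
            DSM sampled on the same grid; cells flagged as nodata in either raster are skipped."""
            valid = (dsm != nodata) & (reference != nodata)
            diff = dsm[valid] - reference[valid]
            return {
                "mean": float(diff.mean()),                  # systematic bias
                "std": float(diff.std()),                    # dispersion
                "rmse": float(np.sqrt(np.mean(diff ** 2))),  # overall error
                "nmad": float(1.4826 * np.median(np.abs(diff - np.median(diff)))),  # robust spread
            }

        # Toy example: a synthetic 100 x 100 reference surface and a "generated" DSM with ~4 m noise.
        rng = np.random.default_rng(1)
        reference = rng.uniform(200.0, 400.0, size=(100, 100))
        generated = reference + rng.normal(0.0, 4.0, size=reference.shape)
        print(dsm_error_stats(generated, reference))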

    Wavelets, ridgelets and curvelets on the sphere

    We present in this paper new multiscale transforms on the sphere, namely the isotropic undecimated wavelet transform, the pyramidal wavelet transform, the ridgelet transform and the curvelet transform. All of these transforms can be inverted, i.e., the original data can be exactly reconstructed from its coefficients in any of these representations. Several applications are described. We show how these transforms can be used in denoising, and especially in a Combined Filtering Method that uses both the wavelet and the curvelet transforms, thus benefiting from the advantages of each. An application to component separation from multichannel data mapped to the sphere is also described, in which we take advantage of moving to a wavelet representation.
    Comment: Accepted for publication in A&A. Manuscript with all figures can be downloaded at http://jstarck.free.fr/aa_sphere05.pd
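
    The spherical transforms described in the paper operate on full-sky maps; as a flat, one-dimensional analogue that illustrates the exact-reconstruction property of the isotropic undecimated wavelet transform (the input is recovered by simply summing the wavelet scales and the coarse residual), the following sketch implements an à trous scheme with the B3-spline kernel. The function names and the mirror boundary handling are illustrative choices, not the paper's implementation.

        import numpy as np

        def b3_smooth(signal, step):
            """One a trous smoothing pass with the B3-spline kernel [1, 4, 6, 4, 1] / 16,
            holes of size `step`, and mirror boundaries (assumes step << len(signal))."""
            kernel = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
            n = len(signal)
            out = np.zeros(n)
            for k, offset in enumerate((-2, -1, 0, 1, 2)):
                idx = np.arange(n) + offset * step
                idx = np.abs(idx)                                  # mirror at the left edge
                idx = np.where(idx >= n, 2 * (n - 1) - idx, idx)   # mirror at the right edge
                out += kernel[k] * signal[idx]
            return out

        def starlet_transform(signal, n_scales):
            """Isotropic undecimated (a trous) wavelet transform: wavelet scales plus
            the final smooth approximation; summing all outputs recovers the input."""
            c = np.asarray(signal, dtype=float)
            scales = []
            for j in range(n_scales):
                c_next = b3_smooth(c, step=2 ** j)
                scales.append(c - c_next)   # wavelet coefficients at scale j
                c = c_next
            scales.append(c)                # coarsest approximation
            return scales

        x = np.random.default_rng(2).normal(size=256)
        w = starlet_transform(x, n_scales=4)
        print(np.allclose(sum(w), x))   # True: exact reconstruction by simple summation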

    Two-photon imaging and analysis of neural network dynamics

    The glow of a starry night sky, the smell of a freshly brewed cup of coffee or the sound of ocean waves breaking on the beach are representations of the physical world that have been created by the dynamic interactions of thousands of neurons in our brains. How the brain mediates perceptions, creates thoughts, stores memories and initiates actions remains one of the most profound puzzles in biology, if not all of science. A key to a mechanistic understanding of how the nervous system works is the ability to analyze the dynamics of neuronal networks in the living organism in the context of sensory stimulation and behaviour. Dynamic brain properties have been fairly well characterized on the microscopic level of individual neurons and on the macroscopic level of whole brain areas, largely with the help of various electrophysiological techniques. However, our understanding of the mesoscopic level comprising local populations of hundreds to thousands of neurons (so-called 'microcircuits') remains comparably poor. In large part, this has been due to the technical difficulties involved in recording from large networks of neurons with single-cell spatial resolution and near-millisecond temporal resolution in the brains of living animals. In recent years, two-photon microscopy has emerged as a technique which meets many of these requirements and has thus become the method of choice for the interrogation of local neural circuits. Here, we review the state of research in the field of two-photon imaging of neuronal populations, covering the topics of microscope technology, suitable fluorescent indicator dyes, staining techniques and, in particular, analysis techniques for extracting relevant information from the fluorescence data. We expect that functional analysis of neural networks using two-photon imaging will help to decipher fundamental operational principles of neural microcircuits.
    Comment: 36 pages, 4 figures, accepted for publication in Reports on Progress in Physics
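
    The review's analysis techniques for fluorescence data are not spelled out in the abstract; one common first step in such pipelines is converting a raw fluorescence trace to dF/F0 against a slowly varying baseline. The sketch below uses a running low-percentile baseline with made-up parameter values and synthetic data; it is a generic illustration, not a procedure prescribed by the review.

        import numpy as np

        def delta_f_over_f(trace, fs, window_s=30.0, percentile=10.0):
            """Convert a raw fluorescence trace to dF/F0 using a running low-percentile
            baseline F0 (window length and percentile are illustrative choices)."""
            half = int(window_s * fs / 2)
            f0 = np.empty(len(trace))
            for t in range(len(trace)):
                lo, hi = max(0, t - half), min(len(trace), t + half + 1)
                f0[t] = np.percentile(trace[lo:hi], percentile)   # slowly varying baseline
            return (trace - f0) / f0

        # Toy example: a 60 s trace at 30 Hz with two simulated calcium transients on a flat baseline.
        fs = 30.0
        t = np.arange(0.0, 60.0, 1.0 / fs)
        transients = 40.0 * ((t > 20) * np.exp(-(t - 20) / 1.5) + (t > 45) * np.exp(-(t - 45) / 1.5))
        trace = 100.0 + transients + np.random.default_rng(3).normal(0.0, 2.0, size=t.size)
        dff = delta_f_over_f(trace, fs)
        print(round(float(dff.max()), 2))   # peak response, roughly 0.4 dF/F0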