
    A Survey of the methods on fingerprint orientation field estimation

    Fingerprint orientation field (FOF) estimation plays a key role in enhancing the performance of automated fingerprint identification systems (AFIS): accurate estimation of the FOF can markedly improve the performance of an AFIS. However, despite the enormous attention devoted to FOF estimation research in the past decades, accurate estimation of FOFs, especially for poor-quality fingerprints, remains a challenging task. In this paper, we review and categorize the large number of FOF estimation methods proposed in the specialized literature, with particular attention to the most recent work in this area. Broadly speaking, the existing FOF estimation methods can be grouped into three categories: gradient-based methods, mathematical model-based methods, and learning-based methods. Identifying and explaining the advantages and limitations of these FOF estimation methods is of fundamental importance for fingerprint identification, because only a full understanding of the nature of these methods can shed light on the most essential issues for FOF estimation. In this paper, we provide a comprehensive discussion and analysis of these methods concerning their advantages and limitations. We have also conducted experiments on a publicly available competition dataset to compare the performance of the most relevant algorithms and methods.
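    Since gradient-based approaches are the most widely used family mentioned in this survey, a minimal sketch of the classic block-wise gradient estimator is included below. The Sobel operator, block size, and angle-averaging scheme are illustrative assumptions, not a reproduction of any specific method from the paper.

```python
# Minimal sketch of the classic gradient-based fingerprint orientation
# field estimator: average doubled gradient angles per block, then take
# the perpendicular direction as the ridge orientation.
import numpy as np
from scipy import ndimage

def estimate_orientation_field(img: np.ndarray, block: int = 16) -> np.ndarray:
    """Return one ridge orientation (radians, in [0, pi)) per block x block patch."""
    img = img.astype(np.float64)
    # Pixel-wise gradients (Sobel is an illustrative choice).
    gx = ndimage.sobel(img, axis=1)
    gy = ndimage.sobel(img, axis=0)
    # Doubled-angle components so that opposite gradients reinforce each other.
    gxx = gx * gx - gy * gy
    gxy = 2.0 * gx * gy
    h, w = img.shape
    rows, cols = h // block, w // block
    orientation = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            sl = (slice(i * block, (i + 1) * block),
                  slice(j * block, (j + 1) * block))
            # Average the doubled-angle vector over the block, then halve.
            theta = 0.5 * np.arctan2(gxy[sl].sum(), gxx[sl].sum())
            # Ridge orientation is perpendicular to the dominant gradient.
            orientation[i, j] = (theta + np.pi / 2.0) % np.pi
    return orientation
```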

    3D minutiae extraction in 3D fingerprint scans.

    Traditionally, fingerprint image acquisition has been contact-based. However, conventional touch-based acquisition introduces problems such as distortions and deformations in the fingerprint image. The most recent technology for fingerprint acquisition is touchless, or 3D live scanning, which produces higher-quality fingerprint scans. However, new algorithms are needed to match 3D fingerprints. In this dissertation, a novel methodology is proposed to extract minutiae from 3D fingerprint scans; the output can be used for 3D fingerprint matching. The proposed method is based on curvature analysis of the surface and consists of the following steps: smoothing; computing the principal curvatures; detecting and tracing ridges and ravines; cleaning and connecting ridges and ravines; and detecting minutiae. First, the ridges and ravines are detected using curvature tensors. Then, the ridges and ravines are traced. Post-processing is performed to obtain clean and connected ridges and ravines based on the fingerprint pattern. Finally, minutiae are detected using concepts from graph theory. A quality map is also introduced for 3D fingerprint scans: since degraded areas may occur during the scanning process, especially at the edges of the fingerprint, it is critical to be able to identify such areas, and spurious minutiae can be filtered out after applying the quality map. The algorithm is applied to a 3D fingerprint database and the results are very encouraging. To the best of our knowledge, this is the first minutiae extraction methodology proposed for 3D fingerprint scans.
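    As an illustration of the curvature step only, the sketch below computes principal curvatures for a 3D scan represented as a depth map z = f(x, y). Treating the scan as a Monge patch and using finite differences is a simplifying assumption here, not the curvature-tensor formulation used in the dissertation.

```python
# Illustrative sketch: principal curvatures of a 3-D fingerprint surface
# given as a depth map z = f(x, y) (Monge patch, finite differences).
import numpy as np

def principal_curvatures(z: np.ndarray):
    """Return (k_max, k_min) arrays of principal curvatures for a depth map z."""
    zy, zx = np.gradient(z)          # first derivatives along y and x
    zxy, zxx = np.gradient(zx)       # d2z/dydx, d2z/dx2
    zyy, _ = np.gradient(zy)         # d2z/dy2
    denom = 1.0 + zx ** 2 + zy ** 2
    # Gaussian (K) and mean (H) curvature of the Monge patch.
    K = (zxx * zyy - zxy ** 2) / denom ** 2
    H = ((1 + zx ** 2) * zyy - 2 * zx * zy * zxy + (1 + zy ** 2) * zxx) \
        / (2.0 * denom ** 1.5)
    disc = np.sqrt(np.maximum(H ** 2 - K, 0.0))
    return H + disc, H - disc        # k_max, k_min

# Ridge (and ravine) candidates lie where the largest (smallest) principal
# curvature is a local extremum along its curvature direction; minutiae are
# then read off the endpoints and junctions of the traced, cleaned lines.
```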

    Intelligent X-ray imaging inspection system for the food industry.

    The inspection process is an important stage of a modern production factory. This research presents a generic X-ray imaging inspection system, applied to the detection of foreign bodies in a meat product for the food industry. The most important modules in the system are the image processing module and the high-level detection system. This research discusses the use of neural networks for image processing and fuzzy logic for the detection of potential foreign bodies found in X-ray images of chicken breast meat after the de-boning process. The meat product is passed under a solid-state X-ray sensor that acquires a dual-band two-dimensional image of the meat (a low- and a high-energy image). A series of image processing operations is applied to the acquired image (pre-processing, noise removal, contrast enhancement). The most important step of the image processing is the segmentation of the image into meaningful objects. The segmentation task is difficult due to the lack of clarity of the acquired X-ray images: the segmented image contains not only correctly identified foreign bodies but also regions caused by overlapping muscle in the meat, which appear very similar to foreign bodies in the X-ray image. A Hopfield neural network architecture was proposed for the segmentation of the dual-band X-ray image. A number of image processing measurements were made on each object (geometrical and grey-level-based statistical features), and these features were used as input to a fuzzy-logic-based high-level detection system whose function was to differentiate between bone and non-bone segmented regions. The results show that the system's performance is considerably improved over non-fuzzy, or crisp, methods. Possible noise affecting the system is also investigated. The proposed system proved to be robust and flexible while achieving a high level of performance. Furthermore, it is possible to use the same approach when analysing images from other application areas, from the automotive industry to medicine.
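    As a rough illustration of the kind of fuzzy-logic decision stage described above, the sketch below scores how bone-like a segmented region is from two hypothetical features (mean grey level and compactness) with a small Mamdani-style rule base. The actual features, membership functions and rules of the thesis are not reproduced here; ranges and polarities are illustrative.

```python
# Hedged sketch of a fuzzy-logic classifier for segmented regions: fuzzify
# per-object features, evaluate a few min/max rules, and defuzzify to a score.

def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function peaking at b on support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def bone_likelihood(mean_grey: float, compactness: float) -> float:
    """Return a score in [0, 1]; higher means the region is more bone-like."""
    # Fuzzify the inputs (ranges and intensity polarity are illustrative and
    # would be tuned to the actual sensor and normalisation used).
    grey_bone = tri(mean_grey, 0.0, 0.25, 0.55)
    grey_meat = tri(mean_grey, 0.45, 0.75, 1.0)
    shape_elongated = tri(compactness, 0.0, 0.3, 0.6)
    shape_blob = tri(compactness, 0.5, 0.8, 1.0)
    # Small Mamdani-style rule base: AND = min, OR = max.
    bone_evidence = max(min(grey_bone, shape_elongated),   # bone-like, elongated
                        min(grey_bone, shape_blob))        # bone-like blob
    muscle_evidence = min(grey_meat, shape_blob)           # overlapping muscle
    # Simple defuzzification into a single score.
    return bone_evidence / (bone_evidence + muscle_evidence + 1e-9)
```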

    Development of medical image/video segmentation via deep learning models

    Image segmentation has a critical role in medical diagnosis systems, as it is usually the initial stage and any error is propagated into the subsequent analysis. Certain challenges, including irregular borders, low image quality, small regions of interest (RoI) and complex structures such as overlapping cells, impede progress in medical image analysis. Deep learning-based algorithms have recently brought superior achievements in computer vision. However, there are limitations to their application in the medical domain, including data scarcity and the lack of models pretrained on medical data. This research addresses the issues that hinder the progress of deep learning methods on medical data. Firstly, the effectiveness of transfer learning from a model pretrained on dissimilar data is investigated. The model is improved by integrating feature maps from the frequency domain into the spatial feature maps of a Convolutional Neural Network (CNN). Training from scratch, and the challenges it entails, is explored as well. The proposed model outperforms state-of-the-art methods by 2.2% and 17% in Jaccard index for the tasks of lesion segmentation and dermoscopic feature segmentation, respectively. Furthermore, the proposed model shows significant improvement on noisy images without a preprocessing stage. Early stopping and dropout layers were used to tackle overfitting, and network hyper-parameters such as learning rate, weight initialization, kernel size, stride and normalization techniques were investigated to enhance learning performance. To extend the research to video segmentation, specifically left ventricular segmentation, the U-net deep architecture was modified. The small RoI and the confusion between overlapping organs are major challenges in MRI segmentation. The consistent motion of the left ventricle (LV) and the continuity of neighboring frames are important cues exploited in the proposed architecture. High-level features, including optical flow and contourlet coefficients, were used to add temporal information and an RoI module to the U-net model. The proposed model surpassed the original U-net model for LV segmentation by a 7% increase in Jaccard index.
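    Since all of the reported improvements above are quoted in the Jaccard index, a minimal reference implementation of that metric for binary segmentation masks is sketched below; the smoothing term is an implementation convenience and not part of the thesis's code.

```python
# Jaccard index (intersection over union) between two binary masks.
import numpy as np

def jaccard_index(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """IoU between two binary masks of the same shape, in [0, 1]."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    # eps keeps the ratio defined when both masks are empty.
    return float((intersection + eps) / (union + eps))
```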