
    Low-Quality Fingerprint Classification

    Fingerprint recognition systems mainly use minutiae point information. As shown in many previous research works, fingerprint images do not always have sufficient quality to be used by automatic fingerprint recognition systems. To tackle this challenge, this thesis focuses on very low-quality fingerprint images, which contain several well-known distortions such as dryness, wetness, physical damage, the presence of dots, and blurriness. We develop an efficient, high-accuracy deep neural network algorithm that recognizes such low-quality fingerprints. The experiments were conducted on a real low-quality fingerprint database, and the results show the high performance and robustness of the introduced deep network technique. The VGG16-based deep network achieves the highest performance of 93% for the dry class and the lowest of 84% for the blurred fingerprint class.
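The abstract describes classifying fingerprints into five distortion classes with a VGG16-based network. A minimal sketch of the final classification stage is shown below: a softmax head trained on feature vectors of the kind a VGG16 backbone would emit. The feature dimension, class clusters, and synthetic data are illustrative assumptions, not the thesis's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
classes = ["dry", "wet", "damaged", "dots", "blurred"]
n_per_class, dim = 40, 512  # 512-d features, as a VGG16-style backbone might give

# Synthetic "deep features": one Gaussian cluster per distortion class.
centers = rng.normal(size=(len(classes), dim))
X = np.vstack([c + 0.3 * rng.normal(size=(n_per_class, dim)) for c in centers])
y = np.repeat(np.arange(len(classes)), n_per_class)

# Train a softmax classification head with plain gradient descent on cross-entropy.
W = np.zeros((dim, len(classes)))
onehot = np.eye(len(classes))[y]
for _ in range(200):
    logits = X @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    W -= 0.01 * X.T @ (p - onehot) / len(X)

accuracy = float(((X @ W).argmax(axis=1) == y).mean())
```

In the real system the backbone itself would also be fine-tuned; this sketch only illustrates the multi-class decision layer over fixed features.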

    Recreating Fingerprint Images by Convolutional Neural Network Autoencoder Architecture

    Fingerprint recognition systems have been widely applied to provide accurate and reliable biometric identification of individuals. Deep learning, especially the Convolutional Neural Network (CNN), has achieved tremendous success in the field of computer vision for pattern recognition. Several approaches have been applied to reconstruct fingerprint images; however, these algorithms encounter problems with overlapping patterns and poor image quality. In this work, a convolutional neural network autoencoder is used to reconstruct fingerprint images. An autoencoder is a network trained to reproduce its input at its output, and its convolutional layers make it well suited for feature extraction. Four datasets of fingerprint images, collected from various real sources, have been used to demonstrate the robustness of the proposed architecture. These datasets include a fingerprint verification competition (FVC2004) database, which has been distorted. The proposed approach has been assessed by computing the cumulative match characteristic (CMC) between the reconstructed and the original features. We obtained promising identification rates on the four fingerprint datasets (Dataset I, Dataset II, Dataset III, Dataset IV) of 98.1%, 97%, 95.9%, and 95.02%, respectively, with the CNN autoencoder. The proposed architecture was tested and compared to other state-of-the-art methods, and the experimental results show that the proposed solution is suitable for recreating the complex content of fingerprint images.
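The core idea the abstract relies on is a network trained to reproduce its input through a narrow bottleneck. A minimal sketch follows, using a single linear encoder/decoder pair on random flattened 8x8 "patches" instead of the paper's convolutional architecture; all sizes and the synthetic data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 64))            # 200 flattened 8x8 patches
X = X @ rng.normal(size=(64, 64)) * 0.1   # correlate "pixels" so a bottleneck helps

d, k = 64, 16                             # input dim, bottleneck dim
W_enc = 0.1 * rng.normal(size=(d, k))
W_dec = 0.1 * rng.normal(size=(k, d))

def mse(A, B):
    return float(((A - B) ** 2).mean())

err_before = mse(X, (X @ W_enc) @ W_dec)
for _ in range(500):                      # gradient descent on reconstruction MSE
    Z = X @ W_enc                         # encode
    R = Z @ W_dec                         # decode (reconstruct)
    G = 2 * (R - X) / X.size              # dMSE/dR
    W_dec -= 0.5 * Z.T @ G
    W_enc -= 0.5 * X.T @ (G @ W_dec.T)
err_after = mse(X, (X @ W_enc) @ W_dec)
```

A convolutional version replaces the dense maps with conv/deconv layers but is trained on exactly the same reconstruction objective.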

    Machine Learning Algorithms for Breast Cancer Diagnosis: Challenges, Prospects and Future Research Directions

    Early diagnosis of breast cancer not only increases the chances of survival but also helps control the spread of cancerous cells in the body. Researchers have developed machine learning algorithms for breast cancer diagnosis such as Support Vector Machine, K-Nearest Neighbor, Convolutional Neural Network, K-means, Fuzzy C-means, Neural Network, Principal Component Analysis (PCA), and Naive Bayes. Unfortunately, these algorithms fall short in one way or another due to high computational complexity. For instance, the support vector machine employs a feature elimination scheme for eradicating data ambiguity and detecting tumors at an initial stage; however, this scheme is expensive in terms of execution time. For its part, the k-means algorithm employs Euclidean distance to determine the distance between cluster centers and data points; however, this scheme does not guarantee high accuracy when executed over different iterations. Although the K-Nearest Neighbor algorithm employs feature reduction, principal component analysis, and 10-fold cross-validation to enhance classification accuracy, it is not efficient in terms of processing time. The fuzzy c-means algorithm, on the other hand, employs a fuzziness value and termination criteria that determine the execution time on datasets, but it proves computationally expensive due to the many iterations and fuzzy-measure calculations involved. Similarly, the convolutional neural network employs back-propagation for classification, but the scheme proves slow due to frequent retraining, and the neural network achieves low accuracy in its predictions. Since all these algorithms are expensive and time-consuming, it is necessary to integrate quantum computing principles with conventional machine learning algorithms, because quantum computing has the potential to accelerate computations by simultaneously carrying out calculations on many inputs.
    In this paper, a review of the current machine learning algorithms for breast cancer prediction is provided. Based on the observed shortcomings, a quantum machine learning based classifier is recommended. The proposed working mechanisms of this classifier are elaborated towards the end of this paper.
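One of the reviewed classifiers, k-Nearest Neighbor with Euclidean distance, can be sketched in a few lines. The synthetic two-class data below stands in for tumor feature vectors, and k=5 is an illustrative choice, not a value from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100
X_train = np.vstack([rng.normal(0, 1, (n, 2)), rng.normal(4, 1, (n, 2))])
y_train = np.array([0] * n + [1] * n)
X_test = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(4, 1, (20, 2))])
y_test = np.array([0] * 20 + [1] * 20)

def knn_predict(X, k=5):
    # Pairwise Euclidean distances from each query point to all training points.
    d = np.linalg.norm(X[:, None, :] - X_train[None, :, :], axis=2)
    nearest = np.argsort(d, axis=1)[:, :k]          # indices of k closest neighbours
    votes = y_train[nearest]
    return (votes.mean(axis=1) > 0.5).astype(int)   # majority vote

accuracy = float((knn_predict(X_test) == y_test).mean())
```

The distance matrix is what makes KNN slow at prediction time on large datasets, which is exactly the processing-time criticism the abstract raises.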

    GAIT Technology for Human Recognition using CNN

    Gait is a distinctive biometric characteristic that can be detected from a distance; as a result, it has several applications in social security, forensic identification, and crime prevention. Existing gait identification techniques either use a gait template, which makes it difficult to retain temporal information, or a gait sequence, which imposes unnecessary sequential constraints and loses the flexibility of the gait representation. Our technique, based on a deep set perspective, is immune to frame permutations and can seamlessly combine frames from multiple videos taken in different contexts, such as diverse viewing angles, different outfits, or different carrying conditions. Experiments show that under normal walking conditions our single-model approach obtains an average rank-1 accuracy of 96.1% on the CASIA-B gait dataset and 87.9% on the OU-MVLP gait dataset. Our model also demonstrates a high degree of robustness under numerous challenging circumstances: when walking while carrying a bag or wearing a coat, it obtains accuracies on CASIA-B of 90.8% and 70.3%, respectively, greatly surpassing the best existing approach. Additionally, the proposed method achieves satisfactory accuracy even when few frames are available in the test samples; for instance, it achieves 85.0% on CASIA-B with only 7 frames.
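The "deep set" property the abstract relies on can be sketched directly: if per-frame features are aggregated with a symmetric function (max-pooling here), the resulting gait descriptor is independent of frame order. The frame encoder below is stubbed with a fixed random projection; a real system would use a CNN per frame.

```python
import numpy as np

rng = np.random.default_rng(3)
n_frames, h = 30, 64
frames = rng.normal(size=(n_frames, 128))   # stand-in for silhouette frames
proj = rng.normal(size=(128, h))            # stand-in for a frame-level encoder

def set_descriptor(seq):
    feats = np.maximum(seq @ proj, 0.0)     # per-frame features (ReLU)
    return feats.max(axis=0)                # symmetric pooling over the set

d1 = set_descriptor(frames)
d2 = set_descriptor(frames[rng.permutation(n_frames)])  # shuffled frame order
order_invariant = bool(np.allclose(d1, d2))
```

Because the pooling is symmetric, frames collected from different videos or viewpoints can be thrown into the same set without any alignment step.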

    Neuropathy Classification of Corneal Nerve Images Using Artificial Intelligence

    Nerve variations in the human cornea have been associated with alterations in the neuropathy state of a patient suffering from chronic diseases. For some diseases, such as diabetes, detection of neuropathy prior to visible symptoms is important, whereas for others, such as multiple sclerosis, early prediction of disease worsening is crucial. While current methods fail to provide early diagnosis of neuropathy, in vivo corneal confocal microscopy enables very early insight into the nerve damage by illuminating and magnifying the human cornea. This non-invasive method captures a sequence of images from the corneal sub-basal nerve plexus. Current practices of manual nerve tracing and classification impede the advancement of medical research in this domain. Since corneal nerve analysis for neuropathy is in its initial stages, there is a dire need for process automation. To address this limitation, we seek to automate the two stages of this process: nerve segmentation and neuropathy classification of images. For nerve segmentation, we compare the performance of two existing solutions on multiple datasets to select the appropriate method and proceed to the classification stage. Consequently, we approach neuropathy classification of the images through artificial intelligence using the Adaptive Neuro-Fuzzy Inference System (ANFIS), Support Vector Machines, Naïve Bayes, and k-Nearest Neighbors, and we further compare the performance of these machine learning classifiers with deep learning. We ascertained that nerve segmentation using convolutional neural networks improved sensitivity and false-negative rate by at least 5% over the state-of-the-art software. For classification, ANFIS yielded the best classification accuracy, 93.7%, compared to the other classifiers. Furthermore, for this problem, machine learning approaches performed better in terms of classification accuracy than deep learning.
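One of the compared classifiers, Gaussian Naïve Bayes, is simple enough to sketch from scratch. The synthetic two-class vectors below stand in for extracted nerve measurements; the dimensions and class separation are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n, d = 150, 4
X0 = rng.normal(0.0, 1.0, (n, d))   # class 0: e.g. "healthy" feature vectors
X1 = rng.normal(2.5, 1.0, (n, d))   # class 1: e.g. "neuropathy" feature vectors
X = np.vstack([X0, X1])
y = np.array([0] * n + [1] * n)

# Fit per-class, per-feature Gaussians (the "naive" independence assumption).
mu = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])
var = np.stack([X[y == c].var(axis=0) for c in (0, 1)])

def predict(Xq):
    # Log-likelihood under each class's diagonal Gaussian; equal priors assumed.
    ll = [(-0.5 * (np.log(2 * np.pi * var[c]) + (Xq - mu[c]) ** 2 / var[c])).sum(axis=1)
          for c in (0, 1)]
    return (ll[1] > ll[0]).astype(int)

accuracy = float((predict(X) == y).mean())
```

ANFIS, the best performer in the paper, replaces these fixed Gaussian likelihoods with tunable fuzzy membership functions but makes the same kind of per-feature decision.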

    Classification of Gender Individual Identification Using Local Binary Pattern on Palatine Rugae Image

    Major disasters cause many casualties, often leaving bodies damaged, which makes individual identification through common biometric characteristics (such as lips and fingerprints) ineffective. However, the palatine rugae can support the identification process: they have unique, individual characteristics and are more resistant to trauma because of their internal location. In this study, an identification system is proposed to determine gender from images of the palatine rugae. The proposed system combines several algorithms and methods: Geometric Active Contour (GAC) for segmentation, Local Binary Pattern (LBP) for feature extraction, and K-Nearest Neighbor (KNN) for classification. Based on the test results, the system can identify the gender of an individual from the recognized palatine rugae patterns, achieving a test accuracy of 100% with a specific configuration of LBP and KNN. The contribution of this study is a gender identification system that takes palatine rugae pattern images, with their unique biometric characteristics, as input.
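The Local Binary Pattern step named above can be sketched directly: each interior pixel is replaced by an 8-bit code comparing it with its 8 neighbours, and histograms of these codes form the feature vector. The tiny gradient image below is an illustrative stand-in for a palatine rugae photograph.

```python
import numpy as np

def lbp(img):
    img = np.asarray(img, dtype=float)
    c = img[1:-1, 1:-1]                       # center pixels (borders skipped)
    # Clockwise neighbour offsets starting at the top-left pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit   # set bit if neighbour >= center
    return code

image = np.arange(25).reshape(5, 5)           # toy 5x5 "image" with a fixed gradient
codes = lbp(image)
```

On this uniform gradient every interior pixel sees the same neighbour pattern, so all codes are identical; on real texture the histogram of codes is what the KNN classifier consumes.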

    Applications of pattern classification to time-domain signals

    Many different kinds of physics are used in sensors that produce time-domain signals, such as ultrasonics, acoustics, seismology, and electromagnetics. The waveforms generated by these sensors are used to measure events or detect flaws in applications ranging from industrial to medical and defense-related domains. Interpreting the signals is challenging because of the complicated physics of the interaction of the fields with the materials and structures under study. Often the method of interpreting the signal varies by application, but automatic detection of events in signals is always useful for attaining results quickly with less human error. One method of automatic interpretation of data is pattern classification, a statistical method that assigns predicted labels to raw data associated with known categories. In this work, we use pattern classification techniques to aid automatic detection of events in signals using features extracted by a particular application of the wavelet transform, the Dynamic Wavelet Fingerprint (DWFP), as well as features selected through physical interpretation of the individual applications. The wavelet feature extraction method is general for any time-domain signal, and the classification results can be improved by features drawn from the particular domain. The success of this technique is demonstrated through four applications: the development of an ultrasonographic periodontal probe, the identification of flaw type in Lamb wave tomographic scans of an aluminum pipe, the prediction of roof falls in a limestone mine, and the automatic identification of individual Radio Frequency Identification (RFID) tags regardless of their programmed codes. The method has been shown to achieve high accuracy, sometimes as high as 98%.
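The kind of wavelet decomposition underlying the DWFP can be sketched with a single-level Haar transform, which splits a time-domain signal into approximation (low-pass) and detail (high-pass) coefficients. The synthetic signal is illustrative; the DWFP itself involves further steps (thresholding the time-scale plane into a "fingerprint" image) not shown here.

```python
import numpy as np

def haar_step(x):
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-pass: scaled pairwise sums
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-pass: scaled pairwise differences
    return approx, detail

def haar_inverse(approx, detail):
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2)
    x[1::2] = (approx - detail) / np.sqrt(2)
    return x

t = np.linspace(0, 1, 64)
signal = np.sin(2 * np.pi * 5 * t) + 0.1 * np.sin(2 * np.pi * 30 * t)
a, dcoef = haar_step(signal)                    # one level of decomposition
reconstructed = haar_inverse(a, dcoef)          # transform is perfectly invertible
```

Applying the step recursively to the approximation coefficients yields the multi-resolution time-scale representation from which DWFP features are extracted.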