    Multimodal Biometrics for Person Authentication

    Unimodal biometric systems have limited effectiveness in identifying people, mainly because they are susceptible to changes in individual biometric features and to presentation attacks. Identification using multimodal biometric systems attracts the attention of researchers because of advantages such as greater recognition efficiency and greater security compared with a unimodal biometric system: to break into a multimodal system, an intruder would have to defeat more than one unimodal subsystem. Multimodal biometric systems offer several benefits: the availability of many features makes the system more reliable; security is increased and the confidentiality of user data is ensured; the decisions taken under the individual modalities are merged; if one modality is eliminated, the system can still ensure security using the remaining ones; and information on the "liveness" of the presented sample can be provided. In a multimodal system, a fusion of the feature vectors and/or decisions produced by each subsystem is carried out, and the final identification decision is made on the basis of the resulting feature vector. In this chapter, we consider a multimodal biometric system that uses three modalities: dorsal vein, palm print, and periocular.
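The fusion step described above can be sketched as a weighted score-level combination of the three modalities. The weights, score ranges, and decision threshold below are illustrative assumptions, not values from the chapter:

```python
# Hypothetical sketch of weighted score-level fusion across three
# modalities (dorsal vein, palm print, periocular). All numbers here
# are invented for illustration.

def min_max_normalize(score, lo, hi):
    """Map a raw matcher score into [0, 1] given its observed range."""
    return (score - lo) / (hi - lo)

def fuse_scores(scores, weights):
    """Weighted-sum fusion of normalized per-modality match scores."""
    assert len(scores) == len(weights)
    return sum(w * s for s, w in zip(scores, weights))

def authenticate(raw_scores, ranges, weights, threshold=0.5):
    """Accept if the fused, normalized score clears the threshold."""
    normalized = [min_max_normalize(s, lo, hi)
                  for s, (lo, hi) in zip(raw_scores, ranges)]
    return fuse_scores(normalized, weights) >= threshold

# Dorsal vein, palm print, periocular raw scores, each matcher with
# its own score range and an assumed reliability weight.
raw = [72.0, 0.81, 410.0]
ranges = [(0.0, 100.0), (0.0, 1.0), (0.0, 500.0)]
weights = [0.4, 0.35, 0.25]
print(authenticate(raw, ranges, weights))  # True for these scores
```

Because the final decision uses the fused vector rather than any single matcher, dropping one modality only removes one term from the sum, which mirrors the resilience argument in the abstract.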

    Fingerprint minutiae filtering based on multiscale directional information

    Automatic identification of humans based on their fingerprints is still one of the most reliable identification methods in criminal and forensic applications, and it is widely applied in civil applications as well. Most automatic systems available today compare fingerprints using distinctive features called minutiae. A conventional feature extraction algorithm can produce a large number of spurious minutiae if the fingerprint pattern contains large regions of broken ridges (often called creases), which can drastically reduce the recognition rate of automatic fingerprint identification systems. For the performance of those systems, it is more important not to extract spurious (false) minutiae, even if this means that some genuine minutiae are missed as well. In this paper, multiscale directional information obtained from the orientation field image is used to filter out those spurious minutiae, reducing their number several-fold.
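As an illustration of the general idea only (not the paper's exact algorithm), the directional coherence of the orientation field can be used to reject candidate minutiae in crease-like regions, where orientations are incoherent:

```python
# Illustrative sketch: reject candidate minutiae that fall in
# low-coherence regions of the orientation field, where broken ridges
# (creases) tend to spawn spurious detections. The window size and
# coherence threshold are assumptions.
import numpy as np

def orientation_coherence(theta, y, x, half=2):
    """Coherence of doubled ridge angles in a (2*half+1)^2 window:
    1.0 means perfectly uniform orientation; values near 0 indicate
    incoherent, crease-like regions."""
    win = theta[max(0, y - half):y + half + 1, max(0, x - half):x + half + 1]
    # Doubling the angles makes 0 and pi equivalent, as ridge orientations are.
    c = np.cos(2 * win).mean()
    s = np.sin(2 * win).mean()
    return float(np.hypot(c, s))

def filter_minutiae(minutiae, theta, min_coherence=0.6):
    """Keep only candidate minutiae that lie in coherent regions."""
    return [(y, x) for (y, x) in minutiae
            if orientation_coherence(theta, y, x) >= min_coherence]

# Smooth orientation field on the left, random (crease-like) on the right.
rng = np.random.default_rng(0)
theta = np.hstack([np.full((10, 5), 0.3),
                   rng.uniform(0, np.pi, size=(10, 5))])
candidates = [(5, 2), (5, 7)]  # one candidate in each region
print(filter_minutiae(candidates, theta))  # only the smooth-region candidate survives
```

Discarding low-coherence candidates implements the trade-off the abstract describes: spurious minutiae are suppressed even at the risk of losing a few genuine ones near creases.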

    Regional, circuit and network heterogeneity of brain abnormalities in psychiatric disorders

    The substantial individual heterogeneity that characterizes people with mental illness is often ignored by classical case-control research, which relies on group mean comparisons. Here we present a comprehensive, multiscale characterization of the heterogeneity of gray matter volume (GMV) differences in 1,294 cases diagnosed with one of six conditions (attention-deficit/hyperactivity disorder, autism spectrum disorder, bipolar disorder, depression, obsessive-compulsive disorder and schizophrenia) and 1,465 matched controls. Normative models indicated that person-specific deviations from population expectations for regional GMV were highly heterogeneous, affecting the same area in <7% of people with the same diagnosis. However, these deviations were embedded within common functional circuits and networks in up to 56% of cases. The salience-ventral attention system was implicated transdiagnostically, with other systems selectively involved in depression, bipolar disorder, schizophrenia and attention-deficit/hyperactivity disorder. Phenotypic differences between cases assigned the same diagnosis may thus arise from the heterogeneous localization of specific regional deviations, whereas phenotypic similarities may be attributable to the dysfunction of common functional circuits and networks.
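The normative-modeling logic behind these overlap statistics can be sketched as follows; the data, deviation threshold, and region count are synthetic stand-ins, not values from the study:

```python
# Minimal sketch of normative modeling: score each case's regional GMV
# against the control distribution, flag extreme deviations, and ask
# how often the same region is flagged across cases. All data and the
# z-threshold here are invented for illustration.
import numpy as np

def deviation_z(cases, controls):
    """Person-specific z-scores per region relative to the control norm."""
    mu = controls.mean(axis=0)
    sd = controls.std(axis=0, ddof=1)
    return (cases - mu) / sd

def regional_overlap(z, thresh=2.6):
    """Fraction of cases showing an extreme deviation in each region."""
    return (np.abs(z) > thresh).mean(axis=0)

# Synthetic data: 200 controls, 100 cases, 4 brain regions; a handful
# of cases are given a genuine deficit in region 0 only.
rng = np.random.default_rng(1)
controls = rng.normal(0.0, 1.0, size=(200, 4))
cases = rng.normal(0.0, 1.0, size=(100, 4))
cases[:10, 0] -= 5.0
z = deviation_z(cases, controls)
print(regional_overlap(z))  # even a "real" regional effect touches few cases
```

Even with a strong simulated deficit, only a minority of cases share the same flagged region, which is the kind of low regional overlap the abstract reports before aggregating deviations into circuits and networks.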

    Non-ideal iris recognition

    Of the many biometrics that exist, iris recognition is receiving more attention than any other due to its potential for improved accuracy, permanence, and acceptance. Current iris recognition systems operate on frontal-view images of good quality; because the iris is small, user cooperation is required. In this work, a new system capable of processing iris images that are not necessarily frontal views is described. This overcomes one of the major hurdles of current iris recognition systems and enhances user convenience and accuracy. The proposed system operates in two steps: (i) preprocessing and estimation of the gaze direction, and (ii) processing and encoding of the rotated iris image. Two objective functions are used to estimate the gaze direction. The off-angle iris image then undergoes geometric transformations involving the estimated angle and is further processed as if it were a frontal-view image. Two methods, (i) PCA and (ii) ICA, are used for encoding. Three different datasets are used to quantify the performance of the proposed non-ideal recognition system.
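The geometric-correction step can be illustrated with a simple foreshortening model: an off-axis view compresses the iris by cos(theta) along the gaze direction, and resampling stretches it back toward a frontal view. This is a hedged sketch of the idea only, not the system's actual estimation or transformation:

```python
# Sketch of off-angle correction under a pure-foreshortening
# assumption: stretch the gaze axis by 1/cos(theta) via
# nearest-neighbor resampling onto a frontal-view grid.
import numpy as np

def deforeshorten(img, theta_rad, axis=1):
    """Undo cos(theta) foreshortening along `axis` by sampling the
    observed image at compressed coordinates."""
    h, w = img.shape
    out = np.zeros_like(img)
    scale = np.cos(theta_rad)  # frontal coordinate -> observed coordinate
    for y in range(h):
        for x in range(w):
            sy, sx = y, x
            if axis == 1:
                sx = int(round((x - w / 2) * scale + w / 2))
            else:
                sy = int(round((y - h / 2) * scale + h / 2))
            if 0 <= sy < h and 0 <= sx < w:
                out[y, x] = img[sy, sx]
    return out

# A circle foreshortened into an ellipse should become round again.
yy, xx = np.mgrid[0:64, 0:64]
theta = np.deg2rad(30)
ellipse = ((xx - 32) ** 2 / (20 * np.cos(theta)) ** 2
           + (yy - 32) ** 2 / 20 ** 2) <= 1.0
restored = deforeshorten(ellipse.astype(float), theta)
print(restored[32].sum(), restored[:, 32].sum())  # horizontal and vertical widths agree
```

After this kind of correction, the image can be fed to the same encoding pipeline (PCA or ICA in the paper) as a frontal-view image.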

    Gait recognition and understanding based on hierarchical temporal memory using 3D gait semantic folding

    Gait recognition and understanding systems have wide-ranging application prospects. However, their reliance on unstructured image and video data affects their performance; for example, they are easily influenced by multiple views, occlusion, clothing, and object-carrying conditions. This paper addresses these problems using realistic 3-dimensional (3D) human structural data and a sequential pattern learning framework with a top-down attention modulating mechanism based on Hierarchical Temporal Memory (HTM). First, an accurate 2-dimensional (2D) to 3D human body pose and shape semantic parameter estimation method is proposed, which exploits the advantages of an instance-level body parsing model and a virtual dressing method. Second, using gait semantic folding, the estimated body parameters are encoded in a sparse 2D matrix to construct a structural gait semantic image. To achieve time-based gait recognition, an HTM network is constructed to obtain sequence-level gait sparse distribution representations (SL-GSDRs). A top-down attention mechanism is introduced to deal with various conditions, including multiple views, by refining the SL-GSDRs according to prior knowledge. The proposed gait learning model not only helps gait recognition tasks overcome the difficulties of real application scenarios but also provides structured gait semantic images for visual cognition. Experimental analyses on the CMU MoBo, CASIA B, TUM-IITKGP, and KY4D datasets show a significant performance gain in terms of accuracy and robustness.
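The gait-semantic-folding encoding can be loosely illustrated as bucketing each body parameter into one active bit of a sparse 2D binary matrix, so frames can be compared by bit overlap. The parameters, ranges, and bucket count below are my simplification, not the paper's scheme:

```python
# Loose sketch of encoding pose/shape parameters into a sparse 2D
# binary matrix (one one-hot row per parameter), then comparing
# frames by counting shared active bits.
import numpy as np

def encode_frame(params, lo, hi, buckets=32):
    """One-hot encode each parameter into its own row of a 2D matrix."""
    sdr = np.zeros((len(params), buckets), dtype=np.uint8)
    for i, p in enumerate(params):
        frac = (p - lo[i]) / (hi[i] - lo[i])
        j = min(buckets - 1, max(0, int(frac * buckets)))
        sdr[i, j] = 1
    return sdr

def overlap(a, b):
    """Similarity between two encodings = number of shared active bits."""
    return int((a & b).sum())

# Three invented parameters (e.g. stride length, joint angle, lean).
lo, hi = [0.0, 0.0, -1.0], [2.0, 180.0, 1.0]
frame_a = encode_frame([1.0, 90.0, 0.1], lo, hi)
frame_b = encode_frame([1.02, 91.0, 0.12], lo, hi)   # nearly the same pose
frame_c = encode_frame([0.2, 150.0, -0.8], lo, hi)   # a very different pose
print(overlap(frame_a, frame_b), overlap(frame_a, frame_c))  # 3 0
```

Sparse binary encodings like this are what an HTM-style sequence memory consumes, which is why similar poses should land on overlapping bits while dissimilar ones should not.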

    An efficient multiscale scheme using local zernike moments for face recognition

    In this study, we propose a face recognition scheme using local Zernike moments (LZM), which can be used for both identification and verification. In this scheme, local patches around the landmarks are extracted from the complex components obtained by the LZM transformation. Then, phase-magnitude histograms are constructed within these patches to create descriptors for face images. An image pyramid is utilized to extract features at multiple scales, and the descriptors are constructed for each image in this pyramid. We used three different public datasets to examine the performance of the proposed method: Face Recognition Technology (FERET), Labeled Faces in the Wild (LFW), and Surveillance Cameras Face (SCface). The results revealed that the proposed method is robust against variations such as illumination, facial expression, and pose. Aside from this, it can be used for low-resolution face images acquired in uncontrolled environments or in the infrared spectrum. Experimental results show that our method outperforms state-of-the-art methods on the FERET and SCface datasets.
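The descriptor-building step (phase-magnitude histograms over complex components) can be sketched as follows; the random patch stands in for an actual LZM response, and the bin count is an illustrative choice, not the paper's:

```python
# Sketch of the histogram step only: given a complex-valued response
# patch, bin the phase of each pixel into a histogram weighted by its
# magnitude, then L1-normalize to form a descriptor.
import numpy as np

def phase_magnitude_histogram(patch, bins=8):
    """Histogram of complex phases, each pixel weighted by |z|."""
    phase = np.angle(patch)                 # values in (-pi, pi]
    mag = np.abs(patch)
    hist, _ = np.histogram(phase, bins=bins,
                           range=(-np.pi, np.pi), weights=mag)
    norm = hist.sum()
    return hist / norm if norm > 0 else hist

# A random complex patch stands in for one LZM component around a landmark.
rng = np.random.default_rng(0)
patch = rng.normal(size=(16, 16)) + 1j * rng.normal(size=(16, 16))
desc = phase_magnitude_histogram(patch)
print(desc.shape, round(float(desc.sum()), 6))  # (8,) 1.0
```

In the multiscale scheme, such histograms would be computed per landmark patch at every level of the image pyramid and concatenated into the final face descriptor.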

    Hand eye coordination in surgery

    The coordination of the hand in response to visual target selection has always been regarded as an essential quality in a range of professional activities. This quality has thus far been elusive to objective scientific measurement and is usually engulfed in the overall performance of the individual. Parallels can be drawn to surgery, especially Minimally Invasive Surgery (MIS), where the physical constraints imposed by the arrangement of the instruments and the visualisation methods require coordination skills that are unprecedented. With the current paradigm shift towards early specialisation in surgical training and shortened, focused training time, the selection process should identify trainees with the highest potential in certain specific skills. Although significant effort has been made in the objective assessment of surgical skills, it is currently only possible to measure surgeons' abilities at the time of assessment. It has been particularly difficult to quantify specific details of hand-eye coordination and to assess innate ability for future skills development. The purpose of this thesis is to examine hand-eye coordination in laboratory-based simulations, with a particular emphasis on details that are important to MIS. In order to understand the challenges of visuomotor coordination, movement trajectory errors have been used to provide an insight into the innate coordinate mapping of the brain. In MIS, novel spatial transformations, due to a combination of distorted endoscopic image projections and the "fulcrum" effect of the instruments, accentuate movement generation errors. Obvious differences in the quality of movement trajectories have been observed between novices and experts in MIS; however, these are difficult to measure quantitatively.
A Hidden Markov Model (HMM) is used in this thesis to reveal the underlying characteristic movement details of a particular MIS manoeuvre and how such features are exaggerated by the introduction of rotation in the endoscopic camera. The proposed method demonstrates the feasibility of measuring movement trajectory quality by machine learning techniques without prior arbitrary classification of expertise. Experimental results have highlighted these changes in novice laparoscopic surgeons, even after a short period of training. How the intricate relationship between the hands and the eyes changes when learning a skilled visuomotor task has been studied previously. Reactive eye movement, in which visual input is used primarily as a feedback mechanism for error correction, implies difficulties in hand-eye coordination; as the brain learns to adapt to the new coordinate map, eye movements become predictive of the action generated. The concept of measuring this spatiotemporal relationship is introduced as a measure of hand-eye coordination in MIS, by comparing the Target Distance Function (TDF) between the eye fixation and the instrument tip position on the laparoscopic screen. Further validation of this concept using high-fidelity experimental tasks is presented, where higher cognitive influence and multiple target selection increase the complexity of the data analysis. To this end, Granger causality is presented as a measure of the predictability of the instrument movement from the eye fixation pattern. Partial Directed Coherence (PDC), a frequency-domain variation of Granger causality, is used for the first time to measure hand-eye coordination. Experimental results are used to establish the strengths and potential pitfalls of the technique. To further enhance the accuracy of this measurement, a modified Jensen-Shannon Divergence (JSD) measure has been developed to enhance the signal matching algorithm and trajectory segmentation.
The proposed framework incorporates filtering of high-frequency noise, which represents non-purposeful hand and eye movements. The accuracy of the technique has been demonstrated by quantitative measurement of multiple laparoscopic tasks performed by expert and novice surgeons. Experimental results supporting visual search behavioural theory are presented, as this underpins the target selection process immediately prior to visuomotor action generation. The effects of specialisation and experience on visual search patterns are also examined. Finally, pilot results from functional brain imaging are presented, in which activation of the Posterior Parietal Cortex (PPC) is measured using optical spectroscopy techniques. The PPC has been shown to be involved in the calculation of coordinate transformations between the visual and motor systems, which establishes the possibility of exciting future studies in hand-eye coordination.
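The Granger-causality idea used here for eye-to-instrument predictability can be sketched as a lagged-regression comparison of residual variances; the signals, lag, and noise level below are synthetic assumptions, not thesis data:

```python
# Simplified Granger-style check (a sketch, not the thesis pipeline):
# does past eye-fixation position improve a lagged linear prediction of
# the instrument-tip position? A residual-variance ratio well above 1
# suggests predictive (rather than reactive) eye movements.
import numpy as np

def granger_ratio(tool, eye, lag=2):
    """Residual variance of the restricted model (tool history only)
    divided by that of the full model (tool + eye history)."""
    rows_r, rows_f, target = [], [], []
    for t in range(lag, len(tool)):
        rows_r.append(np.r_[1.0, tool[t - lag:t]])
        rows_f.append(np.r_[1.0, tool[t - lag:t], eye[t - lag:t]])
        target.append(tool[t])
    A_r, A_f, y = np.array(rows_r), np.array(rows_f), np.array(target)
    res_r = y - A_r @ np.linalg.lstsq(A_r, y, rcond=None)[0]
    res_f = y - A_f @ np.linalg.lstsq(A_f, y, rcond=None)[0]
    return float(res_r.var() / res_f.var())

rng = np.random.default_rng(2)
eye = rng.normal(size=300)
# Hypothetical signal: the instrument tip follows the eye with a 2-sample lag.
tool = np.r_[0.0, 0.0, eye[:-2]] + 0.1 * rng.normal(size=300)
print(granger_ratio(tool, eye) > 5.0)  # eye history strongly predicts the tool
```

PDC extends this same restricted-versus-full comparison into the frequency domain, which is why it can localize at which movement frequencies the eye leads the hand.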