129 research outputs found

    Multimodal biometric system for ECG, ear and iris recognition based on local descriptors

    Get PDF
    © 2019, Springer Science+Business Media, LLC, part of Springer Nature. The combination of information extracted from different biometric modalities in a multimodal biometric recognition system aims to overcome the drawbacks encountered in unimodal biometric systems. Fusion of many biometrics has been proposed, such as face, fingerprint and iris. Recently, the electrocardiogram (ECG) has been used as a new biometric technology in unimodal and multimodal biometric recognition systems. The ECG inherently carries the liveness characteristic of a person, making it hard to spoof compared to other biometric techniques. Ear biometrics present a rich and stable source of information over an acceptable period of human life. Iris biometrics have been combined with different biometric modalities such as fingerprint, face and palm print because of their higher accuracy and reliability. In this paper, a new multimodal biometric system based on ECG, ear and iris biometrics fused at the feature level is proposed. Preprocessing techniques including normalization and segmentation are applied to the ECG, ear and iris data. Local texture descriptors, namely 1D-LBP (one-dimensional Local Binary Patterns), Shifted-1D-LBP and 1D-MR-LBP (Multi-Resolution LBP), are then used to extract the important features from the ECG signal and to convert the ear and iris images to 1D signals. KNN and RBF classifiers are used for matching, to classify an unknown user as genuine or impostor. The developed system is validated using the benchmark ID-ECG, USTB1, USTB2 and AMI ear, and CASIA v1 iris databases. The experimental results demonstrate that the proposed approach outperforms unimodal biometric systems: a Correct Recognition Rate (CRR) of 100% is achieved with an Equal Error Rate (EER) of 0.5%.
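    As an illustration of the 1D-LBP feature extraction mentioned above, the following is a minimal Python sketch of a one-dimensional Local Binary Pattern histogram; the neighbourhood size, the fusion step and all function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def lbp_1d(signal, p=8):
    """Compute a 1D-LBP code for each sample of a 1-D signal.

    Each sample is compared with its p surrounding neighbours
    (p/2 on each side); neighbours >= centre contribute a 1-bit.
    """
    half = p // 2
    n = len(signal)
    codes = np.zeros(n - 2 * half, dtype=np.uint32)
    weights = 1 << np.arange(p, dtype=np.uint32)
    for i in range(half, n - half):
        neighbours = np.concatenate((signal[i - half:i], signal[i + 1:i + 1 + half]))
        bits = (neighbours >= signal[i]).astype(np.uint32)
        codes[i - half] = np.dot(bits, weights)
    return codes

def lbp_histogram(signal, p=8):
    """Normalised histogram of 1D-LBP codes, usable as a feature vector."""
    codes = lbp_1d(signal, p)
    hist, _ = np.histogram(codes, bins=2 ** p, range=(0, 2 ** p))
    return hist / max(hist.sum(), 1)

# Feature-level fusion sketch: concatenate histograms from the ECG signal
# and the (row-flattened) ear and iris images treated as 1-D signals.
# features = np.concatenate([lbp_histogram(x) for x in (ecg, ear_1d, iris_1d)])
```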

    Normalized eye movement metrics across motor simulation states: a difference of perspective?

    Get PDF
    Introduction: Eye movement metric congruency across motor simulation states is appealing for proponents of shared representation models; data supporting this contention are, however, conflicting. This study used a novel method for normalizing and analyzing gaze metrics to compare eye movements during action observation (AO) and motor imagery (MI) from allocentric and egocentric perspectives. Method: Spatial and temporal fixation data were collected as participants observed and imagined upper limb movements from two visual perspectives. The data in the four conditions were normalized for scale and orientation and segmented into three fixation point centers. Results: There were significant differences in the distribution of the means of the fixation point centers between AO and MI in the allocentric but not the egocentric perspective. Differences were also observed in the covariance of fixation points within fixation centers between AO and MI across the two perspectives. There were also significant interactions for fixation duration and number of fixations across the two perspectives. Discussion: Eye movements across AO and MI conditions are more consistent from an egocentric perspective, but information processing demand, irrespective of perspective, is reduced in MI. Differences may be due to the greater control of goal outcome in the AO, egocentric condition.
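    As a rough illustration of the normalization-and-segmentation idea above, the sketch below normalizes 2-D fixation coordinates for scale and groups them into three fixation-point centres, returning the mean and covariance of each. k-means is used purely as a plausible stand-in, since the abstract does not specify the segmentation method, and the function names are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def normalise_fixations(points):
    """Centre fixation coordinates and scale to unit RMS distance
    (a simple stand-in for scale normalization across conditions)."""
    pts = np.asarray(points, dtype=float)
    pts -= pts.mean(axis=0)
    rms = np.sqrt((pts ** 2).sum(axis=1).mean())
    return pts / rms if rms > 0 else pts

def fixation_centres(points, k=3, seed=0):
    """Segment normalised fixations into k fixation-point centres and
    return (mean, covariance) statistics per centre."""
    pts = normalise_fixations(points)
    labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(pts)
    stats = []
    for c in range(k):
        cluster = pts[labels == c]
        stats.append((cluster.mean(axis=0), np.cov(cluster, rowvar=False)))
    return stats
```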

    Does Race Matter? Understanding the role of social connectedness in student retention in hospitality programs

    Get PDF
    The recruitment and retention of ethnic minority students lies at the core of diversity efforts instituted by colleges and universities across the U.S. Given the changing racial demographics in the U.S. and the need to have qualified ethnic minority professionals serving diverse communities, retention and matriculation heighten in importance. With the recruitment and retention challenge that many predominantly White institutions (PWIs) face in mind, this study aimed to understand how "social connectedness" related to retaining African-American students in a hospitality management program. Focus groups were utilized to chronicle the lived experience of African-American students. The findings suggest that the following factors play an important role in the retention of African-American students: (1) being connected to the program, university community, and other ethnic minority students; and (2) the depth and quality of relationships with faculty.

    MgB2 Thin-Film Bolometer for Applications in Far-Infrared Instruments on Future Planetary Missions

    Get PDF
    A SiN-membrane-based MgB2 thin-film bolometer, with a non-optimized absorber, has been fabricated that shows an electrical noise equivalent power of 256 fW/√Hz operating at 30 Hz in the 8.5 - 12.35 micron spectral bandpass. This value corresponds to an electrical specific detectivity of 7.6 x 10^10 cm·√Hz/W. The bolometer shows a measured blackbody (optical) specific detectivity of 8.8 x 10^9 cm·√Hz/W, with a responsivity of 701.5 kV/W and a first-order time constant of 5.2 ms. It is predicted that, with the inclusion of a gold-black absorber, a blackbody specific detectivity of 6.4 x 10^10 cm·√Hz/W at an operational frequency of 10 Hz can be realized for integration into future planetary exploration instrumentation where high sensitivity is required in the 17 - 250 micron spectral wavelength range.
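    For context, the quoted NEP and electrical detectivity can be related through the standard definition of specific detectivity; the detector area is not given in the abstract and is only inferred here as an assumption.

```latex
% Standard definition of specific detectivity (assumed here), where A_d is the
% effective detector area and the NEP is referred to a 1 Hz bandwidth:
\[
  D^{*} = \frac{\sqrt{A_d}}{\mathrm{NEP}}
\]
% Using the quoted electrical NEP of 256 fW/sqrt(Hz) and D* = 7.6 x 10^10 cm sqrt(Hz)/W:
\[
  \sqrt{A_d} = D^{*}\,\mathrm{NEP}
             \approx \left(7.6\times10^{10}\right)\left(2.56\times10^{-13}\right)\ \mathrm{cm}
             \approx 1.9\times10^{-2}\ \mathrm{cm},
\]
% i.e. an effective detector area of roughly 3.8 x 10^-4 cm^2 (about 195 um on a side),
% an inferred estimate rather than a value reported in the paper.
```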

    An investigation on local wrinkle-based extractor of age estimation

    Get PDF
    Research related to age estimation using face images has become increasingly important due to its potential use in various applications such as age group estimation in advertising and age estimation in access control. In contrast to other facial variations, age variation has several unique characteristics which make it a challenging task. As we age, the most pronounced facial changes are the appearance of wrinkles (skin creases), which is the focus of ageing research in cosmetic and nutrition studies. This paper investigates an algorithm for wrinkle detection and the use of wrinkle data as an age predictor. A novel method for detecting and classifying facial age groups based on a local wrinkle-based extractor (LOWEX) is introduced. First, each face image is divided into several convex regions representing wrinkle distribution areas. Second, these areas are analysed using a Canny filter and then concatenated into an enhanced feature vector. Finally, the face is classified into an age group using a supervised learning algorithm. The experimental results show that the accuracy of the proposed method is 80% when using the FG-NET dataset. This investigation shows that local wrinkle-based features have great potential in age estimation. We conclude that wrinkles can produce a prominent ageing descriptor and identify some future research challenges. Copyright © 2014 SCITEPRESS - Science and Technology Publications. All rights reserved.
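    A minimal sketch of a LOWEX-style pipeline as described above: crop candidate wrinkle regions from an aligned face, apply a Canny filter, concatenate per-region responses into a feature vector, and train a supervised classifier. The region coordinates, Canny thresholds and classifier choice below are placeholder assumptions, not the paper's settings.

```python
import cv2
import numpy as np
from sklearn.svm import SVC

# Hypothetical wrinkle regions as (x, y, w, h) boxes on a pre-aligned face crop;
# the paper uses convex regions, whose exact coordinates are not specified here.
WRINKLE_REGIONS = [(20, 30, 88, 20),   # forehead
                   (10, 70, 30, 20),   # left eye corner
                   (88, 70, 30, 20),   # right eye corner
                   (30, 100, 68, 25)]  # mouth / nasolabial area

def wrinkle_features(gray_face):
    """Canny edge response per region, concatenated into one feature vector."""
    feats = []
    for x, y, w, h in WRINKLE_REGIONS:
        roi = gray_face[y:y + h, x:x + w]
        edges = cv2.Canny(roi, 50, 150)
        feats.append(edges.mean() / 255.0)   # edge density as a wrinkle score
    return np.array(feats)

# Age-group classification with any supervised learner, e.g. an SVM:
# X = np.stack([wrinkle_features(face) for face in faces])
# clf = SVC(kernel='rbf').fit(X, age_group_labels)
```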

    Wrinkle Detection Using Hessian Line Tracking

    Get PDF
    Wrinkles play an important role in face-based analysis. They have been widely used in applications such as facial retouching, facial expression recognition and face age estimation. Although a few techniques for wrinkle analysis have been explored in the literature, poor detection limits the accuracy and reliability of wrinkle segmentation. Therefore, an automated wrinkle detection method is crucial to maintain consistency and reduce human error. In this paper, we propose Hessian Line Tracking (HLT) to overcome the detection problem. HLT is composed of Hessian seeding and directional line tracking. It is an extension of a Hessian filter; however, it significantly increases the accuracy of wrinkle localization when compared with existing methods. In the experimental phase, three coders were instructed to annotate wrinkles manually. To assess the manual annotation, both intra- and inter-reliability were measured, with an accuracy of 94% or above. Experimental results show that the proposed method is capable of tracking hidden pixels; thus it increases the connectivity of detection between wrinkles, allowing some fine wrinkles to be detected. In comparison to state-of-the-art methods such as the Cula Method (CUM), Frangi Filter (FRF), and Hybrid Hessian Filter (HHF), the proposed HLT yields better results, with an accuracy of 84%. This work demonstrates that HLT is a remarkably strong detector of forehead wrinkles in 2D images.
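    The Hessian seeding stage could look roughly like the following sketch, which flags ridge-like pixels from the largest Hessian eigenvalue as candidate wrinkle seeds; this is not the authors' HLT implementation, and the sigma and percentile values are assumptions.

```python
import numpy as np
from skimage.feature import hessian_matrix, hessian_matrix_eigvals

def hessian_seeds(gray, sigma=2.0, percentile=95):
    """Return candidate wrinkle (ridge) pixels from Hessian eigenvalues.

    Dark, line-like structures give a large positive second-derivative
    response across the line; pixels above a percentile threshold of the
    largest eigenvalue are kept as seeds for a subsequent tracking stage.
    """
    H = hessian_matrix(gray, sigma=sigma, order='rc')
    eig1, _ = hessian_matrix_eigvals(H)          # eig1 >= eig2 at every pixel
    threshold = np.percentile(eig1, percentile)
    seeds = np.argwhere(eig1 > threshold)        # (row, col) seed coordinates
    return seeds, eig1

# A directional line-tracking stage would then walk from each seed along the
# local ridge orientation, linking hidden pixels between wrinkle segments.
```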

    SAMM: A Spontaneous Micro-Facial Movement Dataset

    Get PDF
    Micro-facial expressions are spontaneous, involuntary movements of the face when a person experiences an emotion but attempts to hide their facial expression, most likely in a high-stakes environment. Recently, research in this field has grown in popularity; however, publicly available datasets of micro-expressions have limitations due to the difficulty of naturally inducing spontaneous micro-expressions. Other issues include lighting, low resolution and low participant diversity. We present a newly developed spontaneous micro-facial movement dataset with diverse participants, coded using the Facial Action Coding System. The experimental protocol addresses the limitations of previous datasets, including eliciting emotional responses from stimuli tailored to each participant. Dataset evaluation was completed by running preliminary experiments to classify micro-movements from non-movements. Results were obtained using a selection of spatio-temporal descriptors and machine learning. We further evaluate the dataset on emerging methods of feature difference analysis and propose an Adaptive Baseline Threshold that uses an individualised neutral expression to improve the performance of micro-movement detection. In contrast to machine learning approaches, we outperform the state of the art with a recall of 0.91. The outcomes show the dataset can become a new standard for micro-movement data, with future work expanding on data representation and analysis.
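    A minimal sketch of the Adaptive Baseline Threshold idea: compare each frame's feature vector against the participant's own neutral baseline and flag frames whose difference exceeds a per-participant threshold. The choice of features and the mean-plus-k-standard-deviations rule below are assumptions, since the abstract does not specify them.

```python
import numpy as np

def adaptive_baseline_threshold(neutral_features, k=3.0):
    """Per-participant threshold from the distances of neutral frames to
    their own mean feature vector (mean distance + k standard deviations)."""
    baseline = neutral_features.mean(axis=0)
    dists = np.linalg.norm(neutral_features - baseline, axis=1)
    return baseline, dists.mean() + k * dists.std()

def detect_micro_movements(sequence_features, baseline, threshold):
    """Flag frames whose feature difference from the individual baseline
    exceeds that participant's adaptive threshold."""
    dists = np.linalg.norm(sequence_features - baseline, axis=1)
    return dists > threshold      # boolean mask of candidate micro-movements

# Usage (features could be, e.g., per-frame texture histograms):
# baseline, thr = adaptive_baseline_threshold(neutral_feats)
# flags = detect_micro_movements(clip_feats, baseline, thr)
```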

    Micro-Facial Movements: An Investigation on Spatio-Temporal Descriptors

    Get PDF
    This paper aims to investigate whether micro-facial movement sequences can be distinguished from neutral face sequences. As a micro-facial movement tends to be very quick and subtle, classifying when a movement occurs, compared to a face without movement, can be a challenging computer vision problem. Using local binary patterns on three orthogonal planes and Gaussian derivatives, local features, when interpreted by machine learning algorithms, can accurately describe when a movement or non-movement occurs. This method can then be applied to help aid humans in detecting when the small movements occur. This also differs from current literature, as most work concentrates only on emotional expression recognition. Using the CASME II dataset, the results from the investigation of different descriptors show higher accuracy compared to state-of-the-art methods.
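    A simplified sketch of local binary patterns on three orthogonal planes (LBP-TOP) for a video volume is given below; for brevity it samples only the central slice of each plane, whereas a full LBP-TOP accumulates histograms over all slices, so this is illustrative rather than the descriptor evaluated in the paper.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_top(volume, p=8, r=1):
    """Simplified LBP-TOP descriptor for a (T, H, W) grayscale volume.

    LBP histograms are taken from the central XY, XT and YT slices only;
    a full LBP-TOP accumulates histograms over every slice of each plane.
    """
    t, h, w = volume.shape
    planes = [volume[t // 2, :, :],      # XY plane (appearance)
              volume[:, h // 2, :],      # XT plane (horizontal motion)
              volume[:, :, w // 2]]      # YT plane (vertical motion)
    feats = []
    for plane in planes:
        codes = local_binary_pattern(plane, P=p, R=r, method='uniform')
        hist, _ = np.histogram(codes, bins=p + 2, range=(0, p + 2))
        feats.append(hist / max(hist.sum(), 1))
    return np.concatenate(feats)

# A classifier (e.g. an SVM) trained on these descriptors can then separate
# micro-movement clips from neutral-face clips.
```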

    Face Video Competition

    Full text link
    The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-642-01793-3_73
    Person recognition using facial features, e.g., mug-shot images, has long been used in identity documents. However, due to the widespread use of web-cams and mobile devices embedded with a camera, it is now possible to realise facial video recognition, rather than resorting to just still images. In fact, facial video recognition offers many advantages over still image recognition; these include the potential of boosting the system accuracy and deterring spoof attacks. This paper presents the first known benchmarking effort of person identity verification using facial video data. The evaluation involves 18 systems submitted by seven academic institutes.
    The work of NPoh is supported by the advanced researcher fellowship PA0022121477 of the Swiss NSF; NPoh, CHC and JK by the EU-funded Mobio project grant IST-214324; NPC and HF by the EPSRC grants EP/D056942 and EP/D054818; VS and NP by the Slovenian national research program P2-0250(C) Metrology and Biometric System, the COST Action 2101 and FP7-217762 HIDE; and AAS by the Dutch BRICKS/BSIK project.
    Poh, N.; Chan, C.; Kittler, J.; Marcel, S.; Mc Cool, C.; Rua, E.; Alba Castro, J.... (2009). Face Video Competition. In Advances in Biometrics: Third International Conference, ICB 2009, Alghero, Italy, June 2-5, 2009. Proceedings, pp. 715-724. https://doi.org/10.1007/978-3-642-01793-3_73