
    Using biometrics authentication via fingerprint recognition in e-Exams in e-Learning environment

    No full text
    E-learning offers great opportunities for modern life. Notably, however, the medium needs to be coupled with efficient and reliable security mechanisms before it can be established as a dependable one. Authentication of e-exam takers is of prime importance so that exams are conducted by fair means. A new approach is proposed to ensure that no unauthorised individuals are permitted to take the exams.

    Terahertz Security Image Quality Assessment by No-reference Model Observers

    Full text link
    To provide the possibility of developing objective image quality assessment (IQA) algorithms for THz security images, we constructed the THz security image database (THSID), including a total of 181 THz security images with a resolution of 127×380. The main distortion types in THz security images were first analyzed to design subjective evaluation criteria for acquiring mean opinion scores. Subsequently, existing no-reference IQA algorithms were executed to evaluate THz security image quality: 5 opinion-aware approaches (NFERM, GMLF, DIIVINE, BRISQUE, and BLIINDS2) and 8 opinion-unaware approaches including QAC, SISBLIM, NIQE, FISBLIM, CPBD, S3, and Fish_bb. The statistical results demonstrated the superiority of Fish_bb over the other tested IQA approaches for assessing THz image quality, with PLCC (SROCC) values of 0.8925 (-0.8706) and an RMSE value of 0.3993. Linear regression analysis and a Bland-Altman plot further verified that Fish_bb could substitute for subjective IQA. Nonetheless, for the classification of THz security images, we tended to use S3 as a criterion for ranking THz security image grades because of its relatively low false positive rate in classifying bad THz image quality into the acceptable category (24.69%). Interestingly, due to the specific properties of THz images, the average pixel intensity gave better performance than the above more complicated IQA algorithms, with PLCC, SROCC, and RMSE of 0.9001, -0.8800, and 0.3857, respectively. This study will help users such as researchers or security staff to obtain THz security images of good quality. Currently, our research group is attempting to make this research more comprehensive. (Comment: 13 pages, 8 figures, 4 tables.)
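    The abstract's standout finding, that plain average pixel intensity can rival dedicated IQA algorithms on THz images, is simple to sketch. A minimal illustration assuming images arrive as 2-D lists of grayscale values; treating a higher mean intensity as higher quality is an assumption here (the abstract only reports a strong correlation with subjective scores), and the sample images are invented, not from THSID:

```python
def mean_intensity(img):
    """Average pixel intensity of a 2-D grayscale image (list of rows)."""
    return sum(sum(row) for row in img) / (len(img) * len(img[0]))

def rank_by_quality(images):
    """Rank image indices from highest to lowest mean intensity,
    assuming brighter THz images indicate higher quality."""
    scored = sorted(((mean_intensity(im), i) for i, im in enumerate(images)),
                    reverse=True)
    return [i for _, i in scored]

# Illustrative use on three synthetic 2x2 "images"
bright = [[200, 200], [200, 200]]
dark = [[10, 10], [10, 10]]
mid = [[100, 100], [100, 100]]
order = rank_by_quality([bright, dark, mid])  # brightest first
```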

    Image quality assessment for iris biometric

    Get PDF
    Iris recognition, the ability to recognize and distinguish individuals by their iris pattern, is the most reliable biometric in terms of recognition and identification performance. However, the performance of these systems is affected by poor-quality imaging. In this work, we extend previous research efforts on iris quality assessment by analyzing the effect of seven quality factors on the performance of a traditional iris recognition system: defocus blur, motion blur, off-angle, occlusion, specular reflection, lighting, and pixel count. We conclude that defocus blur, motion blur, and off-angle are the factors that affect recognition performance the most. We further designed a fully automated iris image quality evaluation block that operates in two steps: first, each factor is estimated individually; second, the estimated factors are fused using a Dempster-Shafer evidential-reasoning approach. The designed block is tested on two datasets: CASIA 1.0 and a dataset collected at WVU. (Abstract shortened by UMI.)
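    The fusion step can be illustrated with Dempster's rule of combination. A minimal sketch assuming each quality factor yields a mass function over the two-element frame {'good', 'bad'}; the mass values below are illustrative and not taken from the thesis:

```python
def dempster_combine(m1, m2):
    """Combine two mass functions (dicts keyed by frozensets of
    hypotheses) with Dempster's rule, normalizing out conflict."""
    combined = {}
    conflict = 0.0
    for A, a in m1.items():
        for B, b in m2.items():
            inter = A & B
            if not inter:                      # contradictory evidence
                conflict += a * b
            else:
                combined[inter] = combined.get(inter, 0.0) + a * b
    norm = 1.0 - conflict                      # renormalize remaining mass
    return {k: v / norm for k, v in combined.items()}

# Two hypothetical per-factor assessments of the same iris image
m_focus = {frozenset({'good'}): 0.7, frozenset({'good', 'bad'}): 0.3}
m_angle = {frozenset({'good'}): 0.6, frozenset({'bad'}): 0.2,
           frozenset({'good', 'bad'}): 0.2}
fused = dempster_combine(m_focus, m_angle)
```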

    CardioCam: Leveraging Camera on Mobile Devices to Verify Users While Their Heart is Pumping

    Get PDF
    With the increasing prevalence of mobile and IoT devices (e.g., smartphones, tablets, smart-home appliances), a massive amount of private and sensitive information is stored on these devices. To prevent unauthorized access to these devices, existing user verification solutions either rely on the complexity of user-defined secrets (e.g., passwords) or resort to specialized biometric sensors (e.g., fingerprint readers), but users may still suffer from various attacks, such as password theft, shoulder surfing, smudge attacks, and forged-biometric attacks. In this paper, we propose CardioCam, a low-cost, general, hard-to-forge user verification system leveraging unique cardiac biometrics extracted from the readily available built-in cameras in mobile and IoT devices. We demonstrate that unique cardiac features can be extracted from the cardiac motion patterns in fingertips when a fingertip is pressed on the built-in camera. To mitigate the impact of varying ambient lighting conditions and human movements under practical scenarios, CardioCam develops a gradient-based technique to optimize the camera configuration, and dynamically selects the most sensitive pixels in a camera frame to extract reliable cardiac motion patterns. Furthermore, morphological characteristic analysis is deployed to derive user-specific cardiac features, and a feature transformation scheme grounded in Principal Component Analysis (PCA) is developed to enhance the robustness of cardiac biometrics for effective user verification. With the prototyped system, extensive experiments involving 25 subjects are conducted to demonstrate that CardioCam can achieve effective and reliable user verification with an average true positive rate (TPR) of over 99% while maintaining a false positive rate (FPR) as low as 4%.
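    The PCA-based feature transformation can be sketched with a power-iteration estimate of the leading principal component. This is only an illustration in pure Python under invented data; the paper's actual feature pipeline is not reproduced here:

```python
import math

def top_principal_component(X, iters=200):
    """Estimate the unit-norm first principal component of the rows
    of X (a list of equal-length feature vectors) by power iteration
    on the sample covariance matrix."""
    n, d = len(X), len(X[0])
    means = [sum(row[j] for row in X) / n for j in range(d)]
    C = [[x - m for x, m in zip(row, means)] for row in X]  # centered data
    cov = [[sum(C[i][a] * C[i][b] for i in range(n)) / (n - 1)
            for b in range(d)] for a in range(d)]
    v = [1.0] * d
    for _ in range(iters):                     # power iteration
        w = [sum(cov[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v, means

def project(x, v, means):
    """Project one feature vector onto the principal axis."""
    return sum((a - m) * b for a, m, b in zip(x, means, v))
```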

    Textural features for fingerprint liveness detection

    Get PDF
    The main topic of my research during these three years concerned biometrics, and in particular Fingerprint Liveness Detection (FLD), namely the recognition of fake fingerprints. Fingerprint spoofing is a topical issue, as evidenced by the release of the latest iPhone and Samsung Galaxy models with an embedded fingerprint reader as an alternative to passwords. Several videos posted on YouTube show how to violate these devices by using fake fingerprints, demonstrating that vulnerability to spoofing constitutes a threat to existing fingerprint recognition systems. Although many algorithms have been proposed so far, none of them has shown the ability to clearly discriminate between real and fake fingertips. In my work, after a study of the state of the art, I paid special attention to the so-called textural algorithms. I first used the LBP (Local Binary Pattern) algorithm and then worked on introducing the LPQ (Local Phase Quantization) and BSIF (Binarized Statistical Image Features) algorithms to the FLD field. In the last two years I worked especially on what we called the "user-specific" problem: in the extracted features we noticed characteristics related not only to liveness but also to the different users. We were able to improve the obtained results by identifying and removing, at least partially, this user-specific component. Since 2009, the Department of Electrical and Electronic Engineering of the University of Cagliari and the Department of Electrical and Computer Engineering of Clarkson University have organized the Fingerprint Liveness Detection Competition (LivDet). I was involved in the organization of the second and third editions of the competition (LivDet 2011 and LivDet 2013) and am currently involved in the acquisition of live and fake fingerprints that will be included in three of the LivDet 2015 datasets.
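    The LBP operator named above is easy to sketch in its basic form: each interior pixel is compared against its eight neighbours, the comparison bits form an 8-bit code, and the histogram of codes serves as the texture feature vector. A minimal version assuming the image is a 2-D list of grayscale values (real FLD systems typically use multi-scale and rotation-invariant variants):

```python
def lbp_code(img, r, c):
    """8-neighbour LBP code for interior pixel (r, c): each neighbour
    at or above the center value contributes one bit."""
    center = img[r][c]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr][c + dc] >= center:
            code |= 1 << bit
    return code

def lbp_histogram(img):
    """256-bin histogram of LBP codes over all interior pixels,
    usable as a texture feature vector for a classifier."""
    hist = [0] * 256
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            hist[lbp_code(img, r, c)] += 1
    return hist
```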

    Performance Analysis of No Reference Image quality based on Human Perception

    Get PDF
    In this work, a no-reference objective image quality assessment based on the NRDPF-IQA metric and a classification-based metric are tested using the LIVE database, which consists of images distorted by Gaussian white noise, Gaussian blur, Rayleigh fast-fading channels, JPEG compression, and JPEG2000 compression. We plot the Spearman's Rank Order Correlation Coefficient (SROCC) between each of these features and human DMOS from the LIVE-IQA database using our proposed method to ascertain how well the features correlate with human judgements of quality. Training and testing are performed with an SVM model. The proposed method shows better results compared with earlier methods. Finally, the results are generated using MATLAB. DOI: http://dx.doi.org/10.11591/ijece.v4i6.678
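    The correlation measures used throughout these IQA studies (PLCC, SROCC, RMSE) can be computed with stdlib Python. A sketch where SROCC is Pearson correlation applied to average ranks; the sample score/DMOS pairs in the test are invented:

```python
import math

def plcc(x, y):
    """Pearson linear correlation coefficient between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den

def ranks(x):
    """1-based ranks, averaging over ties."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    r = [0.0] * len(x)
    i = 0
    while i < len(x):
        j = i
        while j + 1 < len(x) and x[order[j + 1]] == x[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def srocc(x, y):
    """Spearman rank-order correlation: Pearson on the rank vectors."""
    return plcc(ranks(x), ranks(y))

def rmse(x, y):
    """Root mean squared error between predicted scores and DMOS."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)) / len(x))
```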

    Imaging time series for the classification of EMI discharge sources

    Get PDF
    In this work, we aim to classify a wider range of Electromagnetic Interference (EMI) discharge sources collected from new power plant sites across multiple assets, which engenders a more complex and challenging classification task. The study involves the investigation and development of new and improved feature extraction and data dimension reduction algorithms based on image processing techniques. The approach is to exploit the Gramian Angular Field technique to map the measured EMI time signals to images, from which the significant information is extracted while removing redundancy. The image of each discharge type contains a unique fingerprint. Two feature reduction methods, Local Binary Pattern (LBP) and Local Phase Quantisation (LPQ), are then applied to the mapped images. This provides feature vectors that can be fed into a Random Forest (RF) classifier. The performance of the previous method and the two newly proposed methods on the new database is compared in terms of classification accuracy, precision, recall, and F-measure. Results show that the new methods outperform the previous one, with LBP features achieving the best outcome.
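    The Gramian Angular Field mapping can be sketched directly: rescale the signal to [-1, 1], encode each sample as an angle φ = arccos(x̃), and form G[i][j] = cos(φ_i + φ_j). This is the summation (GASF) variant; the paper may also use the difference variant, and the guard for constant signals is an assumption of this sketch:

```python
import math

def gramian_angular_field(series):
    """Map a 1-D time series to its Gramian Angular Summation Field,
    an N x N image G with G[i][j] = cos(phi_i + phi_j)."""
    lo, hi = min(series), max(series)
    span = (hi - lo) or 1.0                 # avoid /0 for constant signals
    scaled = [2 * (x - lo) / span - 1 for x in series]
    # clamp guards against tiny floating-point excursions outside [-1, 1]
    phi = [math.acos(max(-1.0, min(1.0, s))) for s in scaled]
    return [[math.cos(pi_ + pj) for pj in phi] for pi_ in phi]
```

The resulting 2-D field can then be fed to LBP/LPQ feature extraction exactly as any grayscale image would be.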

    A Longitudinal Analysis on the Feasibility of Iris Recognition Performance for Infants 0-2 Years Old

    Get PDF
    The focus of this study was to longitudinally evaluate iris recognition for infants between the ages of 0 and 2 years. Image quality metrics of infant and adult irises acquired on the same iris camera were compared. Matching performance was evaluated for four groups: infants 0 to 6 months, 7 to 12 months, 13 to 24 months, and adults. A mixed linear regression model was used to determine whether infants' genuine similarity scores changed over time. The study found that image quality metrics differed between infants and adults, but in the oldest infant group (13 to 24 months) the image quality metric scores were more likely to be similar to those of adults. Infants 0 to 6 months old had worse performance at an FMR of 0.01% than infants 7 to 12 months, infants 13 to 24 months, and adults.
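    "Performance at an FMR of 0.01%" is conventionally read as the error measured at the decision threshold whose false match rate stays within 0.01%. A sketch of that threshold selection over genuine and impostor similarity-score lists; the scores in the test are invented, and this is a generic evaluation recipe rather than the study's exact procedure:

```python
def fnmr_at_fmr(genuine, impostor, target_fmr):
    """Pick the lowest threshold whose false match rate (share of
    impostor scores at or above the threshold) does not exceed
    target_fmr, then return the false non-match rate (share of
    genuine scores below that threshold) and the threshold."""
    for t in sorted(set(genuine + impostor)):
        fmr = sum(s >= t for s in impostor) / len(impostor)
        if fmr <= target_fmr:
            fnmr = sum(s < t for s in genuine) / len(genuine)
            return fnmr, t
    return 1.0, None  # no admissible threshold among observed scores
```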

    Facial Image Verification and Quality Assessment System -FaceIVQA

    Get PDF
    Although several techniques have been proposed for predicting biometric system performance using quality values, many of these research works were based on a no-reference assessment technique using a single quality attribute measured directly from the data. Such techniques have proved inappropriate for facial verification scenarios and inefficient, because no single quality attribute can sufficiently measure the quality of a facial image. In this research work, a facial image verification and quality assessment framework (FaceIVQA) was developed. Different algorithms and methods were implemented in FaceIVQA to extract the faceness, pose, illumination, contrast, and similarity quality attributes using an objective full-reference image quality assessment approach. Structured image verification experiments were conducted on the surveillance camera (SCface) database to collect individual quality scores and algorithm matching scores from FaceIVQA using three recognition algorithms, namely principal component analysis (PCA), linear discriminant analysis (LDA), and a commercial recognition SDK. FaceIVQA produced accurate and consistent facial image assessment data. The results show that it accurately assigns quality scores to probe image samples. The resulting quality score can be assigned to images captured for enrolment or recognition and can be used as an input to quality-driven biometric fusion systems. DOI: http://dx.doi.org/10.11591/ijece.v3i6.503
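    One common way quality scores feed a quality-driven fusion system is as weights on per-matcher similarity scores. A hypothetical sketch: the matcher names and numbers below are illustrative, not FaceIVQA's actual fusion scheme:

```python
def quality_weighted_fusion(match_scores, quality_scores):
    """Fuse per-matcher similarity scores (dicts keyed by matcher name),
    weighting each matcher's score by the quality score assigned to the
    probe image it operated on."""
    den = sum(quality_scores[m] for m in match_scores)
    return sum(match_scores[m] * quality_scores[m]
               for m in match_scores) / den

# Hypothetical scores from two matchers on the same probe image
fused = quality_weighted_fusion({'pca': 0.6, 'lda': 0.8},
                                {'pca': 1.0, 'lda': 3.0})
```

A matcher operating on a higher-quality probe thus contributes proportionally more to the fused decision score.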