
    Gender and Ethnicity Classification based on Palmprint and Palmar Hand Images from Uncontrolled Environment

    Soft biometric attributes such as gender, ethnicity, or age may provide useful information for biometric and forensic applications. Researchers have used modalities such as face, gait, iris, and hand to classify these attributes. Even though the hand has been widely studied for biometric recognition, relatively little attention has been given to soft biometrics derived from hand images. Previous studies of hand-based soft biometrics focused on gender and on well-controlled imaging environments. In this paper, gender and ethnicity classification in an uncontrolled environment is considered. Gender and ethnicity labels are collected and provided for subjects in a publicly available database containing hand images from the Internet. Five deep learning models are fine-tuned and evaluated on gender and ethnicity classification based on palmar 1) full-hand, 2) segmented-hand, and 3) palmprint images. The experimental results indicate that for gender and ethnicity classification in an uncontrolled environment, full and segmented hand images are more suitable than palmprint images. Comment: Accepted at the International Joint Conference on Biometrics (IJCB 2020), scheduled for Sep 28-Oct 1, 2020.
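The abstract above fine-tunes five deep models on hand images; the details of those networks are not given here. As a rough, hypothetical sketch of the underlying transfer-learning idea, one common recipe is to freeze a pretrained backbone and train only a small classifier head on its embeddings. The data, dimensions, and logistic-regression head below are illustrative assumptions, not the paper's pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend these are 512-d embeddings from a frozen, pretrained backbone,
# with a binary soft-biometric label (e.g. gender).
n, d = 200, 512
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true > 0).astype(float)

# Logistic-regression "head" trained by plain gradient descent.
w = np.zeros(d)
lr = 0.1
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))   # predicted probabilities
    w -= lr * X.T @ (p - y) / n          # gradient of the log loss

acc = float(((X @ w > 0) == (y > 0.5)).mean())  # training accuracy
```

Because only the head is trained, this kind of fine-tuning is cheap even when the backbone is large, which is one reason it is a popular recipe for small soft-biometric datasets.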

    Palmprint Identification Based on Generalization of IrisCode

    The development of accurate and reliable security systems is a matter of wide interest, and in this context biometrics is seen as a highly effective automatic mechanism for personal identification. Among biometric technologies, IrisCode, developed by Daugman in 1993, is regarded as a highly accurate approach, able to support real-time personal identification over large databases. Since 1993, building on IrisCode, different coding methods have been proposed for iris and fingerprint identification. In this research, I extend and generalize IrisCode for real-time, secure palmprint identification. PalmCode, the first coding method for palmprint identification, which I developed in 2002, directly applied IrisCode to extract phase information of palmprints as features. However, I observe that PalmCodes from different palms are similar, having many 45° streaks. Such structural similarities in the PalmCodes of different palms would reduce the individuality of PalmCodes and the performance of palmprint identification systems. To reduce the correlation between PalmCodes, in this thesis, I employ multiple elliptical Gabor filters with different orientations to compute different PalmCodes and merge them to produce a single feature, called Fusion Code. Experimental results demonstrate that Fusion Code performs better than PalmCode. Based on the results of Fusion Code, I further identify that the orientation fields of palmprints are powerful features. Consequently, Competitive Code, which uses the real parts of six Gabor filters to estimate the orientation fields, is developed. To embed the properties of IrisCode, such as high-speed matching, in Competitive Code, a novel coding scheme and a bitwise angular distance are proposed. Experimental results demonstrate that Competitive Code is much more effective than other palmprint algorithms.
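The phase-coding idea behind IrisCode and PalmCode can be illustrated with a toy example: filter an image patch with a complex 2-D Gabor kernel and quantise the phase of the response into two bits (the signs of the real and imaginary parts). The kernel parameters and single-position filtering below are simplifying assumptions for illustration, not the thesis's actual filter bank:

```python
import numpy as np

def gabor_kernel(size=17, sigma=4.0, freq=0.15, theta=0.0):
    """Complex 2-D Gabor kernel (assumed parameters, for illustration)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotated coordinate
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    carrier = np.exp(2j * np.pi * freq * xr)
    return envelope * carrier

def phase_code(patch, kernel):
    """Quantise the filter-response phase into 2 bits, IrisCode-style:
    one bit each for the sign of the real and imaginary parts."""
    resp = np.sum(patch * kernel)   # response at a single position
    return (bool(resp.real >= 0), bool(resp.imag >= 0))

rng = np.random.default_rng(1)
patch = rng.normal(size=(17, 17))
bits = phase_code(patch, gabor_kernel())
```

In a full system this 2-bit code is computed at every sampling point, and two codes are compared with a fast bitwise Hamming distance; Fusion Code, as described above, merges codes from several filter orientations into a single feature.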
Although many coding methods have been developed based on IrisCode for iris and palmprint identification, we lack a detailed analysis of IrisCode. One of the aims of this research is to provide such an analysis as a way of better understanding IrisCode, extending the coarse phase representation to a precise phase representation, and uncovering the relationship between IrisCode and other coding methods. This analysis demonstrates that IrisCode is a clustering process with four prototypes; that the locus of a Gabor function is a two-dimensional ellipse with respect to a phase parameter; and that the bitwise Hamming distance can be regarded as a bitwise angular distance. In this analysis, I also point out that the theoretical evidence for the imposter binomial distribution of IrisCode is incomplete. I use this analysis to develop a precise phase representation which can enhance iris recognition accuracy and to relate IrisCode to other coding methods. Using this analysis, principal component analysis, and simulated annealing, near-optimal filters for palmprint identification are sought. The near-optimal filters perform better than Competitive Code in terms of the d′ index. Identical twins, who have the closest genetic relationship, are expected to have maximum similarity in their biometrics, and classifying identical twins is a challenging problem for some automatic biometric systems. Although palmprints have been studied for personal identification for many years, genetically identical palmprints have not. I systematically examine Competitive Code on genetically identical palmprints for automatic personal identification and to uncover the genetically related palmprint features. The experimental results show that the three principal lines and some portions of weak lines are genetically related features, but our palms still contain rich genetically unrelated features for classifying identical twins.
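The "bitwise angular distance" used by Competitive Code can be sketched in miniature: each point is coded by the winning index among six orientation filters, and two codes are compared by their cyclic (angular) distance. This sketch uses plain integer indices; the actual thesis packs the codes into bits so the angular distance can be computed with fast bitwise operations, which is not modelled here:

```python
import numpy as np

def competitive_code(responses):
    """Winner-take-all orientation index over six filter responses.
    The most negative response wins, since palm lines appear as dark,
    line-like structures (a common convention, assumed here)."""
    return int(np.argmin(responses))

def angular_distance(i, j, n=6):
    """Cyclic distance between two orientation codes in {0, ..., n-1}."""
    d = abs(i - j) % n
    return min(d, n - d)

a = competitive_code([-3.0, -0.5, 1.2, 0.7, -0.1, 0.4])  # orientation 0 wins
b = competitive_code([0.2, -2.5, 0.1, 0.9, -0.3, 0.6])   # orientation 1 wins
dist = angular_distance(a, b)
```

Summing this distance over all sampling points gives a match score between two palmprints; because orientations wrap around, codes 0 and 5 are neighbours, not maximally distant.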
As biometric systems are vulnerable to replay, database, and brute-force attacks, such potential attacks must be analyzed before the systems are widely deployed for security applications. I propose a projected multinomial distribution for studying the probability of successfully using brute-force attacks to break into a palmprint system based on Competitive Code. The proposed model indicates that it is computationally infeasible to break into the palmprint system using brute-force attacks. In addition to brute-force attacks, I address three other security issues: template re-issuance, also called cancellable biometrics; replay attacks; and database attacks. A random orientation filter bank (ROFB) is used to generate cancellable Competitive Codes for template re-issuance. Secret messages are hidden in templates to prevent replay and database attacks; this technique can be regarded as template watermarking. A series of analyses is provided to evaluate the security levels of these measures.
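The flavour of such a brute-force analysis can be illustrated with a deliberately simplified model: if an imposter's code bits were i.i.d. Bernoulli(0.5), the chance that a random code matches a template within a mismatch threshold is a binomial tail probability. This i.i.d. assumption is exactly the kind of simplification the thesis replaces with a projected multinomial model, so the numbers below are illustrative only:

```python
from math import comb

def false_accept_prob(n_bits, p=0.5, max_mismatch_frac=0.25):
    """P(a random code disagrees with a template on at most a fraction
    max_mismatch_frac of n_bits), under an i.i.d. Bernoulli(p) bit model.
    This is a simplifying assumption, not the thesis's actual model."""
    threshold = int(n_bits * max_mismatch_frac)
    return sum(comb(n_bits, k) * p**k * (1 - p)**(n_bits - k)
               for k in range(threshold + 1))

prob = false_accept_prob(1024)   # e.g. a 1024-bit template
```

Even under this crude model the probability is astronomically small for realistic template sizes, and it shrinks rapidly as the number of bits grows, which is the qualitative conclusion the thesis reaches with its more careful model.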

    Engaging Students with Disabilities in Virtual Learning

    This professional learning module (PLM) was designed to help teachers of students with disabilities (SWDs) who are navigating the issues related to virtual learning. The authors have experience as teachers of SWDs, administrators, and lead teachers. We witnessed the struggles teachers had during the pandemic in engaging SWDs and their parents in virtual learning, and this module was designed to help teachers alleviate some of those struggles. Districts might find it beneficial to use this PLM in training all teachers on engaging SWDs virtually, as the world of education is leaning in the direction of blended learning, virtual academies, and traditional face-to-face learning. View professional learning module.

    TF-ICON: Diffusion-Based Training-Free Cross-Domain Image Composition

    Text-driven diffusion models have exhibited impressive generative capabilities, enabling various image editing tasks. In this paper, we propose TF-ICON, a novel Training-Free Image COmpositioN framework that harnesses the power of text-driven diffusion models for cross-domain image-guided composition. This task aims to seamlessly integrate user-provided objects into a specific visual context. Current diffusion-based methods often involve costly instance-based optimization or fine-tuning of pretrained models on customized datasets, which can potentially undermine their rich prior. In contrast, TF-ICON can leverage off-the-shelf diffusion models to perform cross-domain image-guided composition without requiring additional training, fine-tuning, or optimization. Moreover, we introduce the exceptional prompt, which contains no information, to facilitate text-driven diffusion models in accurately inverting real images into latent representations, forming the basis for compositing. Our experiments show that equipping Stable Diffusion with the exceptional prompt outperforms state-of-the-art inversion methods on various datasets (CelebA-HQ, COCO, and ImageNet), and that TF-ICON surpasses prior baselines in versatile visual domains. Code is available at https://github.com/Shilin-LU/TF-ICON. Comment: Accepted by ICCV 2023.
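The inversion step that the exceptional prompt supports is deterministic DDIM inversion: running the sampler's update "up" the noise schedule to map an image to a latent, then back down to reconstruct it. The toy model below uses an assumed schedule and a constant dummy noise predictor, so the round trip is exact; with a real U-Net, whose prediction depends on the current latent, inversion is only approximately exact, which is why better conditioning (such as the exceptional prompt) matters:

```python
import numpy as np

# Assumed toy noise schedule (cumulative alpha-bar values).
alphas = np.linspace(0.9999, 0.98, 50).cumprod()

def eps_model(x, t):
    """Dummy noise predictor standing in for a diffusion U-Net.
    Constant output makes the demo invert exactly."""
    return np.full_like(x, 0.3)

def ddim_step(x, t_from, t_to):
    """Deterministic DDIM update between two schedule indices;
    the same formula moves latents toward noise or back."""
    a_f, a_t = alphas[t_from], alphas[t_to]
    eps = eps_model(x, t_from)
    x0 = (x - np.sqrt(1 - a_f) * eps) / np.sqrt(a_f)   # predicted clean x
    return np.sqrt(a_t) * x0 + np.sqrt(1 - a_t) * eps

rng = np.random.default_rng(2)
x = rng.normal(size=4)
noisy = ddim_step(x, 0, 40)      # inversion: push latent toward noise
recon = ddim_step(noisy, 40, 0)  # sampling back down the schedule
```

Composition methods like TF-ICON rely on this property: if an image can be inverted to a latent faithfully, edits made in latent space can be decoded back without drifting from the original content.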

    Palmprint Recognition in Uncontrolled and Uncooperative Environment

    Online palmprint recognition and latent palmprint identification are two branches of palmprint studies. The former uses middle-resolution images collected by a digital camera in a well-controlled or contact-based environment with user cooperation for commercial applications, and the latter uses high-resolution latent palmprints collected at crime scenes for forensic investigation. However, these two branches do not cover some palmprint images which have potential for forensic investigation. Due to the prevalence of smartphones and consumer cameras, more evidence is in the form of digital images taken in uncontrolled and uncooperative environments, e.g., child pornographic images and terrorist images, where the criminals commonly hide or cover their faces but their palms can still be observable. To study palmprint identification on images collected in an uncontrolled and uncooperative environment, a new palmprint database is established and an end-to-end deep learning algorithm is proposed. The new database, named NTU Palmprints from the Internet (NTU-PI-v1), contains 7881 images from 2035 palms collected from the Internet. The proposed algorithm consists of an alignment network and a feature extraction network and is end-to-end trainable. The proposed algorithm is compared with state-of-the-art online palmprint recognition methods and evaluated on three public contactless palmprint databases, IITD, CASIA, and PolyU, and two new databases, NTU-PI-v1 and the NTU contactless palmprint database. The experimental results show that the proposed algorithm outperforms existing palmprint recognition methods. Comment: Accepted in the IEEE Transactions on Information Forensics and Security.
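Once a network like the one described above maps palm images to embeddings, identification reduces to nearest-neighbour search in embedding space. The sketch below shows closed-set rank-1 identification by cosine similarity on synthetic embeddings; the paper's alignment and feature-extraction networks are not modelled, so the gallery, probe, and dimensions are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# One 128-d template per enrolled palm, L2-normalised.
gallery = rng.normal(size=(5, 128))
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)

# A probe: a noisy view of palm 2, also L2-normalised.
probe = gallery[2] + 0.05 * rng.normal(size=128)
probe /= np.linalg.norm(probe)

scores = gallery @ probe          # cosine similarities to every template
rank1 = int(np.argmax(scores))    # predicted identity (rank-1 match)
```

Reporting the fraction of probes whose rank-1 match is the correct palm gives the rank-1 identification rate, one of the standard metrics used to compare palmprint recognition methods.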

    High Redshift AGN: Accretion Rates and Morphologies for X-ray and Radio SC4K Sources from z~2 to z~6

    We study a large sample of ~4000 Lyα emitters (LAEs) and identify the active galactic nuclei (AGN) among them in order to characterise their evolution across cosmic time. This work was carried out using the SC4K survey (Sobral et al. 2018) and data collected by the Hubble Space Telescope (HST), the Chandra X-ray Observatory, and the Very Large Array (VLA). We find 322 X-ray- or radio-detected AGN within the sample, constituting 8.7±0.5% of the sources considered. We find that the vast majority of classifiable AGN (81±3%) are point-like or compact sources in the rest-frame UV as seen with HST, and this qualitative trend holds regardless of detection band or redshift. These AGN have a range of black hole accretion rates (BHARs), and we present the first direct comparison between radio and X-ray BHARs. X-ray-calculated BHARs range from ~0.07 M⊙/yr to ~23 M⊙/yr, indicating a highly varied sample, with some very active AGN detected. Radio-calculated BHARs range from ~0.09 M⊙/yr to ~8.8 M⊙/yr, broadly tracing the same range as the X-ray-calculated BHARs. X-ray-calculated BHARs peak at z~3, and both radio- and X-ray-calculated BHARs increase with increasing redshift, plateauing at z~4. We find significantly less variation in radio BHARs than in X-ray BHARs, indicating that radio may be a far more stable and reliable method of calculating the BHARs of AGN over large timescales, while X-ray is more suitable for instantaneous BHARs.
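A BHAR estimate of this kind typically rests on the standard accretion relation Mdot = L_bol / (η c²). The sketch below applies it with an assumed radiative efficiency η = 0.1; the paper's exact bolometric corrections from X-ray or radio luminosity are not reproduced here:

```python
# Standard accretion-rate estimate Mdot = L_bol / (eta * c^2), in CGS units.
C = 2.998e10          # speed of light, cm/s
ETA = 0.1             # assumed radiative efficiency
M_SUN = 1.989e33      # solar mass, g
YEAR = 3.156e7        # seconds per year

def bhar_msun_per_yr(l_bol_erg_s, eta=ETA):
    """Black hole accretion rate in solar masses per year,
    given a bolometric luminosity in erg/s."""
    mdot_g_s = l_bol_erg_s / (eta * C**2)   # mass-energy conversion
    return mdot_g_s * YEAR / M_SUN

rate = bhar_msun_per_yr(1e46)   # a luminous AGN, L_bol = 1e46 erg/s
```

For L_bol = 10⁴⁶ erg/s this gives a rate of order a couple of solar masses per year, comfortably inside the ~0.07-23 M⊙/yr range the abstract reports for its X-ray-calculated BHARs.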