A novel multispectral and 2.5D/3D image fusion camera system for enhanced face recognition
The fusion of images from the visible and long-wave infrared (thermal) portions of the spectrum
produces images that have improved face recognition performance under varying lighting conditions.
This is because long-wave infrared images are the result of emitted, rather than reflected,
light and are therefore less sensitive to changes in ambient light. Similarly, 3D and 2.5D images
have also improved face recognition under varying pose and lighting. The opacity of glass to
long-wave infrared light, however, means that the presence of eyeglasses in a face image reduces
the recognition performance.
This thesis presents the design and performance evaluation of a novel camera system that is
capable of capturing spatially registered visible, near-infrared, long-wave infrared and 2.5D depth
video images via a common optical path, requiring no spatial registration between sensors beyond
scaling for differences in sensor sizes. Experiments using a range of established face recognition
methods and multi-class SVM classifiers show not only that the fused output from our camera system
outperforms the single-modality images for face recognition, but also that the adaptive fusion
methods used produce consistent increases in recognition accuracy under varying pose and lighting
and in the presence of eyeglasses.
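The abstract does not specify the fusion rule, but the kind of adaptive visible/thermal fusion it describes can be sketched as a weighted pixel-level blend whose weight shifts toward the thermal band when the visible image is dark. The function names and the brightness-based weighting heuristic below are illustrative assumptions, not the thesis's actual method.

```python
import numpy as np

def fuse_images(visible, thermal, alpha=0.5):
    """Weighted pixel-level fusion of a visible and a long-wave infrared
    (thermal) image. Both inputs are float arrays in [0, 1] with the same
    shape; alpha weights the visible channel."""
    return alpha * visible + (1.0 - alpha) * thermal

def adaptive_alpha(visible):
    """Illustrative adaptive weight: rely more on the thermal image when
    the visible image is underexposed (low mean intensity)."""
    return float(np.clip(visible.mean() * 2.0, 0.2, 0.8))

# Example: a dim visible image shifts the blend toward the thermal band.
vis = np.full((4, 4), 0.1)   # underexposed visible image
thr = np.full((4, 4), 0.7)   # thermal image, insensitive to ambient light
a = adaptive_alpha(vis)      # clipped to the 0.2 lower bound here
fused = fuse_images(vis, thr, alpha=a)
```

The fused array could then be fed to any downstream recognizer, such as the multi-class SVM classifiers the experiments mention.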
MobiBits: Multimodal Mobile Biometric Database
This paper presents a novel database comprising representations of five
different biometric characteristics, collected in a mobile, unconstrained or
semi-constrained setting with three different mobile devices, including
characteristics previously unavailable in existing datasets, namely hand
images, thermal hand images, and thermal face images, all acquired with a
mobile, off-the-shelf device. In addition to this collection of data, we perform
an extensive set of experiments providing insight into the benchmark recognition
performance that can be achieved with these data, carried out with existing
commercial and academic biometric solutions. To our knowledge, this is the first
mobile biometric database introducing samples of biometric traits such as
thermal hand images and thermal face images. We hope that this contribution
will make a valuable addition to the already existing databases and enable new
experiments and studies in the field of mobile authentication. The MobiBits
database is made publicly available to the research community at no cost for
non-commercial purposes.
Comment: Submitted for the BIOSIG2018 conference on June 18, 2018. Accepted
for publication on July 20, 201