A 3D Face Modelling Approach for Pose-Invariant Face Recognition in a Human-Robot Environment
Face analysis techniques have become a crucial component of human-machine
interaction in the fields of assistive and humanoid robotics. However, the
variations in head-pose that arise naturally in these environments are still a
great challenge. In this paper, we present a real-time capable 3D face
modelling framework for 2D in-the-wild images that is applicable to robotics.
The fitting of the 3D Morphable Model is based exclusively on automatically
detected landmarks. After fitting, the face can be corrected in pose and
transformed back to a frontal 2D representation that is more suitable for face
recognition. We conduct face recognition experiments with non-frontal images
from the MUCT database and uncontrolled, in-the-wild images from the PaSC
database, the most challenging face recognition database to date, showing an
improved performance. Finally, we present our SCITOS G5 robot system, which
incorporates our framework as a means of image pre-processing for face
analysis.
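The fitting step described above rests on 2D-3D landmark correspondences. As a rough illustration (not the paper's actual 3D Morphable Model fitting, which also estimates shape coefficients), a scaled-orthographic camera relating 3D model landmarks to 2D detections can be recovered by linear least squares; the function name below is hypothetical:

```python
import numpy as np

def fit_scaled_orthographic(pts3d, pts2d):
    """Estimate a scaled-orthographic camera (2x3 projection P plus 2D
    translation t) from 2D-3D landmark correspondences by linear least
    squares, so that pts2d ~ pts3d @ P.T + t. A minimal stand-in for one
    sub-step of morphable-model fitting, not the paper's full pipeline."""
    n = pts3d.shape[0]
    A = np.hstack([pts3d, np.ones((n, 1))])          # (n, 4) homogeneous 3D points
    sol, *_ = np.linalg.lstsq(A, pts2d, rcond=None)  # (4, 2) stacked solution
    P = sol[:3].T                                    # 2x3 rotation-scale part
    t = sol[3]                                       # 2D translation
    return P, t

# Demo: recover a known synthetic camera from noiseless correspondences.
P_true = np.array([[2.0, 0.0, 0.0], [0.0, 2.0, 0.0]])
t_true = np.array([5.0, 7.0])
X = np.random.default_rng(1).random((10, 3))
P_hat, t_hat = fit_scaled_orthographic(X, X @ P_true.T + t_true)
```

With the pose in hand, the model can be rotated to a frontal view and the face texture re-rendered, which is the frontalisation idea the abstract describes.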
Side-View Face Recognition
Side-view face recognition is a challenging problem with many applications. Especially in real-life scenarios where the environment is uncontrolled, coping with pose variations up to side-view positions is an important task for face recognition. In this paper we discuss the use of side-view face recognition techniques in house safety applications. Our aim is to recognize people as they pass through a door and to estimate their location in the house. We compare the available databases appropriate for this task and review current methods for profile face recognition.
Unobtrusive and pervasive video-based eye-gaze tracking
Eye-gaze tracking has long been considered a desktop technology that finds its use inside the traditional office setting, where the operating conditions may be controlled. Nonetheless, recent advancements in mobile technology and a growing interest in capturing natural human behaviour have motivated an emerging interest in tracking eye movements within unconstrained real-life conditions, referred to as pervasive eye-gaze tracking. This critical review focuses on emerging passive and unobtrusive video-based eye-gaze tracking methods in the recent literature, with the aim of identifying the different research avenues being followed in response to the challenges of pervasive eye-gaze tracking. Different eye-gaze tracking approaches are discussed in order to bring out their strengths and weaknesses, and to identify any limitations, within the context of pervasive eye-gaze tracking, that have yet to be considered by the computer vision community.
STV-based Video Feature Processing for Action Recognition
In comparison to still-image-based processes, video features can provide rich and intuitive information about dynamic events occurring over a period of time, such as human actions, crowd behaviours, and other subject pattern changes. Although substantial progress has been made over the last decade in image processing, with successful applications in face matching and object recognition, video-based event detection remains one of the most difficult challenges in computer vision research due to its complex continuous or discrete input signals, arbitrary dynamic feature definitions, and often ambiguous analytical methods. In this paper, a Spatio-Temporal Volume (STV) and region intersection (RI) based 3D shape-matching method is proposed to facilitate the definition and recognition of human actions recorded in videos. The distinctive characteristics and the performance gain of the devised approach stem from a coefficient-factor-boosted 3D region intersection and matching mechanism developed in this research. The paper also reports an investigation into techniques for efficient STV data filtering to reduce the number of voxels (volumetric pixels) that need to be processed in each operational cycle of the implemented system. The encouraging features and improvements in operational performance registered in the experiments are discussed at the end of the paper.
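The core idea of STV matching can be sketched briefly: stack per-frame silhouette masks into a 3D volume and score two volumes by a weighted region intersection. The weighting below is an assumed Dice-style form for illustration; the paper's actual coefficient factors are not given in the abstract:

```python
import numpy as np

def stv_from_masks(masks):
    """Stack per-frame binary silhouette masks (each H x W) into a
    Spatio-Temporal Volume of shape (T, H, W)."""
    return np.stack(masks, axis=0).astype(bool)

def region_intersection_score(stv_a, stv_b, weight=2.0):
    """Coefficient-weighted region-intersection similarity between two STVs.
    `weight` boosts overlapping voxels relative to the union; this exact
    formula is an assumption standing in for the paper's boosted RI metric."""
    inter = np.logical_and(stv_a, stv_b).sum()
    union = np.logical_or(stv_a, stv_b).sum()
    if union == 0:
        return 0.0
    return (weight * inter) / (union + (weight - 1.0) * inter)

# Toy clips: 3 frames of 4x4 silhouettes along the main and anti diagonal.
a = stv_from_masks([np.eye(4, dtype=bool)] * 3)
b = stv_from_masks([np.eye(4, dtype=bool)] * 3)
c = stv_from_masks([np.fliplr(np.eye(4, dtype=bool))] * 3)
print(region_intersection_score(a, b))  # identical volumes -> 1.0
print(region_intersection_score(a, c))  # disjoint volumes  -> 0.0
```

Voxel filtering in the paper then serves to shrink these boolean volumes before the intersection is computed, which is where the operational speed-up comes from.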
Fast and Accurate Algorithm for Eye Localization for Gaze Tracking in Low Resolution Images
Iris centre localization in low-resolution visible images is a challenging
problem in the computer vision community due to noise, shadows, occlusions,
pose variations, eye blinks, etc. This paper proposes an efficient method for
determining the iris centre in low-resolution images in the visible spectrum.
Even low-cost consumer-grade webcams can be used for gaze tracking without any
additional hardware. A two-stage algorithm is proposed for iris centre
localization. The proposed method uses the geometrical characteristics of the
eye. In the first stage, a fast convolution-based approach is used to obtain a
coarse location of the iris centre (IC). The IC location is then refined in the
second stage using boundary tracing and ellipse fitting. The algorithm has been
evaluated on public databases such as BioID and Gi4E and is found to outperform
state-of-the-art methods.
Comment: 12 pages, 10 figures, IET Computer Vision, 201
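The first, convolution-based stage can be illustrated with a minimal sketch, assuming a dark circular-disc template whose response peaks over the iris; the template radius and function names are assumptions, and the paper's actual filter may differ:

```python
import numpy as np

def convolve2d_same(img, ker):
    """Zero-padded FFT convolution cropped to the input size ('same' mode),
    using only numpy."""
    H, W = img.shape
    kh, kw = ker.shape
    shape = (H + kh - 1, W + kw - 1)
    full = np.fft.irfft2(np.fft.rfft2(img, s=shape) * np.fft.rfft2(ker, s=shape),
                         s=shape)
    return full[kh // 2:kh // 2 + H, kw // 2:kw // 2 + W]

def coarse_iris_centre(gray, radius=10):
    """Stage 1 of a two-stage scheme: convolve the inverted image with a
    circular-disc template, so the response peaks over the dark, roughly
    circular iris. `radius` is an assumed nominal iris radius in pixels;
    stage 2 (boundary tracing and ellipse fitting) would refine the result."""
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    template = (xx**2 + yy**2 <= radius**2).astype(float)
    response = convolve2d_same(255.0 - gray, template)
    return np.unravel_index(np.argmax(response), response.shape)  # (row, col)

# Synthetic eye patch: a dark disc of radius 8 centred at (30, 40).
img = np.full((64, 64), 255.0)
yy, xx = np.mgrid[0:64, 0:64]
img[(yy - 30)**2 + (xx - 40)**2 <= 8**2] = 0.0
print(coarse_iris_centre(img, radius=8))  # ~ (30, 40)
```

Because the template is symmetric, convolution here behaves like correlation, and the argmax lands on the darkest circular blob, which is the desired coarse IC estimate.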
Palmprint Recognition in Uncontrolled and Uncooperative Environment
Online palmprint recognition and latent palmprint identification are two
branches of palmprint studies. The former uses middle-resolution images
collected by a digital camera in a well-controlled or contact-based environment
with user cooperation for commercial applications, while the latter uses
high-resolution latent palmprints collected at crime scenes for forensic
investigation. However, these two branches do not cover some palmprint images
which have potential for forensic investigation. Owing to the prevalence of
smartphones and consumer cameras, more and more evidence takes the form of
digital images taken in uncontrolled, uncooperative environments, e.g., child
pornographic images and terrorist images, where the criminals commonly hide or
cover their faces but their palms remain observable. To study palmprint
identification on images collected in uncontrolled and uncooperative
environments, a new palmprint database is established and an end-to-end deep
learning algorithm is proposed. The new database, named NTU Palmprints from the
Internet (NTU-PI-v1), contains 7881 images from 2035 palms collected from the
Internet. The proposed algorithm consists of an alignment network and a feature
extraction network, and is end-to-end trainable. It is compared with
state-of-the-art online palmprint recognition methods and evaluated on three
public contactless palmprint databases (IITD, CASIA, and PolyU) and two new
databases (NTU-PI-v1 and the NTU contactless palmprint database). The
experimental results show that the proposed algorithm outperforms the existing
palmprint recognition methods.
Comment: Accepted in the IEEE Transactions on Information Forensics and
Security
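The align-then-extract structure of such a pipeline can be sketched without any deep-learning machinery. In the sketch below, a fixed affine warp stands in for the alignment network (which would regress the warp parameters from the image) and a block-average descriptor stands in for the feature extraction network; all names are hypothetical:

```python
import numpy as np

def affine_align(img, theta):
    """Alignment stage: warp `img` with a 2x3 affine matrix `theta` mapping
    output (x, y) to source coordinates. In a learned system, a small CNN
    regresses theta; here it is supplied. Nearest-neighbour sampling."""
    H, W = img.shape
    out = np.zeros_like(img)
    ys, xs = np.mgrid[0:H, 0:W]
    coords = np.stack([xs.ravel(), ys.ravel(), np.ones(H * W)])  # (3, H*W)
    src = theta @ coords                                          # (2, H*W)
    sx = np.clip(np.round(src[0]).astype(int), 0, W - 1)
    sy = np.clip(np.round(src[1]).astype(int), 0, H - 1)
    out[ys.ravel(), xs.ravel()] = img[sy, sx]
    return out

def extract_features(img, grid=4):
    """Feature stage stand-in: block-average the aligned palm into a
    grid x grid descriptor, then L2-normalise (a real system uses a CNN)."""
    H, W = img.shape
    f = img[:H - H % grid, :W - W % grid]
    f = f.reshape(grid, H // grid, grid, W // grid).mean(axis=(1, 3)).ravel()
    n = np.linalg.norm(f)
    return f / n if n > 0 else f

# Identity warp followed by feature extraction on a random "palm" image.
identity = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
palm = np.random.default_rng(0).random((32, 32))
feat = extract_features(affine_align(palm, identity))
```

End-to-end training works because both stages are differentiable in the real system (the warp uses bilinear rather than nearest-neighbour sampling), so gradients from the matching loss flow back through the feature extractor into the alignment parameters.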
- …