Evaluation of different chrominance models in the detection and reconstruction of faces and hands using the growing neural gas network
Physical traits such as the shape of the hand and face can be used for human recognition and identification in video surveillance systems, in biometric authentication smart card systems, and in personal health care. However, the accuracy of such systems suffers from illumination changes, unpredictability, and variability in appearance (e.g. occluded faces or hands, cluttered backgrounds, etc.). This work evaluates different statistical and chrominance models in environments with increasingly cluttered backgrounds, where changes in lighting are common and no occlusions are applied, in order to obtain a reliable neural network reconstruction of faces and hands without taking into account the structural and temporal kinematics of the hands. First, a statistical model is used for skin-colour segmentation to roughly locate hands and faces. Then a neural network is used to reconstruct the hands and faces in 3D. For the filtering and the reconstruction we use the growing neural gas algorithm, which can preserve the topology of an object without restarting the learning process. Experiments were conducted on our own database, on four benchmark databases (Stirling's, Alicante, Essex, and Stegmann's), and on normal 2D videos of deaf individuals that are freely available in the BSL SignBank dataset. Results demonstrate the validity of our system for solving problems of face and hand segmentation and reconstruction under different environmental conditions.
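The skin-colour segmentation step described above can be sketched as a chrominance threshold in YCbCr space. The Cb/Cr ranges below are common defaults from the skin-detection literature, not the statistical model fitted in the paper:

```python
import numpy as np

def skin_mask(image_rgb, cb_range=(77, 127), cr_range=(133, 173)):
    """Rough skin-colour segmentation by thresholding the Cb/Cr
    chrominance channels. The default ranges are illustrative
    literature values, not the paper's fitted thresholds."""
    rgb = image_rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # RGB -> Cb/Cr (ITU-R BT.601, full-range approximation)
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))
```

The resulting binary mask gives the rough hand/face regions that the growing neural gas then filters and reconstructs.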
Modelling and tracking objects with a topology preserving self-organising neural network
Human gestures form an integral part in our everyday communication. We use
gestures not only to reinforce meaning, but also to describe the shape of objects,
to play games, and to communicate in noisy environments. Vision systems that
exploit gestures are often limited by inaccuracies inherent in handcrafted models.
These models are generated from a collection of training examples which requires
segmentation and alignment. Segmentation in gesture recognition typically involves manual intervention, a time-consuming process that is feasible only for a
limited set of gestures. Ideally gesture models should be automatically acquired
via a learning scheme that enables the acquisition of detailed behavioural knowledge only from topological and temporal observation.
The research described in this thesis is motivated by a desire to provide a framework for the unsupervised acquisition and tracking of gesture models. In any learning framework, the initialisation of the shapes is crucial; hence, it is beneficial to have a robust, noise-tolerant model that can automatically establish correspondences across the set of shapes. In the first part of this thesis, we develop a framework
for building statistical 2D shape models by extracting, labelling and corresponding
landmark points using only topological relations derived from competitive Hebbian learning. The method is based on the assumption that correspondences can
be addressed as an unsupervised classification problem where landmark points
are the cluster centres (nodes) in a high-dimensional vector space. The approach
is novel in that the network can be used in cases where the topological structure of
the input pattern is not known a priori; thus, no topology of fixed dimensionality is imposed on the network.
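The competitive Hebbian learning rule from which the topological relations above are derived can be illustrated very compactly: for every input sample, an edge is created between its two closest nodes. This is a minimal illustration of the rule, not the thesis implementation:

```python
import numpy as np

def competitive_hebbian_edges(nodes, samples):
    """Competitive Hebbian learning: for each input sample, connect the
    two nearest nodes. The edge set approximates the induced Delaunay
    topology of the input distribution, with no fixed dimensionality
    imposed in advance."""
    edges = set()
    for x in samples:
        d = np.linalg.norm(nodes - x, axis=1)
        i, j = np.argsort(d)[:2]           # two closest nodes win
        edges.add((min(i, j), max(i, j)))  # undirected edge
    return edges
```

Because edges arise only from observed inputs, the learned graph reflects the data's topology rather than a predefined lattice.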
In the second part, we propose an approach to minimise user intervention in the adaptation process, which otherwise requires specifying a priori the number of nodes needed to represent an object, by utilising an automatic criterion for maximum
node growth. Furthermore, this model is used to represent motion in image sequences by initialising a suitable segmentation that separates the object of interest from the background. The segmentation system assumes some tolerance to illumination changes, input images from ordinary cameras and webcams, low to moderately cluttered backgrounds (extremely cluttered backgrounds are avoided), and objects at close range to the camera.
In the final part, we extend the framework for the automatic modelling and
unsupervised tracking of 2D hand gestures in a sequence of k frames. The aim
is to use the tracked frames as training examples in order to build the model and
maintain correspondences. To do that, we add an active step to the Growing Neural Gas (GNG) network, which we call Active Growing Neural Gas (A-GNG), that takes into consideration not only the geometrical position of the nodes, but also the underlying local feature structure of the image and the distance vector between successive images. The quality of our model is measured through the calculation of the topographic product, a topology-preserving measure which quantifies neighbourhood preservation.
In our system we have applied specific restrictions on the velocity and the appearance of the gestures to reduce the difficulty of the motion analysis in the gesture representation. The proposed framework has been validated on applications related to sign language. The work has great potential in Virtual Reality (VR) applications, where the learning and representation of gestures become natural without the need for expensive wearable cable sensors.
Fast 2D/3D object representation with growing neural gas
This work presents the design of a real-time system to model visual objects with the use of self-organising networks. The architecture of the system addresses multiple computer vision tasks such as image segmentation, optimal parameter estimation and object representation. We first develop a framework for building non-rigid shapes using the growth mechanism of the self-organising maps, and then we define an optimal number of nodes, without overfitting or underfitting the network, based on information-theoretic considerations. We present experimental results for hands and faces, and we quantitatively evaluate the matching capabilities of the proposed method with the topographic product. The proposed method is easily extensible to 3D objects, as it offers similar features for efficient mesh reconstruction.
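The growth mechanism referred to above follows the standard growing neural gas algorithm. The sketch below uses typical textbook hyper-parameters rather than the information-theoretic node criterion of the paper, and omits node deletion for brevity:

```python
import numpy as np

def grow_neural_gas(data, max_nodes=30, lam=100, eps_b=0.2, eps_n=0.006,
                    a_max=50, alpha=0.5, decay=0.995, n_iter=3000, seed=0):
    """Minimal growing neural gas fit to a 2D point set.

    Returns node positions and the learned edge set (the preserved
    topology). Hyper-parameter defaults are illustrative, not tuned.
    """
    rng = np.random.default_rng(seed)
    nodes = [data[rng.integers(len(data))].astype(float) for _ in range(2)]
    error = [0.0, 0.0]
    edges = {}  # undirected edge (i, j), i < j -> age

    def key(i, j):
        return (min(i, j), max(i, j))

    for t in range(1, n_iter + 1):
        x = data[rng.integers(len(data))]
        dists = [float(np.sum((x - w) ** 2)) for w in nodes]
        s1, s2 = np.argsort(dists)[:2]
        error[s1] += dists[s1]
        # Adapt the winner and its topological neighbours towards the sample
        nodes[s1] += eps_b * (x - nodes[s1])
        for (i, j) in list(edges):
            if s1 in (i, j):
                n = j if i == s1 else i
                nodes[n] += eps_n * (x - nodes[n])
                edges[(i, j)] += 1
        edges[key(s1, s2)] = 0  # create or refresh the winner-pair edge
        edges = {e: a for e, a in edges.items() if a <= a_max}  # drop stale edges
        # Periodically insert a node where the accumulated error is largest,
        # so learning continues without restarting the network
        if t % lam == 0 and len(nodes) < max_nodes:
            q = int(np.argmax(error))
            nbrs = [j if i == q else i for (i, j) in edges if q in (i, j)]
            if nbrs:
                f = int(max(nbrs, key=lambda n: error[n]))
                nodes.append(0.5 * (nodes[q] + nodes[f]))
                error[q] *= alpha
                error[f] *= alpha
                error.append(error[q])
                r = len(nodes) - 1
                edges.pop(key(q, f), None)
                edges[key(q, r)] = 0
                edges[key(f, r)] = 0
        error = [e * decay for e in error]
    return np.array(nodes), set(edges)
```

Because new nodes are inserted between existing neighbours, the network extends its representation incrementally, which is what makes the approach suitable for non-rigid shapes and, with 3D inputs, mesh reconstruction.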
Unsupervised Understanding of Location and Illumination Changes in Egocentric Videos
Wearable cameras stand out as one of the most promising devices for the
upcoming years, and as a consequence, the demand for computer algorithms to automatically understand the videos recorded with them is increasing quickly. Automatic understanding of these videos is not an easy task, and their mobile nature implies important challenges, such as changing light conditions and unrestricted recording locations. This paper proposes an
unsupervised strategy based on global features and manifold learning to endow
wearable cameras with contextual information regarding the light conditions and
the location captured. Results show that non-linear manifold methods can
capture contextual patterns from global features without requiring large computational resources. The proposed strategy is used, as an application case,
as a switching mechanism to improve the hand-detection problem in egocentric
videos.
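The general idea (global features plus non-linear manifold learning) can be sketched as follows. The choice of an RGB histogram as the global feature, Isomap as the manifold method, and k-means for grouping contexts are all assumptions for illustration, not the authors' exact pipeline:

```python
import numpy as np
from sklearn.manifold import Isomap
from sklearn.cluster import KMeans

def contextual_labels(frames, n_bins=8, n_contexts=3):
    """Assign each frame an illumination/location context label.

    Sketch only: global feature = normalised RGB histogram,
    manifold method = Isomap, grouping = k-means. All three are
    illustrative substitutes for the paper's components.
    """
    feats = []
    for f in frames:
        # Global colour histogram as a cheap per-frame descriptor
        hist, _ = np.histogramdd(np.asarray(f).reshape(-1, 3),
                                 bins=n_bins, range=[(0, 256)] * 3)
        feats.append(hist.ravel() / hist.sum())
    # Non-linear embedding of the frames' global features
    emb = Isomap(n_neighbors=5, n_components=2).fit_transform(np.array(feats))
    # Discrete context labels usable as a switching signal downstream
    return KMeans(n_clusters=n_contexts, n_init=10,
                  random_state=0).fit_predict(emb)
```

Such labels can then drive a switching mechanism, e.g. selecting a hand detector trained for the matching lighting condition.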
Robust modelling and tracking of non-rigid objects using Active-GNG
This paper presents a robust approach to non-rigid modelling and tracking. The contour of the object is described by an active growing neural gas (A-GNG) network which allows the model to re-deform locally. The approach is novel in that the nodes of the network are described by their geometrical position, the underlying local feature structure of the image, and the distance vector between the model image and any successive images. A second contribution is the correspondence of the nodes, which is measured through the calculation of the topographic product, a topology-preserving objective function that quantifies the neighbourhood preservation before and after the mapping. As a result, we can achieve the automatic modelling and tracking of objects without using any annotated training sets. Experimental results have shown the superiority of our proposed method over the original growing neural gas (GNG) network.
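The topographic product used as the correspondence measure above can be computed from pairwise distances in the input (weight) space and the output (lattice) space. The sketch below follows Bauer and Pawelzik's definition, where values near zero indicate good neighbourhood preservation; it assumes distinct node positions:

```python
import numpy as np

def topographic_product(weights, grid):
    """Topographic product of a mapping between input-space node
    weights (N, d) and output-space node positions (N, m).

    For each node j and neighbourhood size k, it compares the k-th
    nearest neighbours as ranked in each space; the log-averaged
    ratio is near 0 when the mapping preserves neighbourhoods.
    """
    n = len(weights)
    dv = np.linalg.norm(weights[:, None] - weights[None], axis=-1)
    da = np.linalg.norm(grid[:, None] - grid[None], axis=-1)
    # k-th nearest neighbours of each node in each space (self excluded)
    nn_v = np.argsort(dv, axis=1)[:, 1:]
    nn_a = np.argsort(da, axis=1)[:, 1:]
    total = 0.0
    for j in range(n):
        q = ((dv[j, nn_a[j]] / dv[j, nn_v[j]]) *
             (da[j, nn_a[j]] / da[j, nn_v[j]]))
        p3 = np.cumprod(q) ** (1.0 / (2 * np.arange(1, n)))
        total += np.sum(np.log(p3))
    return total / (n * (n - 1))
```

For an identity mapping (weights equal to lattice positions) the measure is exactly zero, which is why deviations from zero quantify topology distortion before and after the mapping.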
Airborne chemical sensing with mobile robots
Airborne chemical sensing with mobile robots has been an active research area since the beginning of the 1990s. This article presents a review of research work in this field, including gas distribution mapping, trail guidance, and the different subtasks of gas source localisation. Due to the difficulty of modelling gas distribution in a real-world environment with currently available simulation techniques, we focus largely on experimental work and do not consider publications that are purely based on simulations.