
    Unsupervised Understanding of Location and Illumination Changes in Egocentric Videos

    Wearable cameras stand out as one of the most promising devices for the coming years, and as a consequence the demand for computer algorithms that automatically understand the videos they record is growing quickly. Automatic understanding of these videos is not an easy task, and their mobile nature implies important challenges, such as changing light conditions and unrestricted recording locations. This paper proposes an unsupervised strategy based on global features and manifold learning to endow wearable cameras with contextual information regarding the light conditions and the location captured. Results show that non-linear manifold methods can capture contextual patterns from global features without requiring large computational resources. As an application case, the proposed strategy is used as a switching mechanism to improve hand detection in egocentric videos. Comment: Submitted for publication
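    The switching idea described above — summarise each frame with a cheap global feature and use the inferred context to select a specialised detector — can be sketched loosely as follows. The coarse colour histogram and nearest-centroid assignment below are illustrative stand-ins for the paper's global features and manifold-learning stage; all names and parameters are assumptions.

```python
# Sketch of context switching from global frame features (illustrative, not the
# paper's method): a coarse colour histogram summarises each frame, and the
# nearest context centroid selects which specialised detector to run.

def global_feature(frame, bins=4):
    """frame: flat list of (r, g, b) pixels -> normalised per-channel histogram."""
    hist = [0] * (3 * bins)
    for r, g, b in frame:
        for c, v in enumerate((r, g, b)):
            hist[c * bins + min(v * bins // 256, bins - 1)] += 1
    total = len(frame) or 1
    return [h / total for h in hist]

def nearest_context(feature, centroids):
    """Assign a frame to the closest context centroid (the switching step)."""
    return min(range(len(centroids)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(feature, centroids[i])))
```

    In use, one centroid per context (e.g. indoor/outdoor) would be learned offline, and each incoming frame would be routed to the hand detector trained for its context.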

    3D Hand Pose Estimation with Neural Networks

    We propose the design of a real-time system to recognise and interpret hand gestures. The acquisition devices are low-cost 3D sensors. The 3D hand pose is segmented, characterised and tracked using a growing neural gas (GNG) structure. The system's capacity to obtain information with a high degree of freedom allows the encoding of many gestures and very accurate motion capture. The use of hand pose models, combined with the motion information provided by GNG, makes it possible to deal with the problem of hand motion representation. A natural interface applied to a virtual mirror-writing system and to a hand pose estimation system is designed to demonstrate the validity of the system
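    The GNG structure used above to characterise the hand can be sketched compactly. This is a minimal version of the standard growing neural gas algorithm (Fritzke), not the authors' implementation; all hyperparameter values are illustrative.

```python
# Minimal growing neural gas (GNG) sketch: nodes adapt toward samples, edges
# are created by competitive Hebbian learning, and new nodes are inserted where
# accumulated error is largest. Simplified (no isolated-node removal).
import random

def gng_fit(data, max_nodes=20, lam=50, eps_b=0.2, eps_n=0.006,
            alpha=0.5, beta=0.995, max_age=30, iters=500, seed=0):
    rng = random.Random(seed)
    nodes = [list(rng.choice(data)), list(rng.choice(data))]  # two random start nodes
    error = [0.0, 0.0]
    edges = {}  # frozenset({i, j}) -> age

    def dist2(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b))

    for t in range(1, iters + 1):
        x = rng.choice(data)
        # find the two nodes nearest to the sample
        s1, s2 = sorted(range(len(nodes)), key=lambda i: dist2(x, nodes[i]))[:2]
        error[s1] += dist2(x, nodes[s1])
        # move the winner and its topological neighbours toward the sample
        nodes[s1] = [p + eps_b * (xi - p) for p, xi in zip(nodes[s1], x)]
        for e in list(edges):
            if s1 in e:
                n = next(i for i in e if i != s1)
                nodes[n] = [p + eps_n * (xi - p) for p, xi in zip(nodes[n], x)]
                edges[e] += 1           # age every edge emanating from the winner
        edges[frozenset((s1, s2))] = 0  # competitive Hebbian edge, age reset
        for e in [e for e, age in edges.items() if age > max_age]:
            del edges[e]                # drop stale edges
        # every lam steps, insert a node between the largest-error node q
        # and its largest-error neighbour f
        if t % lam == 0 and len(nodes) < max_nodes:
            q = max(range(len(nodes)), key=lambda i: error[i])
            nbrs = [next(i for i in e if i != q) for e in edges if q in e]
            if nbrs:
                f = max(nbrs, key=lambda i: error[i])
                r = len(nodes)
                nodes.append([(a + b) / 2 for a, b in zip(nodes[q], nodes[f])])
                error[q] *= alpha
                error[f] *= alpha
                error.append(error[q])
                edges.pop(frozenset((q, f)), None)
                edges[frozenset((q, r))] = 0
                edges[frozenset((f, r))] = 0
        error = [err * beta for err in error]  # global error decay
    return nodes, edges
```

    Fitted to the 3D points of a hand surface, the resulting nodes and edges form the mesh-like representation the abstract refers to.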

    Evaluation of different chrominance models in the detection and reconstruction of faces and hands using the growing neural gas network

    Physical traits such as the shape of the hand and face can be used for human recognition and identification in video surveillance systems and in biometric authentication smart card systems, as well as in personal health care. However, the accuracy of such systems suffers from illumination changes, unpredictability, and variability in appearance (e.g. occluded faces or hands, cluttered backgrounds, etc.). This work evaluates different statistical and chrominance models in environments with increasingly cluttered backgrounds, where changes in lighting are common and no occlusions are applied, in order to obtain a reliable neural network reconstruction of faces and hands without taking into account the structural and temporal kinematics of the hands. First, a statistical model is used for skin-colour segmentation to roughly locate hands and faces. Then a neural network is used to reconstruct the hands and faces in 3D. For the filtering and the reconstruction we have used the growing neural gas algorithm, which can preserve the topology of an object without restarting the learning process. Experiments were conducted on our own database, on four benchmark databases (Stirling's, Alicante, Essex, and Stegmann's), and on ordinary 2D videos of deaf individuals that are freely available in the BSL SignBank dataset. Results demonstrate the validity of our system for face and hand segmentation and reconstruction under different environmental conditions
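    The kind of chrominance model evaluated above can be sketched as follows: pixels are mapped to the YCbCr space and classified as skin when their chrominance falls inside a fixed Cb/Cr range. The thresholds below are common literature values, not the paper's fitted model.

```python
# Skin-colour segmentation sketch with a chrominance model (illustrative
# thresholds, not the paper's fitted values).

def rgb_to_ycbcr(r, g, b):
    """ITU-R BT.601 full-range RGB -> YCbCr conversion."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def is_skin(r, g, b, cb_range=(77, 127), cr_range=(133, 173)):
    """Luminance Y is deliberately ignored, which is what makes a
    chrominance model comparatively tolerant to lighting changes."""
    _, cb, cr = rgb_to_ycbcr(r, g, b)
    return cb_range[0] <= cb <= cb_range[1] and cr_range[0] <= cr <= cr_range[1]

def skin_mask(image):
    """image: list of rows of (r, g, b) tuples -> binary mask."""
    return [[1 if is_skin(*px) else 0 for px in row] for row in image]
```

    The resulting binary mask gives the rough hand/face locations that the neural network reconstruction stage then refines.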

    Fast 2D/3D object representation with growing neural gas

    This work presents the design of a real-time system to model visual objects with the use of self-organising networks. The architecture of the system addresses multiple computer vision tasks such as image segmentation, optimal parameter estimation and object representation. We first develop a framework for building non-rigid shapes using the growth mechanism of self-organising maps, and then determine an optimal number of nodes, without overfitting or underfitting the network, based on information-theoretic considerations. We present experimental results for hands and faces, and we quantitatively evaluate the matching capabilities of the proposed method with the topographic product. The proposed method is easily extensible to 3D objects, as it offers similar features for efficient mesh reconstruction
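    The topographic product used above as the matching measure (Bauer and Pawelzik, 1992) compares neighbourhood orderings in the input space against those in the output space. A sketch, assuming explicit output-space coordinates per node (for a GNG graph one would substitute graph distances):

```python
# Topographic product sketch: 0 indicates perfect topology preservation;
# deviations from 0 indicate a neighbourhood mismatch between spaces.
import math

def _ranked_neighbours(points, j, dist):
    idx = [i for i in range(len(points)) if i != j]
    idx.sort(key=lambda i: dist(points[j], points[i]))
    return idx

def topographic_product(weights, lattice):
    """weights: node positions in input space; lattice: positions in output space."""
    def d(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b))) or 1e-12

    n = len(weights)
    total = 0.0
    for j in range(n):
        nV = _ranked_neighbours(weights, j, d)   # k-th neighbour in input space
        nA = _ranked_neighbours(lattice, j, d)   # k-th neighbour in output space
        for k in range(1, n):
            p3 = 1.0
            for l in range(1, k + 1):
                q1 = d(weights[j], weights[nA[l - 1]]) / d(weights[j], weights[nV[l - 1]])
                q2 = d(lattice[j], lattice[nA[l - 1]]) / d(lattice[j], lattice[nV[l - 1]])
                p3 *= (q1 * q2) ** (1.0 / (2 * k))
            total += math.log(p3)
    return total / (n * (n - 1))
```

    A map whose output positions are a uniform scaling of the input positions scores exactly zero; distorting the output ordering moves the score away from zero.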

    Modelling and tracking objects with a topology preserving self-organising neural network

    Human gestures form an integral part of our everyday communication. We use gestures not only to reinforce meaning, but also to describe the shape of objects, to play games, and to communicate in noisy environments. Vision systems that exploit gestures are often limited by inaccuracies inherent in handcrafted models. These models are generated from a collection of training examples which require segmentation and alignment. Segmentation in gesture recognition typically involves manual intervention, a time-consuming process that is feasible only for a limited set of gestures. Ideally, gesture models should be acquired automatically via a learning scheme that enables the acquisition of detailed behavioural knowledge from topological and temporal observation alone. The research described in this thesis is motivated by a desire to provide a framework for the unsupervised acquisition and tracking of gesture models. In any learning framework, the initialisation of the shapes is crucial; hence it would be beneficial to have a robust model, not prone to noise, that can automatically establish correspondences across the set of shapes. In the first part of this thesis, we develop a framework for building statistical 2D shape models by extracting, labelling and corresponding landmark points using only topological relations derived from competitive Hebbian learning. The method is based on the assumption that correspondence can be addressed as an unsupervised classification problem in which landmark points are the cluster centres (nodes) in a high-dimensional vector space. The approach is novel in that the network can be used in cases where the topological structure of the input pattern is not known a priori, so no topology of fixed dimensionality is imposed onto the network.
In the second part, we propose an approach to minimise user intervention in the adaptation process, which otherwise requires specifying a priori the number of nodes needed to represent an object, by utilising an automatic criterion for maximum node growth. Furthermore, this model is used to represent motion in image sequences by initialising a suitable segmentation that separates the object of interest from the background. The segmentation system assumes some tolerance to illumination changes, input images from ordinary cameras and webcams, low-to-medium background clutter (avoiding extremely cluttered backgrounds), and objects at close range from the camera. In the final part, we extend the framework to the automatic modelling and unsupervised tracking of 2D hand gestures in a sequence of k frames. The aim is to use the tracked frames as training examples in order to build the model and maintain correspondences. To do so we add an active step to the Growing Neural Gas (GNG) network, which we call Active Growing Neural Gas (A-GNG), that takes into consideration not only the geometrical position of the nodes, but also the underlying local feature structure of the image and the distance vector between successive images. The quality of our model is measured through the topographic product, our topology-preserving measure, which quantifies neighbourhood preservation. In our system we apply specific restrictions on the velocity and appearance of the gestures to simplify the motion analysis in the gesture representation. The proposed framework has been validated on applications related to sign language. The work has great potential in Virtual Reality (VR) applications, where the learning and representation of gestures become natural without the need for expensive wearable sensors
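    The competitive Hebbian learning rule the thesis uses to derive topological relations between landmark points can be stated in a few lines: for every input sample, an edge is created between the two nearest cluster centres. In the sketch below node positions are held fixed for clarity, whereas in GNG they adapt simultaneously.

```python
# Competitive Hebbian learning sketch: connect the two nearest nodes to each
# input sample. The induced edge set approximates the topology of the data.

def competitive_hebbian_edges(nodes, samples):
    edges = set()
    for x in samples:
        order = sorted(range(len(nodes)),
                       key=lambda i: sum((a - b) ** 2 for a, b in zip(nodes[i], x)))
        edges.add(frozenset(order[:2]))
    return edges
```

    Because the edges emerge from the samples themselves, no topology of fixed dimensionality has to be imposed on the network in advance, which is the property the thesis exploits.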

    Performance Evaluation of a Statistical and a Neural Network Model for Nonrigid Shape-Based Registration

    Shape-based registration methods are frequently encountered in the domains of computer vision, image processing and medical imaging. The registration problem is to find an optimal transformation/mapping between sets of rigid or non-rigid objects and to solve automatically for correspondences. In this paper we present a comparison of two different probabilistic methods, entropy and the growing neural gas network (GNG), as general feature-based registration algorithms. With entropy, shape modelling is performed by connecting the point sets with the highest probability of curvature information, while with GNG the point sets are connected using nearest-neighbour relationships derived from competitive Hebbian learning. In order to compare performance we use different levels of shape deformation, starting with a simple shape (2D MRI brain ventricles) and moving to more complicated shapes such as hands. Quantitative and qualitative results are given for both sets
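    The correspondence step underlying feature-based registration can be sketched in its simplest form: match each point of one shape to its nearest neighbour on the other. This is a generic baseline, not either of the two compared methods.

```python
# Nearest-neighbour correspondence sketch for point-set registration:
# every source point is paired with its closest destination point.

def nearest_neighbour_correspondences(src, dst):
    pairs = []
    for i, p in enumerate(src):
        j = min(range(len(dst)),
                key=lambda k: sum((a - b) ** 2 for a, b in zip(p, dst[k])))
        pairs.append((i, j))
    return pairs
```

    Both the entropy-based and the GNG-based methods can be seen as refinements of this matching criterion that add curvature probability or learned topology, respectively.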

    Growing Neural Gas with Different Topologies for 3D Space Perception

    Three-dimensional space perception is one of the most important capabilities for an autonomous mobile robot operating in an unknown environment, since the robot needs to detect a target object and estimate its 3D pose in order to perform given tasks efficiently. After a 3D point cloud is measured by an RGB-D camera, the robot must reconstruct a structure from the point cloud with colour information according to the given tasks, since the point cloud is unstructured data. For reconstructing the unstructured point cloud, methods based on growing neural gas (GNG) have been utilised in many research studies, since GNG can learn the data distribution of the point cloud appropriately. However, conventional GNG-based methods have unsolved problems concerning scalability and multi-viewpoint clustering. In this paper, therefore, we propose growing neural gas with different topologies (GNG-DT), a new topological structure learning method for solving these problems. GNG-DT maintains multiple topologies, one per property, whereas the conventional GNG method has a single topology over the input vector. In addition, the distance measure used for winner-node selection uses only position information, preserving the environmental space of the point cloud. We show several experimental results for the proposed method using simulation and RGB-D datasets measured by Kinect. In these experiments, we verified that the proposed method outperforms the other methods in most cases in terms of quantization and clustering errors. Finally, we summarise the proposed method and discuss future directions for this research
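    The winner-selection idea described above can be sketched directly: each node carries several properties (position, colour, etc.), but the winners are chosen by position alone, so spatial structure drives the learning while separate per-property topologies can still be maintained. Field names below are illustrative.

```python
# GNG-DT-style winner selection sketch: nodes hold multiple properties, but
# only 'pos' enters the distance used to pick the two winners (s1, s2).

def select_winners(nodes, sample):
    """nodes: list of dicts with 'pos' (and other properties such as 'color');
    sample: dict with at least 'pos'. Returns the two position-nearest nodes."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    order = sorted(range(len(nodes)), key=lambda i: d2(nodes[i]['pos'], sample['pos']))
    return order[0], order[1]
```

    Note that a sample whose colour matches a distant node still selects the spatially nearest nodes, which is what preserves the environmental space of the point cloud.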

    Gesture-Based Robot Path Shaping

    For many individuals, aging is frequently associated with diminished mobility and dexterity. Such decreases may be accompanied by a loss of independence, increased burden to caregivers, or institutionalization. It is foreseen that the ability to retain independence and quality of life as one ages will increasingly depend on environmental sensing and robotics which facilitate aging in place. The development of ubiquitous sensing strategies in the home underpins the promise of adaptive services, assistive robotics, and architectural design which would support a person's ability to live independently as they age. Instrumentation (sensors and processing) which is capable of recognizing the actions and behavioral patterns of an individual is key to the effective component design in these areas. Recognition of user activity and the inference of user intention may be used to inform the action plans of support systems and service robotics within the environment. Automated activity recognition involves detection of events in a sensor data stream, conversion to a compact format, and classification as one of a known set of actions. Once classified, an action may be used to elicit a specific response from those systems designed to provide support to the user. It is this response that is the ultimate use of recognized activity. Hence, the activity may be considered as a command to the system. Extending this concept, a set of distinct activities in the form of hand and arm gestures may form the basis of a command interface for human-robot interaction. A gesture-based interface of this type promises an intuitive method for accessing computing and other assistive resources so as to promote rapid adoption by elderly, impaired, or otherwise unskilled users. This thesis includes a thorough survey of relevant work in the area of machine learning for activity and gesture recognition. Previous approaches are compared for their relative benefits and limitations.
A novel approach is presented which utilizes user-generated feedback to rate the desirability of a robotic response to gesture. Poorly rated responses are altered so as to elicit improved ratings on subsequent observations. In this way, responses are honed toward increasing effectiveness. A clustering method based on the Growing Neural Gas (GNG) algorithm is used to create a topological map of reference nodes representing input gesture types. It is shown that learning of desired responses to gesture may be accelerated by exploiting well-rewarded actions associated with reference nodes in a local neighborhood of the growing neural gas topology. Significant variation in the user's performance of gestures is interpreted as a new gesture for which the system must learn a desired response. A method for allowing the system to learn new gestures while retaining past training is also proposed and shown to be effective
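    The neighbourhood-sharing idea above can be sketched as follows: a user's rating updates not only the reference node that matched the gesture but also, with a reduced weight, that node's neighbours in the GNG topology, so similar gestures converge on good responses faster. All parameter values and response names are illustrative.

```python
# Reward-sharing sketch over a GNG topology: ratings for (node, response)
# pairs are updated for the matched node and, more weakly, its neighbours.

def update_ratings(ratings, edges, node, response, reward, lr=0.5, spread=0.2):
    """ratings: dict (node, response) -> value; edges: set of frozenset pairs."""
    key = (node, response)
    ratings[key] = ratings.get(key, 0.0) + lr * (reward - ratings.get(key, 0.0))
    for e in edges:
        if node in e:
            n = next(i for i in e if i != node)
            nk = (n, response)
            ratings[nk] = ratings.get(nk, 0.0) + spread * (reward - ratings.get(nk, 0.0))
    return ratings

def best_response(ratings, node, responses):
    """Pick the currently best-rated response for a reference node."""
    return max(responses, key=lambda r: ratings.get((node, r), 0.0))
```

    A neighbour that has never been matched directly thus already starts with a partially learned preference, which is the acceleration effect the thesis reports.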