31 research outputs found
Fast Bayesian People Detection
Template-based methods have been shown to be effective at tracking specific objects, but their large number of free parameters can make them slow to apply and hard to optimise globally. In this work, we propose a template-based method for tracking people with fixed cameras that automatically detects the number of people in a frame, is robust to occlusions, and runs at near real-time frame rates. We demonstrate the effectiveness of the method by comparing it to a state-of-the-art background segmentation algorithm and show a significant performance advantage.
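As a rough illustration of the template-evaluation step only, the sketch below slides a template over an image and scores each window with normalized cross-correlation. The paper's Bayesian formulation, person-count inference, and occlusion handling are not reproduced, and all names here are illustrative.

```python
import numpy as np

def match_template(image, template):
    # Score every window position with normalized cross-correlation;
    # the highest score marks the most likely template location.
    th, tw = template.shape
    t = (template - template.mean()) / (template.std() + 1e-9)
    best_score, best_pos = -np.inf, (0, 0)
    for i in range(image.shape[0] - th + 1):
        for j in range(image.shape[1] - tw + 1):
            patch = image[i:i + th, j:j + tw]
            p = (patch - patch.mean()) / (patch.std() + 1e-9)
            score = float(np.mean(p * t))
            if score > best_score:
                best_score, best_pos = score, (i, j)
    return best_pos, best_score

# Toy image with a bright 3x3 blob at row 6, column 9.
rng = np.random.default_rng(2)
img = 0.01 * rng.standard_normal((20, 20))
img[6:9, 9:12] += 1.0
pos, score = match_template(img, img[6:9, 9:12].copy())
```

A real detector would of course use learned person templates and evaluate many scales; exhaustive search like this is what makes naive template methods slow, which is the cost the paper's method is designed to avoid.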
Relating Conversational Expressiveness to Social Presence and Acceptance of an Assistive Social Robot
Exploring the relationship between social presence, conversational expressiveness, and robot acceptance, we set up an experiment with a robot in an eldercare institution, comparing a more social and a less social condition. Participants showed more expressiveness with the more social agent, and higher expressiveness scores correlated with higher social presence scores. Furthermore, social presence scores correlated with the intention to use the system in the near future. However, we found no correlation between conversational expressiveness and robot acceptance.
Appearance-based Concurrent Map Building and Localization
Appearance-based autonomous robot localization has some advantages over landmark-based localization, such as the simplicity of the processing applied to the sensor readings. Its main drawback is that it requires a map of images taken at known positions in the environment where the robot is expected to move. In this paper, we describe a concurrent map-building and localization (CML) system developed within the appearance-based robot localization paradigm. This allows us to retain the good features of appearance-based localization while avoiding its main inconvenience.
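The core CML loop can be caricatured as: localize against the current appearance map, and append the current view to the map when no stored image explains it. The toy `cml_step` below illustrates only that idea; its names, threshold, and odometry handling are assumptions, not the paper's actual system.

```python
import numpy as np

def cml_step(feature, odometry_pose, map_feats, map_poses, thresh=2.0):
    """Localize against the appearance map; if no stored image is close
    enough in feature space, add the view (at the odometry estimate)."""
    if map_feats:
        d = [np.linalg.norm(feature - f) for f in map_feats]
        k = int(np.argmin(d))
        if d[k] < thresh:
            return map_poses[k]          # recognized: localize on the map
    map_feats.append(feature)            # unexplained view: extend the map
    map_poses.append(odometry_pose)
    return odometry_pose

feats, poses = [], []
first = cml_step(np.ones(8), (0.0, 0.0), feats, poses)          # new place
revisit = cml_step(np.ones(8) + 0.1, (5.0, 5.0), feats, poses)  # recognized
```

On the second call the slightly perturbed view matches the stored image, so the robot localizes at the mapped pose instead of trusting the drifting odometry estimate, which is exactly the benefit of building the map and localizing concurrently.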
Vision-Based Localization for Mobile Platforms
In this paper, we describe methods to localize a mobile robot in an indoor environment from visual information. An appearance-based approach is adopted in which the environment is represented by a large set of images from which features are extracted. We extend the appearance-based approach with an active vision component, which fits well within our probabilistic framework. We also describe another extension in which depth information is used alongside intensity information. Our experiments show that a localization accuracy of less than 50 cm can be achieved even under unmodeled changes in the environment or in the lighting conditions.
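A minimal sketch of appearance-based probabilistic localization, assuming a hypothetical map of descriptor/pose pairs and a simple Gaussian observation model (neither the active-vision nor the depth extension is shown, and the descriptors are random stand-ins for real image features):

```python
import numpy as np

# Hypothetical appearance map: one feature vector per image taken at a
# known (x, y) position on a 5x5 grid.
rng = np.random.default_rng(1)
map_positions = np.array([[x, y] for x in range(5) for y in range(5)], float)
map_features = rng.standard_normal((25, 16))   # stand-in image descriptors

def localize(query, positions, features, sigma=0.5):
    """Estimate the pose as a similarity-weighted average over map images."""
    d2 = np.sum((features - query) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))        # Gaussian observation model
    w /= w.sum()
    return w @ positions                        # posterior-mean position

# A query view taken near map image 12 should localize close to it.
query = map_features[12] + 0.05 * rng.standard_normal(16)
est = localize(query, map_positions, map_features)
```

Representing the belief as weights over all map images, rather than a single nearest neighbour, is what lets such a framework stay robust when several views look alike or when the scene has changed since the map was recorded.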
Coordinating Principal Component Analyzers
Mixtures of Principal Component Analyzers can be used to model high-dimensional data that lie on or near a low-dimensional manifold. By linearly mapping the PCA subspaces to one global low-dimensional space, we obtain a 'global' low-dimensional coordinate system for the data. As shown by Roweis et al., ensuring consistent global low-dimensional coordinates for the data can be expressed as a penalized likelihood optimization problem. We show that a restricted form of the Mixtures of Probabilistic PCA model allows for a more efficient algorithm. Experimental results illustrate the viability of the method.
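To make the coordination idea concrete, the toy sketch below fits 1-component PCAs to two overlapping patches of a curved 1-D manifold and aligns their internal coordinates with a least-squares affine map. This is a simple stand-in for the penalized-likelihood objective, not the paper's algorithm; patch boundaries and the hard overlap are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Data near a 1-D manifold (a gently curved arc) embedded in 2-D.
t = np.linspace(0.0, 1.0, 200)
X = np.stack([t, 0.3 * t ** 2], axis=1) + 0.005 * rng.standard_normal((200, 2))

# Two overlapping local patches, each modelled by a 1-component PCA.
idx1, idx2 = np.arange(0, 130), np.arange(70, 200)

def local_pca_coords(P):
    mu = P.mean(axis=0)
    # Leading principal direction via SVD of the centred patch.
    _, _, Vt = np.linalg.svd(P - mu, full_matrices=False)
    return (P - mu) @ Vt[0]                     # 1-D internal coordinates

z1, z2 = local_pca_coords(X[idx1]), local_pca_coords(X[idx2])

# The points seen by both analyzers (global indices 70..129).
o1 = z1[70:130]          # their coordinates under analyzer 1
o2 = z2[0:60]            # the same points under analyzer 2

# Coordinate the analyzers: find the affine map a*z + b sending
# patch-2 coordinates onto patch-1 coordinates over the overlap.
A = np.stack([o2, np.ones_like(o2)], axis=1)
(a, b), *_ = np.linalg.lstsq(A, o1, rcond=None)

residual = float(np.max(np.abs(a * o2 + b - o1)))
```

Because the manifold is locally near-linear, a single affine map per analyzer suffices here; the penalized-likelihood formulation generalizes this to many analyzers at once, with soft responsibilities instead of a hard overlap.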