Personalizing a Service Robot by Learning Human Habits from Behavioral Footprints
For a domestic personal robot, personalized services are as important as predesigned tasks, because the robot needs to adjust the home state according to the operator's habits. An operator's habits are composed of cues, behaviors, and rewards. This article introduces behavioral footprints to describe the operator's behaviors in a house, and applies inverse reinforcement learning to extract the operator's habits, represented by a reward function. We implemented the proposed approach with a mobile robot on indoor temperature adjustment, and compared it with a baseline method that recorded all of the operator's cues and behaviors. The results show that the proposed approach allows the robot to recover the operator's habits accurately and adjust the environment state accordingly.
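To make the idea concrete, the following minimal sketch (not the authors' implementation) learns a linear reward over a toy set of discretized home states from observed behavioral footprints using maximum-entropy inverse reinforcement learning; the MDP, the one-hot features, and the example trajectories are all hypothetical placeholders.

import numpy as np
from scipy.special import logsumexp

# Toy setup: a tiny MDP over discretized "home states" (temperature bands),
# with observed operator footprints given as (state, action) pairs.
n_states, n_actions, gamma, horizon = 5, 2, 0.9, 4

# Deterministic toy dynamics: action 0 lowers, action 1 raises the band.
P = np.zeros((n_actions, n_states, n_states))
for s in range(n_states):
    P[0, s, max(s - 1, 0)] = 1.0
    P[1, s, min(s + 1, n_states - 1)] = 1.0

features = np.eye(n_states)      # one-hot state features; reward = features @ theta
theta = np.zeros(n_states)       # linear reward weights to be learned

# Hypothetical behavioral footprints: the operator steers towards band 3.
trajectories = [[(0, 1), (1, 1), (2, 1), (3, 1)],
                [(4, 0), (3, 1), (3, 1), (3, 1)]]
p0 = np.zeros(n_states)
for traj in trajectories:
    p0[traj[0][0]] += 1.0 / len(trajectories)   # empirical start distribution

# Empirical feature expectation of the demonstrated footprints.
f_emp = np.mean([sum(features[s] for s, _ in traj) for traj in trajectories], axis=0)

def soft_policy(reward, iters=100):
    """Soft value iteration; returns a stochastic policy pi[s, a]."""
    V = np.zeros(n_states)
    for _ in range(iters):
        Q = reward[:, None] + gamma * np.einsum('ast,t->sa', P, V)
        V = logsumexp(Q, axis=1)
    return np.exp(Q - V[:, None])

def expected_features(pi):
    """Expected feature counts from the policy over the demonstration horizon."""
    D, total = p0.copy(), np.zeros(n_states)
    for _ in range(horizon):
        total += D
        D = np.einsum('s,sa,ast->t', D, pi, P)
    return features.T @ total

# Gradient ascent on the max-ent log-likelihood of the footprints.
for _ in range(200):
    pi = soft_policy(features @ theta)
    theta += 0.1 * (f_emp - expected_features(pi))

print("learned reward per temperature band:", np.round(features @ theta, 2))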
Direct Visual Servoing Based on Discrete Orthogonal Moments
This paper proposes a new approach to direct visual servoing (DVS) based on discrete orthogonal moments (DOM). DVS is conducted in a way that bypasses the extraction of geometric primitives and the matching and tracking steps of the conventional feature-based visual servoing pipeline. Although DVS enables highly precise positioning, it suffers from a small convergence domain and poor robustness, due to the high non-linearity of the cost function to be minimized and the presence of redundant data among the visual features. To tackle these issues, we propose a generic and augmented framework that takes DOM as visual features. Taking Tchebichef, Krawtchouk, and Hahn moments as examples, we not only present strategies for adaptively adjusting the parameters and orders of the visual features, but also derive the analytical formulation of the associated interaction matrix. Simulations demonstrate the robustness and accuracy of our method, as well as its advantages over the state of the art. Real-world experiments have also been performed to validate the effectiveness of our approach.
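The abstract does not spell out the control law, but direct visual servoing schemes of this kind typically drive the camera with the classic update v = -lambda * L^+ (s - s*) on a vector of moment features. The sketch below is a hypothetical illustration of that loop on a toy 2-DoF translation task, with a numerically estimated interaction matrix and plain geometric moments standing in for the Tchebichef/Krawtchouk/Hahn moments and the analytical interaction matrix described above; the render function and all parameters are assumptions.

import numpy as np

# Toy "scene": a Gaussian blob whose image position depends on a 2-DoF
# camera translation. Low-order geometric moments stand in for the DOM
# features of the paper.
H, W = 64, 64
ys, xs = np.mgrid[0:H, 0:W]

def render(pose):
    """Toy image: blob centred at (32, 32), shifted by the camera pose."""
    cx, cy = 32.0 - pose[0], 32.0 - pose[1]
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * 5.0 ** 2))

def moment_features(img):
    """Normalized first-order geometric moments (image centroid)."""
    m00 = img.sum()
    return np.array([(xs * img).sum() / m00, (ys * img).sum() / m00])

def interaction_matrix(pose, eps=1e-3):
    """Finite-difference estimate of ds/dv (the paper derives it analytically)."""
    s0 = moment_features(render(pose))
    L = np.zeros((s0.size, pose.size))
    for k in range(pose.size):
        dp = np.zeros_like(pose)
        dp[k] = eps
        L[:, k] = (moment_features(render(pose + dp)) - s0) / eps
    return L

# Classic DVS loop: v = -lambda * L^+ (s - s*), integrated with dt = 1.
pose, lam = np.array([8.0, -5.0]), 0.5
s_star = moment_features(render(np.zeros(2)))      # desired features at the goal
for _ in range(50):
    s = moment_features(render(pose))
    v = -lam * np.linalg.pinv(interaction_matrix(pose)) @ (s - s_star)
    pose += v
print("final pose error:", np.round(pose, 3))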
Joint Rigid Registration of Multiple Generalized Point Sets With Anisotropic Positional Uncertainties in Image-Guided Surgery
In medical image analysis (MIA) and computer-assisted surgery (CAS), aligning multiple point sets (PSs) is an essential but challenging problem. For example, rigidly aligning multiple point sets into one common coordinate frame is a prerequisite for statistical shape modelling (SSM), and accurately aligning the pre-operative space with the intra-operative space in CAS is crucial to successful interventions. In this article, we formally formulate the multiple generalized point set registration problem (MGPSR) in a probabilistic manner, where both the positional and the normal vectors are used. The six-dimensional vectors consisting of positional and normal components are called generalized points. In the formulated model, all the generalized PSs to be registered are considered realizations of underlying unknown hybrid mixture models (HMMs). By assuming independence between the positional and orientational vectors (i.e., the normal vectors), the probability density function (PDF) of an observed generalized point is computed as the product of Gaussian and Fisher distributions. Furthermore, to account for the anisotropic noise in surgical navigation, the positional error is assumed to follow a multivariate Gaussian distribution. Finally, registering the PSs is formulated as a maximum likelihood (ML) problem and solved with the expectation-maximization (EM) technique. By using more enriched information (i.e., the normal vectors), our algorithm is more robust to outliers; by treating all PSs equally, it does not bias towards any particular PS. To validate the proposed approach, extensive experiments have been conducted on surface points extracted from CT images of (i) a human femur bone model and (ii) a human pelvis bone model. Results demonstrate our algorithm's high accuracy and robustness to noise and outliers.
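As a hedged illustration of the probabilistic model described above, the sketch below evaluates the density of one observed generalized point as the product of an anisotropic Gaussian (position) and a Fisher density, i.e. a von Mises-Fisher distribution on the sphere (normal), and computes E-step-style responsibilities under a small hybrid mixture; the component parameters and the observation are toy values, not the paper's.

import numpy as np
from scipy.stats import multivariate_normal

def fisher_pdf(n, mu_n, kappa):
    """Fisher (von Mises-Fisher on S^2) density: C3(k) * exp(k * mu_n . n)."""
    c3 = kappa / (4.0 * np.pi * np.sinh(kappa))
    return c3 * np.exp(kappa * np.dot(mu_n, n))

def hybrid_pdf(x, n, mu_x, sigma, mu_n, kappa):
    """Independence assumption: p(x, n) = N(x; mu_x, Sigma) * f(n; mu_n, kappa)."""
    return multivariate_normal.pdf(x, mean=mu_x, cov=sigma) * fisher_pdf(n, mu_n, kappa)

def responsibilities(x, n, weights, components):
    """E-step-style posterior over mixture components for one generalized point."""
    p = np.array([w * hybrid_pdf(x, n, *c) for w, c in zip(weights, components)])
    return p / p.sum()

# Toy usage with two hypothetical hybrid mixture components
# (mean position, anisotropic covariance, mean normal, concentration).
comps = [(np.zeros(3), 0.01 * np.diag([1.0, 1.0, 4.0]), np.array([0.0, 0.0, 1.0]), 20.0),
         (np.ones(3),  0.02 * np.eye(3),                np.array([1.0, 0.0, 0.0]), 20.0)]
x_obs, n_obs = np.array([0.05, -0.02, 0.1]), np.array([0.0, 0.0, 1.0])
print(responsibilities(x_obs, n_obs, [0.5, 0.5], comps))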
- …