DOF-Decoupled Active Sensing: A (more) human approach to localization tasks

Abstract

In the context of scene-model calibration, different forms of sensing may widen the set of environmental conditions in which the task can be accomplished. Specifically, in harsh structured intervention scenarios, e.g. the seabed or mine sites, cameras alone may not suffice to perform the calibration because of noise and obstructions. Motivated by such practical problems, our research aims at performing the calibration in different environments with variable initial uncertainty. In particular, we focus on force sensing, as it remains applicable in highly soiled environments. The state of the art on touch-based localization provides solutions only to low-dimensional problems, since the computational complexity scales exponentially with the number of considered degrees of freedom and thus bounds the initial uncertainty that can be handled; previous related works therefore tackled problems with initial uncertainty on the order of 0.5 m in translation. Moreover, the question of “where to sense next” remains mostly unsolved, especially for online applications. Driven by the need for feasible solutions to the decision-making side of the problem, and aiming at tasks with high initial uncertainty, a test with thirty human subjects has been performed to observe how humans carry out a blind localization task. The subjects had to localize a solid object by exploring a structured environment with a stick while wearing thick gloves, a headset and an eye cover, so that all senses apart from force sensing were suppressed. The experiment showed that all subjects spontaneously decoupled the task into a series of subtasks, each focused on a limited number of degrees of freedom. Building on these observations, a new formulation for localization is proposed here, named DOF-Decoupled Active Sensing. The global task is decoupled into a series of subtasks, each aimed at resolving only part of the global uncertainty; this keeps the problem tractable for online applications, as both the number of considered degrees of freedom and the estimator resolution are increased only as the uncertainty is reduced. Moreover, the decision-making about “where to sense next” is formalized as a greedy POMDP whose reward function combines the expected information gain with the cost of motion and the computational effort required to process the obtained measurement.
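As a purely illustrative sketch (not the authors' implementation), the one-step greedy action selection described above could be written as follows; the helper callables expected_entropy_reduction, motion_cost and computation_cost, as well as the weights w_motion and w_comp, are assumptions introduced here for clarity.

    # Minimal sketch of one-step greedy action selection under a composite
    # reward: expected information gain minus motion and computation costs.
    # All names and weights below are illustrative assumptions, not the
    # paper's actual interface.

    def select_next_sensing_action(belief, candidate_actions,
                                   expected_entropy_reduction,
                                   motion_cost, computation_cost,
                                   w_motion=1.0, w_comp=1.0):
        """Return the candidate action maximizing the one-step reward."""
        best_action, best_reward = None, float("-inf")
        for action in candidate_actions:
            # Expected reduction of uncertainty in the current belief,
            # e.g. the expected entropy decrease over the subtask's DOFs.
            info_gain = expected_entropy_reduction(belief, action)
            # Penalize the effort to move the sensor and to process
            # the measurement that the action would produce.
            reward = (info_gain
                      - w_motion * motion_cost(belief, action)
                      - w_comp * computation_cost(belief, action))
            if reward > best_reward:
                best_action, best_reward = action, reward
        return best_action

In the decoupled scheme, the belief and the candidate actions would be restricted to the degrees of freedom of the current subtask, so the selection loop stays cheap enough for online use.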
To demonstrate the applicability and the computational benefits of the proposed decoupling scheme, a series of touch-based rectangle localizations in 3D has been simulated with an initial uncertainty of 2 m in translation and 180 deg in rotation; a robotic implementation of this application is currently under development. The proposed scheme introduces a high-level decision-making step that breaks the global localization problem into subtasks and then chooses which action to perform by weighing both its benefits and its drawbacks. Such a formalization is well suited to integrating multiple sensors and deciding which one to use, so its application to a multi-sensor system is currently being investigated, with the intent of showing the effectiveness of the decision-making among different sensing actions. In the implemented application, however, the hierarchical problem decoupling is taken as-is from the human experiment. The observed results make it evident that the subjects decided which DOFs to tackle in which subtask by prioritizing the different scene objects according to their topological relationships; hence, the possibility of formalizing such a decoupling as a constraint optimization over object properties and relative constraints is to be further investigated.
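To make the tractability argument behind the decoupling concrete, here is a back-of-the-envelope sketch; the bin sizes (0.05 m, 5 deg) and the particular two-subtask split are hypothetical choices made for illustration, not values taken from the paper.

    # Illustrative comparison of grid sizes: estimating all 6 DOFs jointly
    # versus splitting them into two 3-DOF subtasks, for the stated initial
    # uncertainty of 2 m in translation and 180 deg in rotation per axis.
    # Bin sizes and the subtask split are assumptions made for illustration.

    bins_t = int(2.0 / 0.05)   # 40 bins per translational DOF
    bins_r = int(180 / 5)      # 36 bins per rotational DOF

    joint_cells = bins_t**3 * bins_r**3       # ~3.0e9 cells, joint 6-DOF grid
    decoupled_cells = bins_t**3 + bins_r**3   # ~1.1e5 cells, two 3-DOF subtasks

    print(joint_cells, decoupled_cells)

The exponential blow-up of the joint grid is what confines prior touch-based localization to small initial uncertainty, whereas the decoupled subtasks keep the grid sizes additive rather than multiplicative.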
