Semantic labeling of places using information extracted from laser and vision sensor data
Indoor environments can typically be divided into places with different functionalities like corridors, kitchens, offices, or seminar rooms. The ability to learn such semantic categories from sensor data enables a mobile robot to extend the representation of the environment, facilitating the interaction with humans. As an example, natural language terms like "corridor" or "room" can be used to communicate the position of the robot in a map in a more intuitive way. In this work, we first propose an approach based on supervised learning to classify the pose of a mobile robot into semantic classes. Our method uses AdaBoost to boost simple features extracted from range data and vision into a strong classifier. We present two main applications of this approach. Firstly, we show how our approach can be utilized by a moving robot for an online classification of the poses traversed along its path using a hidden Markov model. Secondly, we introduce an approach to learn topological maps from geometric maps by applying our semantic classification procedure in combination with a probabilistic relaxation procedure. We finally show how to apply associative Markov networks (AMNs) together with AdaBoost for classifying complete geometric maps. Experimental results obtained in simulation and with real robots demonstrate the effectiveness of our approach in various indoor environments.
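The boosting step described in this abstract can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' implementation: the two-dimensional feature vectors (standing in for simple statistics of a laser scan, e.g. average beam length) and the class labels (+1 for "corridor", -1 for "room") are synthetic assumptions.

```python
import math

def stump_predict(feature_idx, threshold, polarity, x):
    # A decision stump: vote +polarity when the feature exceeds the threshold.
    return polarity if x[feature_idx] >= threshold else -polarity

def train_adaboost(X, y, n_rounds=10):
    """Boost decision stumps into a strong classifier (labels in {-1, +1})."""
    n = len(X)
    w = [1.0 / n] * n          # per-example weights, start uniform
    ensemble = []              # list of (alpha, stump) pairs
    for _ in range(n_rounds):
        best = None
        # Exhaustively pick the stump with the lowest weighted error.
        for f in range(len(X[0])):
            for t in sorted({x[f] for x in X}):
                for pol in (+1, -1):
                    err = sum(wi for wi, x, yi in zip(w, X, y)
                              if stump_predict(f, t, pol, x) != yi)
                    if best is None or err < best[0]:
                        best = (err, (f, t, pol))
        err, stump = best
        err = max(err, 1e-10)                      # avoid log(0)
        alpha = 0.5 * math.log((1 - err) / err)    # weak learner's vote weight
        ensemble.append((alpha, stump))
        # Re-weight: increase the weight of misclassified examples.
        w = [wi * math.exp(-alpha * yi * stump_predict(*stump, x))
             for wi, x, yi in zip(w, X, y)]
        s = sum(w)
        w = [wi / s for wi in w]
    return ensemble

def classify(ensemble, x):
    score = sum(a * stump_predict(*stump, x) for a, stump in ensemble)
    return 1 if score >= 0 else -1
```

The "strong classifier" of the abstract is the weighted vote over the selected stumps; with richer range and vision features, the same loop applies unchanged.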
Supervised semantic labeling of places using information extracted from sensor data
Indoor environments can typically be divided into places with different functionalities like corridors, rooms or doorways. The ability to learn such semantic categories from sensor data enables a mobile robot to extend the representation of the environment, facilitating interaction with humans. As an example, natural language terms like “corridor” or “room” can be used to communicate the position of the robot in a map in a more intuitive way. In this work, we first propose an approach based on supervised learning to classify the pose of a mobile robot into semantic classes. Our method uses AdaBoost to boost simple features extracted from sensor range data into a strong classifier. We present two main applications of this approach. Firstly, we show how our approach can be utilized by a moving robot for an online classification of the poses traversed along its path using a hidden Markov model. In this case we additionally use as features objects extracted from images. Secondly, we introduce an approach to learn topological maps from geometric maps by applying our semantic classification procedure in combination with a probabilistic relaxation method. Alternatively, we apply associative Markov networks to classify geometric maps and compare the results with a relaxation approach. Experimental results obtained in simulation and with real robots demonstrate the effectiveness of our approach in various indoor environments.
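The hidden Markov model mentioned in this abstract smooths the per-pose classifications along the robot's path. A minimal sketch of one filtering step is below; the three place labels, the sticky transition matrix, and the observation likelihoods are assumed for illustration and are not taken from the paper.

```python
# States (assumed for illustration): 0 = corridor, 1 = room, 2 = doorway.
def hmm_filter(belief, transition, likelihood):
    """One HMM filtering step: predict with the transition model,
    then correct with the current observation likelihood."""
    n = len(belief)
    # Prediction: propagate the previous belief through the transitions.
    predicted = [sum(belief[j] * transition[j][i] for j in range(n))
                 for i in range(n)]
    # Correction: weight by how well each state explains the observation.
    posterior = [predicted[i] * likelihood[i] for i in range(n)]
    z = sum(posterior)
    return [p / z for p in posterior]
```

Run once per traversed pose, this turns the instantaneous AdaBoost outputs (used as observation likelihoods) into a temporally consistent place estimate.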
Learning to plan with uncertain topological maps
We train an agent to navigate in 3D environments using a hierarchical
strategy including a high-level graph based planner and a local policy. Our
main contribution is a data driven learning based approach for planning under
uncertainty in topological maps, requiring an estimate of shortest paths in
valued graphs with a probabilistic structure. Whereas classical symbolic
algorithms achieve optimal results on noise-less topologies, or optimal results
in a probabilistic sense on graphs with probabilistic structure, we aim to show
that machine learning can overcome missing information in the graph by taking
into account rich high-dimensional node features, for instance visual
information available at each location of the map. Compared to purely learned
neural white box algorithms, we structure our neural model with an inductive
bias for dynamic programming based shortest path algorithms, and we show that a
particular parameterization of our neural model corresponds to the Bellman-Ford
algorithm. By performing an empirical analysis of our method in simulated
photo-realistic 3D environments, we demonstrate that the inclusion of visual
features in the learned neural planner outperforms classical symbolic solutions
for graph-based planning.
Comment: ECCV 2020
Unsupervised Temporospatial Neural Architecture for Sensorimotor Map Learning
Peer reviewed. Postprint.
Wheelchair driver assistance and intention prediction using POMDPs
Electric wheelchairs give otherwise immobile people the freedom of movement; they significantly increase independence and dramatically increase quality of life. However, the physical control systems of such wheelchairs can be prohibitive for some users; for example, people with severe tremors. Several assisted wheelchair platforms have been developed in the past to assist such users. Algorithms that assist specific behaviors such as door-passing, follow-corridor, or avoid-obstacles have been successful. Recent research has seen a move towards systems that predict the user's intentions, based on the user's input. These predictions have typically been limited to locations immediately surrounding the wheelchair. This paper presents a new assisted wheelchair driving system with large-scale intelligent intention recognition based on POMDPs (Partially Observable Markov Decision Processes). The system acts as an intelligent agent/decision-maker: it relies on minimal user input to predict the user's intention and then autonomously drives the user to his destination. The prediction is constantly updated as new user input is received, allowing for true user/system integration. This shifts the user's focus from fine motor-skilled control to coarse control intended to convey intention. © 2007 IEEE
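The intention-recognition loop described above can be sketched as a recursive Bayes update over candidate destinations; the two-goal example and the likelihood values below are assumptions for illustration, not the paper's model, which additionally plans actions over the POMDP.

```python
def update_goal_belief(belief, obs_likelihood):
    """Bayes update of the belief over candidate destinations, given the
    likelihood of the latest user input (e.g. a joystick deflection)
    under each hypothesized goal."""
    posterior = [b * l for b, l in zip(belief, obs_likelihood)]
    z = sum(posterior)
    if z == 0.0:
        return belief  # uninformative input: keep the prior unchanged
    return [p / z for p in posterior]
```

Repeating this update as inputs arrive is what lets the belief sharpen toward one destination, so the system can commit to driving there while coarse user input keeps correcting it.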