5 research outputs found

    Interactive semantic mapping: Experimental evaluation

    Robots that are launched in the consumer market need to provide more effective human-robot interaction and, in particular, spoken language interfaces. However, in order to support the execution of high-level commands as they are specified in natural language, a semantic map is required: a representation that enables the robot to ground the commands into the actual places and objects located in the environment. In this paper, we present the experimental evaluation of a system specifically designed to build semantically rich maps through interaction with the user. The results of the experiments not only provide the basis for a discussion of the features of the proposed approach, but also highlight the manifold issues that arise in the evaluation of semantic mapping.
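
    As a rough illustration of the grounding role such a map plays, here is a minimal Python sketch: labeled entities are stored with metric poses, and a symbol taken from a parsed command is resolved to a mapped place. The class and field names are invented for illustration and do not reflect the actual system evaluated in the paper.

```python
# Hypothetical sketch of a semantic map grounding a spoken command to a
# metric location; names are illustrative, not the paper's API.
from dataclasses import dataclass

@dataclass
class SemanticEntity:
    label: str      # e.g. "kitchen", "red mug"
    category: str   # "room" or "object"
    pose: tuple     # (x, y, theta) in the metric map frame

class SemanticMap:
    def __init__(self):
        self.entities = {}  # label -> SemanticEntity

    def add(self, entity: SemanticEntity):
        self.entities[entity.label] = entity

    def ground(self, symbol: str) -> SemanticEntity:
        """Resolve a symbol from a parsed command into a mapped entity."""
        if symbol not in self.entities:
            raise KeyError(f"'{symbol}' is not in the semantic map")
        return self.entities[symbol]

# "Go to the kitchen" -> navigate to the grounded pose
smap = SemanticMap()
smap.add(SemanticEntity("kitchen", "room", (4.2, 1.5, 0.0)))
goal = smap.ground("kitchen").pose
```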

    Collaborative control of a wheelchair in a partially known environment

    For many years, researchers and graduate students at three Quebec universities, École Polytechnique de Montréal, Université de Montréal and McGill University, have been working on the Intelligent Powered Wheelchair (IPW): a mobile robot equipped with sensors, cameras and control modules that allow the wheelchair to perform many autonomous tasks while assisting its user. Despite the advanced state of this work, several aspects remain to be improved. The chair already has a Simultaneous Localization And Mapping (SLAM) module that builds a map and localizes the chair in the environment, a fully autonomous navigation module, and a shared-autonomy (collaborative) control module that assists users by combining their teleoperation commands with autonomous ones. To use shared autonomy, one or more candidate destinations must first be defined in the map of the environment; the controller then analyzes the user's maneuvers, estimates the user's intention, and helps drive the chair to the intended destination. For the elderly, the main users of such wheelchairs, it is not realistic to navigate the environment to build the map, read it, and manually place candidate destinations in it. To simplify the user's task, the vehicle must be localized in a map known a priori, that map must be usable within a SLAM algorithm, and the user's intention must be inferred automatically during navigation. The goal of this project is therefore to complete the workflow around the shared-autonomy control module by improving the SLAM module of the IPW and by creating a module that automatically detects candidate destinations in the map and places them in the chair's navigation environment. These objectives are met through a study of the shared-autonomy control module, the insertion of the a priori known map into a SLAM algorithm, and the automatic detection of points of interest in that map. First, we compare existing SLAM algorithms and select the one best suited to our application. Second, we modify the algorithm so that a map can be loaded before navigation, using a technique that builds SLAM data from an existing map of the environment by generating virtual trajectories through it. Finally, we create a module that finds the candidate destinations in the map and inserts them into the SLAM representation automatically. By simplifying the user's tasks, we aim to improve the user experience so that the shared-autonomy algorithm can reach as many patients as possible, including those who are not used to modern technology.
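
    To make the intent-estimation step concrete, here is a minimal, hypothetical Python sketch: a belief over candidate destinations is re-weighted by how well the user's commanded direction points toward each of them. The function and its parameters are assumptions for illustration, not the IPW's actual controller.

```python
# Illustrative sketch of destination-intent estimation for shared autonomy.
import math

def update_intent(belief, positions, robot_xy, user_dir, temperature=1.0):
    """belief: dict dest -> prob; positions: dest -> (x, y);
    user_dir: unit vector of the user's commanded motion."""
    new_belief = {}
    for dest, p in belief.items():
        dx = positions[dest][0] - robot_xy[0]
        dy = positions[dest][1] - robot_xy[1]
        norm = math.hypot(dx, dy) or 1e-9
        # Likelihood grows with alignment between the commanded direction
        # and the bearing toward the destination (cosine of the angle).
        align = (dx * user_dir[0] + dy * user_dir[1]) / norm
        new_belief[dest] = p * math.exp(align / temperature)
    total = sum(new_belief.values())
    return {d: v / total for d, v in new_belief.items()}

belief = {"door": 0.5, "desk": 0.5}
positions = {"door": (5.0, 0.0), "desk": (0.0, 5.0)}
belief = update_intent(belief, positions, (0.0, 0.0), (1.0, 0.0))
# belief now favors "door", so the controller would weight its
# autonomous plan toward that destination.
```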

    Spatial representation for planning and executing robot behaviors in complex environments

    Robots are already improving our well-being and productivity in different applications such as industry, health care and indoor service applications. However, we are still far from developing (and releasing) a fully functional robotic agent that can autonomously survive in tasks that require human-level cognitive capabilities. Robotic systems on the market, in fact, are designed to address specific applications, and can only run pre-defined behaviors to robustly repeat a few tasks (e.g., assembling object parts, vacuum cleaning). Their internal representation of the world is usually constrained to the task they are performing, and does not allow for generalization to other scenarios. Unfortunately, such a paradigm only applies to a very limited set of domains, where the environment can be assumed to be static and its dynamics can be handled before deployment. Additionally, robots configured in this way will eventually fail if their "handcrafted" representation of the environment does not match the external world. Hence, to enable more sophisticated cognitive skills, we investigate how to design robots to properly represent the environment and behave accordingly. To this end, we formalize a representation of the environment that enhances the robot's spatial knowledge to explicitly include a representation of its own actions. Spatial knowledge constitutes the core of the robot's understanding of the environment; however, it is not sufficient to represent what the robot is capable of doing in it. To overcome this limitation, we formalize SK4R, a spatial knowledge representation for robots that enhances spatial knowledge with a novel, "functional" point of view that explicitly models robot actions. To this end, we exploit the concept of affordances, introduced to express the opportunities (actions) that objects offer to an agent. To encode affordances within SK4R, we define the "affordance semantics" of actions, which is used to annotate an environment and to represent to what extent robot actions support goal-oriented behaviors. We demonstrate the benefits of a functional representation of the environment in multiple robotic scenarios that span and contribute to different research topics: robot knowledge representations, social robotics, multi-robot systems, and robot learning and planning. We show how a domain-specific representation that explicitly encodes affordance semantics provides the robot with a more concrete understanding of the environment and of the effects that its actions have on it. The goal of our work is to design an agent that will no longer execute an action as a mere pre-defined routine; rather, it will execute an action because it "knows" that the resulting state leads one step closer to success in its task.
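
    A toy sketch of the affordance-annotation idea follows; the representation below is invented for illustration and is not the SK4R formalization itself. Regions of the environment are tagged with the actions they afford, and the robot queries which regions support a goal-directed action.

```python
# Hypothetical affordance-style annotation over spatial knowledge.
affordances = {
    "counter": {"place", "pick"},
    "doorway": {"pass_through"},
    "charger": {"dock"},
}

def supports(region: str, action: str) -> bool:
    """Does this region of the map afford the given robot action?"""
    return action in affordances.get(region, set())

def regions_for(action: str):
    """All annotated regions where the action is applicable."""
    return [r for r, acts in affordances.items() if action in acts]

assert supports("counter", "pick")
print(regions_for("dock"))  # ['charger'] -> candidate goals for a recharge task
```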

    Interactive generation and learning of semantic-driven robot behaviors

    The generation of adaptive and reflexive behavior is a challenging task in artificial intelligence and robotics. In this thesis, we develop a framework for knowledge representation, acquisition, and behavior generation that explicitly incorporates semantics, adaptive reasoning and knowledge revision. By using our model, semantic information can be exploited by traditional planning and decision-making frameworks to generate empirically effective and adaptive robot behaviors, as well as to enable complex but natural human-robot interactions. In our work, we introduce a model of semantic mapping, we connect it with the notion of affordances, and we use those concepts to develop semantic-driven algorithms for knowledge acquisition, update, learning and robot behavior generation. In particular, we apply such models within existing planning and decision-making frameworks to achieve semantic-driven and adaptive robot behaviors in a generic environment. On the one hand, this work generalizes existing semantic mapping models and extends them to include the notion of affordances. On the other hand, it integrates semantic information within well-defined long-term planning and situated-action frameworks to effectively generate adaptive robot behaviors. We validate our approach by evaluating it on a number of problems and robot tasks. In particular, we consider service robots deployed in interactive and social domains, such as offices and domestic environments. To this end, we also develop prototype applications that are useful for evaluation purposes.
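
    One hedged way to picture the integration of semantics with planning, using invented names rather than the thesis' actual framework: actions carry semantic preconditions that are checked against the semantic map, so the planner only considers actions the current semantic state supports.

```python
# Illustrative sketch: semantic knowledge gates a planner's action set.
semantic_map = {"room": "kitchen", "objects": {"mug"}, "affords": {"pick", "goto"}}

actions = [
    {"name": "pick(mug)",    "requires": {"object": "mug",    "affordance": "pick"}},
    {"name": "open(fridge)", "requires": {"object": "fridge", "affordance": "open"}},
]

def applicable(action, smap):
    """An action is considered only if its semantic preconditions hold."""
    req = action["requires"]
    return req["object"] in smap["objects"] and req["affordance"] in smap["affords"]

plan_candidates = [a["name"] for a in actions if applicable(a, semantic_map)]
print(plan_candidates)  # ['pick(mug)'] -- semantics prunes infeasible actions
```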

    Automatic Extraction of Structural Representations of Environments

    Robots need a suitable representation of the surrounding world to operate in a structured but dynamic environment. State-of-the-art approaches usually rely on a combination of metric and topological maps and require an expert to provide the knowledge to the robot in a suitable format; additional symbolic knowledge therefore cannot easily be added to the representation in an incremental manner. This work deals with the problem of effectively binding the high-level semantic information to the low-level knowledge represented in the metric map by introducing an intermediate grid-based representation. To demonstrate its effectiveness, the proposed approach has been experimentally validated in different kinds of environments.
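
    The following sketch illustrates one plausible form of such an intermediate grid; the layout and names are assumptions, not the paper's exact representation. Each cell carries both metric occupancy and a symbolic label, so symbolic knowledge can be added incrementally and mapped back to metric cells.

```python
# Hypothetical intermediate grid binding low-level occupancy to high-level labels.
import numpy as np

W, H, RES = 40, 30, 0.1            # grid size in cells and meters-per-cell (example values)
occupancy = np.zeros((H, W))       # 0 free .. 1 occupied, from the metric map
semantics = np.full((H, W), "", dtype=object)  # symbolic label per cell

def annotate(x_m, y_m, w_m, h_m, label):
    """Attach a symbolic label to the grid cells covering a metric region."""
    c0, r0 = round(x_m / RES), round(y_m / RES)
    c1, r1 = round((x_m + w_m) / RES), round((y_m + h_m) / RES)
    semantics[r0:r1, c0:c1] = label

annotate(0.5, 0.5, 1.0, 0.8, "desk")      # incremental symbolic knowledge
cells = np.argwhere(semantics == "desk")  # back-link: label -> metric cells
```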