14 research outputs found

    Architecture of SLAM navigation on the example of Octomap software

    Three-dimensional models provide a volumetric representation of space which is […]

    A Proposal for Semantic Map Representation and Evaluation

    Semantic mapping is the incremental process of “mapping” relevant information about the world (i.e., spatial information, temporal events, agents and actions) to a formal description supported by a reasoning engine. Current research focuses on learning the semantics of environments based on their spatial location, geometry and appearance. Many methods to tackle this problem have been proposed, but the lack of a uniform representation, as well as of standard benchmarking suites, prevents their direct comparison. In this paper, we propose a standardization of the representation of semantic maps by defining an easily extensible formalism to be used on top of metric maps of the environments. Based on this, we describe the procedure to build a dataset (based on real sensor data) for benchmarking semantic mapping techniques, and hypothesize some possible evaluation metrics. Finally, by providing a tool for the construction of a semantic map ground truth, we invite the scientific community to contribute data for populating the dataset.
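The core idea of the abstract, a semantic layer defined on top of a metric map, can be sketched as a small data structure. This is a minimal illustration, not the paper's formalism; all class and field names are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class SemanticObject:
    label: str                                       # e.g. "chair"
    pose: tuple                                      # (x, y, theta) in the metric map frame
    properties: dict = field(default_factory=dict)   # open-ended semantic attributes

@dataclass
class SemanticMap:
    metric_frame: str                                # id of the underlying metric map
    objects: list = field(default_factory=list)

    def add(self, obj: SemanticObject) -> None:
        self.objects.append(obj)

    def query(self, label: str) -> list:
        """Return all objects carrying a given semantic label."""
        return [o for o in self.objects if o.label == label]

# Anchor semantic entities to coordinates in an existing metric map.
smap = SemanticMap(metric_frame="lab_floorplan")
smap.add(SemanticObject("chair", (1.0, 2.0, 0.0), {"movable": True}))
smap.add(SemanticObject("door", (4.5, 0.0, 1.57)))
print([o.label for o in smap.query("chair")])  # -> ['chair']
```

Keeping the semantic layer separate from (but anchored in) the metric map is what makes the formalism easily extensible: new object types only add labels and properties, never changes to the metric substrate.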

    Design of a robotic architecture to map an action language to low-level motion commands for dexterous manipulation

    This paper gives an overview of a robotic architecture meant for skillful manipulation. This design is meant to close the gap between the high-level layer (reasoning and planning layer) and the object model system (physical control layer). The architecture proposes an interface layer that allows atomic tasks to be connected to controller inputs in a meaningful way. In this paper, we discuss how specific complex tasks can be resolved by this system; we analyze the affordance unit design, and we overview the future challenges in the implementation of the whole system.
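An interface layer of this kind can be pictured as a lookup from atomic task names to controller-input builders. This is a hypothetical sketch of the concept, not the paper's implementation; the primitive names and command dictionaries are invented for illustration.

```python
# Hypothetical bindings from atomic tasks to low-level controller inputs.
CONTROLLER_PRIMITIVES = {
    "reach":   lambda target: {"cmd": "move_arm", "goal": target},
    "grasp":   lambda width:  {"cmd": "close_gripper", "width": width},
    "release": lambda _:      {"cmd": "open_gripper"},
}

def compile_task(atomic_tasks):
    """Translate a sequence of (task, argument) pairs into controller inputs."""
    commands = []
    for name, arg in atomic_tasks:
        if name not in CONTROLLER_PRIMITIVES:
            raise ValueError(f"no controller binding for task '{name}'")
        commands.append(CONTROLLER_PRIMITIVES[name](arg))
    return commands

# A complex task resolved into atomic steps by the planning layer:
plan = [("reach", (0.3, 0.1, 0.2)), ("grasp", 0.04), ("release", None)]
print(compile_task(plan))
```

The value of such a layer is exactly the gap-closing the abstract describes: the planner reasons over symbolic task names while the control layer only ever sees well-formed controller inputs.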

    Artificial Cognition for Social Human-Robot Interaction: An Implementation

    © 2017 The Authors. Human–robot interaction challenges Artificial Intelligence in many regards: dynamic, partially unknown environments that were not originally designed for robots; a broad variety of situations with rich semantics to understand and interpret; physical interactions with humans that require fine, low-latency yet socially acceptable control strategies; natural and multi-modal communication, which mandates common-sense knowledge and the representation of possibly divergent mental models. This article attempts to characterise these challenges and to exhibit a set of key decisional issues that need to be addressed for a cognitive robot to successfully share space and tasks with a human. We first identify the needed individual and collaborative cognitive skills: geometric reasoning and situation assessment based on perspective-taking and affordance analysis; acquisition and representation of knowledge models for multiple agents (humans and robots, with their specificities); situated, natural and multi-modal dialogue; human-aware task planning; and human–robot joint task achievement. The article discusses each of these abilities, presents working implementations, and shows how they combine in a coherent and original deliberative architecture for human–robot interaction. Supported by experimental results, we finally show how explicit knowledge management, both symbolic and geometric, proves instrumental to richer and more natural human–robot interactions by pushing for pervasive, human-level semantics within the robot's deliberative system.

    Effective Navigation and Mapping of a Cluttered Environment Using a Mobile Robot

    Today, the as-is three-dimensional point cloud acquisition process for understanding scenes of interest, monitoring construction progress, and detecting safety hazards uses laser scanning systems mounted on mobile robots, which makes the process faster and more automated, but there is still room for improvement. The main disadvantage of data collection with laser scanners is that point cloud data is only collected within a scanner's line of sight, so regions of three-dimensional space that are occluded by objects are not observable. To solve this problem and obtain a complete reconstruction of sites without information loss, scans must be taken from multiple viewpoints. This thesis describes how such a solution can be integrated into a fully autonomous mobile robot capable of generating a high-resolution three-dimensional point cloud of a cluttered and unknown environment without a prior map. First, the mobile platform estimates the unevenness of the terrain and the surrounding environment. Second, it finds the occluded regions in the currently built map and determines the most effective next scan location. Then, it moves to that location using a grid-based path planner and the unevenness estimation results. Finally, it performs a high-resolution scan of that area to fill out the point cloud map. This process repeats until the designated scan region is filled with scanned point cloud data. The mobile platform also keeps scanning for navigation and obstacle avoidance purposes, calculates its relative location, and builds the surrounding map while moving and scanning, a process known as simultaneous localization and mapping. The proposed approaches and the system were tested and validated in an outdoor construction site and a simulated disaster environment with promising results.
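The scan-location selection step described above, find occluded regions in the current map and pick the next scan pose, can be sketched as a greedy frontier search over an occupancy grid. This is a minimal illustration under assumed conventions (cell values: -1 unknown/occluded, 0 free, 1 occupied), not the thesis's actual planner.

```python
import numpy as np

def frontier_cells(grid):
    """Unknown cells adjacent to free space: candidate regions to uncover."""
    out = []
    for r, c in np.argwhere(grid == -1):
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < grid.shape[0] and 0 <= cc < grid.shape[1] and grid[rr, cc] == 0:
                out.append((int(r), int(c)))
                break
    return out

def next_scan_location(grid, robot):
    """Greedily pick the frontier cell closest to the robot (Manhattan distance)."""
    cands = frontier_cells(grid)
    if not cands:
        return None  # designated scan region fully covered -> stop the loop
    return min(cands, key=lambda p: abs(p[0] - robot[0]) + abs(p[1] - robot[1]))

grid = np.zeros((5, 5), dtype=int)
grid[2:, 3:] = -1  # region occluded behind an obstacle
print(next_scan_location(grid, robot=(0, 0)))  # -> (2, 3)
```

A real system would score candidates by expected information gain and path cost rather than raw distance, but the termination condition is the same: the process repeats until `next_scan_location` finds no remaining frontier.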

    Active Vision for Scene Understanding

    Visual perception is one of the most important sources of information for both humans and robots. A particular challenge is the acquisition and interpretation of complex unstructured scenes. This work contributes to active vision for humanoid robots. A semantic model of the scene is created, which is extended by successively changing the robot's view in order to explore interaction possibilities of the scene.

    An ontology-based approach towards coupling task and path planning for the simulation of manipulation tasks

    This work deals with the simulation and validation of complex manipulation tasks under strong geometric constraints in virtual environments. The targeted applications relate to the Industry 4.0 framework: as up-to-date products are more and more integrated and economic competition increases, industrial companies express the need to validate, from the design stage on, not only the static CAD models of their products but also the tasks (e.g., assembly or maintenance) related to their Product Lifecycle Management (PLM). The scientific community has looked at this issue from two points of view:
    - Task planning decomposes a manipulation task into a sequence of primitive actions (i.e., a task plan).
    - Path planning computes collision-free trajectories, notably for the manipulated objects. It traditionally uses purely geometric data, which leads to classical limitations (possibly high computational processing times, low relevance of the proposed trajectory with respect to the task to be performed, or outright failure); recent works have shown the interest of using data at a higher level of abstraction.
    Joint task and path planning approaches found in the literature usually perform a classical task planning step and then check the feasibility of the path planning requests associated with the primitive actions of this task plan. The link between task and path planning has to be improved, notably because of the lack of a loopback between the path planning level and the task planning level:
    - The path planning information used to question the task plan is usually limited to motion feasibility, where richer information such as the relevance or the complexity of the proposed path would be needed.
    - Path planning queries traditionally use purely geometric data and/or "blind" path planning methods (e.g., RRT), and no task-related information is used at the path planning level.
    Our work focuses on using task-level information at the path planning level. The path planning algorithm considered is RRT; we chose such a probabilistic algorithm because we consider path planning for the simulation and validation of complex tasks under strong geometric constraints. We propose an ontology-based approach that uses task-level information to specify path planning queries for the primitive actions of a task plan. First, we propose an ontology to conceptualize knowledge about the 3D environment in which the simulated task takes place. This environment is considered a closed part of 3D Cartesian space cluttered with mobile and fixed obstacles (treated as rigid bodies). It is represented by a digital model relying on a multilayer architecture involving semantic, topologic and geometric data. The originality of the proposed ontology lies in the fact that it conceptualizes heterogeneous knowledge about both the obstacle and the free space models. Second, we exploit this ontology to automatically generate a path planning query associated with each primitive action of a task plan. Through a reasoning process involving the primitive actions instantiated in the ontology, we are able to infer the start and goal configurations, as well as task-related geometric constraints. Finally, a multi-level path planner is called to generate the corresponding trajectory. The contributions of this work have been validated by full simulation of several manipulation tasks under strong geometric constraints. The results obtained demonstrate that using task-related information allows better control over the RRT path planning algorithm used to check motion feasibility for the primitive actions of a task plan, leading to lower computational times and more relevant trajectories for primitive actions.
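The query-generation step, inferring start/goal configurations and task-related constraints from a primitive action instantiated in the ontology, can be sketched as follows. This is an illustrative toy, not the paper's reasoner: the fact store, action name, poses and constraint labels are all hypothetical, and a real system would use an ontology language and inference engine rather than a dictionary lookup.

```python
# Toy "ontology" holding facts about one instantiated primitive action.
ONTOLOGY = {
    "place(cap, bottle)": {
        "object": "cap",
        "start_pose": (0.10, 0.40, 0.30),   # current pose of 'cap' in the scene
        "goal_pose": (0.25, 0.40, 0.35),    # pose implied by the target frame
        "constraints": ["keep_upright"],    # task-level geometric constraint
    },
}

def make_query(action):
    """Derive a path planning query for a primitive action of the task plan."""
    facts = ONTOLOGY[action]
    return {
        "moving_body": facts["object"],
        "start": facts["start_pose"],
        "goal": facts["goal_pose"],
        "constraints": facts["constraints"],  # handed down to the RRT planner
    }

print(make_query("place(cap, bottle)"))
```

The point of the abstract is visible even in this sketch: the RRT planner no longer receives a bare start/goal pair but a query enriched with task-level constraints, which is what narrows its sampling toward relevant trajectories.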