25 research outputs found

    Multi-robot collaboration functionalities for robot software development framework TeMoto

    Robots enable us to operate in hazardous or otherwise inaccessible environments such as mines, fires, and radioactive sites. TeMoto, which is built upon the Robot Operating System (ROS), makes it easier to develop scalable, manageable, and reliable software for robotic systems. TeMoto needs functionality for representing the environment in order to support the development of robot systems that are aware of their surroundings. The aim of this work is to plan and design a framework for working with environment models, enable robots to synchronize environment data, create an environment model that matches the framework, and test the system in a real-world scenario. This was achieved by implementing data structures representing objects and rooms/spaces in the environment, designing an abstract interface for working with those data structures, and implementing the interface to create a corresponding environment model. The work resulted in a functional system and infrastructure that allows sharing semantic and topological data between robots, which was demonstrated in a trash-collecting mission featuring a heterogeneous multi-robot system. The implemented framework lays a foundation for the use of environment models in TeMoto, enabling the design of robot systems that interact with the world in a meaningful way.
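    The abstract describes data structures for objects and rooms/spaces plus an abstract interface that concrete environment models implement. A minimal sketch of that idea follows; all class and field names here are hypothetical illustrations, not TeMoto's actual API.

    ```python
    from abc import ABC, abstractmethod
    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class ObjectNode:
        """A semantic object in the environment (hypothetical schema)."""
        name: str
        obj_type: str                       # e.g. "trash", "bin"
        pose: tuple                         # (x, y, z) in the map frame
        parent_room: Optional[str] = None

    @dataclass
    class RoomNode:
        """A room/space; links to other rooms give the topological layer."""
        name: str
        connected_rooms: list = field(default_factory=list)

    class EnvironmentModelInterface(ABC):
        """Abstract interface; concrete models supply storage and sync."""
        @abstractmethod
        def add_item(self, item): ...
        @abstractmethod
        def get_item(self, name: str): ...

    class SimpleEnvironmentModel(EnvironmentModelInterface):
        """In-memory implementation of the interface for illustration."""
        def __init__(self):
            self._items = {}
        def add_item(self, item):
            self._items[item.name] = item
        def get_item(self, name):
            return self._items.get(name)
    ```

    Keeping the interface abstract lets the same object/room schema back different stores (in-memory, synchronized across TeMoto instances) without changing client code.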

    Ontology based autonomous robot task processing framework

    Introduction: In recent years, the perceptual capabilities of robots have been significantly enhanced. However, the task execution of robots still lacks adaptive capabilities in unstructured and dynamic environments. Methods: In this paper, we propose an ontology based autonomous robot task processing framework (ARTProF), to improve the robot's adaptability within unstructured and dynamic environments. ARTProF unifies ontological knowledge representation, reasoning, and autonomous task planning and execution into a single framework. The interface between the knowledge base and neural network-based object detection is first introduced in ARTProF to improve the robot's perception capabilities. A knowledge-driven manipulation operator based on Robot Operating System (ROS) is then designed to facilitate the interaction between the knowledge base and the robot's primitive actions. Additionally, an operation similarity model is proposed to endow the robot with the ability to generalize to novel objects. Finally, a dynamic task planning algorithm, leveraging ontological knowledge, equips the robot with adaptability to execute tasks in unstructured and dynamic environments. Results: Experimental results on real-world scenarios and simulations demonstrate the effectiveness and efficiency of the proposed ARTProF framework. Discussion: In future work, we will focus on refining the ARTProF framework by integrating neurosymbolic inference
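    The operation similarity model mentioned above lets a robot transfer a known manipulation to a novel object class. The paper's actual model is not reproduced here; the following is a toy sketch in which similarity is the share of ancestry two classes have in common in a small hand-written ontology, and the novel class reuses the operation of the most similar known class.

    ```python
    # Toy class hierarchy: child -> parent (illustrative, not ARTProF's ontology)
    ONTOLOGY = {
        "mug": "container", "bottle": "container",
        "container": "object", "knife": "tool", "tool": "object",
    }
    # Operations the robot already knows, keyed by object class
    KNOWN_OPERATIONS = {"mug": "grasp_handle", "knife": "grasp_side"}

    def ancestors(cls):
        """Chain from a class up to the ontology root, inclusive."""
        chain = [cls]
        while cls in ONTOLOGY:
            cls = ONTOLOGY[cls]
            chain.append(cls)
        return chain

    def similarity(a, b):
        """Dice-style overlap of ancestor sets; 1.0 for identical classes."""
        ca, cb = ancestors(a), ancestors(b)
        common = len(set(ca) & set(cb))
        return 2 * common / (len(ca) + len(cb))

    def operation_for(novel_cls):
        """Reuse the operation of the most similar known class."""
        best = max(KNOWN_OPERATIONS, key=lambda k: similarity(k, novel_cls))
        return KNOWN_OPERATIONS[best]
    ```

    For example, a never-seen "bottle" is closer to "mug" (shared "container" ancestry) than to "knife", so the handle-style grasp would be selected.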

    Towards Lifelong Object Learning by Integrating Situated Robot Perception and Semantic Web Mining

    Autonomous robots that are to assist humans in their daily lives are required, among other things, to recognize and understand the meaning of task-related objects. However, given an open-ended set of tasks, the set of everyday objects that robots will encounter during their lifetime is not foreseeable. That is, robots have to learn and extend their knowledge about previously unknown objects on the job. Our approach automatically acquires parts of this knowledge (e.g., the class of an object and its typical location) in the form of ranked hypotheses from the Semantic Web, using contextual information extracted from observations and experiences made by robots. Thus, by integrating situated robot perception and Semantic Web mining, robots can continuously extend their object knowledge beyond perceptual models, which allows them to reason about task-related objects; e.g., when searching for them, robots can infer the most likely object locations. An evaluation of the integrated system on long-term data from real office observations demonstrates that the generated hypotheses can effectively constrain the meaning of objects. Hence, we believe that the proposed system can be an essential component in a lifelong learning framework which acquires knowledge about objects from real-world observations
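    The core mechanism described above, combining web-mined hypotheses with the robot's own observations to rank likely object locations, can be sketched as follows. The scoring rule and all data here are illustrative assumptions, not the paper's actual model.

    ```python
    from collections import Counter

    # Hypothetical inputs: a web-mined prior over typical locations for an
    # object class (e.g. "mug"), and the robot's own sighting counts.
    web_prior = {"desk": 0.6, "kitchen": 0.3, "shelf": 0.1}
    observations = Counter({"desk": 3, "shelf": 1})

    def ranked_locations(prior, obs):
        """Rank candidate locations: prior weighted by observation counts,
        with small smoothing so unseen/unlisted locations are not ruled out."""
        candidates = set(prior) | set(obs)
        scores = {loc: prior.get(loc, 0.01) * (1 + obs.get(loc, 0))
                  for loc in candidates}
        total = sum(scores.values())
        return sorted(((loc, s / total) for loc, s in scores.items()),
                      key=lambda pair: pair[1], reverse=True)
    ```

    A search task would then visit locations in ranked order, so a mug is looked for on the desk first, matching both the web prior and past sightings.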

    Design and implementation of a system for mutual knowledge among cognition-enabled robots

    The progressive integration of robots in everyday activities is raising the need for autonomous machines to reason about their actions, the environment and the objects around them. The KnowRob knowledge processing system is specifically designed to bring these competences to autonomous robots, helping them to acquire, reason about and store knowledge. This work presents a framework for enhancing the KnowRob system with mutual knowledge acquisition and reasoning among knowledge-enabled robots
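    Mutual knowledge acquisition, at its simplest, means each robot adopting assertions the other holds. This is not the KnowRob API; it is a minimal illustration modeling each robot's knowledge base as a set of (subject, predicate, object) triples.

    ```python
    def exchange(kb_a, kb_b):
        """Mutual knowledge exchange: each robot adopts the assertions
        the other holds that it does not yet have (set union)."""
        shared = kb_a | kb_b
        return shared, shared

    # Illustrative knowledge bases of two robots as assertion triples
    robot_a = {("cup1", "type", "Cup"), ("cup1", "in", "kitchen")}
    robot_b = {("cup1", "type", "Cup"), ("table1", "in", "lab")}
    ```

    A real system must additionally resolve conflicting assertions and track provenance; a plain union only covers the agreement case.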

    Envisioning the qualitative effects of robot manipulation actions using simulation-based projections

    Autonomous robots that are to perform complex everyday tasks such as making pancakes have to understand how the effects of an action depend on the way the action is executed. Within Artificial Intelligence, classical planning reasons about whether actions are executable, but makes the assumption that the actions will succeed (with some probability). In this work, we have designed, implemented, and analyzed a framework that allows us to envision the physical effects of robot manipulation actions. We consider envisioning to be a qualitative reasoning method that reasons about actions and their effects based on simulation-based projections. Thereby it allows a robot to infer what could happen when it performs a task in a certain way. This is achieved by translating a qualitative physics problem into a parameterized simulation problem; performing a detailed physics-based simulation of a robot plan; logging the state evolution into appropriate data structures; and then translating these sub-symbolic data structures into interval-based first-order symbolic, qualitative representations, called timelines. The result of the envisioning is a set of detailed narratives represented by timelines which are then used to infer answers to qualitative reasoning problems. By envisioning the outcome of actions before committing to them, a robot is able to reason about physical phenomena and can therefore prevent itself from ending up in unwanted situations. Using this approach, robots can perform manipulation tasks more efficiently, robustly, and flexibly, and they can even successfully accomplish previously unknown variations of tasks
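    The pipeline above logs a sub-symbolic state evolution and then translates it into interval-based qualitative timelines. A hedged sketch of that last step follows, compressing a sampled trace into intervals over which a qualitative value stays constant; the trace data and predicate names are illustrative, not the authors' system.

    ```python
    def to_timeline(samples):
        """samples: list of (time, value) pairs in time order.
        Returns [(start, end, value)] intervals over which the
        qualitative value stays constant."""
        intervals = []
        for t, v in samples:
            if intervals and intervals[-1][2] == v:
                # Same qualitative state: extend the current interval
                intervals[-1] = (intervals[-1][0], t, v)
            else:
                # State change: open a new interval
                intervals.append((t, t, v))
        return intervals

    # e.g. contact state of a pancake during flipping, sampled per tick
    trace = [(0, "free"), (1, "free"), (2, "on_spatula"),
             (3, "on_spatula"), (4, "free")]
    ```

    Qualitative queries ("was the pancake ever on the spatula, and when?") then run over the short interval list instead of the raw per-tick log.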