504 research outputs found

    Robot@VirtualHome, an ecosystem of virtual environments and tools for realistic indoor robotic simulation

    Simulations and synthetic datasets have historically empowered research on a variety of service robotics problems, and are nowadays being revamped through the use of rich virtual environments. However, special attention must be paid so that the resulting algorithms are not biased by the synthetic data and can generalize to real-world conditions. These aspects are usually compromised when the virtual environments are designed by hand. This article presents Robot@VirtualHome, an ecosystem of virtual environments and tools for managing realistic virtual environments in which robotic simulations can be performed. Here “realistic” means that the environments mimic the room layouts and objects of 30 real houses, and hence are not influenced by the designer’s prior knowledge. The provided virtual environments are highly customizable (lighting conditions, textures, object models, etc.), accommodate meta-information about the elements appearing therein (object types, room categories and layouts, etc.), and support the inclusion of virtual service robots and sensors. To illustrate the possibilities of Robot@VirtualHome, we show how it has been used to collect a synthetic dataset, and exemplify how to exploit it to successfully address two service robotics problems: semantic mapping and appearance-based localization.

    This work has been supported by the research projects WISER (DPI2017-84827-R), funded by the Spanish Government and financed by the European Regional Development Fund (FEDER), and ARPEGGIO (PID2020-117057GB-I00), funded by the European H2020 program, by grant FPU17/04512, and by the UG PhD scholarship program of the University of Groningen. We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan X Pascal used for this research. We would like to thank the Center for Information Technology of the University of Groningen for their support and for providing access to the Peregrine high-performance computing cluster.
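
    Since the abstract mentions per-element meta-information (object types, room categories and layouts) without documenting its on-disk format, the following is only a minimal sketch of how such annotations might be consumed once exported; the file name and JSON keys are assumptions for illustration, not part of Robot@VirtualHome.

    # Hypothetical sketch: assumed annotation file and keys, not the actual Robot@VirtualHome format.
    import json
    from collections import Counter

    def summarise_house(annotation_path: str) -> Counter:
        """Count object types per room category from an assumed JSON annotation file."""
        with open(annotation_path, "r", encoding="utf-8") as f:
            house = json.load(f)  # assumed layout: {"rooms": [{"category": ..., "objects": [{"type": ...}, ...]}]}
        counts = Counter()
        for room in house.get("rooms", []):
            for obj in room.get("objects", []):
                counts[(room["category"], obj["type"])] += 1
        return counts

    if __name__ == "__main__":
        for (room, obj_type), n in summarise_house("house_01_annotations.json").most_common(5):
            print(f"{room}: {obj_type} x{n}")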

    Semantic information for robot navigation: a survey

    There is a growing trend in robotics towards implementing behavioural mechanisms based on human psychology, such as the processes associated with thinking. Semantic knowledge has opened new paths in robot navigation, allowing a higher level of abstraction in the representation of information. In contrast with the early years, when navigation relied on geometric navigators that interpreted the environment as a series of accessible areas, or later developments that led to the use of graph theory, semantic information has moved robot navigation one step further. This work presents a survey of the concepts, methodologies and techniques that allow semantic information to be included in robot navigation systems. The techniques involved deal with a range of tasks, from modelling the environment and building a semantic map, to methods for learning new concepts and representing the acquired knowledge, in many cases through interaction with users. As understanding the environment is essential to achieve high-level navigation, this paper reviews techniques for the acquisition of semantic information, paying attention to the two main groups: human-assisted and autonomous techniques. Some state-of-the-art semantic knowledge representations are also studied, including ontologies, cognitive maps and semantic maps. All of this leads to a recent concept, semantic navigation, which integrates the previous topics to generate high-level navigation systems able to deal with complex real-world situations.

    The research leading to these results has received funding from HEROITEA: Heterogeneous Intelligent Multi-Robot Team for Assistance of Elderly People (RTI2018-095599-B-C21), funded by the Spanish Ministerio de Economía y Competitividad. This work was also supported by the project "Robots sociales para estimulación física, cognitiva y afectiva de mayores", funded by the Spanish State Research Agency under grant 2019/00428/001, by WASP-AI Sweden, and by the Spanish project Robotic-Based Well-Being Monitoring and Coaching for Elderly People during Daily Life Activities (RTI2018-095599-A-C22).
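
    To make the notion of a semantic map concrete, a minimal illustrative sketch is given below: places carry semantic labels and are linked by traversable edges, and a query resolves a category (e.g. "kitchen") to the nearest matching place. The structure, place names and query are generic assumptions for illustration, not a representation proposed by the survey.

    # Toy semantic map: a labelled place graph with a breadth-first category query.
    from collections import deque

    semantic_map = {
        "corridor_1":    {"label": "corridor",    "neighbours": ["kitchen_1", "living_room_1"]},
        "kitchen_1":     {"label": "kitchen",     "neighbours": ["corridor_1"]},
        "living_room_1": {"label": "living room", "neighbours": ["corridor_1"]},
    }

    def nearest_place_with_label(start: str, target_label: str) -> str | None:
        """Breadth-first search over the place graph for the closest place of a given category."""
        queue, visited = deque([start]), {start}
        while queue:
            place = queue.popleft()
            if semantic_map[place]["label"] == target_label:
                return place
            for nxt in semantic_map[place]["neighbours"]:
                if nxt not in visited:
                    visited.add(nxt)
                    queue.append(nxt)
        return None

    print(nearest_place_with_label("living_room_1", "kitchen"))  # -> kitchen_1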

    Enhancing semantic segmentation with detection priors and iterated graph cuts for robotics

    To foster human–robot interaction, autonomous robots need to understand the environment in which they operate. In this context, one of the main challenges is semantic segmentation, together with the recognition of important objects, which can aid robots during exploration as well as when planning new actions and interacting with the environment. In this study, we extend a multi-view semantic segmentation system based on 3D Entangled Forests (3DEF) by integrating and refining two object detectors, Mask R-CNN and You Only Look Once (YOLO), with Bayesian fusion and iterated graph cuts. The new system takes the best of its components, successfully exploiting both 2D and 3D data. Our experiments show that our approach is competitive with the state of the art and leads to accurate semantic segmentations.
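
    The abstract names Bayesian fusion of detector and segmenter outputs as one ingredient; the sketch below shows only the generic fusion step for per-pixel class distributions under a conditional-independence assumption. The likelihood models actually used in the paper and the iterated graph-cut refinement are not reproduced; array shapes and the toy values are assumptions for illustration.

    # Generic naive-Bayes-style fusion of two per-pixel class distributions (not the paper's exact model).
    import numpy as np

    def fuse_probabilities(p_a: np.ndarray, p_b: np.ndarray) -> np.ndarray:
        """Fuse two (H, W, C) per-pixel class distributions assuming conditional independence."""
        fused = p_a * p_b                                      # element-wise product of the two sources
        fused /= fused.sum(axis=-1, keepdims=True) + 1e-12     # renormalise over the class dimension
        return fused

    # Toy usage: two 2x2 "images" with 3 classes.
    p_seg = np.full((2, 2, 3), 1.0 / 3.0)   # uninformative segmenter output
    p_det = np.zeros((2, 2, 3))             # detector favouring class 1 everywhere
    p_det[..., 0] = 0.1
    p_det[..., 1] = 0.8
    p_det[..., 2] = 0.1
    labels = fuse_probabilities(p_seg, p_det).argmax(axis=-1)
    print(labels)  # all pixels assigned class 1, as the detector dominates the uniform segmenter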