
    On Managing Knowledge for MAPE-K Loops in Self-Adaptive Robotics Using a Graph-Based Runtime Model

    Service robotics involves the design of robots that work in dynamic and very open environments, usually shared with people. In this scenario, decision-making processes can rarely be completely closed at design time, and it is necessary to define a certain variability that will be resolved at runtime. MAPE-K (Monitor–Analyze–Plan–Execute over a shared Knowledge) loops are a very popular scheme for addressing this real-time self-adaptation. As their name indicates, they include monitoring, analysis, planning, and execution modules, which interact through a knowledge model. As the problems to be solved by the robot can be very complex, several MAPE-K loops may need to coexist simultaneously in the software architecture deployed on the robot. The loops then need to be coordinated, for which they can use the knowledge model, a representation that includes information about the environment and the robot, but also about the actions being executed. This paper describes the use of a graph-based representation, the Deep State Representation (DSR), as the knowledge component of the MAPE-K scheme applied in robotics. The DSR manages perceptions and actions, and allows for inter- and intra-coordination of MAPE-K loops. The graph is updated at runtime, representing symbolic and geometric information. The scheme has been successfully applied in a retail intralogistics scenario, where a pallet truck robot has to manage roll containers to satisfy requests from human pickers working in the warehouse. Partial funding for open access charge: Universidad de Málaga. This work has been partially developed within SA3IR (an experiment funded by the EU H2020 ESMERA Project under Grant Agreement 780265), the project RTI2018-099522-B-C4X, funded by the Gobierno de España and FEDER funds, and the B1-2021_26 project, funded by the University of Málaga.
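
    The following is a minimal sketch, in Python, of one MAPE-K iteration over a shared graph-based knowledge model. The KnowledgeGraph class, the node names, and the attributes are illustrative assumptions, not the actual DSR API; the paper's DSR is a richer runtime graph, holding symbolic and geometric information, that several loops read and write to coordinate.

        # Minimal sketch of one MAPE-K iteration over a shared graph-based
        # knowledge model. All names and attributes are illustrative; the
        # paper's DSR is a richer runtime graph shared by several loops.

        class KnowledgeGraph:
            """Shared runtime graph holding symbolic and geometric data."""

            def __init__(self):
                self.nodes = {}   # node name -> attribute dict
                self.edges = []   # (src, dst, label) triples

            def update(self, name, **attrs):
                self.nodes.setdefault(name, {}).update(attrs)

            def get(self, name):
                return self.nodes.get(name, {})

        def mape_k_step(kg):
            # Monitor: write fresh perceptions into the shared graph.
            kg.update("robot", pose=(1.0, 2.0, 0.0))
            kg.update("roll_container_7", pose=(4.0, 2.0, 0.0), status="requested")

            # Analyze: detect a condition that requires adaptation.
            needs_pickup = kg.get("roll_container_7").get("status") == "requested"

            # Plan: record the chosen action in the graph, where other
            # coexisting loops can read it and coordinate.
            if needs_pickup:
                kg.update("mission", action="fetch", target="roll_container_7")

            # Execute: act on the planned mission and reflect progress back.
            if kg.get("mission").get("action") == "fetch":
                kg.update("mission", state="in_progress")

        kg = KnowledgeGraph()
        mape_k_step(kg)
        print(kg.nodes["mission"])

    Because every stage communicates only through the graph, a second loop (for example, one handling navigation) can watch the same "mission" node to coordinate with this one.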

    Automatic Understanding and Mapping of Regions in Cities Using Google Street View Images

    The use of semantic representations to achieve place understanding has been widely studied using indoor information. This kind of data can then be used for navigation, localization, and place identification with mobile devices. Nevertheless, applying this approach to outdoor data involves certain non-trivial procedures, such as gathering the information. This problem can be solved by using map APIs, which give access to the images captured to build the map of a city. In this paper, we leverage such APIs, which collect images of city streets, to generate a semantic representation of the city, built using a clustering algorithm and semantic descriptors. The main contribution of this work is a new approach for generating a map with semantic information for each area of the city. The proposed method automatically assigns a semantic label to each cluster on the map. This method can be useful in smart cities and autonomous driving approaches due to its categorization of the zones in a city. The results show the robustness of the proposed pipeline and the advantages of using Google Street View images, semantic descriptors, and machine learning algorithms to generate semantic maps of outdoor places. These maps properly encode the zones existing in the selected city and are able to identify new zones between current ones. This work has been supported by the Spanish Grant PID2019-104818RB-I00 funded by MCIN/AEI/10.13039/501100011033 and by “ERDF A way of making Europe”. José Carlos Rangel and Edmanuel Cruz were supported by the Sistema Nacional de Investigación (SNI) of SENACYT, Panama.
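
    The clustering stage of such a pipeline can be sketched in a few lines of Python. The descriptors below are random placeholders standing in for the per-image semantic descriptors computed from Google Street View images, the category names are hypothetical, and scikit-learn's KMeans stands in for the clustering algorithm; the paper's exact choices may differ.

        # Sketch of the clustering stage: group geotagged street images by
        # their semantic descriptors and label each resulting zone. The data
        # here is random; real descriptors would come from scene
        # classification of Google Street View images.
        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)
        coords = rng.uniform(0, 1, size=(200, 2))        # image geotags (normalized)
        descriptors = rng.uniform(0, 1, size=(200, 16))  # per-image semantic scores

        # Cluster images in descriptor space so semantically similar
        # areas of the city fall into the same zone.
        k = 5
        zones = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(descriptors)

        # Label each zone with its dominant semantic category: the
        # descriptor dimension with the highest mean score in the cluster.
        categories = [f"category_{i}" for i in range(descriptors.shape[1])]
        for z in range(k):
            members = zones == z
            label = categories[int(descriptors[members].mean(axis=0).argmax())]
            center = coords[members].mean(axis=0)
            print(f"zone {z}: {label}, {members.sum()} images near {center.round(2)}")

    Painting each image's geotag with its cluster label then yields the semantic map: contiguous runs of one label form a zone, and label changes between clusters mark the boundaries between zones.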

    Simultaneous Localization and Mapping (SLAM) for Autonomous Driving: Concept and Analysis

    The Simultaneous Localization and Mapping (SLAM) technique has achieved astonishing progress over the last few decades and has generated considerable interest in the autonomous driving community. With its conceptual roots in navigation and mapping, SLAM outperforms some traditional positioning and localization techniques since it can support more reliable and robust localization, planning, and control, meeting some key criteria for autonomous driving. In this study the authors first give an overview of the different SLAM implementation approaches and then discuss the applications of SLAM for autonomous driving with respect to different driving scenarios, vehicle system components, and the characteristics of the SLAM approaches. The authors then discuss some challenging issues and current solutions when applying SLAM to autonomous driving. Quantitative means of evaluating the characteristics and performance of SLAM systems and of monitoring the risk in SLAM estimation are also reviewed. In addition, this study describes a real-world road test to demonstrate a modern multi-sensor SLAM procedure for autonomous driving. The numerical results show that a high-precision 3D point cloud map can be generated by the SLAM procedure with the integration of Lidar and GNSS/INS. An online localization solution with four- to five-centimetre accuracy can be achieved based on this pre-generated map and online Lidar scan matching tightly fused with an inertial system.
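
    A conceptual Python sketch of such a map-based online localization loop may help fix ideas. Here scan_match() is a placeholder for a real registration method (e.g., ICP or NDT against the pre-generated point cloud map), the fusion gain is illustrative, and the state is 2D for brevity; a tightly coupled system, as in the study, would perform the fusion inside a filter or optimizer rather than by blending poses.

        # Conceptual sketch: an inertial prediction corrected by matching
        # each Lidar scan against a pre-generated map. scan_match() is a
        # stand-in for a real registration step; all values are illustrative.
        import numpy as np

        def ins_predict(pose, velocity, dt):
            # Dead-reckoning prediction from the inertial system (2D).
            return pose + velocity * dt

        def scan_match(pose_guess, scan, map_cloud):
            # Placeholder: a real system would register `scan` against
            # `map_cloud` (e.g., with ICP) starting from `pose_guess`.
            # Here we just simulate a small correction.
            return pose_guess + np.array([0.01, -0.01])

        def localize(pose, velocity, scans, map_cloud, dt=0.1, gain=0.7):
            trajectory = [pose]
            for scan in scans:
                predicted = ins_predict(trajectory[-1], velocity, dt)
                matched = scan_match(predicted, scan, map_cloud)
                # Blend prediction and scan-matching result; a tightly
                # coupled system fuses these inside a filter instead.
                fused = (1 - gain) * predicted + gain * matched
                trajectory.append(fused)
            return np.array(trajectory)

        traj = localize(np.zeros(2), np.array([1.0, 0.0]),
                        scans=[None] * 5, map_cloud=None)
        print(traj.round(3))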