
    On Managing Knowledge for MAPE-K Loops in Self-Adaptive Robotics Using a Graph-Based Runtime Model

    Service robotics involves the design of robots that work in dynamic, very open environments, usually shared with people. In this scenario, it is very difficult for decision-making processes to be completely closed at design time, and it is necessary to define a certain variability that will be closed at runtime. MAPE-K (Monitor–Analyze–Plan–Execute over a shared Knowledge) loops are a very popular scheme for addressing this runtime self-adaptation. As stated in their definition, they include monitoring, analysis, planning, and execution modules, which interact through a knowledge model. As the problems to be solved by the robot can be very complex, several MAPE loops may need to coexist simultaneously in the robot's software architecture. The loops then need to be coordinated, for which they can use the knowledge model, a representation that includes information about the environment and the robot, as well as about the actions being executed. This paper describes the use of a graph-based representation, the Deep State Representation (DSR), as the knowledge component of the MAPE-K scheme applied in robotics. The DSR manages perceptions and actions, and allows for inter- and intra-loop coordination of MAPE-K loops. The graph is updated at runtime, representing both symbolic and geometric information. The scheme has been successfully applied in a retail intralogistics scenario, where a pallet truck robot manages roll containers to satisfy requests from human pickers working in the warehouse. Partial funding for open access charge: Universidad de Málaga. This work has been partially developed within SA3IR (an experiment funded by the EU H2020 ESMERA Project under Grant Agreement 780265), the project RTI2018-099522-B-C4X, funded by the Gobierno de España and FEDER funds, and the B1-2021_26 project, funded by the University of Málaga.
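    To make the coordination idea concrete, the sketch below shows a minimal MAPE-K loop whose stages communicate only through a shared graph-like knowledge store, in the spirit of the DSR described above. All names here (WorldGraph, MapeLoop, the "robot" and "requests" nodes) are illustrative assumptions, not the authors' actual API.

    # Minimal sketch of a MAPE-K loop coordinating through a shared graph-based
    # knowledge model. Names and node contents are hypothetical, for illustration.
    import threading

    class WorldGraph:
        """Shared knowledge: nodes hold symbolic/geometric attributes."""
        def __init__(self):
            self._nodes = {}              # node_id -> attribute dict
            self._lock = threading.Lock() # several loops may update concurrently

        def update_node(self, node_id, **attrs):
            with self._lock:
                self._nodes.setdefault(node_id, {}).update(attrs)

        def get_node(self, node_id):
            with self._lock:
                return dict(self._nodes.get(node_id, {}))

    class MapeLoop:
        """One MAPE loop; several loops can run over the same graph."""
        def __init__(self, name, graph):
            self.name, self.graph = name, graph

        def monitor(self):
            # read robot state and pending pick requests from the graph
            return self.graph.get_node("robot"), self.graph.get_node("requests")

        def analyze(self, robot, requests):
            # decide whether adaptation is needed (e.g. a new container request)
            return requests.get("pending", [])

        def plan(self, pending):
            # produce a symbolic action for the first pending request, if any
            return {"action": "fetch_container", "target": pending[0]} if pending else None

        def execute(self, plan):
            if plan:
                # write the intended action back into the graph so other loops
                # can see it and coordinate (inter-/intra-loop coordination)
                self.graph.update_node("robot", current_action=plan)

        def step(self):
            self.execute(self.plan(self.analyze(*self.monitor())))

    graph = WorldGraph()
    graph.update_node("requests", pending=["roll_container_7"])
    MapeLoop("intralogistics", graph).step()
    print(graph.get_node("robot"))  # the planned action is now visible to other loops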

    4CNet: A Confidence-Aware, Contrastive, Conditional, Consistency Model for Robot Map Prediction in Multi-Robot Environments

    Mobile robots in unknown cluttered environments with irregularly shaped obstacles often face sensing, energy, and communication challenges that directly affect their ability to explore these environments. In this paper, we introduce a novel deep learning method, the Confidence-Aware Contrastive Conditional Consistency Model (4CNet), for mobile robot map prediction during resource-limited exploration in multi-robot environments. 4CNet uniquely incorporates: 1) a conditional consistency model for map prediction in irregularly shaped unknown regions, 2) a contrastive map-trajectory pretraining framework for a trajectory encoder that extracts spatial information from the trajectories of nearby robots during map prediction, and 3) a confidence network that measures the uncertainty of map predictions for effective exploration under resource constraints. We incorporate 4CNet within our proposed robot exploration with map prediction architecture, 4CNet-E. We then conduct extensive comparison studies with 4CNet-E and state-of-the-art heuristic and learning methods to investigate both map prediction and exploration performance in environments consisting of uneven terrain and irregularly shaped obstacles. Results showed that 4CNet-E obtained statistically significantly higher prediction accuracy and area coverage across varying environment sizes, numbers of robots, energy budgets, and communication limitations. Real-world mobile robot experiments validated the feasibility and generalizability of 4CNet-E for mobile robot map prediction and exploration. Comment: 14 pages, 10 figures.
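    As a rough illustration of the contrastive map-trajectory pretraining idea, the sketch below pairs embeddings of map patches with embeddings of nearby-robot trajectories using an InfoNCE-style objective. The encoders, embedding sizes, and temperature are placeholder assumptions; the paper's actual pretraining framework may differ.

    # Minimal sketch (assumed, not the paper's implementation) of a contrastive
    # objective aligning map-patch embeddings with trajectory embeddings.
    import torch
    import torch.nn.functional as F

    def info_nce(map_emb, traj_emb, temperature=0.07):
        """map_emb, traj_emb: (B, D) embeddings; row i of each is a positive pair."""
        map_emb = F.normalize(map_emb, dim=-1)
        traj_emb = F.normalize(traj_emb, dim=-1)
        logits = map_emb @ traj_emb.t() / temperature   # (B, B) similarity matrix
        targets = torch.arange(map_emb.size(0))         # diagonal entries are positives
        return F.cross_entropy(logits, targets)

    # toy usage with random embeddings standing in for encoder outputs
    loss = info_nce(torch.randn(8, 128), torch.randn(8, 128))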

    Predicting semantic map representations from images using pyramid occupancy networks

    Autonomous vehicles commonly rely on highly detailed bird's-eye-view maps of their environment, which capture both static elements of the scene, such as road layout, and dynamic elements, such as other cars and pedestrians. Generating these map representations on the fly is a complex multi-stage process which incorporates many important vision-based elements, including ground plane estimation, road segmentation, and 3D object detection. In this work we present a simple, unified approach for estimating these map representations directly from monocular images using a single end-to-end deep learning architecture. For the maps themselves we adopt a semantic Bayesian occupancy grid framework, allowing us to trivially accumulate information over multiple cameras and timesteps. We demonstrate the effectiveness of our approach by evaluating against several challenging baselines on the NuScenes and Argoverse datasets, and show that we are able to achieve relative improvements of 9.1% and 22.3%, respectively, compared to the best-performing existing method.
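    The "trivial accumulation" enabled by a Bayesian occupancy grid is typically done in log-odds space, where fusing observations from multiple cameras or timesteps reduces to addition. The sketch below shows this standard update; the grid shapes, prior, and clipping bounds are illustrative assumptions, not values from the paper.

    # Minimal sketch of Bayesian occupancy-grid fusion in log-odds space.
    import numpy as np

    def logit(p, eps=1e-6):
        p = np.clip(p, eps, 1.0 - eps)
        return np.log(p / (1.0 - p))

    def fuse_observation(log_odds_grid, predicted_probs, prior=0.5):
        """log_odds_grid: (C, H, W) accumulated log-odds per semantic class.
        predicted_probs: (C, H, W) per-cell occupancy probabilities from one
        camera at one timestep, already warped into the common BEV frame."""
        log_odds_grid = log_odds_grid + logit(predicted_probs) - logit(prior)
        return np.clip(log_odds_grid, -10.0, 10.0)  # keep accumulated values bounded

    def to_probability(log_odds_grid):
        return 1.0 / (1.0 + np.exp(-log_odds_grid))

    # toy usage: fuse two agreeing observations of a single-class 4x4 grid
    grid = np.zeros((1, 4, 4))
    for _ in range(2):
        grid = fuse_observation(grid, np.full((1, 4, 4), 0.8))
    print(to_probability(grid)[0, 0, 0])  # confidence rises above 0.8 after fusion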