    The STRANDS project: long-term autonomy in everyday environments

    Thanks to the efforts of the robotics and autonomous systems community, the myriad applications and capacities of robots are ever increasing. There is increasing demand from end users for autonomous service robots that can operate in real environments for extended periods. In the Spatiotemporal Representations and Activities for Cognitive Control in Long-Term Scenarios (STRANDS) project (http://strandsproject.eu), we are tackling this demand head-on by integrating state-of-the-art artificial intelligence and robotics research into mobile service robots and deploying these systems for long-term installations in security and care environments. Our robots have been operational for a combined duration of 104 days over four deployments, autonomously performing end-user-defined tasks and traversing 116 km in the process. In this article, we describe the approach we used to enable long-term autonomous operation in everyday environments and how our robots are able to use their long run times to improve their own performance.

    Transfer Learning for Improving Model Predictions in Highly Configurable Software

    Modern software systems are built to be used in dynamic environments, using configuration capabilities to adapt to changes and external uncertainties. In a self-adaptation context, we are often interested in reasoning about the performance of the systems under different configurations. Usually, we learn a black-box model based on real measurements to predict the performance of the system given a specific configuration. However, as modern systems become more complex, many configuration parameters may interact, and we end up learning over an exponentially large configuration space. Naturally, this does not scale when relying on real measurements in the actual changing environment. We propose a different solution: instead of taking the measurements from the real system, we learn the model using samples from other sources, such as simulators that approximate the performance of the real system at low cost. We define a cost model that transforms the traditional view of model learning into a multi-objective problem that takes into account not only model accuracy but also measurement effort. We evaluate our cost-aware transfer learning solution using real-world configurable software including (i) a robotic system, (ii) three different stream processing applications, and (iii) a NoSQL database system. The experimental results demonstrate that our approach can achieve (a) high prediction accuracy as well as (b) high model reliability. (To be published in the proceedings of the 12th International Symposium on Software Engineering for Adaptive and Self-Managing Systems, SEAMS'17.)
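
    The idea lends itself to a compact sketch: train a base model on plentiful cheap simulator samples, then learn a correction from a handful of costly real measurements. The code below is a minimal illustration of that transfer step, not the paper's actual cost model; the measurement functions, the quadratic response surface, and the linear correction are all invented for the example.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)

    def measure_simulator(cfg):
        # Synthetic stand-in for a cheap simulator: quadratic response surface.
        return float(cfg @ cfg)

    def measure_real(cfg):
        # Synthetic stand-in for the costly real system: simulator response
        # plus a systematic bias and measurement noise.
        return 1.3 * measure_simulator(cfg) + 0.5 + rng.normal(0.0, 0.01)

    configs = rng.random((200, 4))   # 200 candidate configurations, 4 parameters

    # Many cheap simulator samples train the base model.
    sim_y = np.array([measure_simulator(c) for c in configs])
    base = GaussianProcessRegressor().fit(configs, sim_y)

    # A few expensive real measurements learn a correction from simulator
    # predictions to real performance: the "transfer" step.
    few = configs[:10]
    real_y = np.array([measure_real(c) for c in few])
    corr = LinearRegression().fit(base.predict(few).reshape(-1, 1), real_y)

    def predict_real(cfg):
        """Predict real-system performance for an unmeasured configuration."""
        sim_pred = base.predict(cfg.reshape(1, -1)).reshape(-1, 1)
        return float(corr.predict(sim_pred)[0])

    print(predict_real(np.array([0.2, 0.4, 0.1, 0.3])))
    ```

    The cost-awareness enters through the sampling budget: 200 simulator runs versus only 10 real measurements, a split a cost model would choose to balance accuracy against measurement effort.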

    Socially Compliant Navigation Dataset (SCAND): A Large-Scale Dataset of Demonstrations for Social Navigation

    Social navigation is the capability of an autonomous agent, such as a robot, to navigate in a 'socially compliant' manner in the presence of other intelligent agents such as humans. With the emergence of autonomously navigating mobile robots in human-populated environments (e.g., domestic service robots in homes and restaurants and food delivery robots on public sidewalks), incorporating socially compliant navigation behaviors on these robots becomes critical to ensuring safe and comfortable human-robot coexistence. To address this challenge, imitation learning is a promising framework, since it is easier for humans to demonstrate the task of social navigation than to formulate reward functions that accurately capture the complex multi-objective setting of social navigation. The application of imitation learning and inverse reinforcement learning to social navigation for mobile robots, however, is currently hindered by a lack of large-scale datasets that capture socially compliant robot navigation demonstrations in the wild. To fill this gap, we introduce the Socially CompliAnt Navigation Dataset (SCAND), a large-scale, first-person-view dataset of socially compliant navigation demonstrations. Our dataset contains 8.7 hours, 138 trajectories, and 25 miles of socially compliant, human-teleoperated driving demonstrations, comprising multi-modal data streams including 3D lidar, joystick commands, odometry, and visual and inertial information, collected on two morphologically different mobile robots (a Boston Dynamics Spot and a Clearpath Jackal) by four different human demonstrators in both indoor and outdoor environments. We additionally perform preliminary analysis and validation through real-world robot experiments and show that navigation policies learned by imitation learning on SCAND generate socially compliant behavior.
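
    Datasets of this kind are typically consumed via behavior cloning, i.e., supervised regression from sensed observations to the demonstrator's joystick commands. The sketch below assumes invented tensor shapes (a flattened 720-beam lidar scan in, a linear/angular velocity pair out) rather than SCAND's actual schema.

    ```python
    import torch
    import torch.nn as nn

    # Minimal behavior-cloning sketch: regress teleoperation commands
    # from observations. Shapes are placeholders, not SCAND's format.
    class Policy(nn.Module):
        def __init__(self, scan_dim=720, cmd_dim=2):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(scan_dim, 256), nn.ReLU(),
                nn.Linear(256, 64), nn.ReLU(),
                nn.Linear(64, cmd_dim),   # (linear, angular) velocity
            )

        def forward(self, scan):
            return self.net(scan)

    policy = Policy()
    opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    # Stand-in batch: 32 flattened lidar scans and the human's commands.
    scans = torch.randn(32, 720)
    expert_cmds = torch.randn(32, 2)

    for _ in range(100):                  # a few gradient steps
        opt.zero_grad()
        loss = loss_fn(policy(scans), expert_cmds)
        loss.backward()
        opt.step()
    ```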

    An Integrated Control Framework for Long-Term Autonomy in Mobile Service Robots


    Multi-Sensor Mobile Robot Localization For Diverse Environments

    Mobile robot localization with different sensors and algorithms is a widely studied problem, and many approaches have been proposed with considerable degrees of success. However, every sensor and algorithm has limitations, due to which we believe no single localization algorithm can be “perfect” or universally applicable to all situations. Laser rangefinders are commonly used for localization, and state-of-the-art algorithms are capable of achieving sub-centimeter accuracy in environments with features observable by laser rangefinders. Unfortunately, in large-scale environments, there are bound to be areas devoid of features visible to a laser rangefinder, like open atria or corridors with glass walls. In such situations, the error in localization estimates using laser rangefinders can grow in an unbounded manner. Localization algorithms that use depth cameras, like the Microsoft Kinect sensor, have similar characteristics. WiFi signal-strength-based algorithms, on the other hand, are applicable anywhere there is dense WiFi coverage and have bounded errors. Although the minimum error of WiFi-based localization may be greater than that of laser rangefinder or depth camera based localization, its maximum error is bounded and less than that of the other algorithms. Hence, in our work, we analyze the strengths of localization using all three sensors: a laser rangefinder, a depth camera, and WiFi. We identify the sensors that are most accurate at localization for different locations on the map. The mobile robot could then, for example, rely more on WiFi localization in open areas or areas with glass walls, and on laser rangefinder and depth camera based localization in corridor and office environments.
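
    One simple way to act on such a per-location analysis, sketched below with invented error figures, is to annotate map regions with each sensor's expected localization error, then either select the lowest-error sensor or fuse per-sensor estimates by inverse-variance weighting.

    ```python
    # Hypothetical per-sensor error tables: expected localization error
    # (meters) per map region. In practice these would be learned from
    # logged accuracy data, not hard-coded.
    EXPECTED_ERROR = {
        "laser":  {"corridor": 0.02, "atrium": 5.00, "glass_hall": 4.00},
        "kinect": {"corridor": 0.05, "atrium": 3.00, "glass_hall": 3.50},
        "wifi":   {"corridor": 1.20, "atrium": 1.50, "glass_hall": 1.40},
    }

    def best_sensor(region):
        """Pick the sensor with the lowest expected error in this region."""
        return min(EXPECTED_ERROR, key=lambda s: EXPECTED_ERROR[s][region])

    def fused_estimate(estimates, region):
        """Inverse-variance weighting of per-sensor (x, y) estimates."""
        weights = {s: 1.0 / EXPECTED_ERROR[s][region] ** 2 for s in estimates}
        total = sum(weights.values())
        x = sum(weights[s] * estimates[s][0] for s in estimates) / total
        y = sum(weights[s] * estimates[s][1] for s in estimates) / total
        return x, y

    print(best_sensor("atrium"))   # -> "wifi": bounded error wins in open areas
    ```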

    Indoor Localization and Mapping Using Deep Learning Networks

    Over the past several decades, robots have been used extensively in environments that pose high risk to human operators and in jobs that are repetitive and monotonous. In recent years, robot autonomy has been exploited to extend their use to several non-trivial tasks such as space exploration, underwater exploration, and investigating hazardous environments. Such tasks require robots to function in unstructured environments that can change dynamically. Successful use of robots in these tasks requires them to be able to determine their precise location, obtain maps and other information about their environment, navigate autonomously, and operate intelligently in the unknown environment. The process of determining the location of the robot while generating a map of its environment is termed in the literature Simultaneous Localization and Mapping (SLAM). Light Detection and Ranging (LiDAR) sensors, Sound Navigation and Ranging (SONAR) sensors, and depth cameras are typically used to generate a representation of the environment during the SLAM process. However, real-time localization and generation of map information are still challenging tasks. Therefore, there is a need for techniques that speed up the approximate localization and mapping process while using fewer computational resources. This thesis presents an alternative method based on deep learning and computer vision algorithms for generating approximate localization information for mobile robots. The approach was investigated to obtain approximate localization information from images captured by monocular cameras. Approximate localization can subsequently be used to develop coarse maps where a priori information is not available. Experiments were conducted to verify the ability of the proposed technique to determine the approximate location of the robot. The approximate location was qualitatively denoted in terms of the robot's location in a building, a floor of the building, and interior corridors. ArUco markers were used to determine the quantitative location of the robot. The use of this approximate location in determining the location of key features in the vicinity of the robot was also studied. The results of the research reported in this thesis demonstrate that low-cost, low-resolution techniques can be used in conjunction with deep learning techniques to obtain the approximate localization of an autonomous robot. Further, such approximate information can be used to determine coarse position information of key features in the vicinity. It is anticipated that this approach can subsequently be extended to develop low-resolution maps of the environment that are suitable for autonomous navigation of robots.
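
    The qualitative part of this pipeline (which building, floor, or corridor is the robot in?) can be framed as ordinary image classification. The following is a minimal convolutional classifier with invented class counts and input sizes, not the network architecture used in the thesis.

    ```python
    import torch
    import torch.nn as nn

    # Coarse-localization sketch: one monocular image in, one place label
    # out (e.g., "building A / floor 2 / corridor 3"). The 24 classes and
    # 224x224 input are illustrative assumptions.
    class CoarseLocalizer(nn.Module):
        def __init__(self, num_places=24):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),   # (N, 64, 1, 1) regardless of H, W
            )
            self.classifier = nn.Linear(64, num_places)

        def forward(self, img):            # img: (N, 3, H, W)
            return self.classifier(self.features(img).flatten(1))

    model = CoarseLocalizer()
    logits = model(torch.randn(1, 3, 224, 224))
    place_id = logits.argmax(dim=1)        # index into a place/corridor table
    ```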

    Route Planning for Long-Term Robotics Missions

    Many future robotic applications, such as operation in large, uncertain environments, depend on more autonomous robots. Long-term autonomy presents the challenge of planning and scheduling goal locations across a mission spanning multiple days. This is an NP-hard problem for which computing an optimal solution is infeasible due to the large number of vertices to visit. In some cases, hardware constraints also require the robot to return to a charging station multiple times during a long-term mission. Uncertainties in the robot model and environment require the planner to account for them beforehand or to adapt and improve its plan at runtime. The problem addressed in this work is how to plan multi-day routes for a robot in which every predefined location must be visited exactly once and each daily route must start and end at the same initial position while respecting a maximum daily operation time. The proposed solution adopts problem definitions from the delivery industry and compares various metaheuristic techniques for planning and scheduling the multi-day routes of a robotic mission. The problem is therefore modeled as a time-constrained Vehicle Routing Problem in which each daily plan is limited by how long the robot can operate on a full charge. Costs are modeled as the time the robot takes to move between locations, considering robot and environment characteristics. Solutions are obtained in a two-step process: a greedy initial solution is generated and then improved by metaheuristic local search. A custom time-window formulation, defined with respect to the theoretical maximum daily route, adds human expert input, priorities, or expiration times to the planned routes, allowing the planner to be flexible across robotic applications. This thesis also proposes an intermediary mission-control layer that connects the daily route plan to the robot's navigation layer. The goal of the mission control is to monitor robot operation, continuously improve the route, and adapt to unexpected events by dropping waypoints according to defined penalties. This is an iterative process in which optimization is performed locally in real time as the robot traverses its goals, and offline at the end of each day with the remaining vertices. The performance of the various metaheuristics, and how optimization improves over time, is analysed in several robotic route planning and scheduling scenarios. Two robotic simulation environments were built to demonstrate practical applications of these methods: an unmanned ground vehicle operated fully autonomously in a simulated underground stone mine, inspecting pillars for structural failures, and in a farm environment, pollinating flowers with an attached robotic arm. All optimization methods tested showed significant improvement in total route cost over the initial Path-Cheapest-Arc solution; however, Guided Local Search had the smallest standard deviation among the methods in most situations. The time windows allowed seamless integration of expert human input, and the mission-control layer kept the robot within mission constraints by dynamically choosing routes and dropping vertices when necessary.
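
    "Path-Cheapest-Arc" and "Guided Local Search" are the names of first-solution and metaheuristic strategies in Google's OR-Tools routing solver, so the two-step formulation described above maps naturally onto that library. The sketch below is one possible encoding, not the thesis's actual code: each day is one "vehicle" that starts and ends at the charging station (node 0), and the travel-time matrix and limits are toy values.

    ```python
    from ortools.constraint_solver import pywrapcp, routing_enums_pb2

    # Toy travel-time matrix (seconds) between the charging station
    # (node 0) and four inspection points; values are illustrative.
    TIME = [
        [0, 60, 90, 80, 70],
        [60, 0, 40, 50, 90],
        [90, 40, 0, 30, 60],
        [80, 50, 30, 0, 40],
        [70, 90, 60, 40, 0],
    ]
    NUM_DAYS = 2          # each "vehicle" is one day's depot-to-depot route
    MAX_DAILY_TIME = 200  # battery-limited operation time per day

    manager = pywrapcp.RoutingIndexManager(len(TIME), NUM_DAYS, 0)
    routing = pywrapcp.RoutingModel(manager)

    def transit(i, j):
        return TIME[manager.IndexToNode(i)][manager.IndexToNode(j)]

    cb = routing.RegisterTransitCallback(transit)
    routing.SetArcCostEvaluatorOfAllVehicles(cb)
    # Cap each day's accumulated travel at the maximum operation time.
    routing.AddDimension(cb, 0, MAX_DAILY_TIME, True, "Time")

    params = pywrapcp.DefaultRoutingSearchParameters()
    params.first_solution_strategy = (
        routing_enums_pb2.FirstSolutionStrategy.PATH_CHEAPEST_ARC)
    params.local_search_metaheuristic = (
        routing_enums_pb2.LocalSearchMetaheuristic.GUIDED_LOCAL_SEARCH)
    params.time_limit.FromSeconds(5)

    solution = routing.SolveWithParameters(params)
    if solution:
        for day in range(NUM_DAYS):
            idx, route = routing.Start(day), []
            while not routing.IsEnd(idx):
                route.append(manager.IndexToNode(idx))
                idx = solution.Value(routing.NextVar(idx))
            route.append(manager.IndexToNode(idx))
            print(f"day {day}: {route}")
    ```

    Time windows, priorities, and droppable waypoints would bolt onto the same model; OR-Tools expresses droppable visits via AddDisjunction with a penalty, which matches the mission-control layer's penalty-based waypoint dropping.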