Pre-grasp planning for time-efficient and robust mobile manipulation
In Mobile Manipulation (MM), navigation and manipulation actions are commonly addressed sequentially. The time efficiency of MM can be improved by planning pre-grasp manipulation actions while the robot performs the navigation actions. However, planning pre-grasp manipulation actions requires accurate 6D object poses, which are usually available only when the robot is close to and has a clear view of the objects.
Further, pre-grasp planning with uncertain poses can lead to failures. This thesis explores how to provide reliable object poses for pre-grasp planning, along with their associated uncertainties, while the robot is still approaching the objects for grasping, and how to use these pose estimates to plan pre-grasp actions and make informed decisions that enhance the time efficiency and robustness of mobile manipulation.

The first part of this thesis focuses on improving the object pose estimates while the robot approaches the objects for grasping. We develop a multi-view 6D pose distribution tracking framework that provides 6D object poses along with the associated uncertainties. The framework compensates for the limited view of the robot camera during the approach by incorporating additional views from stationary external cameras in the environment.

The second part of this thesis focuses on making informed decisions to reduce the risk of failures due to uncertain object pose estimates. We develop a probabilistic inference framework to determine whether the uncertainty in the object pose estimate is acceptable for successfully executing a given action. The framework considers both the estimated uncertainty in the object pose and the uncertainty acceptable for successfully completing the robotic action to determine the likelihood of success.

The final part of this thesis focuses on pre-grasp planning to improve the time efficiency of mobile manipulation with online planning. We develop learning-based methods for pre-grasp planning that exploit the inherent hierarchical structure of pre-grasp planning tasks to improve sample efficiency during learning. First, we learn to plan the pre-grasp approaching motion before grasping. Then, we learn to plan the object grasp sequence and the base poses for grasping.

The main contributions of this thesis include i) the integration of observations from stationary external cameras in the environment with observations from the dynamic robot camera for 6D object pose estimation and associated uncertainty quantification, ii) a demonstration of the use of quantified pose uncertainties to make informed robotic decisions, and iii) sample-efficient learning by exploiting the inherent hierarchical structure of the tasks. With these contributions, we believe this work helps enable time-efficient and robust MM under uncertain pose estimates.
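As a rough illustration of how these parts fit together, here is a minimal Python sketch; all function names, noise models and thresholds are hypothetical stand-ins, not the thesis implementation. It tracks a set of pose hypotheses while the robot approaches and only triggers pre-grasp planning once the estimated success probability is high enough.

```python
# Minimal sketch (not the thesis implementation): gate pre-grasp planning on the
# estimated success probability computed from a tracked pose sample set.
import numpy as np

def track_pose_distribution(prev_samples, observation, noise=0.005):
    """Hypothetical tracker update: nudge pose samples toward a new observation."""
    step = 0.5 * (observation - prev_samples)
    return prev_samples + step + np.random.normal(0.0, noise, prev_samples.shape)

def success_probability(pose_samples, acceptable_error=0.05, target=None):
    """Fraction of pose hypotheses whose error stays within the acceptable bound."""
    target = np.zeros(pose_samples.shape[1]) if target is None else target
    errors = np.linalg.norm(pose_samples - target, axis=1)
    return float(np.mean(errors < acceptable_error))

# Toy run: 3-DoF position-only "poses" tracked while the robot approaches.
samples = np.random.normal(0.0, 0.10, size=(500, 3))           # initially very uncertain
for distance in np.linspace(2.0, 0.5, 6):                      # robot gets closer
    observation = np.random.normal(0.0, 0.005 * distance, 3)   # views improve with proximity
    samples = track_pose_distribution(samples, observation)
    p = success_probability(samples)
    if p > 0.9:
        print(f"distance {distance:.1f} m: P(success)={p:.2f} -> plan the pre-grasp now")
        break
    print(f"distance {distance:.1f} m: P(success)={p:.2f} -> keep tracking")
```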
Pre-grasp approaching on mobile robots: a pre-active layered approach
In Mobile Manipulation (MM), navigation and manipulation are generally solved as sequential, disjoint tasks. Combined optimization of navigation and manipulation costs can improve the time efficiency of MM. However, this is challenging because the precise object pose estimates needed for such combined optimization are often not available until the later stages of MM. Moreover, optimizing navigation and manipulation costs with conventional planning methods under uncertain object pose estimates can lead to failures that require re-planning. In the presence of object pose uncertainty, pre-active approaches are therefore preferred. We propose such a pre-active approach for determining the base pose and pre-grasp manipulator configuration to improve the time efficiency of MM. We devise a Reinforcement Learning (RL) based solution that learns suitable base poses for grasping and pre-grasp manipulator configurations using layered learning, which guides exploration and enables sample-efficient learning. Further, we accelerate learning of pre-grasp manipulator configurations by providing dense rewards using a predictor network trained on previously learned base poses for grasping. Our experiments validate that, in the presence of uncertain object pose estimates, the proposed approach reduces execution time. Finally, we show that our policy learned in simulation can be easily transferred to a real robot.
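A minimal sketch of the layered, dense-reward idea, under assumed data: a simple least-squares regressor stands in for the paper's predictor network, and the shaping term stands in for the RL reward design.

```python
# Minimal sketch (assumed setup, not the paper's code): layered learning where a
# predictor fit on stage-1 results (good base poses per object pose) provides a
# dense shaping reward when learning the pre-grasp manipulator configuration.
import numpy as np

rng = np.random.default_rng(0)

# Stage 1 outcome (assumed): pairs of object pose -> base pose learned by RL.
object_poses = rng.uniform(-1.0, 1.0, size=(200, 3))               # x, y, yaw of object
learned_base_poses = object_poses + rng.normal(0, 0.05, (200, 3))  # stand-in for RL output

# Fit a simple linear predictor: base_pose ~ W @ [object_pose, 1].
X = np.hstack([object_poses, np.ones((200, 1))])
W, *_ = np.linalg.lstsq(X, learned_base_poses, rcond=None)

def predict_base_pose(object_pose):
    return np.append(object_pose, 1.0) @ W

def dense_reward(current_base_pose, object_pose):
    """Stage-2 shaping reward: closer to the predicted base pose is better."""
    return -np.linalg.norm(current_base_pose - predict_base_pose(object_pose))

# A stage-2 learner would add this shaping term to its sparse grasp-success reward.
print(dense_reward(np.array([0.1, 0.2, 0.0]), np.array([0.15, 0.25, 0.1])))
```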
Enabling robots to adhere to social norms by detecting F-formations
Robot navigation in environments shared with humans should take into account social structures and interactions. The identification of social groups has been a challenge for robotics as it encompasses a number of disciplines. We propose a hierarchical clustering method for grouping individuals into free-standing conversational groups (FSCS), utilising their position and orientation. The proposed method is evaluated on the SALSA dataset, achieving an F1 score of 0.94. The algorithm is also evaluated for scalability and is implemented on a mobile robot that detects social groups and engages in interaction.
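The following is a minimal sketch of one common way to cluster people into conversational groups from position and orientation; the O-space projection, stride value and linkage settings are assumptions for illustration, not the paper's exact algorithm.

```python
# Minimal sketch (not the paper's exact method): project each person a fixed stride
# along their facing direction to an assumed O-space centre, then hierarchically
# cluster those centres to obtain candidate conversational groups.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

STRIDE = 0.6   # assumed distance (m) from a person to the shared O-space centre

def group_people(positions, orientations, max_gap=0.8):
    """positions: (N,2) metres; orientations: (N,) radians. Returns group labels."""
    centres = positions + STRIDE * np.stack(
        [np.cos(orientations), np.sin(orientations)], axis=1)
    links = linkage(centres, method="average")
    return fcluster(links, t=max_gap, criterion="distance")

# Two people facing each other plus one bystander facing away.
pos = np.array([[0.0, 0.0], [1.2, 0.0], [3.0, 3.0]])
ori = np.array([0.0, np.pi, -np.pi / 2])
print(group_people(pos, ori))   # e.g. [1 1 2]: the first two share a group
```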
Robotic task success evaluation under multi-modal non-parametric object pose uncertainty
Purpose: Accurate 6D object pose estimation is essential for various robotic tasks. Uncertain pose estimates can lead to task failures; however, a certain degree of error in the pose estimates is often acceptable. This paper aims to enable robots to make informed decisions by quantifying the errors in the object pose estimate and the errors acceptable for task success.
Design/methodology/approach: The authors introduce a framework for evaluating robotic task success under object pose uncertainty, representing both the estimated error space of the object pose and the acceptable error space for task success using multi-modal non-parametric probability distributions. The proposed framework pre-computes the acceptable error space for task success using dynamic simulations and subsequently integrates the pre-computed acceptable error space over the estimated error space of the object pose to predict the likelihood of task success.
Findings: The authors evaluated the proposed framework on two mobile manipulation tasks. The results show that representing the estimated and the acceptable error spaces using multi-modal non-parametric distributions yields higher task success rates and fewer failures.
Research limitations/implications: The proposed framework is generic and can be applied to a wide range of robotic tasks requiring object pose estimation. Hence, given the recent advancements in object pose uncertainty estimation and dynamic simulations, the proposed framework, in conjunction with these advancements, has the potential to enable robots to make reliable and informed decisions under pose uncertainty.
Originality/value: Unlike related works that model both the acceptable error space and the estimated error space using parametric uni-modal distributions, the authors model them as multi-modal distributions, which better reflects the real world.
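A minimal Monte Carlo sketch of the integration step described above, with toy simulated data standing in for the paper's dynamic simulations and density models; the acceptance region, error dimensions and 1-NN lookup are illustrative assumptions.

```python
# Minimal sketch (assumed data, not the authors' implementation): Monte Carlo
# estimate of P(task success) by integrating a simulation-derived acceptance
# function over samples drawn from a multi-modal pose-error distribution.
import numpy as np

rng = np.random.default_rng(1)

# Pre-computed in simulation (assumed): pose-error offsets and whether the
# grasp still succeeded at that offset.
sim_errors = rng.uniform(-0.05, 0.05, size=(2000, 3))           # x, y, yaw error
sim_success = ((np.abs(sim_errors[:, 0]) < 0.02)
               & (np.abs(sim_errors[:, 1]) < 0.03))              # toy acceptance region

def acceptance(error):
    """1-NN lookup into the simulated acceptance data (stand-in for a KDE)."""
    nearest = np.argmin(np.linalg.norm(sim_errors - error, axis=1))
    return float(sim_success[nearest])

# Estimated error space: a bimodal sample set, e.g. from a pose ambiguity.
mode_a = rng.normal([0.00, 0.00, 0.0], 0.01, size=(300, 3))
mode_b = rng.normal([0.04, 0.00, 3.1], 0.01, size=(300, 3))
estimated_errors = np.vstack([mode_a, mode_b])

p_success = np.mean([acceptance(e) for e in estimated_errors])
print(f"estimated P(success) = {p_success:.2f}")   # roughly only mode_a is acceptable
```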
Multi-view object pose distribution tracking for pre-grasp planning on mobile robots
The ability to track the 6D pose distribution of an object while a mobile manipulator robot is still approaching the object can enable the robot to pre-plan grasps that combine base and arm motion. However, tracking a 6D object pose distribution from a distance can be challenging due to the limited view of the robot camera. In this work, we present a framework that fuses observations from external stationary cameras with those from the moving robot camera and sequentially tracks the pose distribution over time, enabling 6D object pose distribution tracking from a distance. We model the object pose posterior as a multi-modal distribution, which improves robustness to the uncertainties introduced by large camera-object distances, occlusions and object geometry. We evaluate the proposed framework on a simulated multi-view dataset using objects from the YCB dataset. Results show that our framework enables accurate tracking even when the robot camera has poor visibility of the object.
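A minimal, position-only sketch of the fusion idea: a particle-style tracker whose hypotheses are reweighted by per-camera likelihoods. The sensor models, noise values and resampling scheme are assumptions for illustration, not the paper's framework.

```python
# Minimal sketch (position-only, assumed sensor models): a particle-style tracker
# that fuses likelihoods from a moving robot camera and a stationary external
# camera, keeping a multi-modal set of object-pose hypotheses.
import numpy as np

rng = np.random.default_rng(2)
particles = rng.uniform(-0.5, 0.5, size=(1000, 3))   # object position hypotheses (m)

def camera_likelihood(particles, measurement, noise_std):
    d = np.linalg.norm(particles - measurement, axis=1)
    return np.exp(-0.5 * (d / noise_std) ** 2)

def update(particles, measurements):
    """measurements: list of (observed position, noise std), one per camera."""
    weights = np.ones(len(particles))
    for z, std in measurements:                       # fuse all views multiplicatively
        weights *= camera_likelihood(particles, z, std)
    weights = np.clip(weights, 1e-30, None)           # guard against degenerate weights
    weights /= weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)   # resample
    return particles[idx] + rng.normal(0, 0.005, particles.shape)      # diffuse

true_pos = np.array([0.20, -0.10, 0.05])
for step in range(5):
    robot_cam = (true_pos + rng.normal(0, 0.10, 3), 0.10)    # far away: noisy view
    external_cam = (true_pos + rng.normal(0, 0.05, 3), 0.05)
    particles = update(particles, [robot_cam, external_cam])
print("posterior mean:", particles.mean(axis=0).round(3))
```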
BaSeNet: A Learning-based Mobile Manipulator Base Pose Sequence Planning for Pickup Tasks
In many applications, a mobile manipulator robot is required to grasp a set of objects distributed in space. This may not be feasible from a single base pose, and the robot must plan the sequence of base poses for grasping all objects, minimizing the total navigation and grasping time. This is a Combinatorial Optimization problem that can be solved using exact methods, which provide optimal solutions but are computationally expensive, or approximate methods, which offer computationally efficient but sub-optimal solutions. Recent studies have shown that learning-based methods can solve Combinatorial Optimization problems, providing near-optimal and computationally efficient solutions. In this work, we present BaSeNet, a learning-based approach to plan the sequence of base poses for the robot to grasp all the objects in the scene. We propose a Reinforcement Learning based solution that uses Layered Learning to learn the base poses for grasping individual objects and the sequence in which the objects should be grasped, minimizing the total navigation and grasping costs. As the problem has a varying number of states and actions, we represent states and actions as a graph and use Graph Neural Networks for learning. We show that the proposed method produces solutions comparable to exact and approximate methods with significantly less computation time.
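A minimal sketch of the graph-based representation and sequence decoding, under assumptions: a single mean-aggregation message-passing round and a hand-written scorer stand in for the learned GNN policy, and the scene is reduced to 2D object positions.

```python
# Minimal sketch (assumed features, not the paper's network): one round of
# message passing over an object graph followed by greedy decoding of a grasp
# sequence. A hand-written scorer replaces the learned GNN policy.
import numpy as np

rng = np.random.default_rng(3)
objects = rng.uniform(0.0, 4.0, size=(5, 2))           # object x, y positions (m)

def message_pass(node_feats, adjacency):
    """One mean-aggregation round so each node sees its neighbourhood."""
    deg = adjacency.sum(axis=1, keepdims=True)
    neighbour_mean = adjacency @ node_feats / np.maximum(deg, 1)
    return np.concatenate([node_feats, neighbour_mean], axis=1)

def score(robot_xy, obj_feat):
    """Stand-in scorer: prefer nearby objects whose neighbours are also close,
    so one base pose may serve several grasps."""
    nav_cost = np.linalg.norm(obj_feat[:2] - robot_xy)
    isolation = np.linalg.norm(obj_feat[:2] - obj_feat[2:4])
    return -(nav_cost + 0.2 * isolation)

adjacency = (np.linalg.norm(objects[:, None] - objects[None, :], axis=2) < 1.5)
adjacency = adjacency.astype(float) - np.eye(len(objects))   # drop self-loops
feats = message_pass(objects, adjacency)

robot, remaining, sequence = np.array([0.0, 0.0]), list(range(len(objects))), []
while remaining:                                        # greedy sequence decoding
    best = max(remaining, key=lambda i: score(robot, feats[i]))
    sequence.append(best)
    robot = objects[best]                               # robot drives to that base pose
    remaining.remove(best)
print("grasp order:", sequence)
```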
Improving Throughput of Mobile Robots in Narrow Aisles
Emergency brakes applied by mobile robots to avoid collisions with humans often block the traffic in narrow hallways. The ability to navigate smoothly in such environments can enable the deployment of robots in spaces shared with humans, such as hospitals and cafeterias. The standard navigation stacks used by these robots rely only on spatial information about the environment when planning motion. In this work, we propose a predictive approach for handling dynamic objects such as humans. Using this temporal information enables a mobile robot to predict collisions early enough to avoid emergency braking. We validated our approach in a real-world setup in a busy university hallway. Our experiments show that the proposed approach results in fewer stops compared to a standard navigation stack that uses only spatial information.
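The following is a minimal sketch of the kind of predictive check described above, under assumptions: a constant-velocity model for the person, a fixed clearance radius, and a pre-sampled robot path; it is an illustration, not the deployed navigation stack.

```python
# Minimal sketch (assumed constant-velocity model, not the deployed stack):
# predict a tracked person's positions a few seconds ahead and slow down early
# if any predicted position intersects the robot's planned path.
import numpy as np

def predicts_collision(robot_path, person_pos, person_vel,
                       horizon=3.0, dt=0.2, clearance=0.6):
    """robot_path: (T,2) planned robot positions sampled every dt seconds."""
    steps = min(len(robot_path), int(horizon / dt))
    for k in range(steps):
        person_future = person_pos + person_vel * (k * dt)
        if np.linalg.norm(robot_path[k] - person_future) < clearance:
            return True
    return False

# Robot drives straight down a narrow aisle; a person walks toward it.
dt = 0.2
robot_path = np.stack([np.linspace(0, 6, 31), np.zeros(31)], axis=1)   # ~1 m/s
person_pos, person_vel = np.array([6.0, 0.3]), np.array([-1.2, 0.0])

if predicts_collision(robot_path, person_pos, person_vel, dt=dt):
    print("predicted conflict: reduce speed / yield instead of emergency braking")
else:
    print("path clear at planned speed")
```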
Planning Base Poses and Object Grasp Choices for Table-Clearing Tasks Using Dynamic Programming
Given a setup with external cameras and a mobile manipulator with an eye-in-hand camera, we address the problem of computing a sequence of base poses and grasp choices that allows for clearing objects from a table while minimizing the overall execution time. The first step in our approach is to construct a world model, generated by an anchoring process using information from the external cameras. Next, we develop a planning module which, based on the contents of the world model, creates a plausible plan of base positions and suitable grasp choices while keeping the execution time minimal. Comparing our approach to two baseline methods shows that the average execution cost of plans computed by our approach is 40% lower than the naive baseline and 33% lower than the heuristic-based baseline. Furthermore, we integrate our approach in a demonstrator, undertaking the full complexity of the problem.
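A minimal sketch of a dynamic program over base poses and grasp choices; the object set, candidate base poses and costs are toy values for illustration, not the paper's planner or its world model.

```python
# Minimal sketch (toy costs, not the paper's planner): dynamic programming over
# (remaining-object subset, current base pose) that picks a base pose and grasp
# choice per object while minimising navigation plus grasping time.
from functools import lru_cache
import math

# Each object offers candidate (base_pose, grasp_cost) options (assumed values).
options = {
    "cup":   [((1.0, 0.0), 4.0), ((0.0, 1.0), 6.0)],
    "bowl":  [((1.0, 0.0), 5.0), ((2.0, 1.0), 3.0)],
    "plate": [((2.0, 1.0), 4.0)],
}
objects = tuple(options)
START = (0.0, 0.0)

def nav_cost(a, b, speed=0.5):
    return math.dist(a, b) / speed          # seconds to drive between base poses

@lru_cache(maxsize=None)
def best(remaining, pose):
    """Minimum remaining time to clear the objects in `remaining` from `pose`."""
    if not remaining:
        return 0.0
    costs = []
    for obj in remaining:
        rest = tuple(o for o in remaining if o != obj)
        for base, grasp_cost in options[obj]:
            costs.append(nav_cost(pose, base) + grasp_cost + best(rest, base))
    return min(costs)

print(f"minimal execution time: {best(objects, START):.1f} s")
```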
A Design Space Exploration of Creative Concepts for Care Robots: Questioning the Differentiation of Social and Physical Assistance
In an interdisciplinary project, creative concepts for care robotics were developed. To explore the design space that these open up, we discussed them along the common differentiation of physical (effective) and social-emotional assistance. Trying to rate concepts on these dimensions frequently raised questions regarding the relation between the social-emotional and the physical, and highlighted gaps and a lack of conceptual clarity. Here, we present our design concepts, report on our discussion, and summarize our insights; in particular, we suggest that the social and the physical dimensions of care technologies should always be conceived and designed as interrelated.
The SMOOTH-Robot: A Modular, Interactive Service Robot
The SMOOTH-robot is a mobile robot that, due to its modularity, combines a relatively low price with the possibility to be used for a large variety of tasks in a wide range of domains. In this article, we demonstrate the potential of the SMOOTH-robot through three use cases, two of which were performed in elderly care homes. The robot is designed so that it can either make itself ready or be quickly changed by staff to perform different tasks. We carefully considered important design parameters such as appearance, intended and unintended interactions with users, and technical complexity in order to achieve high acceptability and a sufficient degree of utilization of the robot. The three demonstrated use cases indicate that such a robot could contribute to an improved work environment, with the potential to free care staff resources that could be allocated to actual care-giving tasks. Moreover, the SMOOTH-robot can be used in many other domains, as we also exemplify in this article.
