    Motion planning in observations space with learned diffeomorphism models

    We consider the problem of planning motions in observations space, based on learned models of the dynamics that associate with each action a diffeomorphism of the observations domain. For an arbitrary set of diffeomorphisms, this problem must be formulated as a generic search problem, so we adapt established algorithms from the graph-search family. In this scenario, node expansion is very costly, as each node in the graph is associated with an uncertain diffeomorphism and the corresponding predicted observations. We describe several improvements that ameliorate performance: better image similarities to use as heuristics; a method to reduce the number of expanded nodes by preliminarily identifying redundant plans; and a method to pre-compute composite actions that make the search efficient in all directions.
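    The search strategy described above can be sketched as a generic best-first search in which expanding a node means applying a learned map to the current observation. The sketch below is illustrative only (all names and the toy 1-D "observation" are ours, not the paper's); it shows where an image-similarity heuristic, redundant-plan pruning, and pre-computed composite actions would plug in.

```python
import heapq

def a_star(start, goal, actions, heuristic, max_expansions=10_000):
    """Best-first search in observation space: each action is a learned map
    applied to the current observation (hashable toy states stand in for images)."""
    frontier = [(heuristic(start, goal), 0, start, [])]
    best_cost = {start: 0}
    expansions = 0
    while frontier and expansions < max_expansions:
        _, cost, state, plan = heapq.heappop(frontier)
        if state == goal:
            return plan
        expansions += 1                      # node expansion is the costly step
        for name, f in actions.items():
            nxt = f(state)                   # predict the next observation
            ncost = cost + 1
            if best_cost.get(nxt, float("inf")) <= ncost:
                continue                     # prune redundant plans to this state
            best_cost[nxt] = ncost
            heapq.heappush(
                frontier,
                (ncost + heuristic(nxt, goal), ncost, nxt, plan + [name]),
            )
    return None

# Toy 1-D example: "right2" plays the role of a pre-computed composite action.
actions = {"left": lambda s: s - 1, "right": lambda s: s + 1,
           "right2": lambda s: s + 2}
plan = a_star(0, 5, actions, heuristic=lambda s, g: abs(g - s))
```

    In the paper's setting, the heuristic would compare predicted and goal images rather than integers, which is why better image similarities directly reduce the number of costly expansions.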

    Learning to represent surroundings, anticipate motion and take informed actions in unstructured environments

    Contemporary robots have become exceptionally skilled at achieving specific tasks in structured environments. However, they often fail when faced with the limitless permutations of real-world unstructured environments. This motivates robotics methods that learn from experience, rather than follow a pre-defined set of rules. In this thesis, we present a range of learning-based methods aimed at enabling robots, operating in dynamic and unstructured environments, to better understand their surroundings, anticipate the actions of others, and take informed actions accordingly.

    Learning Deep Robotic Skills on Riemannian Manifolds

    In this paper, we propose RiemannianFlow, a deep generative model that allows robots to learn complex and stable skills evolving on Riemannian manifolds. Examples of Riemannian data in robotics include stiffness (symmetric positive definite (SPD) matrix) and orientation (unit quaternion (UQ)) trajectories. For Riemannian data, unlike Euclidean data, the dimensions are interconnected by geometric constraints that must be properly accounted for during learning. Using distance-preserving mappings, our approach transfers the data between their original manifold and the tangent space, removing and then restoring the geometric constraints. This makes it possible to extend existing frameworks to learn stable skills from Riemannian data while guaranteeing the stability of the learning results. The ability of RiemannianFlow to learn various data patterns, and the stability of the learned models, are shown experimentally on a dataset of manifold motions. Further, we analyze the robustness of the model from different perspectives under different hyperparameter combinations. It turns out that the model's stability is not affected by the hyperparameters, while a proper combination of them leads to a significant improvement (up to 27.6%) in model accuracy. Last, we show the effectiveness of RiemannianFlow in a real peg-in-hole (PiH) task, where we need to generate stable and consistent position and orientation trajectories for the robot starting from different initial poses.
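    For orientation data, the transfer between the manifold and its tangent space mentioned above is commonly realized with the logarithmic and exponential maps of the unit-quaternion manifold. A minimal sketch (ours, not the paper's code), with both maps taken at the identity quaternion for brevity:

```python
import numpy as np

def quat_log(q):
    """Log map at the identity: unit quaternion (w, x, y, z) -> tangent vector
    in R^3 (rotation axis scaled by half the rotation angle)."""
    w, v = q[0], q[1:]
    n = np.linalg.norm(v)
    if n < 1e-12:
        return np.zeros(3)
    return np.arccos(np.clip(w, -1.0, 1.0)) * v / n

def quat_exp(u):
    """Exp map at the identity: tangent vector in R^3 -> unit quaternion.
    Inverse of quat_log on the upper hemisphere (w >= 0)."""
    n = np.linalg.norm(u)
    if n < 1e-12:
        return np.array([1.0, 0.0, 0.0, 0.0])
    return np.concatenate(([np.cos(n)], np.sin(n) * u / n))
```

    Learning then happens on the unconstrained tangent vectors, and quat_exp restores the unit-norm constraint afterwards.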

    Transformations Based on Continuous Piecewise-Affine Velocity Fields


    Learning Lyapunov-Stable Polynomial Dynamical Systems Through Imitation

    Imitation learning is a paradigm to address complex motion planning problems by learning a policy to imitate an expert's behavior. However, relying solely on the expert's data might lead to unsafe actions when the robot deviates from the demonstrated trajectories. Stability guarantees have previously been provided utilizing nonlinear dynamical systems, acting as high-level motion planners, in conjunction with the Lyapunov stability theorem. Yet, these methods are prone to inaccurate policies, high computational cost, sample inefficiency, or quasi-stability when replicating complex and highly nonlinear trajectories. To mitigate this problem, we present an approach for learning a globally stable nonlinear dynamical system as a motion planning policy. We model the nonlinear dynamical system as a parametric polynomial and learn the polynomial's coefficients jointly with a Lyapunov candidate. To showcase its success, we compare our method against the state of the art in simulation and conduct real-world experiments with the Kinova Gen3 Lite manipulator arm. Our experiments demonstrate the sample efficiency and reproduction accuracy of our method for various expert trajectories, while remaining stable in the face of perturbations. Comment: In the 7th Annual Conference on Robot Learning, 2023.
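    The core idea, pairing a dynamical system with a Lyapunov candidate, can be illustrated by numerically checking the Lyapunov decrease condition dV/dt = ⟨∇V(x), f(x)⟩ < 0. The polynomial system and candidate below are hand-picked toy choices, not the paper's learned model:

```python
import numpy as np

def f(x):
    """Toy polynomial dynamical system (hand-picked, not learned):
    x' = -x - x^3, globally stable toward the origin."""
    return -x - x**3

def lyapunov_decreasing(f, grad_V, n_samples=1000, scale=3.0, seed=0):
    """Sanity-check the Lyapunov condition  dV/dt = <grad V(x), f(x)> < 0
    at random non-equilibrium states (a numerical check, not a proof)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-scale, scale, size=(n_samples, 2))
    X = X[np.linalg.norm(X, axis=1) > 1e-3]     # exclude the equilibrium
    return all(grad_V(x) @ f(x) < 0 for x in X)

grad_V = lambda x: 2 * x        # gradient of the candidate V(x) = ||x||^2
```

    In the paper's setting both f and V are parametric and trained jointly so that this condition holds by construction rather than by sampling.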

    Medical image analysis via Fréchet means of diffeomorphisms

    The construction of average models of anatomy, as well as regression analysis of anatomical structures, are key issues in medical research, e.g., in the study of brain development and disease progression. When the underlying anatomical process can be modeled by parameters in a Euclidean space, classical statistical techniques are applicable. However, recent work suggests that attempts to describe anatomical differences using flat Euclidean spaces undermine our ability to represent natural biological variability. In response, this dissertation contributes to the development of a particular nonlinear shape analysis methodology. This dissertation uses a nonlinear deformable model to measure anatomical change and define geometry-based averaging and regression for anatomical structures represented within medical images. Geometric differences are modeled by coordinate transformations, i.e., deformations, of underlying image coordinates. In order to represent local geometric changes and accommodate large deformations, these transformations are taken to be the group of diffeomorphisms with an associated metric. A mean anatomical image is defined using this deformation-based metric via the Fréchet mean, the minimizer of the sum of squared distances. Similarly, a new method called manifold kernel regression is presented for estimating systematic changes, as a function of a predictor variable such as age, from data in nonlinear spaces. It is defined by recasting kernel regression in terms of a kernel-weighted Fréchet mean. This method is applied to determine systematic geometric changes in the brain from a random design dataset of medical images. Finally, diffeomorphic image mapping is extended to accommodate extraneous structures, objects that are present in one image and absent in another and thus change image topology, by deflating them prior to the estimation of geometric change. The method is applied to quantify the motion of the prostate in the presence of transient bowel gas.
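    The Fréchet mean generalizes the ordinary average to curved spaces. As a toy illustration (ours; the dissertation works with diffeomorphism groups, not circles), here is the Fréchet mean of points on the unit circle, computed by the standard fixed-point iteration of averaging in the tangent space:

```python
import numpy as np

def frechet_mean_circle(angles, iters=100, step=0.5):
    """Fréchet mean of points on the unit circle: the angle minimizing the sum
    of squared geodesic (arc-length) distances, via fixed-point iteration."""
    mu = angles[0]
    for _ in range(iters):
        # "log map" on the circle: signed angular difference wrapped to (-pi, pi]
        d = np.arctan2(np.sin(angles - mu), np.cos(angles - mu))
        mu = mu + step * d.mean()   # step toward the tangent-space average
    return mu % (2 * np.pi)
```

    The naive arithmetic mean of the raw angles fails near the wrap-around at 2π; the geodesic formulation, like the deformation-based metric in the dissertation, respects the geometry of the space.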

    Motion Mappings for Continuous Bilateral Teleoperation

    Mapping operator motions to a robot is a key problem in teleoperation. Due to differences between workspaces, such as object locations, it is particularly challenging to derive smooth motion mappings that fulfill different goals (e.g., picking objects with different poses on the two sides, or passing through key points). Indeed, most state-of-the-art methods rely on mode switches, leading to a discontinuous, low-transparency experience. In this paper, we propose a unified formulation for position, orientation and velocity mappings based on the poses of objects of interest in the operator and robot workspaces. We apply it in the context of bilateral teleoperation. Two possible implementations to achieve the proposed mappings are studied: an iterative approach based on locally-weighted translations and rotations, and a neural network approach. Evaluations are conducted both in simulation and using two torque-controlled Franka Emika Panda robots. Our results show that, despite longer training times, the neural network approach provides faster mapping evaluations and lower interaction forces for the operator, which are crucial for continuous, real-time teleoperation. Comment: Accepted for publication in IEEE Robotics and Automation Letters (RA-L).
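    The locally-weighted idea can be sketched as blending per-object translations with distance-based weights, so the mapping is exact near each object of interest and varies smoothly in between. This is a simplified, translation-only illustration under our own naming, not the authors' implementation (which also handles rotations, velocities, and bilateral force feedback):

```python
import numpy as np

def mapped_point(x, anchors_op, anchors_rob, beta=2.0):
    """Map an operator-side point x into the robot workspace as a
    distance-weighted blend of per-object translations."""
    d = np.linalg.norm(anchors_op - x, axis=1)   # distance to each object
    w = np.exp(-beta * d)
    w = w / w.sum()                              # normalized local weights
    offsets = anchors_rob - anchors_op           # per-object translation
    return x + w @ offsets
```

    Because the weights vary continuously with x, the mapping avoids the discrete mode switches criticized above.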

    Semantic Simultaneous Localization And Mapping

    Traditional approaches to simultaneous localization and mapping (SLAM) rely on low-level geometric features such as points, lines, and planes. They are unable to assign semantic labels to landmarks observed in the environment. Recent advances in object recognition and semantic scene understanding, however, have made this information easier to extract than ever before, and the recent proliferation of robots in human environments demands access to reliable semantic-level mapping and localization algorithms to enable true autonomy. Furthermore, loop closure recognition based on low-level features is often viewpoint dependent and subject to failure in ambiguous or repetitive environments, whereas object recognition methods can infer landmark classes and scales, resulting in a small set of easily recognizable landmarks. In this thesis, we present two solutions that incorporate semantic information into a full localization and mapping pipeline. In the first, we propose a solution method using only single-image bounding box object detections as the semantic measurement. As these bounding box measurements are relatively imprecise when projected back into 3D space and difficult to associate with existing mapped objects, we first present a general method to probabilistically compute data associations within an estimation framework and demonstrate its improved accuracy in the case of high-uncertainty measurements. We then extend this to the specific case of semantic bounding box measurements and demonstrate its accuracy in indoor and outdoor environments. Second, we propose a solution based on the detection of semantic keypoints. These semantic keypoints are not only more reliably positioned in space, but also allow us to estimate the full six degree-of-freedom pose of each mapped object. The usage of these semantic keypoints effectively reduces the problem of semantic mapping to the much better-studied problem of mapping point features, allowing for an efficient solution and robustness in practice. Finally, we present a method of robotic navigation that robustly plans paths through unknown and unexplored semantic environments towards a goal location. Through the use of the semantic keypoint-based semantic SLAM algorithm, we demonstrate the successful execution of navigation missions through on-the-fly generated semantic maps.
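    The probabilistic data-association step can be illustrated with Gaussian likelihood weights over candidate landmarks plus a "new landmark" hypothesis. A minimal 2-D sketch with invented names and a fixed new-landmark likelihood, not the thesis' estimator:

```python
import numpy as np

def association_weights(z, landmarks, covs, p_new=1e-3):
    """Posterior weight of each mapped landmark (plus a 'new landmark'
    hypothesis) for a measurement z, from Gaussian likelihoods."""
    like = []
    for mu, S in zip(landmarks, covs):
        r = z - mu
        m2 = r @ np.linalg.solve(S, r)            # squared Mahalanobis distance
        like.append(np.exp(-0.5 * m2) / np.sqrt(np.linalg.det(2 * np.pi * S)))
    like.append(p_new)                            # fixed new-landmark likelihood
    w = np.array(like)
    return w / w.sum()
```

    Soft weights of this kind let high-uncertainty measurements contribute to several hypotheses instead of forcing a hard, possibly wrong, assignment.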

    Topology-Matching Normalizing Flows for Out-of-Distribution Detection in Robot Learning

    To facilitate reliable deployments of autonomous robots in the real world, Out-of-Distribution (OOD) detection capabilities are often required. A powerful approach for OOD detection is based on density estimation with Normalizing Flows (NFs). However, we find that prior work with NFs attempts to topologically match the complex target distribution with naive base distributions, leading to adverse implications. In this work, we circumvent this topological mismatch using an expressive class-conditional base distribution trained with an information-theoretic objective to match the required topology. The proposed method enjoys wide compatibility with existing learned models, without performance degradation and with minimal computational overhead, while enhancing OOD detection capabilities. We demonstrate superior results on density estimation and 2D object detection benchmarks in comparison with extensive baselines. Moreover, we showcase the applicability of the method in a real-robot deployment. Comment: Accepted at CoRL202
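    The density-estimation view of OOD detection can be sketched with the change-of-variables formula for a single diagonal affine flow layer over a standard-normal base, thresholding the resulting log-likelihood. This is a deliberately tiny illustration of the general mechanism; the paper's contribution is the class-conditional base distribution, not this flow:

```python
import numpy as np

def affine_flow_logpdf(x, mu, log_sigma):
    """Log-density under a diagonal affine flow with a standard-normal base:
    z = (x - mu) * exp(-log_sigma); change of variables adds log|det dz/dx|."""
    z = (x - mu) * np.exp(-log_sigma)
    base = -0.5 * (z @ z) - 0.5 * len(z) * np.log(2 * np.pi)
    return base - log_sigma.sum()

def is_ood(x, mu, log_sigma, threshold):
    """Flag x as out-of-distribution when its log-likelihood falls below
    a threshold calibrated on in-distribution data."""
    return affine_flow_logpdf(x, mu, log_sigma) < threshold
```

    A single affine layer cannot change the topology of the base distribution, which is exactly the limitation of naive bases that the paper addresses with a learned, class-conditional base.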