161 research outputs found

    Fast, Optimal, and Safe Motion Planning for Bipedal Robots

Bipedal robots have the potential to traverse a wide range of unstructured environments that are otherwise inaccessible to wheeled vehicles. Though roboticists have successfully constructed controllers for bipedal robots to walk over uneven terrain such as snow, sand, or even stairs, it has remained challenging to synthesize such controllers in an online fashion while guaranteeing their satisfactory performance. This is primarily due to the lack of numerical methods that can accommodate the non-smooth dynamics, high degrees of freedom, and underactuation that characterize bipedal robots. This dissertation proposes and implements a family of numerical methods that begin to address these challenges along three dimensions: optimality, safety, and computational speed. First, this dissertation develops a convex relaxation-based approach to solve optimal control for hybrid systems without a priori knowledge of the optimal sequence of transitions. This is accomplished by formulating the problem in the space of relaxed controls, which gives rise to a linear program whose solution is proven to compute the globally optimal controller. This conceptual program is solved using a sequence of semidefinite programs whose solutions are proven to converge from below to the true optimal solution of the original optimal control problem. Moreover, a method to synthesize the optimal controller is developed. Using an array of examples, the performance of this method is validated on problems with known solutions and compared to a commercial solver. Second, this dissertation constructs a method to generate safety-preserving controllers for a planar bipedal robot walking on flat ground by performing reachability analysis on simplified models under the assumption that the difference between the two models can be bounded. Subsequently, this dissertation describes how this reachable set can be incorporated into a Model Predictive Control framework to select controllers that result in safe walking on the biped in an online fashion. This method is validated on a 5-link planar model. Third, this dissertation proposes a novel parallel algorithm capable of finding guaranteed optimal solutions to polynomial optimization problems up to pre-specified tolerances. Formal proofs of bounds on the time and memory usage of this method are also given. The algorithm is implemented in parallel on GPUs and compared against state-of-the-art solvers on a group of benchmark examples. An application of the method to a real-time trajectory-planning task for a mobile robot is also demonstrated. Fourth, this dissertation constructs an online Model Predictive Control framework that guarantees safety of a 3D bipedal robot walking in a forest of randomly placed obstacles. Using numerical integration and interval arithmetic techniques, approximations to trajectories of the robot are constructed along with guaranteed bounds on the approximation error. Safety constraints are derived from these error bounds and incorporated into a Model Predictive Control framework whose feasible solutions keep the robot from falling over and from running into obstacles. To ensure that the bipedal robot is able to avoid falling for all time, a finite-time terminal constraint is added to the Model Predictive Control algorithm. This method is implemented and its performance is compared against a naive Model Predictive Control method on a biped model with 20 degrees of freedom.
In summary, this dissertation presents four methods for control synthesis of bipedal robots with improvements in optimality, safety guarantees, or computational speed. Furthermore, the performance of all proposed methods is compared with existing methods in the field.
PhD, Mechanical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/162880/1/pczhao_1.pd
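The third contribution, a parallel algorithm that certifies polynomial optima to a pre-specified tolerance, can be illustrated with a much simpler serial sketch: interval arithmetic gives a guaranteed lower bound on each box, and branch-and-bound stops once the incumbent solution is within the tolerance of the best remaining lower bound. The polynomial, box, and tolerance below are illustrative assumptions; the dissertation's GPU implementation, bounding rules, and formal complexity proofs are not reproduced here.

import heapq

def iadd(a, b):  # interval addition
    return (a[0] + b[0], a[1] + b[1])

def imul(a, b):  # interval multiplication
    p = (a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1])
    return (min(p), max(p))

def ipow(a, n):  # interval power by repeated multiplication (valid, possibly loose)
    r = (1.0, 1.0)
    for _ in range(n):
        r = imul(r, a)
    return r

def f_point(x, y):  # example objective: a two-variable polynomial (assumed, not from the thesis)
    return x**4 - 3.0*x*y + y**2

def f_interval(box):  # interval enclosure of f over box = ((xl, xu), (yl, yu))
    x, y = box
    return iadd(iadd(ipow(x, 4), imul((-3.0, -3.0), imul(x, y))), ipow(y, 2))

def certified_min(box0, tol=1e-2, max_iter=200000):
    best = f_point(*(0.5*(lo + hi) for lo, hi in box0))   # incumbent upper bound (box midpoint)
    heap = [(f_interval(box0)[0], box0)]                   # priority queue ordered by lower bound
    for _ in range(max_iter):
        if not heap:
            break
        lb, box = heapq.heappop(heap)
        if best - lb <= tol:               # certificate: global optimum bracketed within tol
            return best, lb
        d = max(range(2), key=lambda i: box[i][1] - box[i][0])  # split the widest dimension
        lo, hi = box[d]
        mid = 0.5*(lo + hi)
        for half in ((lo, mid), (mid, hi)):
            child = tuple(half if i == d else box[i] for i in range(2))
            clb = f_interval(child)[0]
            best = min(best, f_point(*(0.5*(l + h) for l, h in child)))
            if clb <= best:                # prune boxes that cannot contain the minimum
                heapq.heappush(heap, (clb, child))
    return best, heap[0][0] if heap else best

print(certified_min(((-2.0, 2.0), (-2.0, 2.0))))  # upper and lower bounds on the global minimum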

    Scaling Robot Motion Planning to Multi-core Processors and the Cloud

Imagine a world in which robots safely interoperate with humans, gracefully and efficiently accomplishing everyday tasks. The robot's motions for these tasks, constrained by the design of the robot and the task at hand, must avoid collisions with obstacles. Unfortunately, planning a constrained obstacle-free motion for a robot is computationally complex, often resulting in slow computation of inefficient motions. The methods in this dissertation speed up this motion-plan computation with new algorithms and data structures that leverage readily available parallel processing, whether that processing power is on the robot or in the cloud, enabling robots to operate more safely, more gracefully, and more efficiently. The contributions of this dissertation that enable faster motion planning are novel parallel lock-free algorithms, fast and concurrent nearest neighbor searching data structures, cache-aware operation, and split robot-cloud computation. Parallel lock-free algorithms avoid contention over shared data structures, resulting in empirical speedup proportional to the number of CPU cores working on the problem. Fast nearest neighbor data structures speed up searching in SO(3) and SE(3) metric spaces, which are needed for rigid-body motion planning. Concurrent nearest neighbor data structures improve searching performance on metric spaces common to robot motion planning problems while providing asymptotically wait-free concurrent operation. Cache-aware operation avoids long memory access times, allowing the algorithm to exhibit superlinear speedup. Split robot-cloud computation enables robots with low-power CPUs to react to changing environments by having the robot compute reactive paths in real time from a set of motion plan options generated by a computationally intensive cloud-based algorithm. We demonstrate the scalability and effectiveness of our contributions in solving motion planning problems both in simulation and on physical robots of varying design and complexity. Problems include finding a solution to a complex motion planning problem, pre-computing motion plans that converge toward the optimum, and reactive interaction with dynamic environments. Robots include 2D holonomic robots, 3D rigid-body robots, a self-driving 1/10-scale car, articulated robot arms with and without mobile bases, and a small humanoid robot.
Doctor of Philosophy
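To give a sense of the nearest-neighbor queries that the concurrent data structures above accelerate, the following sketch shows an SO(3) metric (quaternion angular distance under the double cover) together with a brute-force linear scan. It is intuition only, under assumed conventions; the dissertation's lock-free, cache-aware search structures are not reproduced here.

import math
import random

def random_quaternion(rng):
    """Uniform random unit quaternion (x, y, z, w), Shoemake's subgroup method."""
    u1, u2, u3 = rng.random(), rng.random(), rng.random()
    a, b = math.sqrt(1.0 - u1), math.sqrt(u1)
    return (a * math.sin(2 * math.pi * u2), a * math.cos(2 * math.pi * u2),
            b * math.sin(2 * math.pi * u3), b * math.cos(2 * math.pi * u3))

def so3_distance(p, q):
    """Angular distance between two unit quaternions, accounting for the double cover."""
    dot = abs(sum(a * b for a, b in zip(p, q)))
    return 2.0 * math.acos(min(1.0, dot))

def nearest(query, points):
    """Linear-scan nearest neighbor; the thesis replaces this with concurrent tree structures."""
    return min(points, key=lambda p: so3_distance(query, p))

rng = random.Random(0)
database = [random_quaternion(rng) for _ in range(1000)]
query = random_quaternion(rng)
print(so3_distance(query, nearest(query, database)))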

    Robust Scene Estimation for Goal-directed Robotic Manipulation in Unstructured Environments

To make autonomous robots "taskable" so that they function properly and interact fluently with human partners, they must be able to perceive and understand the semantic aspects of their environments. More specifically, they must know what objects exist and where they are in the unstructured human world. Progress in robot perception, especially in deep learning, has greatly improved the detection and localization of objects. However, it remains challenging for robots to perform highly reliable scene estimation in unstructured environments, where reliability is determined by robustness, adaptability, and scale. In this dissertation, we address the scene estimation problem under uncertainty, especially in unstructured environments. We enable robots to build a reliable object-oriented representation that describes the objects present in the environment as well as inter-object spatial relations. Specifically, we focus on addressing the following challenges for reliable scene estimation: 1) robust perception under uncertainty resulting from noisy sensors, objects in clutter, and perceptual aliasing; 2) adaptable perception in adverse conditions by combining deep learning and probabilistic generative methods; 3) scalable perception as the number of objects grows and the structure of objects becomes more complex (e.g., objects in dense clutter). Towards realizing robust perception, our objective is to ground raw sensor observations into scene states while dealing with uncertainty from sensor measurements and actuator control. Scene states are represented as scene graphs, where scene graphs denote parameterized axiomatic statements that assert relationships between objects and their poses. To deal with the uncertainty, we present a purely generative approach, Axiomatic Scene Estimation (AxScEs). AxScEs estimates a probabilistic distribution across plausible scene graph hypotheses describing the configuration of objects. By maintaining a diverse set of possible states, the proposed approach demonstrates robustness to local minima in the scene graph state space and effectiveness for manipulation-quality perception based on edit distance on scene graphs. To scale up to more unstructured scenarios and adapt to adversarial scenarios, we present Sequential Scene Understanding and Manipulation (SUM), which estimates the scene as a collection of objects in cluttered environments. SUM is a two-stage method that combines the accuracy and efficiency of convolutional neural networks (CNNs) with probabilistic inference methods. Despite their strengths, CNNs are opaque about how decisions are made and fragile when generalizing beyond overfit training samples in adverse conditions (e.g., changes in illumination). The probabilistic generative method complements these weaknesses and provides an avenue for adaptable perception. To scale up to densely cluttered environments where objects are physically touching and severely occluded, we present GeoFusion, which fuses noisy observations from multiple frames by exploiting geometric consistency at the object level. Geometric consistency characterizes geometric compatibility between objects and geometric similarity between observations and objects. It reasons about geometry at the object level, offering a fast and reliable way to achieve robustness to semantic perceptual aliasing. The proposed approach demonstrates greater robustness and accuracy than the state-of-the-art pose estimation approach.
PhD, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/163060/1/zsui_1.pd
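As a rough illustration of the generative idea behind maintaining a distribution over scene hypotheses, the toy sketch below samples candidate object poses, weights them by the likelihood of a simulated noisy observation, and resamples. AxScEs operates over full scene graphs with inter-object relations; this one-dimensional example and its Gaussian sensor model are assumptions made purely for illustration.

import math
import random

rng = random.Random(1)
true_pose = 0.42                                   # hidden object position (metres)
observation = true_pose + rng.gauss(0.0, 0.05)     # simulated noisy measurement
sigma = 0.05                                       # assumed sensor noise

# 1) Hypothesis set: candidate poses drawn from a broad prior.
hypotheses = [rng.uniform(0.0, 1.0) for _ in range(500)]

# 2) Weight each hypothesis by the observation likelihood under a Gaussian sensor model.
weights = [math.exp(-0.5 * ((observation - h) / sigma) ** 2) for h in hypotheses]
total = sum(weights)
weights = [w / total for w in weights]

# 3) Resample to concentrate the hypothesis set on plausible scene states.
resampled = rng.choices(hypotheses, weights=weights, k=500)
estimate = sum(resampled) / len(resampled)
print(f"observation={observation:.3f}  estimate={estimate:.3f}  true={true_pose:.3f}")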

    Deep Reinforcement Learning for the Velocity Control of a Magnetic, Tethered Differential-Drive Robot

The ROBOPLANET Altiscan crawler is a magnetic-wheeled, differential-drive robot being explored as an option to aid, if not completely replace, humans in the inspection and maintenance of marine vessels. Velocity control of the crawler is a crucial part of establishing trust and reliability amongst its operators. However, because of the crawler's elongated magnetic wheels and umbilical tether, it operates in a complex environment rich in nonlinear dynamics, which makes control challenging. Model-based approaches to robot control, which aim to mathematically formalize the physics of the system, require in-depth domain knowledge. Reinforcement learning (RL) is a trial-and-error-based approach that can solve control problems in nonlinear systems. To accommodate high-dimensional, continuous state spaces, deep neural networks (DNNs) can be used as nonlinear function approximators to extend RL, creating a method known as deep reinforcement learning (DRL). DRL coupled with a simulated environment provides a way for a model to learn physics-naive control. The research conducted in this thesis explored the efficacy of a DRL algorithm, proximal policy optimization (PPO), for learning velocity control of the Altiscan crawler by modeling its operating environment in a novel, GPU-accelerated simulation package called Isaac Gym. The approaches were evaluated on the error between the crawler's measured base velocities, resulting from the actions provided by the DRL model, and the target velocities in six different environments. Two variants of PPO, standard and recurrent, were compared against the inverse velocity kinematics model of a differential-drive robot. The results show that velocity control in simulation is possible using PPO, but evaluation on the real crawler is needed to reach a meaningful conclusion.
M.S.
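The baseline mentioned above, the inverse velocity kinematics of an ideal differential-drive robot, is simple to state; the sketch below maps a commanded body velocity to wheel speeds and back. The wheel radius and track width are placeholder values, not the Altiscan crawler's actual geometry, and the no-slip assumption is exactly what the crawler's magnetic wheels and tether complicate in practice.

def inverse_velocity_kinematics(v, omega, wheel_radius=0.05, track_width=0.30):
    """Map a commanded body velocity (v [m/s], omega [rad/s]) to left/right
    wheel angular velocities [rad/s] for an ideal (no-slip) differential drive."""
    w_left = (v - 0.5 * track_width * omega) / wheel_radius
    w_right = (v + 0.5 * track_width * omega) / wheel_radius
    return w_left, w_right

def forward_velocity_kinematics(w_left, w_right, wheel_radius=0.05, track_width=0.30):
    """Inverse mapping, useful for computing the velocity-tracking error that a
    learned policy could be evaluated on."""
    v = 0.5 * wheel_radius * (w_left + w_right)
    omega = wheel_radius * (w_right - w_left) / track_width
    return v, omega

# Example: command 0.2 m/s forward with a gentle left turn and check the round trip.
wl, wr = inverse_velocity_kinematics(0.2, 0.3)
print(wl, wr, forward_velocity_kinematics(wl, wr))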

    Multi-camera simultaneous localization and mapping

In this thesis, we study two aspects of simultaneous localization and mapping (SLAM) for multi-camera systems: minimal solution methods for the scaled motion of non-overlapping and partially overlapping two-camera systems, and enabling online, real-time mapping of large areas using the parallelism inherent in the visual simultaneous localization and mapping (VSLAM) problem. We present the only existing minimal solution method for six-degree-of-freedom structure and motion estimation using a non-overlapping, rigid two-camera system with known intrinsic and extrinsic calibration. One example application of our method is the three-dimensional reconstruction of urban scenes from video. Because our method does not require the cameras' fields of view to overlap, we are able to maximize coverage of the scene and avoid processing redundant, overlapping imagery. Additionally, we developed a minimal solution method for partially overlapping stereo camera systems that overcomes degeneracies inherent to non-overlapping two-camera systems while still providing a wide total field of view. The method takes two stereo images as its input. It uses one feature visible in all four views and three features visible across two temporal view pairs to constrain the camera system's motion. We show in synthetic experiments that our method produces rotation and translation estimates that are more accurate than those of the perspective three-point method as the overlap in the stereo camera's fields of view is reduced. A final part of this thesis is the development of an online, real-time visual SLAM system that achieves real-time speed by exploiting the parallelism inherent in the VSLAM problem. We show that feature tracking, relative pose estimation, and global mapping operations such as loop detection and loop correction can be effectively parallelized. Additionally, we demonstrate that a combination of short-baseline, differentially tracked corner features, which can be tracked at high frame rates, and wide-baseline-matchable but slower-to-compute features, such as the scale-invariant feature transform, can facilitate high-speed visual odometry while also supporting location recognition for loop detection and global geometric error correction.
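A core property exploited by the rigid two-camera solvers is that, with known extrinsic calibration, the motion of one camera determines the motion of the other by conjugation with the extrinsic transform. The sketch below shows that composition using homogeneous transforms; the frame conventions and example numbers are illustrative assumptions, not the thesis's implementation.

import numpy as np

def make_T(R, t):
    """Assemble a 4x4 homogeneous transform from a rotation matrix and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def rot_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Fixed extrinsics: pose of camera 2 expressed in camera 1's frame (rigid rig, assumed values).
T_1_2 = make_T(rot_z(np.pi / 2), np.array([0.5, 0.0, 0.0]))

# Motion of camera 1 between times k and k+1 (e.g. from a minimal solver plus recovered scale).
T_c1k_c1k1 = make_T(rot_z(0.1), np.array([0.0, 0.02, 1.0]))

# Induced motion of camera 2 over the same interval: conjugate by the extrinsics.
T_c2k_c2k1 = np.linalg.inv(T_1_2) @ T_c1k_c1k1 @ T_1_2
print(np.round(T_c2k_c2k1, 3))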

    Towards Safe Autonomy in Assistive Robots

Robots have the potential to support older adults and persons with disabilities on a direct and personal level. For example, a wearable robot may help a person stand up from a chair, or a robotic manipulator may aid a person with meal preparation and housework. Assistive robots can autonomously make decisions about how best to support a person. However, this autonomy is potentially dangerous; robots can cause collisions or falls which may lead to serious injury. Therefore, guaranteeing that assistive robots operate safely is imperative. This dissertation advances safe autonomy in assistive robots by developing a suite of tools for the tasks of perception, monitoring, manipulation, and fall prevention. Each tool provides a theoretical guarantee of its correct performance, adding a necessary layer of trust and protection when deploying assistive robots. The topic of interaction, or how a human responds to the decisions made by assistive robots, is left for future work. Perception: Assistive robots must accurately perceive the 3D position of a person's body to avoid collisions and build predictive models of how a person moves. This dissertation formulates the problem of 3D pose estimation from multi-view 2D pose estimates as a sum-of-squares optimization problem. Sparsity is leveraged to efficiently solve the problem, which includes explicit constraints on the link lengths connecting any two joints. The method certifies the global optimality of its solutions over 99 percent of the time, and matches or exceeds state-of-the-art accuracy while requiring less computation time and no 3D training data. Monitoring: Assistive robots may mitigate fall risk by monitoring changes to a person's stability over time and predicting instabilities in real time. This dissertation presents Stability Basins, which characterize stability during human motion, with a focus on sit-to-stand. An 11-person experiment was conducted in which subjects were pulled by motor-driven cables as they stood from a chair. Stability Basins correctly predicted instability (stepping or sitting) versus task success with over 90 percent accuracy across three distinct sit-to-stand strategies. Manipulation: Robotic manipulators can support many common activities like feeding, dressing, and cleaning. This dissertation details ARMTD (Autonomous Reachability-based Manipulator Trajectory Design) for receding-horizon planning of collision-free manipulator trajectories. ARMTD composes reachable sets of the manipulator through the workspace from low-dimensional trajectories of each joint. ARMTD creates strict collision-avoidance constraints from these sets, which are enforced within an online trajectory optimization. The method is demonstrated for real-time planning in simulation and on hardware on a Fetch Mobile Manipulator robot, where it never causes a collision. Fall Prevention: Wearable robots may prevent falls by reacting quickly when a user trips or slips. This dissertation presents TRIP-RTD (Trip Recovery in Prostheses via Reachability-based Trajectory Design), which extends the ARMTD framework to robotic prosthetic legs. TRIP-RTD uses predictions of a person's response to a trip to plan recovery trajectories of a prosthetic leg. TRIP-RTD creates constraints for an online trajectory optimization which ensure that the prosthetic foot is placed correctly across a range of plausible human responses. The approach is demonstrated in simulation using data from non-amputee subjects being tripped.
PhD, Mechanical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/169822/1/pdholmes_1.pd
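To make the multi-view geometry in the perception work concrete, the sketch below recovers a 3D point from 2D detections in two calibrated views using standard linear (DLT) triangulation. This is a simplified stand-in, not the dissertation's certifiable sum-of-squares formulation with link-length constraints; the camera matrices and the example joint position are assumed.

import numpy as np

def triangulate(points_2d, projections):
    """Linear (DLT) triangulation: stack two rows per view and solve by SVD."""
    rows = []
    for (u, v), P in zip(points_2d, projections):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.vstack(rows)
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

# Two assumed cameras: one at the origin and one translated 1 m along x.
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# Project a known joint position and check that the two views recover it.
X_true = np.array([0.3, -0.1, 2.0, 1.0])
pts = []
for P in (P1, P2):
    x = P @ X_true
    pts.append((x[0] / x[2], x[1] / x[2]))
print(triangulate(pts, (P1, P2)))  # approximately [0.3, -0.1, 2.0]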

    Receding-horizon motion planning of quadrupedal robot locomotion

Quadrupedal robots are designed to offer efficient and robust mobility on uneven terrain. This thesis investigates combining numerical optimization and machine learning methods to achieve interpretable kinodynamic planning of natural and agile locomotion. The proposed algorithm, called Receding-Horizon Experience-Controlled Adaptive Legged Locomotion (RHECALL), uses nonlinear programming (NLP) with learned initialization to produce long-horizon, high-fidelity, terrain-aware, whole-body trajectories. RHECALL has been implemented and validated on the ANYbotics ANYmal B and C quadrupeds on complex terrain. The proposed optimal control problem formulation uses the single-rigid-body dynamics (SRBD) model and adopts a direct collocation transcription method, which enables the discovery of aperiodic contact sequences. To generate reliable trajectories, we propose fast-to-compute analytical costs that leverage the discretization and terrain-dependent kinematic constraints. To extend the formulation to receding-horizon planning, we propose a segmentation approach with asynchronous centre-of-mass (COM) and end-effector timings, and a heuristic initialization scheme that reuses the previous solution. We integrate real-time 2.5D perception data for online foothold selection. Additionally, we demonstrate that a learned stability criterion can be incorporated into the planning framework. To accelerate the convergence of the NLP solver to locally optimal solutions, we propose data-driven initialization schemes trained using supervised and unsupervised behaviour cloning. We demonstrate the computational advantage of the schemes and the ability to leverage the latent space to reconstruct dynamic segments of plans that are several seconds long. Finally, in order to apply RHECALL to quadrupeds with significant leg inertias, we derive the more accurate lump leg single-rigid-body dynamics (LL-SRBD) and centroidal dynamics (CD) models and their first-order partial derivatives. To facilitate intuitive usage of costs, constraints, and initializations, we parameterize these models by Euclidean-space variables. We show that the models have the ability to shape the rotational inertia of the robot, which offers the potential to further improve agility.
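The direct collocation transcription at the heart of the planner can be illustrated on a toy system: states and controls become decision variables, and the dynamics are imposed as trapezoidal defect constraints. The sketch below does this for a 1D double integrator with scipy's SLSQP solver; the horizon, cost, and solver are illustrative assumptions and share only the transcription idea, not RHECALL's single-rigid-body formulation.

import numpy as np
from scipy.optimize import minimize

N, T = 20, 2.0                 # number of intervals, total horizon (s)
h = T / N                      # time step
n_x, n_u = 2, 1                # state (position, velocity), control (acceleration)

def unpack(z):
    x = z[: (N + 1) * n_x].reshape(N + 1, n_x)
    u = z[(N + 1) * n_x:].reshape(N + 1, n_u)
    return x, u

def dynamics(x, u):
    return np.array([x[1], u[0]])           # xdot = (velocity, acceleration)

def defects(z):
    x, u = unpack(z)
    d = []
    for k in range(N):                       # trapezoidal collocation defects
        f0 = dynamics(x[k], u[k])
        f1 = dynamics(x[k + 1], u[k + 1])
        d.append(x[k + 1] - x[k] - 0.5 * h * (f0 + f1))
    return np.concatenate(d)

def boundary(z):
    x, _ = unpack(z)
    return np.concatenate([x[0] - np.array([0.0, 0.0]),      # start at rest at 0
                           x[-1] - np.array([1.0, 0.0])])    # end at rest at 1

def cost(z):
    _, u = unpack(z)
    return h * float(np.sum(u ** 2))         # control effort

z0 = np.zeros((N + 1) * (n_x + n_u))
res = minimize(cost, z0, method="SLSQP",
               constraints=[{"type": "eq", "fun": defects},
                            {"type": "eq", "fun": boundary}])
x_opt, u_opt = unpack(res.x)
print(res.success, x_opt[-1])                # final state should be close to [1, 0]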

    Aircraft Trajectory Planning Considering Ensemble Forecasting of Thunderstorms

Convective weather poses a major threat that compromises the safe operation of flights while inducing delay and cost. The aircraft trajectory planning problem under thunderstorm evolution is addressed in this thesis, proposing two novel heuristic approaches that incorporate uncertainties in the evolution of convective cells. In this context, two additional challenges are faced. On the one hand, studies have demonstrated that, given the computational power available nowadays, the best way to characterize weather uncertainties is through ensemble forecasting products, hence compatibility with them is crucial. On the other hand, for the algorithms to be used during a flight, they must be fast and deliver results in a few seconds. As a first methodology, three variants of the Scenario-Based Rapidly-Exploring Random Trees (SB-RRTs) are proposed. Each of them builds a tree to explore the free airspace during an iterative and random process. The so-called SB-RRT, SB-RRT∗, and Informed SB-RRT∗ find point-to-point safe trajectories by meeting a user-defined safety threshold. Additionally, the last two techniques converge to solutions of minimum flight length. In a second instance, the Augmented Random Search (ARS) algorithm is used to sample trajectories from a directed graph and deform them iteratively in the search for an optimal path. The aim of such deformations is to adapt the initial graph to the unsafe set and its possible changes. In the end, the ARS determines the population of trajectories that, on average, minimizes a combination of flight time, time in storms, and fuel consumption. Both methodologies are tested considering a dynamic model of an aircraft flying between two waypoints at a constant flight level. Test scenarios consist of realistic weather forecasts described by an ensemble of equiprobable members. Moreover, the influence of relevant parameters, such as the maximum number of iterations, the safety margin (in SB-RRTs), or the relative weights between objectives (in ARS), is analyzed. Since both algorithms and their convergence processes are random, sensitivity analyses are conducted to show that, after enough iterations, the results match. Finally, through parallelization on graphics processing units, the required computational times are reduced substantially to become compatible with near real-time operation. In both cases, results show that the suggested approaches are able to avoid dangerous and uncertain stormy regions, minimize objectives such as time of flight, flown distance, or fuel consumption, and operate in less than 10 seconds.
International Mention in the doctoral degree. Doctoral Programme in Aerospace Engineering, Universidad Carlos III de Madrid. Committee: President, Ernesto Staffetti Giammaria; Secretary, Alfonso Valenzuela Romero; Member, Valentin Polishchu
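The scenario-based acceptance rule at the core of the SB-RRTs can be sketched compactly: a candidate tree edge is kept only if it is storm-free in at least a user-chosen fraction of the ensemble members. The toy example below uses randomly perturbed circular storm cells as the ensemble and a plain RRT; the real forecast products, aircraft dynamics, and the SB-RRT∗/Informed SB-RRT∗ refinements are not reproduced here.

import math
import random

rng = random.Random(3)
START, GOAL = (0.0, 0.0), (10.0, 10.0)
SAFETY_THRESHOLD = 0.9          # required fraction of storm-free ensemble members
STEP, GOAL_TOL = 0.7, 0.7

# Each ensemble member is one equiprobable storm scenario: a list of (x, y, radius) cells.
ensemble = [[(5.0 + rng.gauss(0, 0.6), 5.0 + rng.gauss(0, 0.6), 2.0)]
            for _ in range(20)]

def edge_is_safe(p, q, member, n_checks=10):
    """Sample points along the edge and test them against one scenario's storm cells."""
    for i in range(n_checks + 1):
        t = i / n_checks
        x, y = p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1])
        for (cx, cy, r) in member:
            if math.hypot(x - cx, y - cy) <= r:
                return False
    return True

def scenario_safe(p, q):
    """Accept the edge only if enough ensemble members agree it is storm-free."""
    safe = sum(edge_is_safe(p, q, m) for m in ensemble)
    return safe / len(ensemble) >= SAFETY_THRESHOLD

nodes, parent = [START], {START: None}
for _ in range(4000):
    sample = (rng.uniform(0, 10), rng.uniform(0, 10))
    near = min(nodes, key=lambda n: math.dist(n, sample))
    d = math.dist(near, sample)
    if d == 0.0:
        continue
    new = (near[0] + STEP * (sample[0] - near[0]) / d,
           near[1] + STEP * (sample[1] - near[1]) / d)
    if scenario_safe(near, new):
        nodes.append(new)
        parent[new] = near
        if math.dist(new, GOAL) < GOAL_TOL:
            path = [new]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            print("found path with", len(path), "waypoints")
            break
else:
    print("no path found within the iteration budget")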