
    Online Trajectory Optimization Using Inexact Gradient Feedback for Time-Varying Environments

    This paper considers the problem of online trajectory design in time-varying environments. We formulate the general trajectory optimization problem within the framework of time-varying constrained convex optimization and propose a novel version of the online gradient ascent algorithm for such problems. Moreover, the algorithm operates with noisy gradient feedback, which makes it applicable to a range of practical settings where the true gradient is difficult to acquire. In contrast to most of the available literature, we establish a sublinear bound on the offline dynamic regret of the proposed algorithm, up to the path-length variations of the optimal offline solution, the cumulative gradient variations, and the error in the gradient. Furthermore, we establish a lower bound on the offline dynamic regret, which characterizes the optimality of any trajectory. To show the efficacy of the proposed algorithm, we consider two practical problems of interest. First, we consider a device-to-device (D2D) communication setting in which the goal is to design a user trajectory that maximizes connectivity to the internet. The second problem concerns the online planning of energy-efficient trajectories for unmanned surface vehicles (USVs) under strong disturbances in ocean environments, with both static and dynamic goal locations. Detailed simulation results demonstrate the significance of the proposed algorithm on synthetic and real data sets. A video of the real-world experiments is available at https://www.youtube.com/watch?v=FcRqqWtpf_0.
    Comment: arXiv admin note: text overlap with arXiv:1804.0486
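    To make the algorithmic template concrete, the following is a minimal sketch of online projected gradient ascent with inexact gradient feedback, the setting this abstract describes: at each round the learner takes an ascent step along a noisy gradient of the current time-varying objective, projects back onto the feasible set, and dynamic regret is accumulated against the per-round optimum. The quadratic objective, ball-shaped feasible set, step size, and noise level are illustrative assumptions, not the paper's actual model.

        import numpy as np

        rng = np.random.default_rng(0)
        T, eta, sigma = 200, 0.1, 0.05           # horizon, step size, gradient noise
        radius = 1.0                             # feasible set: Euclidean ball

        def project(x, r=radius):
            """Euclidean projection onto the ball of radius r."""
            n = np.linalg.norm(x)
            return x if n <= r else x * (r / n)

        x = np.zeros(2)                          # current waypoint of the trajectory
        dynamic_regret = 0.0
        for t in range(T):
            target = np.array([np.cos(0.05 * t), np.sin(0.05 * t)])  # moving optimum
            f = lambda z: -np.sum((z - target) ** 2)                 # concave f_t
            grad = -2.0 * (x - target)                               # true gradient
            noisy_grad = grad + sigma * rng.standard_normal(2)       # inexact feedback
            dynamic_regret += f(target) - f(x)   # compare against per-round optimum
            x = project(x + eta * noisy_grad)    # ascent step, then projection

        print(f"dynamic regret over {T} rounds: {dynamic_regret:.3f}")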

    Optimal control and approximations


    A practical guide to multi-objective reinforcement learning and planning

    Real-world sequential decision-making tasks are generally complex, requiring trade-offs between multiple, often conflicting, objectives. Despite this, the majority of research in reinforcement learning and decision-theoretic planning either assumes only a single objective or assumes that multiple objectives can be adequately handled via a simple linear combination. Such approaches may oversimplify the underlying problem and hence produce suboptimal results. This paper serves as a guide to the application of multi-objective methods to difficult problems. It is aimed at researchers who are already familiar with single-objective reinforcement learning and planning methods and who wish to adopt a multi-objective perspective in their research, as well as at practitioners who encounter multi-objective decision problems in practice. It identifies the factors that may influence the nature of the desired solution and illustrates by example how these influence the design of multi-objective decision-making systems for complex problems.
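    The pitfall of a fixed linear combination that this abstract warns about can be seen in a toy sketch (mine, not from the paper): a policy whose value vector is Pareto-optimal but lies in a concave region of the Pareto front is never selected by any convex weighting of the objectives.

        import numpy as np

        # hypothetical per-policy value vectors (objective 1, objective 2)
        values = {"A": np.array([1.00, 0.00]),
                  "B": np.array([0.45, 0.45]),   # balanced, but under the A-C chord
                  "C": np.array([0.00, 1.00])}

        for w1 in np.linspace(0.0, 1.0, 11):
            w = np.array([w1, 1.0 - w1])                     # sweep convex weightings
            best = max(values, key=lambda k: values[k] @ w)  # argmax of w . v
            print(f"w = ({w1:.1f}, {1 - w1:.1f}) -> policy {best}")

        # Policy B never wins: its scalarized value is 0.45 for every w, while A or
        # C always scores max(w1, 1 - w1) >= 0.5. B is Pareto-optimal (neither A nor
        # C dominates it), yet no single linear combination can express a preference
        # for the balanced trade-off.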

    Reinforcement Learning and Planning for Preference Balancing Tasks

    Robots are often highly non-linear dynamical systems with many degrees of freedom, which makes their motion problems computationally challenging to solve. One solution has been reinforcement learning (RL), which learns through experimentation to automatically perform the near-optimal motions that complete a task. However, high-dimensional problems and task formulation often prove challenging for RL. We address these problems with PrEference Appraisal Reinforcement Learning (PEARL), which solves Preference Balancing Tasks (PBTs). A PBT defines a problem as a set of preferences that the system must balance to achieve a goal. The method is appropriate for acceleration-controlled systems with continuous state spaces, discrete or continuous action spaces, and unknown system dynamics. We show that PEARL learns a sub-optimal policy on a subset of states and actions and transfers that policy to the expanded domain to produce a more refined plan on a class of robotic problems. We establish convergence to the task's goal conditions and show that, even when preconditions are not verifiable, the method is valuable to apply before resorting to more expensive approaches. Evaluation is carried out on several robotic problems, such as Aerial Cargo Delivery, Multi-Agent Pursuit, Rendezvous, and Inverted Flying Pendulum, both in simulation and experimentally. Additionally, PEARL is leveraged outside of robotics as an array-sorting agent. The results demonstrate high accuracy and fast learning times on a large set of practical applications.
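    As a loose illustration of the preference-balancing idea (a hypothetical sketch, not PEARL's actual formulation), a task can be described by a set of preference features the system should drive down, combined by balancing weights into a single scalar objective:

        import numpy as np

        def preference_features(state):
            """Map a state to preference 'costs'; smaller is better for each."""
            pos, vel, swing = state                 # aerial-cargo-style quantities
            return np.array([abs(pos),              # preference 1: reach the goal
                             abs(vel),              # preference 2: arrive gently
                             abs(swing)])           # preference 3: keep the load still

        def balanced_value(state, weights):
            """Negative weighted feature sum: the scalar objective to maximize."""
            return -weights @ preference_features(state)

        weights = np.array([1.0, 0.5, 2.0])         # illustrative balancing weights
        print(balanced_value((0.2, -0.1, 0.05), weights))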

    Estimation and stability of nonlinear control systems under intermittent information with applications to multi-agent robotics

    This dissertation investigates the role of intermittent information in estimation and control problems and applies the obtained results to multi-agent tasks in robotics. First, we develop a stochastic hybrid model of mobile networks able to capture a large variety of heterogeneous multi-agent problems and phenomena. This model is applied to a case study where a heterogeneous mobile sensor network cooperatively detects and tracks mobile targets based on intermittent observations. When these observations form a satisfactory target trajectory, a mobile sensor is switched to the pursuit mode and deployed to capture the target. The cost of operating the sensors is determined from the geometric properties of the network, the environment, and the probability of target detection. The above case study is motivated by the Marco Polo game played by children in swimming pools.

    Second, we develop adaptive sampling of targets' positions in order to minimize energy consumption while satisfying performance guarantees such as increased probability of detection over time and no-escape conditions. A parsimonious predictor-corrector tracking filter that uses geometrical properties of targets' tracks to estimate their positions from imperfect and intermittent measurements is presented. It is shown that this filter requires substantially less information and processing power than the Unscented Kalman Filter and the Sampling Importance Resampling Particle Filter, while providing comparable estimation performance in the presence of intermittent information.

    Third, we investigate stability of nonlinear control systems under intermittent information. We replace the traditional periodic paradigm, in which up-to-date information is transmitted and control laws are executed in a periodic fashion, with the event-triggered paradigm. Building on the small-gain theorem, we develop input-output-triggered control algorithms yielding stable closed-loop systems. In other words, based on the currently available (but outdated) measurements of the outputs and external inputs of a plant, we provide a mechanism that triggers when to obtain new measurements and update the control inputs. Depending on the noise environment, the developed algorithm yields stable, asymptotically stable, and Lp-stable (with bias) closed-loop systems. Control loops are modeled as interconnections of hybrid systems, for which novel results on Lp-stability are presented. Prediction of a triggering event is achieved by employing Lp-gains over a finite horizon in the small-gain theorem. By resorting to convex programming, a method to compute Lp-gains over a finite horizon is devised.

    Next, we investigate optimal intermittent feedback for nonlinear control systems. Using the currently available measurements from a plant, we develop a methodology that determines when to update the control law with new measurements such that a given cost function is minimized. Our cost function captures trade-offs between the performance and the energy consumption of the control system. The optimization problem is formulated as a Dynamic Programming problem, and Approximate Dynamic Programming is employed to solve it. Instead of advocating a particular approximation architecture for Approximate Dynamic Programming, we formulate properties that successful approximation architectures satisfy. In addition, we consider problems with partially observable states and propose Particle Filtering to deal with partially observable states and intermittent feedback.

    Finally, we investigate a decentralized output synchronization problem for heterogeneous linear systems. We develop a self-triggered output broadcasting policy for the interconnected systems, in which broadcasting time instants adapt to the current communication topology. For a fixed topology, our broadcasting policy yields global exponential output synchronization and, in the presence of disturbances, Lp-stable output synchronization. Employing a converse Lyapunov theorem for impulsive systems, we provide an average dwell-time condition that yields disturbance-to-state stable output synchronization in the case of switching topology. Our approach is applicable to directed and unbalanced communication topologies.
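    The event-triggered paradigm described above can be sketched in a few lines (my illustration, not the dissertation's algorithm): the controller keeps acting on the last transmitted measurement and requests a fresh one only when the measurement error passes a state-dependent threshold. The scalar plant, gain, and trigger threshold are illustrative assumptions.

        import numpy as np

        a, b, k = 1.0, 1.0, 2.0        # unstable scalar plant x' = a x + b u, u = -k x_hat
        dt, sigma = 1e-3, 0.2          # integration step, trigger threshold
        x, x_hat = 1.0, 1.0            # true state and last-sampled state
        updates = 0

        for step in range(int(5.0 / dt)):
            if abs(x - x_hat) > sigma * abs(x):   # event: the stored sample is too stale
                x_hat = x                         # transmit a fresh measurement
                updates += 1
            u = -k * x_hat                        # control uses intermittent information
            x += dt * (a * x + b * u)             # Euler step of the closed loop

        print(f"final state {x:.4f} after only {updates} measurement updates "
              f"out of {int(5.0 / dt)} steps")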

    Formal Methods for Autonomous Systems

    Formal methods refer to rigorous, mathematical approaches to system development and have played a key role in establishing the correctness of safety-critical systems. The main building blocks of formal methods are models and specifications, which are analogous to behaviors and requirements in system design and give us the means to verify and synthesize system behaviors with formal guarantees. This monograph surveys the current state of the art in applications of formal methods to the autonomous systems domain. We consider correct-by-construction synthesis under various formulations, including closed-system, reactive, and probabilistic settings. Beyond synthesizing systems in known environments, we address uncertainty and use formal methods to bound the behavior of systems that employ learning. Further, we examine the synthesis of systems with monitoring, a mitigation technique for ensuring that once a system deviates from expected behavior, it knows a way of returning to normalcy. We also show how to overcome some limitations of formal methods themselves with learning. We conclude with future directions for formal methods in reinforcement learning, uncertainty, privacy, explainability of formal methods, and regulation and certification.
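    The monitoring idea admits a compact sketch (my illustration, with an assumed safety property and trace format): a runtime monitor checks the invariant "always: obstacle distance >= d_min" over the system trace and hands authority to a pre-verified fallback behavior on the first violation.

        D_MIN = 1.0   # assumed safety margin, in meters

        def monitor(trace):
            """Return the first index where the safety invariant is violated, else None."""
            for i, state in enumerate(trace):
                if state["obstacle_distance"] < D_MIN:
                    return i
            return None

        def run(trace):
            i = monitor(trace)
            if i is None:
                return "nominal controller kept authority"
            # mitigation: from the violation point on, a pre-verified recovery
            # behavior (e.g. stop and re-plan) returns the system to normalcy
            return f"fallback engaged at step {i}"

        trace = [{"obstacle_distance": d} for d in (3.0, 2.1, 1.4, 0.8, 1.6)]
        print(run(trace))   # -> fallback engaged at step 3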

    Nonlinear Model Predictive Control for Motion Generation of Humanoids

    The goal of this thesis is the investigation and development of numerical methods for motion generation of humanoid robots based on nonlinear model predictive control. Starting from a model of the humanoids as complex multibody systems, constrained by unilateral contact conditions and underactuated by formulation, motion generation is posed as an optimal control problem. This thesis derives numerical extensions, based on the principles of automatic differentiation, for recursive algorithms that efficiently evaluate the dynamic quantities of the above multibody formulation, so that both the nominal quantities and their first derivatives can be computed efficiently. Building on these ideas, extensions for evaluating the contact dynamics and computing the contact impulse are proposed.

    The real-time feasibility of computing control responses depends strongly on the complexity of the multibody formulation chosen for motion generation and on the available computing power. To enable an optimal trade-off, this thesis investigates, on the one hand, possible reductions of the multibody dynamics and, on the other hand, develops tailored numerical methods to achieve real-time control. Two reduced models are derived: a nonlinear extension of the linear inverted pendulum model and a reduced model variant based on centroidal multibody dynamics. Furthermore, a control framework for whole-body motion generation is presented, whose main components are a specially discretized nonlinear model predictive control problem and a tailored optimization method. The real-time capability of the approach is verified in experiments with the robots HRP-2 and HeiCub. This thesis further proposes a nonlinear model predictive control method that computes the control response in real time despite the complexity of the full multibody formulation. This is achieved by a careful combination of linear and nonlinear model predictive control, operating on the current and the most recent linearization of the problem, respectively, in a parallel control strategy. Experiments with the humanoid robot Leo show that, compared to the nominal strategy, motion generation on the robot only becomes possible through this method.

    In addition to model-based optimal control, model-free reinforcement learning methods for motion generation are investigated, with a focus on model uncertainties of the robots that are difficult to capture. A general comparative study and performance metrics are developed that allow model-based and model-free methods to be compared quantitatively with respect to their solution behavior. Applying the study to an academic example reveals differences, trade-offs, and break-even points between the problem formulations. On this basis, the thesis proposes two possible combinations, whose properties are proven and investigated in simulation. The better-performing variant is additionally implemented on the humanoid robot Leo and compared with a nominal model-based controller.
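    A minimal, hypothetical sketch of predictive walking control on the linear inverted pendulum model that the thesis extends (my construction, not the thesis code): the center-of-mass dynamics c_ddot = omega^2 (c - p) are integrated over a short horizon, and the center-of-pressure sequence p is chosen by direct shooting; all numbers are illustrative.

        import numpy as np
        from scipy.optimize import minimize

        g, h = 9.81, 0.8                 # gravity, CoM height
        omega2 = g / h                   # pendulum constant omega^2
        dt, N = 0.1, 15                  # MPC step and horizon length
        c_ref = 0.3                      # desired CoM position at the horizon

        def rollout(p_seq, c0, cd0):
            """Integrate the LIP forward under a CoP sequence (Euler)."""
            c, cd = c0, cd0
            for p in p_seq:
                cdd = omega2 * (c - p)
                cd += dt * cdd
                c += dt * cd
            return c, cd

        def cost(p_seq, c0, cd0):
            c, cd = rollout(p_seq, c0, cd0)
            # track the reference, arrive at rest, keep CoP excursions small
            return (c - c_ref) ** 2 + 0.1 * cd ** 2 + 1e-3 * np.sum(p_seq ** 2)

        res = minimize(cost, np.zeros(N), args=(0.0, 0.0), method="L-BFGS-B",
                       bounds=[(-0.1, 0.4)] * N)   # CoP must stay inside the foot
        print("first CoP command:", res.x[0])      # apply it, then re-solve (MPC)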