FATROP: A Fast Constrained Optimal Control Problem Solver for Robot Trajectory Optimization and Control
Trajectory optimization is a powerful tool for robot motion planning and
control. State-of-the-art general-purpose nonlinear programming solvers are
versatile, handle constraints in an effective way and provide a high numerical
robustness, but they are slow because they do not fully exploit the optimal
control problem structure at hand. Existing structure-exploiting solvers are
fast but they often lack techniques to deal with nonlinearity or rely on
penalty methods to enforce (equality or inequality) path constraints. This
work presents FATROP: a trajectory optimization solver that is fast and
benefits from the salient features of general-purpose nonlinear optimization
solvers. The speed-up is mainly achieved through the use of a specialized
linear solver, based on a Riccati recursion that is generalized to also support
stagewise equality constraints. To demonstrate the algorithm's potential, it is
benchmarked on a set of robot problems that are challenging from a numerical
perspective, including problems with a minimum-time objective and no-collision
constraints. The solver is shown to solve trajectory generation problems for
a quadrotor, a robot manipulator, and a truck-trailer system in a few tens of
milliseconds. The algorithm's C++-code implementation accompanies this work as
open source software, released under the GNU Lesser General Public License
(LGPL). This software framework may encourage and enable the robotics community
to use trajectory optimization in more challenging applications.
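The specialized linear solver at the heart of structure-exploiting OCP methods builds on the Riccati recursion. As a minimal illustration (plain unconstrained LQR, not FATROP's generalization to stagewise equality constraints), the backward pass below computes the stage feedback gains in O(N) time; the system matrices, weights, and horizon are hypothetical:

```python
import numpy as np

def lqr_riccati(A, B, Q, R, QN, N):
    """Backward Riccati recursion for a finite-horizon LQR problem.

    Minimizes sum_k x_k'Q x_k + u_k'R u_k + x_N'QN x_N subject to
    x_{k+1} = A x_k + B u_k, and returns gains K_k with u_k = -K_k x_k.
    The cost-to-go matrices P_k are built in O(N) time, which is the
    stagewise structure that general-purpose NLP solvers do not exploit.
    """
    P = QN
    gains = []
    for _ in range(N):
        # K = (R + B'PB)^{-1} B'PA  -- optimal feedback at this stage
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        # Riccati update for the cost-to-go
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    gains.reverse()  # gains[k] now applies at stage k
    return gains, P

# Hypothetical double-integrator example
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
Q = np.eye(2); R = np.array([[1.0]]); QN = 10 * np.eye(2)
gains, P0 = lqr_riccati(A, B, Q, R, QN, N=50)
```

FATROP extends a recursion of this kind to also eliminate stagewise equality constraints, which plain LQR back-substitution cannot handle.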
Tight Collision Probability for UAV Motion Planning in Uncertain Environment
Operating unmanned aerial vehicles (UAVs) in complex environments that
feature dynamic obstacles and external disturbances poses significant
challenges, primarily due to the inherent uncertainty in such scenarios.
Additionally, inaccurate robot localization and modeling errors further
exacerbate these challenges. Recent research on UAV motion planning in static
environments has been unable to cope with the rapidly changing surroundings,
resulting in trajectories that may not be feasible. Moreover, previous
approaches that have addressed dynamic obstacles or external disturbances in
isolation are insufficient to handle the complexities of such environments.
This paper proposes a reliable motion planning framework for UAVs, integrating
various uncertainties into a chance constraint that characterizes the
uncertainty in a probabilistic manner. The chance constraint provides a
probabilistic safety certificate by calculating the collision probability
between the robot's Gaussian-distributed forward reachable set and states of
obstacles. To reduce the conservatism of the planned trajectory, we propose a
tight upper bound of the collision probability and evaluate it both exactly and
approximately. The approximated solution is used to generate motion primitives
as a reference trajectory, while the exact solution is leveraged to iteratively
optimize the trajectory for better results. Our method is thoroughly tested in
simulation and real-world experiments, verifying its reliability and
effectiveness in uncertain environments. Comment: Paper accepted by IROS 202
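As a rough illustration of how a Gaussian collision chance constraint can be bounded in closed form (this uses a standard half-space relaxation, not necessarily the tight bound proposed in the paper), the sketch below upper-bounds the probability that a Gaussian-distributed robot position lies within radius r of a point obstacle; all parameters are hypothetical:

```python
import numpy as np
from math import erf, sqrt

def collision_prob_upper_bound(mu, Sigma, p_obs, r):
    """Conservative collision probability for a Gaussian robot position.

    Relax the 2-norm constraint ||x - p_obs|| <= r by projecting onto the
    line from the obstacle to the robot mean: the collision set becomes a
    half-space whose Gaussian mass is a closed-form upper bound.
    """
    d = mu - p_obs
    dist = np.linalg.norm(d)
    a = d / dist                 # unit normal of the separating half-space
    sigma = sqrt(a @ Sigma @ a)  # std. dev. of the projected position
    # P(a'(x - p_obs) <= r) = Phi((r - a'(mu - p_obs)) / sigma)
    z = (r - dist) / sigma
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))
```

A bound of this form is cheap enough to evaluate inside the planner at every candidate state; the paper's contribution is a tighter bound with both exact and approximate evaluations.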
Game-Theoretic Safety Assurance for Human-Centered Robotic Systems
In order for autonomous systems like robots, drones, and self-driving cars to be reliably introduced into our society, they must have the ability to actively account for safety during their operation. While safety analysis has traditionally been conducted offline for controlled environments like cages on factory floors, the much higher complexity of open, human-populated spaces like our homes, cities, and roads makes it unviable to rely on common design-time assumptions, since these may be violated once the system is deployed. Instead, the next generation of robotic technologies will need to reason about safety online, constructing high-confidence assurances informed by ongoing observations of the environment and other agents, even though models of these are necessarily fallible.

This dissertation aims to lay down the foundations needed to enable autonomous systems to ensure their own safety in complex, changing, and uncertain environments by explicitly reasoning about the gap between their models and the real world. It first introduces a suite of novel robust optimal control formulations and algorithmic tools that permit tractable safety analysis in time-varying, multi-agent systems, as well as safe real-time robotic navigation in partially unknown environments; these approaches are demonstrated on large-scale unmanned air traffic simulations and physical quadrotor platforms. It then draws on Bayesian machine learning methods to translate model-based guarantees into high-confidence assurances, monitoring the reliability of predictive models in light of changing evidence about the physical system and surrounding agents. This principle is first applied to a general safety framework that allows the use of learning-based control (e.g., reinforcement learning) for safety-critical robotic systems such as drones, and then combined with insights from cognitive science and dynamic game theory to enable safe human-centered navigation and interaction; these techniques are showcased on physical quadrotors, flying in unmodeled wind and among human pedestrians, and in simulated highway driving. The dissertation ends with a discussion of challenges and opportunities ahead, including the bridging of safety analysis and reinforcement learning and the need to "close the loop" around learning and adaptation in order to deploy increasingly advanced autonomous systems with confidence.
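The model-reliability monitoring idea can be sketched with a simple Bayesian update (an illustrative reduction, not the dissertation's actual framework): maintain a posterior probability that a predictive model is reliable by comparing the likelihoods that the model and a high-uncertainty fallback hypothesis assign to observed prediction errors; the noise levels below are hypothetical:

```python
import numpy as np

def update_belief(belief, obs_error, sigma_model=0.1, sigma_fallback=1.0):
    """One Bayes-rule step on P(model is reliable), given a prediction error.

    The "model" hypothesis expects small Gaussian errors; the "fallback"
    hypothesis expects large ones, and takes over when the model stops
    explaining the observed behavior.
    """
    def gauss(e, s):
        return np.exp(-0.5 * (e / s) ** 2) / (s * np.sqrt(2 * np.pi))
    l_model = gauss(obs_error, sigma_model)
    l_fallback = gauss(obs_error, sigma_fallback)
    return belief * l_model / (belief * l_model + (1 - belief) * l_fallback)

belief = 0.5
for e in [0.05, 0.02, 0.08]:   # small errors: the model explains the data
    belief = update_belief(belief, e)
# belief rises toward 1; a run of large errors would drive it toward 0,
# signaling that safety margins should be widened
```

A confidence signal of this kind lets a planner interpolate between trusting a learned predictor and falling back on conservative worst-case reasoning.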
Augmented Lagrangian Methods as Layered Control Architectures
For optimal control problems that involve planning and following a
trajectory, the two-degree-of-freedom (2DOF) controller is a ubiquitous
architecture that decomposes the problem into a trajectory generation
layer and a feedback control layer. However, despite the broad use and
practical success of this layered control architecture, it remains a design
choice that must be imposed on the control policy. To address this
gap, this paper seeks to initiate a principled study of the design of layered
control architectures, with an initial focus on the 2DOF controller. We show
that applying the Alternating Direction Method of Multipliers (ADMM) algorithm
to solve a strategically rewritten optimal control problem results in solutions
that are naturally layered, and composed of a trajectory generation layer and a
feedback control layer. Furthermore, these layers are coupled via Lagrange
multipliers that ensure dynamic feasibility of the planned trajectory. We
instantiate this framework in the context of deterministic and stochastic
linear optimal control problems, and show how our approach automatically yields
a feedforward/feedback-based control policy that exactly solves the original
problem. We then show that the simplicity of the resulting controller structure
suggests natural heuristic algorithms for approximately solving nonlinear
optimal control problems. We empirically demonstrate improved performance of
these layered nonlinear optimal controllers as compared to iLQR, and highlight
their flexibility by incorporating both convex and nonconvex constraints.
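The layering described above can be illustrated on a toy problem. The sketch below applies ADMM to a 1-D single-integrator OCP, alternating between a dynamically feasible tracking solve (the feedback layer's role) and a closed-form planning step (the trajectory generation layer's role), with scaled multipliers y coupling the two layers; it is a minimal sketch under hypothetical parameters, not the paper's exact formulation:

```python
import numpy as np

# Hypothetical 1-D single-integrator OCP: x_{k+1} = x_k + u_k, x_0 = 0,
# drive the state to goal = 1 over N steps with a small control penalty.
N, x0, goal, r_u, rho = 20, 0.0, 1.0, 0.1, 1.0

# x = x0 + S u maps controls to the state trajectory (cumulative sums)
S = np.tril(np.ones((N, N)))

z = np.zeros(N)   # trajectory-generation layer's plan
y = np.zeros(N)   # scaled Lagrange multipliers coupling the layers
for _ in range(300):
    # "Feedback" layer: dynamically feasible trajectory tracking z - y.
    # min_u  r_u ||u||^2 + (rho/2) ||x0 + S u - (z - y)||^2
    H = 2 * r_u * np.eye(N) + rho * S.T @ S
    u = np.linalg.solve(H, rho * S.T @ (z - y - x0))
    x = x0 + S @ u
    # Trajectory-generation layer: plan against the cost, ignoring dynamics.
    # min_z  ||z - goal||^2 + (rho/2) ||x - z + y||^2   (closed form)
    z = (2 * goal + rho * (x + y)) / (2 + rho)
    # Multiplier update drives the plan toward dynamic feasibility
    y = y + x - z
```

At convergence x and z agree, so the "plan" is dynamically feasible, mirroring the paper's observation that the multipliers are exactly what couples the two layers.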