Symmetric Stair Preconditioning of Linear Systems for Parallel Trajectory Optimization
There has been a growing interest in parallel strategies for solving
trajectory optimization problems. One key step in many algorithmic approaches
to trajectory optimization is the solution of moderately-large and sparse
linear systems. Iterative methods are particularly well-suited for parallel
solves of such systems. However, fast and stable convergence of iterative
methods relies on a high-quality preconditioner that reduces the spread and
increases the clustering of the eigenvalues of the target
matrix. To improve the performance of these approaches, we present a new
parallel-friendly symmetric stair preconditioner. We prove that our
preconditioner has advantageous theoretical properties, such as a more
clustered eigenvalue spectrum, when used with iterative methods for trajectory
optimization. Numerical experiments with typical trajectory optimization
problems reveal that, compared to the best alternative parallel
preconditioner from the literature, our symmetric stair preconditioner provides
up to a 34% reduction in condition number and up to a 25% reduction in the
number of resulting linear system solver iterations.
Comment: Accepted to ICRA 2024, 8 pages, 3 figures
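As a hedged illustration of the effect the abstract describes, the sketch below builds a block-tridiagonal SPD system (the structure typical of trajectory optimization) and applies a simple block-Jacobi preconditioner; the paper's symmetric stair preconditioner is more sophisticated, and all names here are illustrative only.

```python
import numpy as np

# Hypothetical sketch: block-tridiagonal SPD system preconditioned with
# block-Jacobi, showing the reduction in condition number that speeds up
# iterative solvers. Not the paper's actual preconditioner.

rng = np.random.default_rng(0)
nb, bs = 8, 3                    # number of blocks, block size
n = nb * bs

A = np.zeros((n, n))
for i in range(nb):
    D = rng.standard_normal((bs, bs))
    # Diagonal blocks with deliberately different scales (poor conditioning).
    A[i*bs:(i+1)*bs, i*bs:(i+1)*bs] = (i + 1) * (D @ D.T + 5.0 * np.eye(bs))
    if i + 1 < nb:
        O = 0.2 * rng.standard_normal((bs, bs))
        A[i*bs:(i+1)*bs, (i+1)*bs:(i+2)*bs] = O
        A[(i+1)*bs:(i+2)*bs, i*bs:(i+1)*bs] = O.T

# Block-Jacobi preconditioner: invert each diagonal block independently,
# which is embarrassingly parallel across blocks.
Pinv = np.zeros((n, n))
for i in range(nb):
    s = slice(i*bs, (i+1)*bs)
    Pinv[s, s] = np.linalg.inv(A[s, s])

kappa_A = np.linalg.cond(A)
kappa_PA = np.linalg.cond(Pinv @ A)
print(f"cond(A)      = {kappa_A:.1f}")
print(f"cond(P^-1 A) = {kappa_PA:.1f}")  # lower => faster, steadier convergence
```

A smaller condition number of the preconditioned matrix is what drives the iteration-count reductions the abstract reports.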
Just Round: Quantized Observation Spaces Enable Memory Efficient Learning of Dynamic Locomotion
Deep reinforcement learning (DRL) is one of the most powerful tools for
synthesizing complex robotic behaviors. But training DRL models is incredibly
compute and memory intensive, requiring large training datasets and replay
buffers to achieve performant results. This poses a challenge for the next
generation of field robots that will need to learn on the edge to adapt to
their environment. In this paper, we begin to address this issue through
observation space quantization. We evaluate our approach using four simulated
robot locomotion tasks and two state-of-the-art DRL algorithms, the on-policy
Proximal Policy Optimization (PPO) and the off-policy Soft Actor-Critic (SAC),
and find that observation space quantization reduces overall memory costs by as
much as 4.2x without impacting learning performance.
Comment: Accepted to ICRA 202
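The core idea of observation quantization can be sketched in a few lines; this is a minimal assumption-laden illustration (round to a fixed number of decimals and store in a smaller dtype), not necessarily the paper's exact scheme.

```python
import numpy as np

# Minimal sketch of observation-space quantization for a replay buffer:
# round each observation and store it in a half-precision dtype.

def quantize(obs, decimals=2):
    """Round observations so buffer entries need less precision to store."""
    return np.round(obs, decimals=decimals).astype(np.float16)

obs = np.random.default_rng(1).standard_normal(1000).astype(np.float32)
qobs = quantize(obs)

print(obs.nbytes, "->", qobs.nbytes)  # 2x saving from the dtype change alone
print(np.max(np.abs(obs - qobs.astype(np.float32))))  # small rounding error
```

Further savings (toward the reported 4.2x) can come from compressing the now highly repetitive quantized values; the trade-off is a bounded perturbation of each observation.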
MPCGPU: Real-Time Nonlinear Model Predictive Control through Preconditioned Conjugate Gradient on the GPU
Nonlinear Model Predictive Control (NMPC) is a state-of-the-art approach for
locomotion and manipulation which leverages trajectory optimization at each
control step. While the performance of this approach is computationally
bounded, implementations of direct trajectory optimization that use iterative
methods to solve the underlying moderately large and sparse linear systems are
a natural fit for parallel hardware acceleration. In this work, we introduce
MPCGPU, a GPU-accelerated, real-time NMPC solver that leverages an accelerated
preconditioned conjugate gradient (PCG) linear system solver at its core. We
show that MPCGPU increases the scalability and real-time performance of NMPC,
solving larger problems at faster rates. In particular, for tracking tasks
using the Kuka IIWA manipulator, MPCGPU is able to scale to kilohertz control
rates with trajectories as long as 512 knot points. This is driven by a custom
PCG solver which outperforms state-of-the-art CPU-based linear system solvers
by at least 10x for a majority of solves and 3.6x on average.
Comment: Accepted to ICRA 2024, 8 pages, 6 figures
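The PCG iteration at the core of such solvers is standard; the sketch below shows it with a Jacobi (diagonal) preconditioner as a stand-in. Function and parameter names are illustrative, not MPCGPU's API.

```python
import numpy as np

# Textbook preconditioned conjugate gradient (PCG). M_inv applies the
# preconditioner to a residual vector.

def pcg(A, b, M_inv, tol=1e-8, max_iter=100):
    x = np.zeros_like(b)
    r = b - A @ x                      # initial residual
    z = M_inv(r)                       # preconditioned residual
    p = z.copy()
    rz = r @ z
    for it in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)          # step length along search direction
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p      # conjugate search direction update
        rz = rz_new
    return x, it + 1

# Usage: SPD test system with a Jacobi preconditioner.
rng = np.random.default_rng(0)
Q = rng.standard_normal((50, 50))
A = Q @ Q.T + 50 * np.eye(50)
b = rng.standard_normal(50)
d = np.diag(A)
x, iters = pcg(A, b, lambda r: r / d)
print(iters, np.linalg.norm(A @ x - b))
```

On a GPU, the matrix-vector products and the preconditioner application are the parallel workhorses; a better preconditioner lowers the iteration count shown here.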
Code Generation for Conic Model-Predictive Control on Microcontrollers with TinyMPC
Conic constraints appear in many important control applications like legged
locomotion, robotic manipulation, and autonomous rocket landing. However,
current solvers for conic optimization problems have relatively heavy
computational demands in terms of both floating-point operations and memory
footprint, making them impractical for use on small embedded devices. We extend
TinyMPC, an open-source, high-speed solver targeting low-power embedded control
applications, to handle second-order cone constraints. We also present
code-generation software to enable deployment of TinyMPC on a variety of
microcontrollers. We benchmark our generated code against state-of-the-art
embedded QP and SOCP solvers, demonstrating a two-order-of-magnitude speed
increase over ECOS while consuming less memory. Finally, we demonstrate
TinyMPC's efficacy on the Crazyflie, a lightweight, resource-constrained
quadrotor with fast dynamics. TinyMPC and its code-generation tools are
publicly available at https://tinympc.org.
Comment: Submitted to CDC, 2024. First two authors contributed equally
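The extra primitive a solver needs for second-order cone constraints is projection onto the cone. The closed form below is the standard one from the conic-optimization literature, shown here as a hedged sketch rather than TinyMPC's exact implementation.

```python
import numpy as np

# Projection onto the second-order cone K = {(s, v) : ||v||_2 <= s},
# using the well-known three-case closed form.

def project_soc(x):
    """Project x = (s, v) onto the second-order cone {||v|| <= s}."""
    s, v = x[0], x[1:]
    nv = np.linalg.norm(v)
    if nv <= s:                  # already inside the cone
        return x.copy()
    if nv <= -s:                 # inside the polar cone: project to origin
        return np.zeros_like(x)
    a = (s + nv) / 2.0           # otherwise: scale onto the cone boundary
    return np.concatenate(([a], (a / nv) * v))

p = project_soc(np.array([1.0, 3.0, 4.0]))   # ||(3, 4)|| = 5 > 1
print(p)   # lands on the boundary: p[0] == ||p[1:]||
```

Inside an ADMM loop, this projection replaces the box clipping used for simple bound constraints, which is why it is cheap enough for microcontrollers.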
TinyMPC: Model-Predictive Control on Resource-Constrained Microcontrollers
Model-predictive control (MPC) is a powerful tool for controlling highly
dynamic robotic systems subject to complex constraints. However, MPC is
computationally demanding, and is often impractical to implement on small,
resource-constrained robotic platforms. We present TinyMPC, a high-speed MPC
solver with a low memory footprint targeting the microcontrollers common on
small robots. Our approach is based on the alternating direction method of
multipliers (ADMM) and leverages the structure of the MPC problem for
efficiency. We demonstrate TinyMPC both by benchmarking against the
state-of-the-art solver OSQP, achieving nearly an order of magnitude speed
increase, as well as through hardware experiments on a 27 g quadrotor,
demonstrating high-speed trajectory tracking and dynamic obstacle avoidance.
Comment: First three authors contributed equally and are ordered alphabetically
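The ADMM splitting the abstract refers to can be sketched on a toy box-constrained QP. This is the generic OSQP/TinyMPC-style structure only; TinyMPC additionally exploits the MPC problem's Riccati structure, which this sketch omits.

```python
import numpy as np

# ADMM for min 1/2 x'Px + q'x  s.t.  lo <= x <= hi.
# Splitting: x carries the objective, z carries the constraint.

def admm_box_qp(P, q, lo, hi, rho=1.0, iters=500):
    n = len(q)
    x = np.zeros(n)
    z = np.zeros(n)
    lam = np.zeros(n)                        # scaled-by-rho dual variable
    K = np.linalg.inv(P + rho * np.eye(n))   # factor once, reuse every iter
    for _ in range(iters):
        x = K @ (rho * z - lam - q)          # x-update: unconstrained QP
        z = np.clip(x + lam / rho, lo, hi)   # z-update: project onto the box
        lam = lam + rho * (x - z)            # dual ascent on the gap x - z
    return z

P = np.array([[4.0, 1.0], [1.0, 2.0]])
q = np.array([-8.0, -6.0])
z = admm_box_qp(P, q, lo=np.zeros(2), hi=np.ones(2))
print(z)   # constrained minimizer, pushed onto the box boundary
```

The appeal on embedded hardware is that the expensive factorization happens once, while each iteration is only a matrix-vector product and a clip.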
Datasheets for Machine Learning Sensors
Machine learning (ML) sensors offer a new paradigm for sensing that enables
intelligence at the edge while empowering end-users with greater control of
their data. As these ML sensors play a crucial role in the development of
intelligent devices, clear documentation of their specifications,
functionalities, and limitations is pivotal. This paper introduces a standard
datasheet template for ML sensors and discusses its essential components,
including the system's hardware, ML model and dataset attributes, end-to-end
performance metrics, and environmental impact. We provide an example datasheet
for our own ML sensor and discuss each section in detail. We highlight how
these datasheets can facilitate better understanding and utilization of sensor
data in ML applications, and we provide objective measures upon which system
performance can be evaluated and compared. Together, ML sensors and their
datasheets provide greater privacy, security, transparency, explainability,
auditability, and user-friendliness for ML-enabled embedded systems. We
conclude by emphasizing the need for standardization of datasheets across the
broader ML community to ensure the responsible and effective use of sensor
data.
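A datasheet of the kind described is ultimately structured data; the skeleton below mirrors the sections the abstract lists. All field names and values are hypothetical, not the paper's actual template.

```python
# Illustrative ML-sensor datasheet skeleton, keyed by the sections the
# paper identifies (hardware, model/dataset attributes, end-to-end
# performance, environmental impact). Values are placeholders.

datasheet = {
    "hardware": {"processor": "Cortex-M7", "sensor": "320x320 mono imager"},
    "ml_model": {"architecture": "MobileNetV2", "params": 300_000,
                 "quantization": "int8"},
    "dataset": {"name": "person-detection set", "size": 120_000,
                "license": "proprietary"},
    "end_to_end_performance": {"accuracy": 0.88, "latency_ms": 45,
                               "power_mw": 110},
    "environmental_impact": {"estimated_co2e_g_per_unit": 980},
}

for section, fields in datasheet.items():
    print(section, "->", ", ".join(fields))
```

Keeping the datasheet machine-readable like this makes the objective comparisons the abstract calls for straightforward to automate.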
Project-based, collaborative, algorithmic robotics for high school students: Programming self-driving race cars at MIT
We describe the pedagogy behind the MIT Beaver Works Summer Institute Robotics Program, a new high-school STEM program in robotics. The program utilizes state-of-the-art sensors and embedded computers for mobile robotics. These components are carried on an exciting 1/10-scale race-car platform. The program has three salient, distinguishing features: (i) it focuses on robotics software systems: the students design and build robotics software towards real-world applications, without being distracted by hardware issues; (ii) it champions project-based learning: the students learn through weekly project assignments and a final course challenge; (iii) the learning is implemented in a collaborative fashion: the students learn the basics of collaboration and technical communication in lectures, and they work in teams to design and implement their software systems. The program was offered as a four-week residential program at MIT in the summer of 2016. In this paper, we provide the details of this new program, its teaching objectives, and its results. We also briefly discuss future directions and opportunities.
RobotPerf: An Open-Source, Vendor-Agnostic, Benchmarking Suite for Evaluating Robotics Computing System Performance
We introduce RobotPerf, a vendor-agnostic benchmarking suite designed to
evaluate robotics computing performance across a diverse range of hardware
platforms using ROS 2 as its common baseline. The suite encompasses ROS 2
packages covering the full robotics pipeline and integrates two distinct
benchmarking approaches: black-box testing, which measures performance by
eliminating upper layers and replacing them with a test application, and
grey-box testing, an application-specific measure that observes internal system
states with minimal interference. Our benchmarking framework provides
ready-to-use tools and is easily adaptable for the assessment of custom ROS 2
computational graphs. Drawing from the knowledge of leading robot architects
and system architecture experts, RobotPerf establishes a standardized approach
to robotics benchmarking. As an open-source initiative, RobotPerf remains
committed to evolving with community input to advance the future of
hardware-accelerated robotics.
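The black-box approach described above (replace the upper layers with a test driver and measure end-to-end latency) can be illustrated with a plain-Python micro-benchmark. RobotPerf's real harness runs over ROS 2 computational graphs; everything below is a hypothetical stand-in.

```python
import time
import statistics

# Black-box-style benchmark sketch: drive a processing stage directly with a
# test payload and record per-call latency.

def stage(data):
    return [x * 2.0 for x in data]   # stand-in for a perception/control node

def benchmark(fn, payload, runs=100):
    latencies = []
    for _ in range(runs):
        t0 = time.perf_counter()
        fn(payload)
        latencies.append((time.perf_counter() - t0) * 1e3)  # milliseconds
    return {"mean_ms": statistics.mean(latencies),
            "p99_ms": sorted(latencies)[int(0.99 * runs) - 1]}

stats = benchmark(stage, list(range(10_000)))
print(stats)
```

Grey-box testing would instead instrument points inside `stage` itself, trading some interference for visibility into internal states.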
Widening Access to Applied Machine Learning with TinyML
Broadening access to both computational and educational resources is critical
to diffusing machine-learning (ML) innovation. However, today, most ML
resources and experts are siloed in a few countries and organizations. In this
paper, we describe our pedagogical approach to increasing access to applied ML
through a massive open online course (MOOC) on Tiny Machine Learning (TinyML).
We suggest that TinyML, ML on resource-constrained embedded devices, is an
attractive means to widen access because TinyML both leverages low-cost and
globally accessible hardware, and encourages the development of complete,
self-contained applications, from data collection to deployment. To this end, a
collaboration between academia (Harvard University) and industry (Google)
produced a four-part MOOC that provides application-oriented instruction on how
to develop solutions using TinyML. The series is openly available on the edX
MOOC platform, has no prerequisites beyond basic programming, and is designed
for global learners from a variety of backgrounds. It introduces learners to
real-world applications, ML algorithms, data-set engineering, and the ethical
considerations of these technologies via hands-on programming and deployment of
TinyML applications in both the cloud and their own microcontrollers. To
facilitate continued learning, community building, and collaboration beyond the
courses, we launched a standalone website, a forum, a chat, and an optional
course-project competition. We also released the course materials publicly,
hoping they will inspire the next generation of ML practitioners and educators
and further broaden access to cutting-edge ML technologies.
Comment: Understanding the underpinnings of the TinyML edX course series:
https://www.edx.org/professional-certificate/harvardx-tiny-machine-learnin
Widening Access to Applied Machine Learning With TinyML
Broadening access to both computational and educational resources is critical to diffusing machine learning (ML) innovation. However, today, most ML resources and experts are siloed in a few countries and organizations. In this article, we describe our pedagogical approach to increasing access to applied ML through a massive open online course (MOOC) on Tiny Machine Learning (TinyML). We suggest that TinyML, applied ML on resource-constrained embedded devices, is an attractive means to widen access because TinyML leverages low-cost and globally accessible hardware and encourages the development of complete, self-contained applications, from data collection to deployment. To this end, a collaboration between academia and industry produced a four-part MOOC that provides application-oriented instruction on how to develop solutions using TinyML. The series is openly available on the edX MOOC platform, has no prerequisites beyond basic programming, and is designed for global learners from a variety of backgrounds. It introduces real-world applications, ML algorithms, data-set engineering, and the ethical considerations of these technologies through hands-on programming and deployment of TinyML applications in both the cloud and on their own microcontrollers. To facilitate continued learning, community building, and collaboration beyond the courses, we launched a standalone website, a forum, a chat, and an optional course-project competition. We also open-sourced the course materials, hoping they will inspire the next generation of ML practitioners and educators and further broaden access to cutting-edge ML technologies.