Investigating the Impact of Multi-LiDAR Placement on Object Detection for Autonomous Driving
The past few years have witnessed an increasing interest in improving the
perception performance of LiDARs on autonomous vehicles. While most of the
existing works focus on developing new deep learning algorithms or model
architectures, we study the problem from the physical design perspective, i.e.,
how different placements of multiple LiDARs influence the learning-based
perception. To this end, we introduce an easy-to-compute information-theoretic
surrogate metric to quantitatively and efficiently evaluate LiDAR placement for 3D
detection of different types of objects. We also present a new data-collection,
detection-model training, and evaluation framework in the realistic CARLA
simulator to evaluate disparate multi-LiDAR configurations. Using several
prevalent placements inspired by the designs of self-driving companies, we show
the correlation between our surrogate metric and object detection performance
of different representative algorithms on KITTI through extensive experiments,
validating the effectiveness of our LiDAR placement evaluation approach. Our
results show that sensor placement is non-negligible in 3D point cloud-based
object detection, and can account for up to a 10% discrepancy in average
precision in challenging 3D object detection settings. We
believe that this is one of the first studies to quantitatively investigate the
influence of LiDAR placement on perception performance. The code is available
on https://github.com/HanjiangHu/Multi-LiDAR-Placement-for-3D-Detection.
Comment: CVPR 2022 camera-ready version; 15 pages, 14 figures, 9 tables.
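To give a rough flavor of an information-theoretic placement score, the sketch below scores a multi-LiDAR configuration by the total binary entropy of voxel-occupancy beliefs over a region of interest, where beliefs sharpen as more sensors cover a voxel. This is a hedged illustration only, not the paper's actual surrogate metric; the functions beam_hits and placement_entropy, the visibility model, and the gain parameter are all assumptions made for the example.

```python
# Hypothetical sketch of an information-theoretic placement score (NOT the
# paper's exact metric): score a multi-LiDAR configuration by the total binary
# entropy of voxel-occupancy beliefs inside a region of interest, where a
# voxel's belief sharpens with the number of sensors that can observe it.
import numpy as np

def beam_hits(lidar_pose, voxel_centers, max_range=70.0):
    """Crude visibility model: a voxel counts as observed if it is in range.

    lidar_pose: (3,) sensor position; voxel_centers: (N, 3) array.
    A realistic model would trace rays against occlusions in the scene.
    """
    dists = np.linalg.norm(voxel_centers - lidar_pose, axis=1)
    return dists < max_range

def placement_entropy(lidar_poses, voxel_centers, prior=0.5, gain=0.3):
    """Lower total entropy ~ more informative placement (illustrative only)."""
    hits = np.zeros(len(voxel_centers))
    for pose in lidar_poses:
        hits += beam_hits(np.asarray(pose), voxel_centers)
    # Each additional sensor observing a voxel pulls its occupancy belief
    # away from the uninformative prior of 0.5.
    p = np.clip(prior * np.exp(-gain * hits), 1e-6, 1 - 1e-6)
    entropy = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
    return entropy.sum()

# Example: compare a roof-only layout against roof plus two side LiDARs.
xs, ys = np.meshgrid(np.linspace(-40, 40, 40), np.linspace(-40, 40, 40))
voxels = np.stack([xs.ravel(), ys.ravel(), np.zeros(xs.size)], axis=1)
roof_only = [(0.0, 0.0, 1.8)]
roof_plus_sides = [(0.0, 0.0, 1.8), (0.0, -1.0, 1.0), (0.0, 1.0, 1.0)]
print(placement_entropy(roof_only, voxels),
      placement_entropy(roof_plus_sides, voxels))
```

A real evaluation would replace the range check with ray casting against CARLA geometry and restrict the region of interest to where objects of a given class tend to appear.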
Widening Access to Applied Machine Learning with TinyML
Broadening access to both computational and educational resources is critical
to diffusing machine-learning (ML) innovation. However, today, most ML
resources and experts are siloed in a few countries and organizations. In this
paper, we describe our pedagogical approach to increasing access to applied ML
through a massive open online course (MOOC) on Tiny Machine Learning (TinyML).
We suggest that TinyML, ML on resource-constrained embedded devices, is an
attractive means to widen access because TinyML leverages low-cost,
globally accessible hardware and encourages the development of complete,
self-contained applications, from data collection to deployment. To this end, a
collaboration between academia (Harvard University) and industry (Google)
produced a four-part MOOC that provides application-oriented instruction on how
to develop solutions using TinyML. The series is openly available on the edX
MOOC platform, has no prerequisites beyond basic programming, and is designed
for learners from a variety of backgrounds worldwide. It introduces them to
real-world applications, ML algorithms, data-set engineering, and the ethical
considerations of these technologies via hands-on programming and deployment of
TinyML applications both in the cloud and on their own microcontrollers. To
facilitate continued learning, community building, and collaboration beyond the
courses, we launched a standalone website, a forum, a chat, and an optional
course-project competition. We also released the course materials publicly,
hoping they will inspire the next generation of ML practitioners and educators
and further broaden access to cutting-edge ML technologies.
Comment: Understanding the underpinnings of the TinyML edX course series:
https://www.edx.org/professional-certificate/harvardx-tiny-machine-learnin
Widening Access to Applied Machine Learning With TinyML
Broadening access to both computational and educational resources is critical to diffusing machine learning (ML) innovation. However, today, most ML resources and experts are siloed in a few countries and organizations. In this article, we describe our pedagogical approach to increasing access to applied ML through a massive open online course (MOOC) on Tiny Machine Learning (TinyML). We suggest that TinyML, applied ML on resource-constrained embedded devices, is an attractive means to widen access because TinyML leverages low-cost and globally accessible hardware and encourages the development of complete, self-contained applications, from data collection to deployment. To this end, a collaboration between academia and industry produced a four-part MOOC that provides application-oriented instruction on how to develop solutions using TinyML. The series is openly available on the edX MOOC platform, has no prerequisites beyond basic programming, and is designed for global learners from a variety of backgrounds. It introduces real-world applications, ML algorithms, data-set engineering, and the ethical considerations of these technologies through hands-on programming and deployment of TinyML applications both in the cloud and on their own microcontrollers. To facilitate continued learning, community building, and collaboration beyond the courses, we launched a standalone website, a forum, a chat, and an optional course-project competition. We also open-sourced the course materials, hoping they will inspire the next generation of ML practitioners and educators and further broaden access to cutting-edge ML technologies.
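As a concrete flavor of the data-collection-to-deployment workflow the course describes, the sketch below trains a toy Keras model and converts it to a fully int8 TensorFlow Lite flatbuffer of the kind that can be embedded in a microcontroller application. This is a generic illustration of a typical TinyML export step, not material taken from the course; the toy dataset, model shape, and file name are assumptions.

```python
# Minimal sketch of a typical TinyML export step (illustrative, not from the
# course): train a tiny Keras model, then convert it to a fully int8
# TensorFlow Lite flatbuffer suitable for TensorFlow Lite for Microcontrollers.
import numpy as np
import tensorflow as tf

# Toy task: classify 2D points by whether their coordinates sum past 1.0.
x_train = np.random.rand(1000, 2).astype(np.float32)
y_train = (x_train.sum(axis=1) > 1.0).astype(np.int32)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(2,)),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(x_train, y_train, epochs=5, verbose=0)

# A representative dataset drives post-training int8 quantization.
def representative_data():
    for i in range(100):
        yield [x_train[i : i + 1]]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
tflite_model = converter.convert()

# These bytes are what tools such as xxd turn into a C array for the
# microcontroller build.
with open("model_int8.tflite", "wb") as f:
    f.write(tflite_model)
print(f"int8 model size: {len(tflite_model)} bytes")
```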
Real-Time Detection of Robotic Traffic in Online Advertising
Detecting robotic traffic at scale on online ads requires an approach that is scalable, comprehensive, and precise, and that can rapidly respond to changing traffic patterns. In this paper, we describe SLIDR, or SLIce-Level Detection of Robots, a real-time deep neural network model trained with weak supervision to identify invalid clicks on online ads. We ensure fairness across different traffic slices by formulating a convex optimization problem that allows SLIDR to achieve optimal performance on individual traffic slices with a budget on overall false positives. SLIDR has been deployed since 2021 and safeguards advertiser campaigns on Amazon against robots clicking on ads on the e-commerce site. We describe some of the important lessons learned from deploying SLIDR, including guardrails that prevent updates of anomalous models and disaster-recovery mechanisms to mitigate or correct decisions made by a faulty model.
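The per-slice calibration with a global false-positive budget can be approximated by a simple Lagrangian threshold search, sketched below on synthetic data. This is an assumption about how such a selection might look rather than the paper's actual convex formulation; the slice names, score distributions, and the bisection routine are all hypothetical.

```python
# Hypothetical sketch of per-slice threshold selection under a global
# false-positive budget (not the paper's exact convex program): for a fixed
# multiplier lam, each slice picks the threshold maximizing
# (recall - lam * false_positive_rate); lam is bisected to meet the budget.
import numpy as np

rng = np.random.default_rng(0)

def make_slice(n, pos_rate, separation):
    """Synthetic slice: model scores plus ground-truth robot labels."""
    labels = rng.random(n) < pos_rate
    scores = rng.normal(loc=labels * separation, scale=1.0)
    return scores, labels

slices = {
    "mobile": make_slice(5000, 0.10, 2.0),
    "desktop": make_slice(5000, 0.05, 1.5),
    "tablet": make_slice(2000, 0.08, 1.0),
}

def slice_metrics(scores, labels, thr):
    pred = scores >= thr
    recall = np.sum(pred & labels) / max(labels.sum(), 1)
    fpr = np.sum(pred & ~labels) / max((~labels).sum(), 1)
    return recall, fpr

def pick_thresholds(lam, grid=np.linspace(-3, 6, 200)):
    """Per-slice best threshold for a fixed multiplier lam."""
    chosen = {}
    for name, (scores, labels) in slices.items():
        objective = [slice_metrics(scores, labels, t)[0]
                     - lam * slice_metrics(scores, labels, t)[1] for t in grid]
        chosen[name] = grid[int(np.argmax(objective))]
    return chosen

def overall_fpr(thresholds):
    fps, negs = 0, 0
    for name, (scores, labels) in slices.items():
        pred = scores >= thresholds[name]
        fps += np.sum(pred & ~labels)
        negs += np.sum(~labels)
    return fps / negs

# Bisect lam so the combined false-positive rate stays under the budget.
budget, lo, hi = 0.01, 0.0, 100.0
for _ in range(40):
    mid = (lo + hi) / 2
    if overall_fpr(pick_thresholds(mid)) > budget:
        lo = mid  # too many false positives: penalize them harder
    else:
        hi = mid
thresholds = pick_thresholds(hi)
print({k: round(v, 2) for k, v in thresholds.items()},
      "overall FPR:", round(overall_fpr(thresholds), 4))
```

Because each slice gets its own threshold, low-volume slices are not forced to share an operating point tuned for the dominant slice, which is the fairness property the abstract alludes to.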
QuaRL: Quantization for Sustainable Reinforcement Learning
Deep reinforcement learning has achieved significant milestones, however, the
computational demands of reinforcement learning training and inference remain
substantial. Quantization is an effective method to reduce the computational
overheads of neural networks, though in the context of reinforcement learning,
it is unknown whether quantization's computational benefits outweigh the
accuracy costs introduced by the corresponding quantization error. To quantify
this tradeoff, we perform a broad study applying quantization to reinforcement
learning. We apply standard quantization techniques such as post-training
quantization (PTQ) and quantization aware training (QAT) to a comprehensive set
of reinforcement learning tasks (Atari, Gym), algorithms (A2C, DDPG, DQN, D4PG,
PPO), and models (MLPs, CNNs) and show that policies may be quantized to 8-bits
without degrading reward, enabling significant inference speedups on
resource-constrained edge devices. Motivated by the effectiveness of standard
quantization techniques on reinforcement learning policies, we introduce a
novel quantization algorithm, ActorQ, for quantized actor-learner
distributed reinforcement learning training. By leveraging full-precision
optimization on the learner and quantized execution on the actors,
ActorQ enables 8-bit inference while maintaining convergence. We
develop a system for quantized reinforcement learning training around
ActorQ and demonstrate end-to-end speedups of 1.5x to 2.5x over
full-precision training on a range of tasks (DeepMind Control
Suite). Finally, we break down the various runtime costs of distributed
reinforcement learning training (such as communication time, inference time,
model load time, etc.) and evaluate the effects of quantization on these system
attributes.
Comment: Equal contribution from first three authors. Updated with QuaRL results for sustainable (carbon emissions) RL.
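To make the quantized-actor idea concrete, the sketch below applies symmetric per-tensor int8 post-training quantization to a policy MLP's weights and runs a dequantize-on-the-fly forward pass, as an actor might. The network shape, quantization scheme, and function names are illustrative assumptions, not the released QuaRL/ActorQ implementation.

```python
# Illustrative sketch of int8 post-training quantization for actor-side policy
# inference (assumed scheme; not the released QuaRL/ActorQ code): the learner
# keeps float32 weights, while actors run a forward pass over symmetric
# per-tensor int8 weights that are dequantized on the fly.
import numpy as np

rng = np.random.default_rng(42)

def quantize_int8(w):
    """Symmetric per-tensor quantization: w ~ scale * q, q in [-127, 127]."""
    scale = np.max(np.abs(w)) / 127.0 + 1e-12
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

# Full-precision policy MLP the learner would train (random stand-ins for
# trained parameters).
obs_dim, hidden, n_actions = 8, 64, 4
params_fp32 = {
    "w1": rng.normal(0, 0.1, (obs_dim, hidden)).astype(np.float32),
    "b1": np.zeros(hidden, dtype=np.float32),
    "w2": rng.normal(0, 0.1, (hidden, n_actions)).astype(np.float32),
    "b2": np.zeros(n_actions, dtype=np.float32),
}

# What the learner would broadcast to actors: int8 tensors plus scales.
params_int8 = {k: quantize_int8(v) for k, v in params_fp32.items()}

def policy_logits(obs, params, quantized):
    get = (lambda k: dequantize(*params[k])) if quantized else (lambda k: params[k])
    h = np.maximum(obs @ get("w1") + get("b1"), 0.0)  # ReLU hidden layer
    return h @ get("w2") + get("b2")

obs = rng.normal(0, 1, (1, obs_dim)).astype(np.float32)
fp_logits = policy_logits(obs, params_fp32, quantized=False)
q_logits = policy_logits(obs, params_int8, quantized=True)
print("action (fp32):", int(fp_logits.argmax()),
      "action (int8):", int(q_logits.argmax()),
      "max logit error:", float(np.abs(fp_logits - q_logits).max()))
```

The point of the exercise is the one the abstract makes: the greedy action is typically unchanged under 8-bit weights, so actors can run the cheaper quantized network while the learner continues optimizing in full precision.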