Towards efficient on-board deployment of DNNs on intelligent autonomous systems
With their unprecedented performance in major AI tasks, deep neural networks (DNNs) have emerged as a primary building block in modern autonomous systems. Intelligent systems such as drones, mobile robots and driverless cars largely base their perception, planning and application-specific tasks on DNN models. Nevertheless, due to the nature of these applications, such systems require on-board local processing in order to retain their autonomy and meet latency and throughput constraints. In this respect, the large computational and memory demands of DNN workloads pose a significant barrier to their deployment on the resource- and power-constrained compute platforms that are available on-board. This paper presents an overview of recent methods and hardware architectures that address the system-level challenges of modern DNN-enabled autonomous systems at both the algorithmic and hardware design levels. Spanning from latency-driven approximate computing techniques to high-throughput mixed-precision cascaded classifiers, the presented set of works paves the way for the on-board deployment of sophisticated DNN models on robots and autonomous systems.
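The cascaded-classifier idea mentioned above can be illustrated with a minimal sketch: a cheap low-precision stage handles easy inputs and escalates only low-confidence cases to a larger model. All names here (cheap_model, big_model, the confidence rule) are illustrative stand-ins, not the survey's actual method.

```python
# Hedged sketch of a two-stage cascaded classifier for on-board inference:
# the cheap stage answers when confident; otherwise the large model is invoked.
# Models are toy stand-ins for a quantized DNN and a full-precision DNN.

def cheap_model(x):
    # Stand-in for a small quantized DNN: confident only on "easy" inputs.
    conf = 0.9 if abs(x) > 1.0 else 0.4
    label = int(x > 0)
    return label, conf

def big_model(x):
    # Stand-in for the full-precision DNN, invoked only when needed.
    return int(x > 0)

def cascade_predict(x, threshold=0.8):
    """Return (label, stage used); escalate when the cheap stage is unsure."""
    label, conf = cheap_model(x)
    if conf >= threshold:
        return label, "cheap"
    return big_model(x), "big"

print(cascade_predict(2.5))   # easy input  -> (1, 'cheap')
print(cascade_predict(0.3))   # hard input  -> (1, 'big')
```

In a real deployment the threshold trades throughput against accuracy: the higher the threshold, the more inputs pay for the expensive stage.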
Application-aware optimization of Artificial Intelligence for deployment on resource constrained devices
Artificial intelligence (AI) is changing people's everyday lives. AI techniques such as Deep Neural Networks (DNNs) rely on heavy computational models, which are in principle designed to be executed on powerful HW platforms, such as desktop or server environments. However, the increasing need to apply such solutions in people's everyday lives has encouraged research into methods that allow their deployment on embedded, portable and stand-alone devices, such as mobile phones, which exhibit relatively low memory and computational resources. Such methods target both the development of lightweight AI algorithms and their acceleration through dedicated HW.
This thesis focuses on the development of lightweight AI solutions, with attention to deep neural networks, to facilitate their deployment on resource-constrained devices. Focusing on the computer vision field, we show how combining the self-learning ability of deep neural networks with application-specific knowledge, in the form of feature engineering, makes it possible to dramatically reduce the total memory and computational burden, thus allowing deployment on edge devices. The proposed approach aims to be complementary to existing application-independent network compression solutions. In this work, three main DNN optimization goals have been considered: increasing speed and accuracy, allowing training at the edge, and allowing execution on a microcontroller. For each of these goals, we deployed the resulting algorithm to the target embedded device and measured its performance.
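The core idea of trading raw input for engineered features can be sketched as follows; the specific features (mean intensity, a crude edge count) and sizes are assumptions for illustration, not the thesis's actual design.

```python
# Illustrative sketch: replace raw-pixel input with a few engineered features,
# so a much smaller downstream model suffices. Feature choices are hypothetical.

def engineered_features(image_row):
    # e.g. mean intensity and a crude edge count instead of the raw pixels
    mean_val = sum(image_row) / len(image_row)
    edges = sum(1 for a, b in zip(image_row, image_row[1:]) if abs(a - b) > 10)
    return [mean_val, edges]

raw = list(range(0, 256, 16))       # 16 "pixels"
feats = engineered_features(raw)
print(len(raw), "->", len(feats))   # 16-dim input reduced to 2 features
```

The downstream classifier then operates on a 2-dimensional input instead of the full pixel vector, which is where the memory and compute savings come from.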
Autonomous Vehicles: Evolution of Artificial Intelligence and Learning Algorithms
The advent of autonomous vehicles has heralded a transformative era in
transportation, reshaping the landscape of mobility through cutting-edge
technologies. Central to this evolution is the integration of Artificial
Intelligence (AI) and learning algorithms, propelling vehicles into realms of
unprecedented autonomy. This paper provides a comprehensive exploration of the
evolutionary trajectory of AI within autonomous vehicles, tracing the journey
from foundational principles to the most recent advancements. Commencing with a
current landscape overview, the paper delves into the fundamental role of AI in
shaping the autonomous decision-making capabilities of vehicles. It elucidates
the steps involved in the AI-powered development life cycle in vehicles,
addressing ethical considerations and bias in AI-driven software development
for autonomous vehicles. The study presents statistical insights into the usage
and types of AI/learning algorithms over the years, showcasing the evolving
research landscape within the automotive industry. Furthermore, the paper
highlights the pivotal role of parameters in refining algorithms for both
trucks and cars, facilitating vehicles to adapt, learn, and improve performance
over time. It concludes by outlining different levels of autonomy, elucidating
the nuanced usage of AI and learning algorithms, and automating key tasks at
each level. Additionally, the document discusses the variation in software
package sizes across different autonomy levels. Comment: 13 pages
Virtual Reality via Object Pose Estimation and Active Learning: Realizing Telepresence Robots with Aerial Manipulation Capabilities
This paper presents a novel telepresence system for advancing aerial manipulation in dynamic and unstructured environments. The proposed system not only features a haptic device, but also a virtual reality (VR) interface that provides real-time 3D displays of the robot's workspace as well as haptic guidance to its remotely located operator. To realize this, multiple sensors, namely a LiDAR, cameras, and IMUs, are utilized. For processing of the acquired sensory data, pose estimation pipelines are devised for industrial objects of both known and unknown geometries. We further propose an active learning pipeline in order to increase the sample efficiency of a pipeline component that relies on a Deep Neural Network (DNN) based object detector. All these algorithms jointly address various challenges encountered during the execution of perception tasks in industrial scenarios. In the experiments, exhaustive ablation studies are provided to validate the proposed pipelines. Methodologically, these results commonly suggest how an awareness of the algorithms' own failures and uncertainty ("introspection") can be used to tackle the encountered problems. Moreover, outdoor experiments are conducted to evaluate the effectiveness of the overall system in enhancing aerial manipulation capabilities. In particular, with flight campaigns over days and nights, from spring to winter, and with different users and locations, we demonstrate over 70 robust executions of pick-and-place, force application and peg-in-hole tasks with the DLR cable-Suspended Aerial Manipulator (SAM). As a result, we show the viability of the proposed system in future industrial applications.
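The active learning component rests on a standard idea that can be sketched briefly: from a pool of unlabeled images, label only those the current detector is least confident about. The scores and image names below are illustrative, not the paper's data.

```python
# Hedged sketch of pool-based active learning with uncertainty sampling,
# the general mechanism behind raising a DNN detector's sample efficiency.

def select_for_labeling(pool_scores, budget):
    """pool_scores: {image_id: detector confidence}. Pick the least confident."""
    ranked = sorted(pool_scores, key=pool_scores.get)  # ascending confidence
    return ranked[:budget]

unlabeled = {"img_a": 0.95, "img_b": 0.40, "img_c": 0.62, "img_d": 0.88}
print(select_for_labeling(unlabeled, budget=2))  # -> ['img_b', 'img_c']
```

Each labeling round thus spends the annotation budget where the model is most uncertain, which is also where a new label is most informative.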
Smart and Intelligent Automation for Industry 4.0 using Millimeter-Wave and Deep Reinforcement Learning
Innovations in communication systems, compute hardware, and deep learning algorithms have led to the advancement of smart industry automation. Smart automation includes industrial sectors such as intelligent warehouse management, smart infrastructure for first responders, and smart monitoring systems. Automation aims to maximize efficiency, safety, and reliability. Autonomous forklifts can significantly increase productivity, reduce safety-related accidents, and improve operation speed to enhance the efficiency of a warehouse. Forklifts or robotic agents are required to perform different tasks such as position estimation, mapping, and dispatching. Each of these tasks involves different requirements and design constraints. Smart infrastructure for first responder applications requires robotic agents like Unmanned Aerial Vehicles (UAVs) to provide situational awareness surrounding an emergency. An immediate and efficient response to a safety-critical situation is crucial, as a better first response significantly impacts the safety and recovery of the parties involved. However, these UAVs lack the computational power required to run the Deep Neural Networks (DNNs) that are used to provide the necessary intelligence. In this dissertation, we focus on two applications in smart industry automation. In the first part, we target smart warehouse automation for Intelligent Material Handling (IMH), where we design an accurate and robust Machine Learning (ML) based indoor localization system for robotic agents working in a warehouse. The localization system utilizes millimeter-wave (mmWave) wireless sensors to provide feature information in the form of a radio map, which the ML model uses to learn indoor positioning. In the second part, we target smart infrastructure for first responders, where we present a computationally efficient adaptive exit strategy in multi-exit Deep Neural Networks using Deep Reinforcement Learning (DRL). The proposed adaptive exit strategy provides faster inference and significantly reduces computation.
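Inference through a multi-exit network can be sketched in a few lines. In the dissertation the exit decision is learned with deep RL; in this minimal sketch a fixed confidence threshold stands in for the learned policy, and all names are illustrative assumptions.

```python
# Hedged sketch of multi-exit DNN inference with an adaptive exit rule:
# stop at the first exit head whose confidence clears a threshold, so easy
# inputs skip the deeper (more expensive) layers entirely.

def exit_head(easiness, stage):
    # Toy exit classifier: confidence grows with network depth and input easiness.
    conf = min(1.0, 0.3 * stage + easiness)
    return "label", conf

def adaptive_exit_inference(easiness, n_exits=3, threshold=0.8):
    """Return (prediction, index of the exit actually used)."""
    for stage in range(1, n_exits + 1):
        pred, conf = exit_head(easiness, stage)
        if conf >= threshold or stage == n_exits:
            return pred, stage

print(adaptive_exit_inference(0.6))  # easy input -> ('label', 1)
print(adaptive_exit_inference(0.1))  # hard input -> ('label', 3)
```

A DRL policy replaces the fixed threshold with a state-dependent stop/continue decision at each exit, optimizing the latency/accuracy trade-off rather than a hand-tuned cutoff.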
Squeezed Edge YOLO: Onboard Object Detection on Edge Devices
Demand for efficient onboard object detection is increasing due to its key
role in autonomous navigation. However, deploying object detection models such
as YOLO on resource constrained edge devices is challenging due to the high
computational requirements of such models. In this paper, a compressed object
detection model named Squeezed Edge YOLO is examined. This model is compressed
and optimized to kilobytes of parameters in order to fit onboard such edge
devices. To evaluate Squeezed Edge YOLO, two use cases - human and shape
detection - are used to show the model accuracy and performance. Moreover, the
model is deployed onboard a GAP8 processor with 8 RISC-V cores and an NVIDIA
Jetson Nano with 4GB of memory. Experimental results show Squeezed Edge YOLO
model size is reduced by a factor of 8x, which leads to a 76% improvement in
energy efficiency and 3.3x faster throughput. Comment: ML with New Compute Paradigms (MLNCP) Workshop at NeurIPS 202
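How parameter count and weight precision jointly set on-board model size is simple arithmetic, sketched below. The parameter counts and bit widths are illustrative assumptions consistent with an 8x reduction, not the paper's actual architecture.

```python
# Back-of-envelope sketch: model size = parameters * bits per weight.
# Shrinking both the parameter count (4x) and the precision (2x) yields
# the kind of 8x size reduction quoted in the abstract. Numbers are made up.

def model_size_kb(n_params, bits_per_weight):
    return n_params * bits_per_weight / 8 / 1024  # bits -> bytes -> KB

baseline = model_size_kb(n_params=400_000, bits_per_weight=32)  # 1562.5 KB
squeezed = model_size_kb(n_params=100_000, bits_per_weight=16)  # ~195.3 KB
print(f"compression factor: {baseline / squeezed:.0f}x")  # -> 8x
```

On memory-limited targets such as the GAP8, this size budget, not compute alone, is often the binding constraint on which models fit on-board at all.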