Flow: A Modular Learning Framework for Autonomy in Traffic
The rapid development of autonomous vehicles (AVs) holds vast potential for
transportation systems through improved safety, efficiency, and access to
mobility. However, due to numerous technical, political, and human-factors
challenges, new methodologies are needed to design vehicles and transportation
systems for these positive outcomes. This article tackles technical challenges
arising from the partial adoption of autonomy: partial control, partial
observation, complex multi-vehicle interactions, and the sheer variety of
traffic settings represented by real-world networks. The article presents a
modular learning framework that leverages deep reinforcement learning methods
to address complex traffic dynamics. Modules are composed to capture common
traffic phenomena (traffic jams, lane changing, intersections). Learned control
laws are found to exceed human driving performance by at least 40% with only
5-10% adoption of AVs. In partially-observed single-lane traffic, a small
neural network control law can eliminate stop-and-go traffic -- surpassing all
known model-based controllers, achieving near-optimal performance, and
generalizing to out-of-distribution traffic densities.
Comment: 14 pages, 8 figures; new experiments and analysis
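To make the abstract's "small neural network control law" concrete, the sketch below shows the interface such a policy might expose: a tiny feedforward network mapping a partially observed local state (own speed, leader speed, headway) to a bounded acceleration command. This is a hypothetical illustration only; the weights are random placeholders, not the RL-trained policy from the paper, and the observation and action spaces are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder weights for a tiny 2-hidden-unit policy. In the paper the
# policy is learned with deep RL; these values only illustrate the shapes.
W1 = rng.normal(size=(3, 2))   # inputs: own speed, leader speed, headway
b1 = np.zeros(2)
W2 = rng.normal(size=(2, 1))
b2 = np.zeros(1)

def av_acceleration(own_speed, leader_speed, headway):
    """Map a partial local observation to a bounded acceleration command."""
    x = np.array([own_speed, leader_speed, headway])
    h = np.tanh(x @ W1 + b1)      # small hidden layer
    a = np.tanh(h @ W2 + b2)[0]   # squash output to [-1, 1]
    return 3.0 * a                # scale to roughly +/- 3 m/s^2
```

A controller this small can run onboard each AV using only locally observable quantities, which is what allows the abstract's 5-10% adoption rates to influence the whole traffic stream.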
Guardians of the Deep Fog: Failure-Resilient DNN Inference from Edge to Cloud
Partitioning and distributing deep neural networks (DNNs) over physical nodes
such as edge, fog, or cloud nodes can enhance sensor fusion and reduce
bandwidth and inference latency. However, when a DNN is distributed over
physical nodes, failure of the physical nodes causes the failure of the DNN
units that are placed on these nodes. The performance of the inference task
will be unpredictable, and most likely poor, if the distributed DNN is not
specifically designed and properly trained for failures. Motivated by this, we
introduce deepFogGuard, a DNN architecture augmentation scheme for making the
distributed DNN inference task failure-resilient. To ground deepFogGuard, we
introduce the elements of, and a model for, the resiliency of distributed DNN
inference. Inspired by the concept of residual connections in DNNs, we
introduce skip hyperconnections in distributed DNNs, which are the basis of
deepFogGuard's design to provide resiliency. Next, our extensive experiments
using two existing datasets for sensing and vision applications confirm the
ability of deepFogGuard to provide resiliency for distributed DNNs in
edge-cloud networks.
Comment: Accepted to ACM AIChallengeIoT 201
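The skip hyperconnections described above can be sketched as follows: a toy edge-to-fog-to-cloud pipeline where the edge node's output is also routed directly to the cloud node, so the cloud still receives a usable signal when the fog node fails. Layer sizes, names, and the additive combination are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy three-node pipeline: edge -> fog -> cloud. Shapes are illustrative.
W_edge  = rng.normal(size=(8, 4))
W_fog   = rng.normal(size=(4, 4))
W_cloud = rng.normal(size=(4, 2))
W_skip  = rng.normal(size=(4, 4))  # skip hyperconnection: edge -> cloud

def distributed_forward(x, fog_alive=True):
    edge_out = np.tanh(x @ W_edge)   # computed on the edge node
    # A failed fog node contributes nothing to the downstream computation.
    fog_out = np.tanh(edge_out @ W_fog) if fog_alive else np.zeros(4)
    # The skip hyperconnection carries edge_out directly to the cloud node,
    # so information survives even when the fog node has failed.
    cloud_in = fog_out + edge_out @ W_skip
    return np.tanh(cloud_in @ W_cloud)
```

Without `W_skip`, a fog-node failure would zero the cloud node's entire input; with it, inference degrades rather than collapses, which is the resiliency property the abstract claims.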
ResiliNet: Failure-Resilient Inference in Distributed Neural Networks
Federated Learning aims to train distributed deep models without sharing the
raw data with the centralized server. Similarly, in distributed inference of
neural networks, by partitioning the network and distributing it across several
physical nodes, activations and gradients are exchanged between physical nodes,
rather than raw data. Nevertheless, when a neural network is partitioned and
distributed among physical nodes, failure of physical nodes causes the failure
of the neural units that are placed on those nodes, which results in a
significant performance drop. Current approaches focus on resiliency of
training in distributed neural networks. However, resiliency of inference in
distributed neural networks is less explored. We introduce ResiliNet, a scheme
for making inference in distributed neural networks resilient to physical node
failures. ResiliNet combines two concepts to provide resiliency: skip
hyperconnection, a concept for skipping nodes in distributed neural networks
similar to skip connections in ResNets, and a novel technique called failout,
which is introduced in this paper. Failout simulates physical node failure
conditions during training using dropout, and is specifically designed to
improve the resiliency of distributed neural networks. The results of the
experiments and ablation studies using three datasets confirm the ability of
ResiliNet to provide inference resiliency for distributed neural networks.
Comment: Accepted in FL-ICML 2020 (International Workshop on Federated
Learning for User Privacy and Data Confidentiality in Conjunction with ICML
2020). Added FAQ to the end of the paper
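The failout idea, node-level dropout that mimics a physical node going down during training, can be sketched minimally as below. This is a simplified reading of the abstract: the paper's exact mechanics (per-node failure rates, any rescaling, and the interaction with skip hyperconnections) may differ, and the function name and signature are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def failout(node_output, survival_prob=0.9, training=True):
    """Zero an entire node's output with probability 1 - survival_prob
    during training, simulating a physical node failure (node-granularity
    dropout). At inference time the output passes through unchanged."""
    if training and rng.random() > survival_prob:
        return np.zeros_like(node_output)
    return node_output
```

Unlike standard dropout, which zeroes individual activations independently, failout zeroes a node's whole output vector at once, so downstream nodes are trained to cope with exactly the failure pattern they will see when a physical node crashes.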