On a Formal Model of Safe and Scalable Self-driving Cars
In recent years, car makers and tech companies have been racing towards
self-driving cars. It seems that the main parameter in this race is who will
have
the first car on the road. The goal of this paper is to add to the equation two
additional crucial parameters. The first is standardization of safety assurance
--- what are the minimal requirements that every self-driving car must satisfy,
and how can we verify these requirements. The second parameter is scalability
--- engineering solutions that lead to unleashed costs will not scale to
millions of cars, which will push interest in this field into a niche academic
corner, and drive the entire field into a "winter of autonomous driving". In
the first part of the paper we propose a white-box, interpretable, mathematical
model for safety assurance, which we call Responsibility-Sensitive Safety
(RSS). In the second part we describe a design of a system that adheres to our
safety assurance requirements and is scalable to millions of cars.
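The "minimal requirements" the abstract refers to take the form of closed-form safe-distance rules in RSS. As an illustration, the published RSS longitudinal-safety definition can be sketched as below; the function name and the default parameter values (response time, acceleration and braking rates) are assumptions chosen for the example, not the authors' code:

```python
# Illustrative sketch of the RSS minimum safe longitudinal distance:
# a rear vehicle at speed v_r is safe behind a front vehicle at speed v_f
# if, even after reacting with delay rho while accelerating at a_max_accel
# and then braking at only b_min_brake, it cannot hit a front vehicle that
# brakes at up to b_max_brake. All defaults here are assumed example values.

def rss_safe_distance(v_r, v_f, rho=1.0, a_max_accel=3.0,
                      b_min_brake=4.0, b_max_brake=8.0):
    """Minimum safe following distance in metres (speeds in m/s)."""
    d = (v_r * rho
         + 0.5 * a_max_accel * rho ** 2
         + (v_r + rho * a_max_accel) ** 2 / (2 * b_min_brake)
         - v_f ** 2 / (2 * b_max_brake))
    return max(0.0, d)

print(rss_safe_distance(v_r=20.0, v_f=20.0))  # -> 62.625
```

A runtime monitor would compare the actual gap against this bound at every control cycle and trigger a proper braking response when the bound is violated.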
Monitoring and Diagnosability of Perception Systems
Perception is a critical component of high-integrity applications of robotics
and autonomous systems, such as self-driving cars. In these applications,
failure of perception systems may put human life at risk, and a broad adoption
of these technologies relies on the development of methodologies to guarantee
and monitor safe operation as well as detect and mitigate failures. Despite the
paramount importance of perception systems, currently there is no formal
approach for system-level monitoring. In this work, we propose a mathematical
model for runtime monitoring and fault detection of perception systems. Towards
this goal, we draw connections with the literature on self-diagnosability for
multiprocessor systems, and generalize it to (i) account for modules with
heterogeneous outputs, and (ii) add a temporal dimension to the problem, which
is crucial to model realistic perception systems where modules interact over
time. This contribution results in a graph-theoretic approach that, given a
perception system, is able to detect faults at runtime and allows computing an
upper bound on the number of faulty modules that can be detected. Our second
contribution is to show that the proposed monitoring approach can be elegantly
described with the language of topos theory, which allows formulating
diagnosability over arbitrary time intervals.
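The multiprocessor self-diagnosability literature the abstract builds on is the classical PMC test model: modules test one another, and a syndrome of test outcomes is decoded into a fault set. A minimal brute-force sketch of that baseline follows; the module names and outcomes are invented for illustration, and the paper's extensions (heterogeneous outputs, the temporal dimension, the topos-theoretic formulation) are not captured here:

```python
from itertools import combinations

def consistent(fault_set, tests):
    # tests: dict (tester, testee) -> outcome (0 = "looks fine", 1 = "looks faulty").
    # PMC rule: a fault-free tester reports the testee's true status;
    # a faulty tester may report anything (so its outcomes are ignored).
    for (u, v), outcome in tests.items():
        if u not in fault_set and outcome != (1 if v in fault_set else 0):
            return False
    return True

def diagnose(modules, tests, t):
    """Return the smallest fault set of size <= t consistent with the syndrome."""
    for k in range(t + 1):
        for cand in combinations(modules, k):
            if consistent(set(cand), tests):
                return set(cand)
    return None  # syndrome not explainable by <= t faults

modules = ["lidar", "camera", "fusion", "tracker"]
tests = {("lidar", "camera"): 1, ("camera", "lidar"): 1,
         ("fusion", "camera"): 1, ("fusion", "lidar"): 0,
         ("tracker", "fusion"): 0, ("fusion", "tracker"): 0}
print(diagnose(modules, tests, t=1))  # -> {'camera'}
```

The bound t plays the role of the abstract's upper bound on the number of detectable faulty modules: syndromes caused by more than t simultaneous faults may be mis-decoded.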
A Roadmap Towards Resilient Internet of Things for Cyber-Physical Systems
The Internet of Things (IoT) is a ubiquitous system connecting many different
devices - the things - which can be accessed remotely. Cyber-physical systems
(CPS) monitor and control the things from a distance. As a result, the
concepts of dependability and security become deeply intertwined.
The increasing level of dynamicity, heterogeneity, and complexity adds to the
system's vulnerability, and challenges its ability to react to faults. This
paper summarizes the state of the art of existing work on anomaly detection,
fault tolerance, and self-healing, and adds a number of other methods
applicable to achieving resilience in an IoT. We particularly focus on
non-intrusive methods ensuring data integrity in the network. Furthermore,
this paper presents the main challenges in building a resilient IoT for CPS,
which is crucial in the era of smart CPS with enhanced connectivity (an
excellent example of such a system is connected autonomous vehicles). It
further summarizes our solutions, work in progress, and future work on this
topic to enable "Trustworthy IoT for CPS". Finally, this framework is
illustrated on a selected use case: a smart sensor infrastructure in the
transport domain.
Comment: preprint (2018-10-29)
Infrastructure Enabled Autonomy: A Distributed Intelligence Architecture for Autonomous Vehicles
Multiple studies have illustrated the potential for dramatic societal,
environmental and economic benefits from significant penetration of autonomous
driving. However, all the current approaches to autonomous driving require the
automotive manufacturers to shoulder the primary responsibility and liability
associated with replacing human perception and decision making with automation,
potentially slowing the penetration of autonomous vehicles, and consequently
slowing the realization of the societal benefits of autonomous vehicles. We
propose here a new approach to autonomous driving that will re-balance the
responsibility and liabilities associated with autonomous driving between
traditional automotive manufacturers, infrastructure players, and third-party
players. Our proposed distributed intelligence architecture leverages the
significant advancements in connectivity and edge computing in the recent
decades to partition the driving functions between the vehicle, edge computers
on the road side, and specialized third-party computers that reside in the
vehicle. Infrastructure becomes a critical enabler for autonomy. With this
Infrastructure Enabled Autonomy (IEA) concept, the traditional automotive
manufacturers will only need to shoulder responsibility and liability
comparable to what they already do today, and the infrastructure and
third-party players will share the added responsibility and liabilities
associated with autonomous functionalities. We propose a Bayesian Network Model
based framework for assessing the risk benefits of such a distributed
intelligence architecture. An additional benefit of the proposed architecture
is that it enables "autonomy as a service" while still allowing for private
ownership of automobiles.
Comment: submitted to the IEEE Intelligent Vehicles Symposium 201
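A Bayesian network for such risk assessment could, in a heavily simplified two-layer form, look like the following. Every probability here is an invented placeholder, not a value from the paper; the point is only to show how marginalizing over failure states yields an overall mishap risk:

```python
from itertools import product

# Hypothetical conditional probabilities (all numbers illustrative).
p_vehicle_fail = 0.01      # on-board perception fails
p_edge_fail = 0.005        # roadside edge perception fails
p_mishap = {               # P(mishap | vehicle_fail, edge_fail)
    (False, False): 1e-6,
    (False, True): 1e-5,
    (True, False): 1e-4,   # edge layer catches most on-board failures
    (True, True): 0.1,     # both layers down
}

def risk():
    """Marginal mishap probability, enumerating the two parent nodes."""
    total = 0.0
    for vf, ef in product([False, True], repeat=2):
        pv = p_vehicle_fail if vf else 1 - p_vehicle_fail
        pe = p_edge_fail if ef else 1 - p_edge_fail
        total += pv * pe * p_mishap[(vf, ef)]
    return total

print(f"{risk():.3e}")
```

Re-running the same enumeration with the edge layer disabled (its failure probability set to 1) would quantify how much risk the infrastructure layer absorbs, which is exactly the kind of comparison a risk-benefit assessment of IEA needs.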
Towards Assume-Guarantee Profiles for Autonomous Vehicles
Rules or specifications for autonomous vehicles are currently formulated on a case-by-case basis and put together in a rather ad-hoc fashion. As a step towards eliminating this practice, we propose a systematic procedure for generating a set of supervisory specifications for self-driving cars that are 1) associated with a distributed assume-guarantee structure and 2) characterizable by the notions of consistency and completeness. Besides helping autonomous vehicles make better decisions on the road, the assume-guarantee contract structure also helps address the notion of blame when undesirable events occur. We give several game-theoretic examples to demonstrate the applicability of our framework.
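To make the assume-guarantee structure concrete: a contract can be modeled as an (assumption, guarantee) predicate pair, and when something goes wrong, the agent whose guarantee failed while its assumption held is the one to blame. The encoding below is a minimal sketch with invented predicates and world facts, not the paper's formalism:

```python
# Minimal sketch of assume-guarantee contracts as predicate pairs.
# Contract names, predicates, and the `world` dict are illustrative.

class Contract:
    def __init__(self, name, assume, guarantee):
        self.name, self.assume, self.guarantee = name, assume, guarantee

    def satisfied(self, world):
        # A contract is honoured if, whenever its assumption holds,
        # its guarantee holds too (vacuously satisfied otherwise).
        return (not self.assume(world)) or self.guarantee(world)

# Two vehicles at an intersection; `world` records observed facts.
ego = Contract("ego",
               assume=lambda w: w["other_yields"],
               guarantee=lambda w: w["ego_clears_intersection"])
other = Contract("other",
                 assume=lambda w: not w["ego_has_right_of_way"],
                 guarantee=lambda w: w["other_yields"])

world = {"ego_has_right_of_way": True, "other_yields": True,
         "ego_clears_intersection": True}
blamed = [c.name for c in (ego, other) if not c.satisfied(world)]
print(blamed)  # -> [] : no contract violated, no one to blame
```

Consistency and completeness of a profile of such contracts would then be checked over all reachable worlds rather than a single one, which is where the game-theoretic analysis enters.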
DeepSafe: A Data-driven Approach for Checking Adversarial Robustness in Neural Networks
Deep neural networks have become widely used, obtaining remarkable results in
domains such as computer vision, speech recognition, natural language
processing, audio recognition, social network filtering, machine translation,
and bio-informatics, where they have produced results comparable to human
experts. However, these networks can be easily fooled by adversarial
perturbations: minimal changes to correctly classified inputs that cause the
network to misclassify them. This phenomenon represents a concern for both
safety and security, but it is currently unclear how to measure a network's
robustness against such perturbations. Existing techniques are limited to
checking robustness around a few individual input points, providing only very
limited guarantees. We propose a novel approach for automatically identifying
safe regions of the input space, within which the network is robust against
adversarial perturbations. The approach is data-guided, relying on clustering
to identify well-defined geometric regions as candidate safe regions. We then
utilize verification techniques to confirm that these regions are safe or to
provide counter-examples showing that they are not safe. We also introduce the
notion of targeted robustness, which, for a given target label and region,
ensures that the network does not map any input in the region to the target
label. We
evaluated our technique on the MNIST dataset and on a neural network
implementation of a controller for the next-generation Airborne Collision
Avoidance System for unmanned aircraft (ACAS Xu). For these networks, our
approach identified multiple regions which were completely safe as well as some
which were only safe for specific labels. It also discovered several
adversarial perturbations of interest.
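The data-guided first step, proposing candidate safe regions from clusters of correctly classified inputs, can be sketched as follows. The 2-D toy points and the half-distance radius heuristic are illustrative assumptions; in the approach described above, such candidates are then handed to a formal verifier for confirmation or a counter-example:

```python
import math

# Correctly classified 2-D inputs, grouped by label (toy data).
points = {
    "A": [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)],
    "B": [(5.0, 5.0), (6.0, 5.0), (5.0, 6.0)],
}

def centroid(pts):
    return tuple(sum(coord) / len(pts) for coord in zip(*pts))

def candidate_region(label):
    """Ball around the label's centroid, with radius set to half the
    distance to the nearest point of any other label (a simple heuristic
    so the candidate stays away from other classes)."""
    c = centroid(points[label])
    nearest_other = min(math.dist(c, p)
                        for lab, pts in points.items() if lab != label
                        for p in pts)
    return c, nearest_other / 2

c, r = candidate_region("A")
print(c, round(r, 2))
```

Verification then either proves every input in the ball keeps label "A" (a safe region), proves it only for some target labels (targeted robustness), or returns an adversarial perturbation inside the ball.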
A Right-of-Way Based Strategy to Implement Safe and Efficient Driving at Non-Signalized Intersections for Automated Vehicles
The non-signalized intersection is a typical and common scenario for
connected and automated vehicles (CAVs). How to balance safety and efficiency
remains
difficult for researchers. To improve the original Responsibility Sensitive
Safety (RSS) driving strategy on the non-signalized intersection, we propose a
new strategy in this paper, based on right-of-way assignment (RWA). The
performances of the RSS strategy, the cooperative driving strategy, and the
RWA-based strategy are tested and compared. Testing results indicate that our
strategy yields better traffic efficiency than the RSS strategy, though not
as high as that of the cooperative driving strategy, due to the limited
communication range and the lack of long-term planning. However, our new
strategy incurs much lower communication costs among vehicles.
Comment: 6 pages, 7 figures
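A right-of-way assignment of this flavor can be sketched as first-come-first-served batching over a table of conflicting movements: a vehicle may enter together with the current batch only if its movement conflicts with nothing in it, and no vehicle overtakes an earlier arrival. The conflict table and vehicle names below are invented for illustration:

```python
# Crossing movements conflict; same or parallel movements do not (assumed).
conflicts = {frozenset(("N-S", "E-W"))}

def schedule(requests):
    """requests: (vehicle, movement, arrival_time) tuples.
    Returns batches of vehicles that may enter the conflict zone together,
    in first-come-first-served order (no overtaking of earlier arrivals)."""
    order = sorted(requests, key=lambda r: r[2])
    batches = []
    for veh, mov, _ in order:
        if batches and all(frozenset((mov, m)) not in conflicts
                           for _, m in batches[-1]):
            batches[-1].append((veh, mov))
        else:
            batches.append([(veh, mov)])
    return [[veh for veh, _ in batch] for batch in batches]

reqs = [("car1", "N-S", 0.8), ("car2", "E-W", 1.2), ("car3", "N-S", 1.5)]
print(schedule(reqs))  # -> [['car1'], ['car2'], ['car3']]
```

Note how car3 waits behind car2 even though it could have crossed with car1: the no-overtaking rule trades some efficiency for a simple, locally checkable right-of-way order, consistent with the efficiency gap relative to fully cooperative planning reported above.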
Teaching AI, Ethics, Law and Policy
Cyberspace and the development of intelligent systems using Artificial
Intelligence (AI) create new challenges for computer professionals, data
scientists, regulators, and policy makers. For example, self-driving cars
raise new technical, ethical, legal, and public policy issues. This paper
proposes a course named Computers, Ethics, Law, and Public Policy, and
suggests a curriculum for such a course. It presents ethical, legal, and
public policy issues relevant to building and using intelligent systems.
Comment: 15 pages
Generating Comfortable, Safe and Comprehensible Trajectories for Automated Vehicles in Mixed Traffic
While motion planning approaches for automated driving often focus on safety
and mathematical optimality with respect to technical parameters, they barely
consider convenience, perceived safety for the passenger and comprehensibility
for other traffic participants. For automated driving in mixed traffic,
however, this is key to reaching public acceptance. In this paper, we revise
the problem statement of motion planning in mixed traffic: instead of largely
simplifying the motion planning problem to a convex optimization problem, we
keep a more complex probabilistic multi-agent model and strive for a
near-optimal solution. We assume cooperation of other traffic participants,
while remaining aware of possible violations of this assumption. This
approach yields solutions that are provably safe in all situations, and
convenient and comprehensible in situations that are also unambiguous for
humans. Thus, it outperforms existing approaches in mixed traffic scenarios,
as we show in simulation.
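One way to read the revised problem statement is that a candidate trajectory is scored not only on safety but also on comfort and on how predictable it looks to others. The cost function below is an illustrative stand-in with made-up weights, not the paper's objective:

```python
# Illustrative trajectory cost mixing the three concerns named above:
# safety (clearance to others), comfort (squared jerk), and
# comprehensibility (deviation from the path others would expect).
# All weights and thresholds are assumed example values.

def trajectory_cost(clearances, accels, expected_path, path, dt=0.1,
                    w_safe=10.0, w_comfort=1.0, w_compr=2.0):
    # Penalise clearances below an assumed 2 m comfort margin.
    safety = sum(max(0.0, 2.0 - c) ** 2 for c in clearances)
    # Jerk = finite difference of acceleration samples.
    jerks = [(a2 - a1) / dt for a1, a2 in zip(accels, accels[1:])]
    comfort = sum(j ** 2 for j in jerks) * dt
    # Deviation from the behaviour other road users would expect.
    compr = sum((p - e) ** 2 for p, e in zip(path, expected_path))
    return w_safe * safety + w_comfort * comfort + w_compr * compr

smooth = trajectory_cost([3.0, 3.0, 3.0], [0.0, 0.0, 0.0],
                         [0.0, 0.0, 0.0], [0.0, 0.0, 0.0])
jerky = trajectory_cost([1.5, 3.0, 3.0], [0.0, 2.0, 0.0],
                        [0.0, 0.0, 0.0], [0.0, 0.5, 0.0])
print(smooth < jerky)  # -> True: the smooth, expected trajectory wins
```

A planner in the spirit of the approach above would search this non-convex cost over a probabilistic model of the other agents instead of convexifying it away.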
An architecture for distributed ledger-based M2M auditing for Electric Autonomous Vehicles
Electric Autonomous Vehicles (EAVs) promise to be an effective way to solve
transportation issues such as accidents, emissions, and congestion, and aim
at establishing the foundation of the Machine-to-Machine (M2M) economy. For
this to be possible, the market should be able to offer appropriate charging
services without involving humans. The state-of-the-art mechanisms of
charging and billing do not meet this requirement, and often impose service
fees for value transactions that may also endanger users and their location
privacy. This paper aims at filling this gap and envisions a new charging
architecture and a billing framework for EAVs that would enable M2M
transactions via the use of Distributed Ledger Technology (DLT).
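A minimal stand-in for the envisioned ledger is an append-only hash chain of charging transactions, under the assumption that tamper-evidence (editing any entry invalidates every later hash) is the property of interest; the field names are invented, and a real DLT would add distribution and consensus on top:

```python
import hashlib
import json

def add_entry(chain, payload):
    """Append a charging transaction, chaining it to the previous hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    entry = {"prev": prev, "payload": payload}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    chain.append(entry)
    return chain

def verify(chain):
    """Recompute every hash; any tampered payload or broken link fails."""
    prev = "0" * 64
    for e in chain:
        body = {"prev": e["prev"], "payload": e["payload"]}
        if e["prev"] != prev or e["hash"] != hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest():
            return False
        prev = e["hash"]
    return True

chain = []
add_entry(chain, {"vehicle": "EAV-7", "kWh": 12.4, "price_tokens": 31})
add_entry(chain, {"vehicle": "EAV-7", "kWh": 3.1, "price_tokens": 8})
print(verify(chain))  # -> True
```

Because auditing only needs the hashes, a charging station and a vehicle can settle M2M without a human intermediary, and either party can later prove that the billed amounts were not altered after the fact.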