End-to-End deep neural network architectures for speed and steering wheel angle prediction in autonomous driving
The complex decision-making systems used in autonomous vehicles or advanced driver-assistance systems (ADAS) are being replaced by end-to-end (e2e) architectures based on deep neural networks (DNNs). DNNs can learn complex driving actions from datasets containing thousands of
images and data obtained from the vehicle perception system. This work presents the classification,
design and implementation of six e2e architectures capable of generating the driving actions of speed
and steering wheel angle directly on the vehicle control elements. The work details the design stages
and optimization process of the convolutional networks to develop six e2e architectures. In the
metric analysis, the architectures have been tested with different data sources from the vehicle, such
as images, XYZ accelerations and XYZ angular speeds. The best results were obtained with a mixed
data e2e architecture that used front images from the vehicle and angular speeds to predict the speed
and steering wheel angle with a mean error of 1.06%. An exhaustive optimization process of the
convolutional blocks has demonstrated that it is possible to design lightweight, high-performance e2e architectures
that are more suitable for the final implementation in autonomous driving.
This work was partially supported by the DGT (ref. SPIP2017-02286) and GenoVision
(ref. BFU2017-88300-C2-2-R) Spanish Government projects, and the "Research Programme for Groups
of Scientific Excellence in the Region of Murcia" of the Seneca Foundation (Agency for Science and
Technology in the Region of Murcia—19895/GERM/15).
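As a rough illustration of the mixed-data idea described above, the sketch below fuses a flattened front-camera image with XYZ angular speeds to regress speed and steering wheel angle. All sizes, weights, and layer choices are hypothetical (dense layers stand in for the paper's convolutional blocks), and the weights are random rather than trained.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(x, w, b):
    """Fully connected layer with ReLU activation."""
    return np.maximum(0.0, x @ w + b)

# Hypothetical sizes: a small grayscale front-camera image and 3-axis angular speeds.
IMG_FEATURES = 64 * 48      # flattened toy-resolution image
GYRO_FEATURES = 3           # XYZ angular speeds

# Randomly initialised weights stand in for a trained network.
w_img, b_img = rng.normal(size=(IMG_FEATURES, 32)) * 0.01, np.zeros(32)
w_gyro, b_gyro = rng.normal(size=(GYRO_FEATURES, 8)) * 0.1, np.zeros(8)
w_out, b_out = rng.normal(size=(32 + 8, 2)) * 0.1, np.zeros(2)

def predict(image, angular_speeds):
    """Fuse both input branches and regress (speed, steering wheel angle)."""
    img_code = dense(image.ravel(), w_img, b_img)        # image branch
    gyro_code = dense(angular_speeds, w_gyro, b_gyro)    # angular-speed branch
    fused = np.concatenate([img_code, gyro_code])        # mixed-data fusion
    return fused @ w_out + b_out                         # linear regression head

speed, steering = predict(rng.random((48, 64)), rng.random(3))
```

The fusion-by-concatenation step is the essential feature of a mixed-data architecture: each modality is encoded separately before a shared head produces both driving actions.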
Probabilistic Hybrid Action Models for Predicting Concurrent Percept-driven Robot Behavior
This article develops Probabilistic Hybrid Action Models (PHAMs), a realistic
causal model for predicting the behavior generated by modern percept-driven
robot plans. PHAMs represent aspects of robot behavior that cannot be
represented by most action models used in AI planning: the temporal structure
of continuous control processes, their non-deterministic effects, several modes
of their interferences, and the achievement of triggering conditions in
closed-loop robot plans.
The main contributions of this article are: (1) PHAMs, a model of concurrent
percept-driven behavior, its formalization, and proofs that the model generates
probably qualitatively accurate predictions; and (2) a resource-efficient
inference method for PHAMs based on sampling projections from probabilistic
action models and state descriptions. We show how PHAMs can be applied to
planning the course of action of an autonomous robot office courier based on
analytical and experimental results.
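The sampling-based projection idea can be illustrated with a toy Monte Carlo sketch: repeatedly sample the continuous durations of concurrent control and perception processes and check whether a triggering condition is achieved. The scenario, distributions, and parameters below are invented for illustration and are not taken from the article.

```python
import random

random.seed(1)

# Hypothetical model: a navigation step takes a continuous, uncertain amount
# of time; the percept-driven trigger "door detected" fires only if the robot
# passes the door while a concurrent camera process happens to be active.
def sample_projection():
    """Sample one projected execution of a concurrent, percept-driven plan."""
    travel_time = random.gauss(30.0, 5.0)          # continuous control process
    camera_up_at = random.uniform(0.0, 10.0)       # concurrent perception process
    camera_down_at = camera_up_at + random.expovariate(1 / 40.0)
    door_passed_at = travel_time * random.uniform(0.4, 0.6)
    # The triggering condition is achieved only inside the camera's window.
    return camera_up_at <= door_passed_at <= camera_down_at

def estimate_success(n=10_000):
    """Monte Carlo estimate of the probability the trigger is achieved."""
    return sum(sample_projection() for _ in range(n)) / n

p = estimate_success()
```

Each call to `sample_projection` plays the role of one projected timeline; aggregating many samples yields a probabilistic prediction of plan behavior without enumerating all interleavings.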
Plan Projection, Execution, and Learning for Mobile Robot Control
Most state-of-the-art hybrid control systems for mobile robots are decomposed into different layers. While the deliberation layer reasons about the actions required for the robot in order to achieve a given goal, the behavioral layer is designed to enable the robot to quickly react to unforeseen events. This decomposition guarantees a safe operation even in the presence of unforeseen and dynamic obstacles and enables the robot to cope with situations it was not explicitly programmed for.

The layered design, however, also leaves us with the problem of plan execution: arbitrating between the deliberation and the behavioral layer. Abstract symbolic actions have to be translated into streams of local control commands, and, simultaneously, execution failures have to be handled on an appropriate level of abstraction. It is now widely accepted that plan execution should form a third layer of a hybrid robot control system. The resulting layered architectures are called three-tiered architectures, or 3T architectures for short. Although many high-level programming frameworks have been proposed to support the implementation of the intermediate layer, there is no generally accepted algorithmic basis for plan execution in three-tiered architectures.

In this thesis, we propose to base plan execution on plan projection and learning, and present a general framework for the self-supervised improvement of plan execution. This framework has been implemented in APPEAL, an Architecture for Plan Projection, Execution And Learning, which extends the well-known RHINO control system by introducing an execution layer. This thesis contributes to the field of plan-based mobile robot control, which investigates the interrelation between planning, reasoning, and learning techniques based on an explicit representation of the robot's intended course of action: a plan.
In McDermott's terminology, a plan is that part of a robot control program which the robot can not only execute, but also reason about and manipulate. According to that broad view, a plan may serve many purposes in a robot control system, such as reasoning about future behavior, revising intended activities, or learning. In this thesis, plan-based control is applied to the self-supervised improvement of mobile robot plan execution.
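A minimal sketch of the project-execute-learn loop described above, under invented assumptions: the robot chooses among candidate execution methods for a symbolic action by projecting their success rates, executes one, and folds the observed outcome back into its projection model. The action names, probabilities, and the simple frequency-based learner are all hypothetical stand-ins.

```python
import random

random.seed(2)

# Hypothetical scenario: two candidate ways to translate the symbolic action
# "deliver letter" into low-level control, with unknown success probabilities
# that the robot can only learn by experience.
TRUE_SUCCESS = {"corridor-route": 0.6, "hallway-route": 0.85}
stats = {m: [1, 2] for m in TRUE_SUCCESS}   # [successes, trials] per method

def project(method):
    """Projection: predict the success rate from experience gathered so far."""
    successes, trials = stats[method]
    return successes / trials

def execute(method):
    """Simulated execution of the chosen low-level behaviour."""
    return random.random() < TRUE_SUCCESS[method]

for episode in range(500):
    # Mostly exploit the method with the best projected outcome, but keep
    # exploring occasionally so the projections stay informative.
    if random.random() < 0.1:
        method = random.choice(list(stats))
    else:
        method = max(stats, key=project)
    outcome = execute(method)
    # Learning step: fold the observed outcome back into the projection model.
    stats[method][0] += outcome
    stats[method][1] += 1

best = max(stats, key=project)
```

The loop is self-supervised in the sense used above: the robot's own execution outcomes, not external labels, drive the improvement of its execution-method selection.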