    Paradox Phenomena in Autonomously Self-Adapting Navigation


    Game Theory Models for the Verification of the Collective Behaviour of Autonomous Cars

    A collective of autonomous cars is expected to generate almost optimal traffic. In this position paper we discuss multi-agent models and verification results for the collective behaviour of autonomous cars. We argue that non-cooperative autonomous adaptation cannot guarantee optimal behaviour. Our conjecture is that intention-aware adaptation with a constraint on simultaneous decision making has the potential to avoid unwanted behaviour. The online routing game model is expected to be the basis for a formal proof of this conjecture. Comment: In Proceedings FVAV 2017, arXiv:1709.0212
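
    The claim that non-cooperative route adaptation can be suboptimal is well illustrated by Pigou's classic two-route congestion game. The sketch below is not taken from the paper's online routing game model; it is a standard textbook example (route names and cost functions are illustrative) comparing the selfish equilibrium with the system optimum.

```python
# Minimal sketch (illustrative, not the paper's online routing game model):
# Pigou's two-route example. Route A has fixed travel cost 1.0; route B's
# cost equals the fraction x of drivers using it. Selfish drivers all pick
# route B (its cost never exceeds 1), while the system optimum splits 50/50.

def average_cost(x_on_b: float) -> float:
    """Average travel cost when a fraction x_on_b of drivers uses route B."""
    cost_a, cost_b = 1.0, x_on_b
    return (1.0 - x_on_b) * cost_a + x_on_b * cost_b

# Nash equilibrium of non-cooperative adaptation: everyone on route B.
equilibrium_cost = average_cost(1.0)            # 1.0

# System optimum: minimise average cost over the split.
optimum_split = min((x / 100 for x in range(101)), key=average_cost)
optimum_cost = average_cost(optimum_split)      # 0.75 at a 50/50 split

print(f"selfish equilibrium cost: {equilibrium_cost:.2f}")
print(f"system optimum cost:      {optimum_cost:.2f} (split {optimum_split:.2f})")
```

    The gap between the two costs is the kind of inefficiency that intention-aware, coordinated adaptation is meant to close.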

    Solutions to the routing problem: towards trustworthy autonomous vehicles


    Adaptive tactical behaviour planner for autonomous ground vehicle

    The success of an autonomous vehicle in effectively replacing a human driver depends on its ability to plan safe, efficient and usable paths in dynamically evolving traffic scenarios. This challenge becomes more difficult when the autonomous vehicle has to drive through scenarios, such as intersections, that demand interactive behaviour for successful navigation. The many autonomous vehicle demonstrations over the last few decades have highlighted the limitations of the current state of the art in path planning solutions, which have been found to result in inefficient and sometimes unsafe behaviours when tackling interactively demanding scenarios. In this paper we review the current state of the art of path planning solutions, the individual planners and the associated methods for each planner. We then establish a gap in the path planning solutions by reviewing the methods against the objectives for successful path planning. A new adaptive tactical behaviour planner framework is then proposed to fill this gap. The behaviour planning framework is motivated by how expert human drivers plan their behaviours in interactive scenarios. The individual modules of the behaviour planner are then described, along with how each fits into the overall framework. Finally we discuss how this planner is expected to generate safe and efficient behaviours in complex dynamic traffic scenarios by considering the case of an un-signalised roundabout.
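
    As a hedged illustration of the kind of tactical decision such a planner faces at an un-signalised roundabout (this is not the authors' framework; the threshold and names below are invented), a simple gap-acceptance rule decides whether to merge based on the time gap to each circulating vehicle.

```python
# Minimal gap-acceptance sketch (illustrative only; not the proposed planner).
# Merge into the roundabout only if every circulating vehicle would take
# longer than a critical time gap to reach the entry point.
from dataclasses import dataclass


@dataclass
class CirculatingVehicle:
    distance_to_entry_m: float   # arc distance to our entry point
    speed_mps: float             # current speed


CRITICAL_GAP_S = 4.0             # invented threshold, tuned in practice


def time_to_entry(v: CirculatingVehicle) -> float:
    """Time until the circulating vehicle reaches the entry point."""
    return v.distance_to_entry_m / max(v.speed_mps, 0.1)


def should_merge(circulating: list[CirculatingVehicle]) -> bool:
    """Accept the gap only if all approaching vehicles are far enough away."""
    return all(time_to_entry(v) > CRITICAL_GAP_S for v in circulating)


if __name__ == "__main__":
    traffic = [CirculatingVehicle(30.0, 8.0), CirculatingVehicle(60.0, 10.0)]
    print("merge" if should_merge(traffic) else "yield")  # 30/8 = 3.75 s -> yield
```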

    A Research Platform for Autonomous Vehicles Technologies Research in the Insurance Sector

    This article belongs to the Special Issue on Intelligent Transportation Systems. This work presents a novel platform for autonomous vehicle technologies research for the insurance sector. The platform has been collaboratively developed by the insurance company MAPFRE-CESVIMAP, Universidad Carlos III de Madrid and INSIA of the Universidad Politécnica de Madrid. The high-level architecture and several autonomous vehicle technologies developed within the framework of this collaboration are introduced and described in this work. Computer vision technologies for environment perception, V2X communication capabilities, enhanced localization, human–machine interaction and self-awareness are among the technologies which have been developed and tested. Some use cases that validate the technologies presented in the platform are also described; these include public demonstrations, tests of the technologies and international competitions for self-driving technologies. Research was supported by the Spanish Government through the CICYT projects (TRA2016-78886-C3-1-R and RTI2018-096036-B-C21) and the Comunidad de Madrid through SEGVAUTO-4.0-CM (P2018/EMT-4362) and PEAVAUTO-CM-UC3M.

    Multimodal machine learning for intelligent mobility

    Scientific problems are solved by finding the optimal solution for a specific task. Some problems can be solved analytically, while others are solved using data-driven methods. The use of digital technologies to improve the transportation of people and goods, referred to as intelligent mobility, is one of the principal beneficiaries of data-driven solutions, and autonomous vehicles are at the heart of the developments that propel it. Due to the high dimensionality and complexity of real-world environments, data-driven solutions need to become commonplace in intelligent mobility, as it is near impossible to manually program decision-making logic for every eventuality. While recent developments such as deep learning allow machines to learn effectively from large datasets, applications of these techniques within safety-critical systems such as driverless cars remain scarce.

    Autonomous vehicles need to be able to make context-driven decisions autonomously in the different environments in which they operate. The recent literature on driverless vehicle research is heavily focused on road or highway environments and has largely discounted pedestrianized areas and indoor environments. These unstructured environments tend to have more clutter and change rapidly over time. Therefore, for intelligent mobility to make a significant impact on human life, it is vital to extend its application beyond structured environments. To further advance intelligent mobility, researchers need to take cues from multiple sensor streams and multiple machine learning algorithms so that decisions can be robust and reliable; only then will machines be able to operate safely in unstructured and dynamic environments. Towards addressing these limitations, this thesis investigates data-driven solutions for crucial building blocks of intelligent mobility, specifically multimodal sensor data fusion, machine learning, multimodal deep representation learning and their application to intelligent mobility. This work demonstrates that mobile robots can use multimodal machine learning to derive a driving policy and therefore make autonomous decisions.

    To facilitate the autonomous decisions necessary for safe driving algorithms, we present algorithms for free-space detection and human activity recognition. Driving these decision-making algorithms are datasets collected throughout this study: the Loughborough London Autonomous Vehicle dataset and the Loughborough London Human Activity Recognition dataset, collected using an autonomous platform designed and developed in house as part of this research activity. The proposed framework for free-space detection is based on an active learning paradigm that leverages the relative uncertainty of multimodal sensor data streams (ultrasound and camera), and it uses an online learning methodology to continuously update the learnt model whenever the vehicle experiences new environments. The proposed free-space detection algorithm thus enables an autonomous vehicle to self-learn, evolve and adapt to environments never encountered before. The results illustrate that the online learning mechanism is superior to one-off training of deep neural networks, which require large datasets to generalize to unfamiliar surroundings.

    The thesis takes the view that humans should be at the centre of any technological development related to artificial intelligence, and within intelligent mobility it is imperative that an autonomous vehicle is aware of what humans are doing in its vicinity. Towards improving the robustness of human activity recognition, this thesis proposes a novel algorithm that classifies point-cloud data originating from Light Detection and Ranging (LiDAR) sensors. The proposed algorithm leverages multimodality by using camera data to identify humans and segment the region of interest in the point cloud. The corresponding 3-dimensional data is converted to a Fisher Vector representation before being classified by a deep Convolutional Neural Network. The proposed algorithm classifies the indoor activities performed by a human subject with an average precision of 90.3% and, compared with an alternative point-cloud classifier, PointNet [1], [2], outperforms it on all classes. The developed autonomous testbed for data collection and algorithm validation, together with the multimodal data-driven solutions for driverless cars, are the major contributions of this thesis. It is anticipated that these results and the testbed will have significant implications for the future of intelligent mobility by amplifying the development of intelligent driverless vehicles.
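
    The multimodal pipeline described above can be sketched roughly as follows. This is my own simplification, not the thesis code: a camera detector supplies a 2D person bounding box, LiDAR points that project into that box are kept as the region of interest, and the ROI is encoded as a mean-gradient-only Fisher Vector over a pre-trained GMM before classification. The calibration matrices and the GMM below are placeholders.

```python
# Minimal sketch of camera-guided point-cloud ROI extraction plus a simplified
# Fisher Vector encoding (illustrative only; not the thesis implementation).
import numpy as np
from sklearn.mixture import GaussianMixture


def project_to_image(points_lidar, K, R, t):
    """Project Nx3 LiDAR points into the image plane; return pixels and depth."""
    cam = points_lidar @ R.T + t              # LiDAR frame -> camera frame
    z = cam[:, 2]
    pix = cam @ K.T
    pix = pix[:, :2] / pix[:, 2:3]            # perspective division
    return pix, z


def roi_from_bbox(points_lidar, bbox, K, R, t):
    """Keep points in front of the camera that fall inside the 2D bounding box."""
    u0, v0, u1, v1 = bbox
    pix, z = project_to_image(points_lidar, K, R, t)
    mask = (z > 0) & (pix[:, 0] >= u0) & (pix[:, 0] <= u1) \
                   & (pix[:, 1] >= v0) & (pix[:, 1] <= v1)
    return points_lidar[mask]


def fisher_vector_means(points, gmm: GaussianMixture):
    """Simplified Fisher Vector: gradients w.r.t. the GMM means only."""
    gamma = gmm.predict_proba(points)                       # N x K responsibilities
    diff = points[:, None, :] - gmm.means_[None, :, :]      # N x K x 3
    diff /= np.sqrt(gmm.covariances_)[None, :, :]           # diagonal covariances
    fv = (gamma[:, :, None] * diff).sum(axis=0)             # K x 3
    fv /= points.shape[0] * np.sqrt(gmm.weights_)[:, None]
    return fv.ravel()                                       # feed to a classifier


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cloud = rng.uniform(-5, 5, size=(2000, 3))              # fake LiDAR sweep
    K = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])
    R, t = np.eye(3), np.zeros(3)                           # assume aligned frames
    roi = roi_from_bbox(cloud, bbox=(200, 100, 440, 380), K=K, R=R, t=t)
    gmm = GaussianMixture(n_components=8, covariance_type="diag",
                          random_state=0).fit(cloud)
    print(fisher_vector_means(roi, gmm).shape)              # (24,) = 8 comps x 3 dims
```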

    Learning Multi-Modal Self-Awareness Models Empowered by Active Inference for Autonomous Vehicles

    For autonomous agents to coexist with the real world, it is essential to anticipate the dynamics and interactions in their surroundings. Autonomous agents can use models of the human brain to learn how to respond to the actions of other participants in the environment and proactively coordinate with its dynamics. Modeling brain learning procedures is challenging for multiple reasons, such as stochasticity, multi-modality, and unobservable intents. A long-neglected problem is understanding and processing environmental perception data from multisensory information at the cognitive-psychology level of human brain processing. The key to solving this problem is to construct a computing model for autonomous driving with selective attention and self-learning ability, possessing mechanisms for memorizing, inferring, and experiential updating that enable it to cope with changes in the external world. Therefore, a practical self-driving approach should be open to more than just the traditional computing structure of perception, planning, decision-making, and control. It is necessary to explore a probabilistic framework that follows the human brain's attention, reasoning, learning, and decision-making mechanisms for interactive behavior, and to build an intelligent system inspired by biological intelligence. This thesis presents a multi-modal self-awareness module for autonomous driving systems. The techniques proposed in this research are evaluated on their ability to model proper driving behavior in dynamic environments, which is vital in autonomous driving for both action planning and safe navigation. First, this thesis adapts generative incremental learning to the problem of imitation learning. It extends the imitation learning framework to the multi-agent setting, where observations gathered from multiple agents inform the training process of a learning agent that tracks a dynamic target. Since driving has associated rules, the second part of this thesis introduces a method to provide optimal knowledge to the imitation learning agent through an active inference approach. Active inference is the selective gathering of information during prediction to increase a predictive machine learning model's performance. Finally, to address the inference complexity and solve the exploration-exploitation dilemma in unobserved environments, an exploring action-oriented model is introduced by bringing together imitation learning and active inference methods inspired by the brain's learning procedure.
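
    To make the active inference idea concrete, the sketch below is a minimal, generic discrete-state example, not the thesis model; the transition and observation matrices and the lane-change action names are invented. Each action is scored by its expected free energy, the sum of risk (divergence of predicted observations from preferred ones) and ambiguity (expected observation uncertainty), and the agent picks the action with the lowest score.

```python
# Minimal active-inference sketch (illustrative, not the thesis implementation).
# Discrete world with 2 hidden states and 2 observations. For each action we
# compute expected free energy G = risk + ambiguity and pick the minimum.
import numpy as np

A = np.array([[0.9, 0.2],            # P(o | s): rows = observations, cols = states
              [0.1, 0.8]])
B = {                                # P(s' | s, action): invented transition models
    "keep_lane":   np.array([[0.95, 0.30],
                             [0.05, 0.70]]),
    "change_lane": np.array([[0.50, 0.60],
                             [0.50, 0.40]]),
}
C = np.array([0.9, 0.1])             # preferred observation distribution
q_s = np.array([0.5, 0.5])           # current belief over hidden states


def expected_free_energy(action: str) -> float:
    q_next = B[action] @ q_s                        # predicted state distribution
    q_obs = A @ q_next                              # predicted observation distribution
    risk = np.sum(q_obs * np.log(q_obs / C))        # KL(predicted obs || preferred obs)
    # Ambiguity: expected entropy of the observation likelihood P(o | s).
    obs_entropy = -np.sum(A * np.log(A), axis=0)    # H[P(o | s)] for each state
    ambiguity = float(q_next @ obs_entropy)
    return float(risk + ambiguity)


scores = {a: expected_free_energy(a) for a in B}
print(scores)
print("chosen action:", min(scores, key=scores.get))
```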

    Proceedings of the 9th Conference on Autonomous Robot Systems and Competitions

    Welcome to ROBOTICA 2009. This is the 9th edition of the Conference on Autonomous Robot Systems and Competitions, and the third with IEEE Robotics and Automation Society technical co-sponsorship. Previous editions have been held since 2001 in Guimarães, Aveiro, Porto, Lisboa, Coimbra and Algarve. ROBOTICA 2009 is held on 7 May 2009 in Castelo Branco, Portugal.

    ROBOTICA received 32 paper submissions from 10 countries in South America, Asia and Europe. Each submission received three reviews from the international program committee. 23 papers were published in the proceedings and presented at the conference: 14 were selected for oral presentation and 9 for poster presentation, giving a global acceptance ratio of 72%. After the conference, eight papers will be published in the Portuguese journal Robótica, and the best student paper will be published in the IEEE Multidisciplinary Engineering Education Magazine. Three prizes will be awarded at the conference: the best conference paper, the best student paper and the best presentation; the last two are sponsored by the IEEE Education Society Student Activities Committee.

    We would like to express our thanks to all participants. First of all to the authors, whose quality work is the essence of this conference; next, to all the members of the international program committee and reviewers, who helped us with their expertise and valuable time. We would also like to deeply thank the invited speaker, Jean Paul Laumond, LAAS-CNRS France, for his excellent contribution in the field of humanoid robots. Finally, a word of appreciation for the hard work of the secretariat and volunteers. Our deep gratitude goes to the scientific organisations that kindly agreed to sponsor the conference and made it come true. We look forward to seeing more results of R&D work on robotics at ROBOTICA 2010, somewhere in Portugal.