    Learning Multi-Modal Self-Awareness Models Empowered by Active Inference for Autonomous Vehicles

    For autonomous agents to coexist with the real world, it is essential to anticipate the dynamics and interactions in their surroundings. Autonomous agents can use models of the human brain to learn to respond to the actions of other participants in the environment and to proactively coordinate with its dynamics. Modeling brain learning procedures is challenging for multiple reasons, such as stochasticity, multi-modality, and unobservable intents. A long-neglected problem is understanding and processing environmental perception data drawn from multisensory information, at the cognitive-psychology level of human brain processing. The key to solving this problem is to construct a computing model with selective attention and self-learning ability for autonomous driving, one that possesses mechanisms for memorizing, inferring, and experiential updating, enabling it to cope with changes in the external world. A practical self-driving approach should therefore be open to more than the traditional computing structure of perception, planning, decision-making, and control. It is necessary to explore a probabilistic framework aligned with the human brain's attention, reasoning, learning, and decision-making mechanisms for interactive behavior, and to build an intelligent system inspired by biological intelligence. This thesis presents a multi-modal self-awareness module for autonomous driving systems. The proposed techniques are evaluated on their ability to model proper driving behavior in dynamic environments, which is vital in autonomous driving for both action planning and safe navigation. First, the thesis adapts generative incremental learning to the problem of imitation learning. It extends the imitation learning framework to the multi-agent setting, where observations gathered from multiple agents inform the training of a learning agent that tracks a dynamic target. Since driving has associated rules, the second part of the thesis introduces a method to provide optimal knowledge to the imitation learning agent through an active inference approach. Active inference here means selectively gathering information during prediction to increase a predictive machine learning model's performance. Finally, to address inference complexity and the exploration-exploitation dilemma in unobserved environments, an exploring action-oriented model is introduced that brings together imitation learning and active inference methods inspired by the brain's learning procedure.
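    The abstract does not include an implementation, but the expected-free-energy idea at the core of active inference can be sketched in a few lines. Below is a minimal, illustrative Python toy (a two-state, two-observation world; the matrices `A`, `B` and the preference vector are invented for illustration, not taken from the thesis): each candidate action is scored by risk plus ambiguity, and the lowest-scoring action is selected.

```python
import numpy as np

# Minimal discrete active-inference action selection (illustrative toy model).
# Each candidate action is scored by its expected free energy: risk (divergence
# of predicted observations from preferred ones) plus ambiguity (expected
# uncertainty of the likelihood mapping); the minimum-scoring action is chosen.

A = np.array([[0.9, 0.1],   # p(o|s): observation likelihood per hidden state
              [0.1, 0.9]])
B = {  # p(s'|s, a): transition model for each action (hypothetical)
    "stay": np.array([[1.0, 0.0], [0.0, 1.0]]),
    "move": np.array([[0.1, 0.9], [0.9, 0.1]]),
}
log_C = np.log(np.array([0.95, 0.05]))  # log prior preference over observations
q_s = np.array([0.5, 0.5])              # current belief over hidden states

def expected_free_energy(action):
    q_next = B[action] @ q_s            # predicted state belief after action
    q_o = A @ q_next                    # predicted observation distribution
    risk = np.sum(q_o * (np.log(q_o + 1e-16) - log_C))   # KL(q_o || C)
    H_A = -np.sum(A * np.log(A + 1e-16), axis=0)          # entropy per state
    ambiguity = H_A @ q_next
    return risk + ambiguity

G = {a: expected_free_energy(a) for a in B}
print(G, "-> choose:", min(G, key=G.get))
```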

    Game Theoretic Decision Making by Actively Learning Human Intentions Applied on Autonomous Driving

    The ability to estimate human intentions and interact with human drivers intelligently is crucial for autonomous vehicles to achieve their objectives. In this paper, we propose a game-theoretic planning algorithm that models human opponents with an iterative reasoning framework and estimates human latent cognitive states through probabilistic inference and active learning. By modeling the interaction as a partially observable Markov decision process with adaptive state and action spaces, our algorithm accomplishes real-time lane-changing tasks in a realistic driving simulator. We compare our algorithm's lane-changing performance in dense traffic with a state-of-the-art autonomous lane-changing algorithm to show the advantage of iterative reasoning and active learning in avoiding overly conservative behaviors and achieving the driving objective.
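    As a rough illustration of the belief-update ingredient of such an approach, the sketch below (all action labels and likelihoods are hypothetical, not from the paper) maintains a Bayesian belief over a human driver's latent reasoning level and updates it from observed actions:

```python
# Toy Bayesian inference over a human driver's latent reasoning level
# (the "iterative reasoning" ingredient, heavily simplified). Each level-k
# model predicts a distribution over the human's next action; observing
# actions updates the belief over k by Bayes' rule.

likelihood = {  # hypothetical per-level predicted action distributions
    0: {"yield": 0.2, "keep_speed": 0.6, "accelerate": 0.2},  # non-strategic
    1: {"yield": 0.6, "keep_speed": 0.3, "accelerate": 0.1},  # anticipates us
}
belief = {0: 0.5, 1: 0.5}  # uniform prior over reasoning levels

def update(belief, observed_action):
    posterior = {k: belief[k] * likelihood[k][observed_action] for k in belief}
    z = sum(posterior.values())
    return {k: p / z for k, p in posterior.items()}

for obs in ["yield", "yield", "keep_speed"]:
    belief = update(belief, obs)
    print(obs, "->", {k: round(p, 3) for k, p in belief.items()})
```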

    An integrated approach of learning, planning, and execution

    Agents (hardware or software) that act autonomously in an environment have to be able to integrate three basic behaviors: planning, execution, and learning. This integration is mandatory when the agent has no knowledge about how its actions affect the environment, how the environment reacts to its actions, or when it does not receive the goals it must achieve as an explicit input. Without an a priori theory, autonomous agents should be able to self-propose goals, set up plans for achieving those goals according to previously learned models of the agent and the environment, and learn those models from past experiences of successful and failed plan executions. Planning involves selecting a goal to reach and computing a set of actions that will allow the autonomous agent to achieve it. Execution deals with the interaction with the environment by applying planned actions, observing the resulting perceptions, and controlling the successful achievement of the goals. Learning is needed to predict the reactions of the environment to the agent's actions, thus guiding the agent to achieve its goals more efficiently. In this context, most learning systems applied to problem solving have been used to learn control knowledge for guiding the search for a plan, but few systems have focused on acquiring planning operator descriptions. Currently, one of the most widely used techniques for integrating (a form of) planning, execution, and learning is reinforcement learning; however, such systems usually do not consider the representation of action descriptions, so they cannot reason in terms of goals and ways of achieving them. In this paper, we present an integrated architecture, LOPE, that learns operator definitions, plans using those operators, and executes the plans to modify the acquired operators. The resulting system is domain-independent, and we have performed experiments in a robotic framework. The results clearly show that the integrated planning, learning, and executing system outperforms the basic planner in that domain.
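    A minimal sketch of the plan-execute-learn idea, assuming a STRIPS-like state representation (the `induce`/`refine` rules below are an illustrative simplification, not the actual LOPE algorithm): a first operator hypothesis is induced from one observed transition, then its preconditions are generalized from further successful executions.

```python
# Sketch of learning STRIPS-like operator descriptions from execution traces,
# in the spirit of a plan -> execute -> refine loop. The set-difference
# induction rule and all literals are illustrative.

from dataclasses import dataclass, field

@dataclass
class Operator:
    name: str
    preconditions: set = field(default_factory=set)
    add_effects: set = field(default_factory=set)
    del_effects: set = field(default_factory=set)

def induce(name, state_before, state_after):
    """Build a first operator hypothesis from one observed transition."""
    return Operator(
        name=name,
        preconditions=set(state_before),
        add_effects=set(state_after) - set(state_before),
        del_effects=set(state_before) - set(state_after),
    )

def refine(op, state_before, succeeded):
    """Generalize: drop precondition literals absent in a successful execution."""
    if succeeded:
        op.preconditions &= set(state_before)

before = {"at(door)", "door_closed", "battery_ok"}
after = {"at(door)", "door_open", "battery_ok"}
op = induce("open_door", before, after)
refine(op, {"at(door)", "door_closed"}, succeeded=True)  # battery_ok incidental
print(op)
```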

    Learning Action Models as Reactive Behaviors

    Autonomous vehicles will require both projective planning and reactive components in order to perform robustly. Projective components are needed for long-term planning and replanning, where explicit reasoning about future states is required. Reactive components allow the system to always have some action available in real time and can themselves exhibit robust behavior, but they lack the ability to reason explicitly about future states over a long time period. This work addresses the problem of learning reactive components (normative action models) for autonomous vehicles from simulation models. Two main thrusts of our current work are described here. First, we wish to show that behaviors learned from simulation are useful in the actual physical system operating in the real world. Second, in order to scale the technique, we demonstrate how behaviors can be built up by first learning lower-level behaviors and then fixing these to use as base components of higher-level behaviors.
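    A toy sketch of the layered composition the abstract describes, with invented driving rules (the behaviors and thresholds are hypothetical, not taken from the paper): lower-level behaviors are learned first and then frozen as the action primitives of a higher-level behavior.

```python
# Illustrative layered reactive control: frozen low-level behaviors serve
# as the primitives that a higher-level behavior selects among.

def follow_lane(state):                 # frozen low-level behavior
    return "steer_left" if state["lane_offset"] > 0 else "steer_right"

def avoid_obstacle(state):              # frozen low-level behavior
    return "brake" if state["obstacle_dist"] < 5.0 else "coast"

def drive(state):
    """Higher-level behavior built on the fixed lower-level ones."""
    if state["obstacle_dist"] < 10.0:   # safety-critical context wins
        return avoid_obstacle(state)
    return follow_lane(state)

print(drive({"lane_offset": 0.3, "obstacle_dist": 50.0}))  # steer_left
print(drive({"lane_offset": 0.3, "obstacle_dist": 4.0}))   # brake
```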

    DiMSam: Diffusion Models as Samplers for Task and Motion Planning under Partial Observability

    Task and Motion Planning (TAMP) approaches are effective at planning long-horizon autonomous robot manipulation. However, because they require a planning model, it can be difficult to apply them to domains where the environment and its dynamics are not fully known. We propose to overcome these limitations by leveraging deep generative modeling, specifically diffusion models, to learn constraints and samplers that capture these difficult-to-engineer aspects of the planning model. These learned samplers are composed and combined within a TAMP solver to jointly find action parameter values that satisfy the constraints along a plan. To tractably make predictions for unseen objects in the environment, we define these samplers on low-dimensional learned latent embeddings of changing object state. We evaluate our approach in an articulated object manipulation domain and show how the combination of classical TAMP, generative learning, and latent embeddings enables long-horizon constraint-based reasoning.
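    A rough sketch of the compositional idea, with a fixed Gaussian annealing loop standing in for the trained diffusion model (everything below, including the constraint, is illustrative rather than the paper's implementation): the solver draws parameter proposals from the learned sampler and keeps those that satisfy a plan-level constraint.

```python
import numpy as np

# Illustrative stand-in for a learned sampler used inside a TAMP solver.
# The real approach trains diffusion models over latent object-state
# embeddings; here a toy annealing loop toward a fixed mode plays that role.

rng = np.random.default_rng(0)

def learned_sample(n, dim=2, steps=20):
    """Toy reverse-diffusion-style loop: anneal noise toward a learned mode."""
    x = rng.normal(size=(n, dim))
    mode = np.array([0.6, 0.2])          # stand-in for the learned distribution
    for t in range(steps):
        alpha = (t + 1) / steps
        x = (1 - alpha) * x + alpha * mode + 0.05 * rng.normal(size=x.shape)
    return x

def satisfies_constraint(params):
    """Hypothetical plan-level constraint, e.g. parameters in a reachable set."""
    return np.linalg.norm(params - np.array([0.5, 0.0])) < 0.5

# The solver composes samplers: draw proposals, keep constraint-satisfying ones.
proposals = learned_sample(n=64)
feasible = [p for p in proposals if satisfies_constraint(p)]
print(f"{len(feasible)}/64 proposals satisfy the constraint; first:",
      feasible[0] if feasible else None)
```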

    Sampling-Based Nonlinear MPC of Neural Network Dynamics with Application to Autonomous Vehicle Motion Planning

    Control of machine learning models has emerged as an important paradigm for a broad range of robotics applications. In this paper, we present a sampling-based nonlinear model predictive control (NMPC) approach for the control of neural network dynamics. We show its design in two parts: 1) formulating conventional optimization-based NMPC as a Bayesian state estimation problem, and 2) using particle filtering/smoothing to perform the estimation. Through a principled sampling-based implementation, this approach can potentially search the control action space effectively for optimal control and ease the computational challenges caused by neural network dynamics. We apply the proposed NMPC approach to motion planning for autonomous vehicles, considering nonlinear unknown vehicle dynamics modeled as neural networks as well as dynamic on-road driving scenarios. The approach shows significant effectiveness in successful motion planning in case studies. Comment: To appear in the 2022 American Control Conference (ACC).
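    A toy sketch of the sampling-based view (the dynamics function, cost, and exponential weighting below are illustrative stand-ins, not the paper's formulation): control sequences are treated as particles, rolled through the learned dynamics, and weighted by exp(-cost), the likelihood-style weighting that the Bayesian-estimation view of NMPC suggests.

```python
import numpy as np

# Illustrative sampling-based NMPC: sample control sequences, roll them
# through a (stand-in) neural dynamics model, weight by exp(-cost), and
# take the weighted mean sequence as the control estimate.

rng = np.random.default_rng(1)
horizon, n_particles = 10, 256
target = 1.0                                    # desired state

def dynamics(x, u):
    """Stand-in for a learned neural network model of the vehicle."""
    return x + 0.1 * np.tanh(u)

U = rng.normal(scale=1.0, size=(n_particles, horizon))  # sampled control seqs
costs = np.zeros(n_particles)
for i in range(n_particles):
    x = 0.0
    for u in U[i]:
        x = dynamics(x, u)
        costs[i] += (x - target) ** 2 + 0.01 * u ** 2   # tracking + effort
lam = 1.0                                        # temperature of the weighting
w = np.exp(-(costs - costs.min()) / lam)
w /= w.sum()
u_star = w @ U                                   # weighted mean control sequence
print("first control to apply:", u_star[0])
```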