Learning Multi-Modal Self-Awareness Models Empowered by Active Inference for Autonomous Vehicles
For autonomous agents to coexist with the real world, it is essential to anticipate the dynamics and interactions in their surroundings. Autonomous agents can use models of the human brain to learn to respond to the actions of other participants in the environment and to coordinate proactively with its dynamics. Modeling brain learning procedures is challenging for several reasons, including stochasticity, multi-modality, and unobservable intents. A long-neglected problem is understanding and processing environmental perception data from multisensory information at the cognitive-psychology level of human brain processing. The key to solving this problem is to construct a computational model for autonomous driving with selective attention and self-learning ability, one that possesses mechanisms for memorizing, inferring, and updating from experience, enabling it to cope with changes in the external world. A practical self-driving approach should therefore be open to more than the traditional computing structure of perception, planning, decision-making, and control. It is necessary to explore a probabilistic framework that follows the human brain's attention, reasoning, learning, and decision-making mechanisms for interactive behavior, and to build an intelligent system inspired by biological intelligence.
This thesis presents a multi-modal self-awareness module for autonomous driving systems. The techniques proposed in this research are evaluated on their ability to model proper driving behavior in dynamic environments, which is vital in autonomous driving for both action planning and safe navigation. First, this thesis adapts generative incremental learning to the problem of imitation learning. It extends the imitation learning framework to the multi-agent setting, where observations gathered from multiple agents inform the training process of a learning agent that tracks a dynamic target. Since driving has associated rules, the second part of this thesis introduces a method to provide optimal knowledge to the imitation learning agent through an active inference approach. Active inference here means selectively gathering information during prediction to improve a predictive machine learning model's performance. Finally, to address inference complexity and the exploration-exploitation dilemma in unobserved environments, an exploratory action-oriented model is introduced that brings together imitation learning and active inference methods inspired by brain learning procedures.
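The epistemic side of active inference, choosing the action whose observation is expected to reduce uncertainty the most, can be illustrated with a small sketch. This is a hypothetical toy (a one-dimensional search for a hidden target with a noisy sensor), not the thesis's actual model; all names and parameters here are illustrative.

```python
import math

# Toy setting: a target hides in one of n_states cells. The agent holds a
# belief p(s) over cells and may "look" at one cell per step; the sensor
# reports a hit with probability ACC if the target is there, and a false
# hit with probability 1 - ACC otherwise.
n_states = 5
ACC = 0.9

def entropy(p):
    # Shannon entropy of a discrete distribution (natural log).
    return -sum(x * math.log(x) for x in p if x > 0)

def obs_likelihood(action, obs):
    # p(obs | state, action) for every state, as a list over states.
    like = [ACC if s == action else 1.0 - ACC for s in range(n_states)]
    return like if obs == 1 else [1.0 - l for l in like]

def posterior(belief, action, obs):
    # Bayesian belief update after looking at `action` and seeing `obs`.
    post = [b * l for b, l in zip(belief, obs_likelihood(action, obs))]
    z = sum(post)
    return [p / z for p in post]

def expected_info_gain(belief, action):
    # Epistemic value: E_obs[H(belief) - H(posterior)] under the
    # predictive distribution p(obs | belief, action).
    gain = 0.0
    for obs in (0, 1):
        like = obs_likelihood(action, obs)
        p_obs = sum(b * l for b, l in zip(belief, like))
        if p_obs > 0.0:
            gain += p_obs * (entropy(belief) - entropy(posterior(belief, action, obs)))
    return gain

belief = [1.0 / n_states] * n_states  # uniform prior over the target cell
gains = [expected_info_gain(belief, a) for a in range(n_states)]
best_action = max(range(n_states), key=lambda a: gains[a])
```

Under a uniform prior every look is equally informative; once an observation updates the belief, some looks become more informative than others, which is the exploration signal behind the exploration-exploitation trade-off mentioned above. A full active-inference agent would add a pragmatic (goal-directed) value term to this epistemic one.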
Mención Internacional en el título de doctor (International Mention in the doctoral degree). Doctoral program: Programa de Doctorado en Ingeniería Eléctrica, Electrónica y Automática, Universidad Carlos III de Madrid. Committee: Chair: Marco Carli. Secretary: Víctor González Castro. Member: Nicola Conc
The Challenge of Believability in Video Games: Definitions, Agents Models and Imitation Learning
In this paper, we address the problem of creating believable agents (virtual characters) in video games. We consider only one meaning of believability, "giving the feeling of being controlled by a player", and outline the problem of its evaluation. We present several models for agents in games that can produce believable behaviours, both from industry and research. For a high level of believability, learning, and especially imitation learning, seems to be the way to go. We give a quick overview of different approaches for making video game agents learn from players. To conclude, we propose a two-step method for developing new models for believable agents. First, we must find the criteria for believability for our application and define an evaluation method. Then the model and the learning algorithm can be designed.