Symmetries, groups and groupoids for Systems of Systems
In this paper we propose an algebraic model of systems based on the concept of symmetry that can be instrumental in representing the two main characteristics of Systems of Systems, namely complexity and (hierarchical) emergence.
Reinforcement learning in large state action spaces
Reinforcement learning (RL) is a promising framework for training intelligent agents which learn to optimize long-term utility by directly interacting with the environment. Creating RL methods which scale to large state-action spaces is a critical problem towards ensuring real-world deployment of RL systems. However, several challenges limit the applicability of RL to large-scale settings. These include difficulties with exploration, low sample efficiency, computational intractability, task constraints like decentralization, and a lack of guarantees about important properties like performance, generalization, and robustness in potentially unseen scenarios.
This thesis is motivated by bridging the aforementioned gap. We propose several principled algorithms and frameworks for studying and addressing the above challenges in RL. The proposed methods cover a wide range of RL settings (single- and multi-agent systems (MAS) with all the variations in the latter, prediction and control, model-based and model-free methods, value-based and policy-based methods). In this work we present the first results on several different problems, e.g. tensorization of the Bellman equation, which allows exponential sample-efficiency gains (Chapter 4); provable suboptimality arising from structural constraints in MAS (Chapter 3); combinatorial generalization results in cooperative MAS (Chapter 5); generalization results on observation shifts (Chapter 7); and learning deterministic policies in a probabilistic RL framework (Chapter 6). Our algorithms exhibit provably enhanced performance and sample efficiency along with better scalability. Additionally, we shed light on generalization aspects of the agents under different frameworks. These properties are driven by the use of several advanced tools (e.g. statistical machine learning, state abstraction, variational inference, tensor theory).
In summary, the contributions in this thesis significantly advance progress towards making RL agents ready for large-scale, real-world applications.
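The Bellman equation mentioned in the abstract is the fixed-point condition underlying value-based RL. As a minimal illustration (the toy two-state MDP below is hypothetical and not taken from the thesis), tabular value iteration repeatedly applies the Bellman optimality update until convergence:

```python
# Tabular value iteration on a toy 2-state, 2-action MDP.
# The MDP is illustrative only; all numbers are made up.

GAMMA = 0.9  # discount factor

# P[s][a] = list of (probability, next_state, reward) transitions
P = {
    0: {0: [(1.0, 0, 0.0)], 1: [(1.0, 1, 1.0)]},
    1: {0: [(1.0, 0, 0.0)], 1: [(1.0, 1, 2.0)]},
}

def value_iteration(P, gamma=GAMMA, tol=1e-9):
    V = {s: 0.0 for s in P}
    while True:
        delta = 0.0
        for s in P:
            # Bellman optimality update: V(s) = max_a E[r + gamma * V(s')]
            q = [sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])
                 for a in P[s]]
            v_new = max(q)
            delta = max(delta, abs(v_new - V[s]))
            V[s] = v_new
        if delta < tol:
            return V

V = value_iteration(P)
# Here V(1) = 2 / (1 - 0.9) = 20 and V(0) = 1 + 0.9 * 20 = 19.
```

In large state-action spaces this table becomes intractable, which is exactly the regime the thesis targets with tensorized and abstracted representations.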
Brain-Inspired Computational Intelligence via Predictive Coding
Artificial intelligence (AI) is rapidly becoming one of the key technologies
of this century. The majority of results in AI thus far have been achieved
using deep neural networks trained with the error backpropagation learning
algorithm. However, the ubiquitous adoption of this approach has highlighted
some important limitations such as substantial computational cost, difficulty
in quantifying uncertainty, lack of robustness, unreliability, and biological
implausibility. It is possible that addressing these limitations may require
schemes that are inspired and guided by neuroscience theories. One such theory,
called predictive coding (PC), has shown promising performance in machine
intelligence tasks, exhibiting exciting properties that make it potentially
valuable for the machine learning community: PC can model information
processing in different brain areas, can be used in cognitive control and
robotics, and has a solid mathematical grounding in variational inference,
offering a powerful inversion scheme for a specific class of continuous-state
generative models. With the hope of foregrounding research in this direction,
we survey the literature that has contributed to this perspective, highlighting
the many ways that PC might play a role in the future of machine learning and
computational intelligence at large.
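The core mechanism the survey describes is inference by prediction-error minimization: a latent estimate is nudged until sensory and prior prediction errors balance. A minimal one-variable sketch (toy Gaussian generative model; parameters and numbers are illustrative, not from the paper):

```python
# Minimal predictive-coding inference for one latent variable x
# generating an observation y via y ≈ w * x, with a Gaussian prior on x.
# Inference is gradient descent on the squared prediction errors
# (the variational free energy of this toy model).

def pc_infer(y, prior=0.0, w=1.0, sigma_y=1.0, sigma_x=1.0,
             lr=0.1, steps=200):
    x = prior  # initialize the latent at its prior mean
    for _ in range(steps):
        eps_y = (y - w * x) / sigma_y   # sensory prediction error
        eps_x = (x - prior) / sigma_x   # prior prediction error
        x += lr * (w * eps_y - eps_x)   # error-driven update of x
    return x

# With equal precisions, inference settles halfway between prior (0)
# and observation (2), i.e. at x = 1.
x_hat = pc_infer(y=2.0)
```

The same error-driven local updates, stacked into hierarchies, give the brain-area models and learning schemes discussed in the survey.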
Variational Methods for Evolution (hybrid meeting)
Variational principles for evolutionary systems take advantage of the rich toolbox provided by the calculus of variations. Such principles are available for Hamiltonian systems in classical mechanics and for gradient flows of dissipative systems, but also as time-incremental minimization techniques for more general evolutionary problems. New challenges arise via the interplay of two or more functionals (e.g. a free energy and a dissipation potential) and via new structures (systems with nonlocal transport, gradient flows on graphs, kinetic equations, systems of equations), thus encompassing a large variety of applications in the modeling of materials and fluids, in biology, in multi-agent systems, and in data science.
This workshop brought together a broad spectrum of researchers from
calculus of variations, partial differential equations, metric
geometry, and stochastics, as well as applied and computational
scientists to discuss and exchange ideas. It focused on variational
tools such as minimizing movement schemes,
optimal transport, gradient flows, and large-deviation principles for
time-continuous Markov processes, Γ-convergence and homogenization
- …