Distributed stochastic optimization via matrix exponential learning
In this paper, we investigate a distributed learning scheme for a broad class
of stochastic optimization problems and games that arise in signal processing
and wireless communications. The proposed algorithm relies on the method of
matrix exponential learning (MXL) and only requires locally computable gradient
observations that are possibly imperfect and/or obsolete. To analyze it, we
introduce the notion of a stable Nash equilibrium and show that the
algorithm converges globally to such equilibria, or locally when an
equilibrium is only locally stable. We also derive an explicit linear
bound for the algorithm's convergence speed, which remains valid under
measurement errors and uncertainty of arbitrarily high variance. To validate
our theoretical analysis, we test the algorithm in realistic
multi-carrier/multiple-antenna wireless scenarios where several users seek to
maximize their energy efficiency. Our results show that learning allows users
to attain a net increase between 100% and 500% in energy efficiency, even under
very high uncertainty.
Comment: 31 pages, 3 figures
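To make the update concrete, here is a minimal single-user sketch of a matrix exponential learning step, assuming a trace constraint tr(X) = P on the action matrix and a generic noisy-gradient oracle; the toy log-det utility and the names grad_oracle, gamma0, and H are illustrative assumptions, not the paper's energy-efficiency objective or code.

import numpy as np
from scipy.linalg import expm

def mxl(grad_oracle, dim, P, steps=200, gamma0=1.0):
    """Matrix exponential learning on {X >= 0 : tr(X) = P} (sketch).

    grad_oracle(X) returns a (possibly noisy) symmetric gradient estimate.
    Gradients are aggregated into a score matrix Y, which is mapped back
    to the feasible set via X = P * expm(Y) / tr(expm(Y)).
    """
    Y = np.zeros((dim, dim))
    X = P * np.eye(dim) / dim          # start from the uniform point
    for n in range(1, steps + 1):
        gamma = gamma0 / n             # decreasing steps absorb gradient noise
        Y = Y + gamma * grad_oracle(X)
        expY = expm(Y)
        X = P * expY / np.trace(expY)  # stays PSD with tr(X) = P by construction
    return X

# Toy usage: maximize log det(I + H X H^T) from noisy gradient observations.
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 4))

def grad_oracle(X):
    G = H.T @ np.linalg.inv(np.eye(4) + H @ X @ H.T) @ H  # exact gradient
    noise = rng.normal(size=G.shape)
    return G + 0.5 * (noise + noise.T)                    # symmetric noise

X_opt = mxl(grad_oracle, dim=4, P=1.0)

The exponential map is what keeps iterates feasible without projections: any symmetric score matrix Y yields a positive semidefinite X with the prescribed trace, which is one reason such schemes tolerate imperfect or stale gradients.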
Algorithms in nature: the convergence of systems biology and computational thinking
Biologists rely on computational methods to analyze and integrate large data sets, while several computational methods were inspired by the high-level design principles of biological systems. This Perspective discusses the recent convergence of these two ways of thinking.
AI of Brain and Cognitive Sciences: From the Perspective of First Principles
Nowadays, we have witnessed the great success of AI in various applications,
including image classification, game playing, protein structure analysis,
language translation, and content generation. Despite these powerful
applications, many everyday tasks that are simple for humans still pose
great challenges to AI. These include image and language understanding,
few-shot learning, abstract concepts, and low-energy computing. Thus,
learning from the brain remains a promising way to shed light on the
development of next-generation AI. The brain is arguably the only known
intelligent machine in the universe, the product of evolution for animals
surviving in natural environments. At the behavioral
level, psychology and cognitive sciences have demonstrated that human and
animal brains can execute very intelligent high-level cognitive functions. At
the structure level, cognitive and computational neurosciences have unveiled
that the brain has extremely complicated but elegant network forms to support
its functions. Over the years, researchers have been gathering knowledge
about the structure and functions of the brain, and this process has
accelerated recently with the initiation of large-scale brain projects
worldwide. Here, we argue that the general principles of brain function
are the most valuable source of inspiration for
the development of AI. These general principles are the standard rules by
which the brain extracts, represents, manipulates, and retrieves
information; here we call them the first principles of the brain. This
paper collects six such first principles: attractor networks, criticality,
random networks, sparse coding, relational memory, and perceptual learning.
For each topic, we review its biological background, fundamental
properties, potential applications to AI, and future directions.
Comment: 59 pages, 5 figures, review article
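As a concrete illustration of the first principle named above, here is a minimal sketch of a classic Hopfield-style attractor network; this is a standard textbook model chosen for illustration, not code or notation from the paper.

import numpy as np

def hopfield_recall(patterns, probe, steps=20):
    """Recall a stored pattern from a Hopfield-style attractor network.

    patterns : (n_patterns, n_units) array of +/-1 entries
    probe    : corrupted +/-1 cue; the asynchronous dynamics descend an
               energy function whose minima (attractors) sit near the
               stored patterns.
    """
    n = patterns.shape[1]
    W = (patterns.T @ patterns) / n      # Hebbian weight matrix
    np.fill_diagonal(W, 0.0)             # no self-connections
    s = probe.copy()
    for _ in range(steps):
        for i in np.random.permutation(n):        # asynchronous updates
            s[i] = 1.0 if W[i] @ s >= 0 else -1.0
    return s

# Toy usage: store two random patterns, then recover one from a noisy cue.
rng = np.random.default_rng(1)
pats = rng.choice([-1.0, 1.0], size=(2, 64))
cue = pats[0].copy()
cue[rng.choice(64, size=10, replace=False)] *= -1   # flip 10 of 64 bits
print(np.array_equal(hopfield_recall(pats, cue), pats[0]))  # usually True

Stored patterns act as fixed points of the dynamics: starting from a partially corrupted state, the network falls into the nearest attractor, which is the sense in which attractor networks implement robust content-addressable memory.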