Diversity-driven selection of exploration strategies in multi-armed bandits
We consider a scenario where an agent has multiple available strategies to explore an unknown environment. For each new interaction with the environment, the agent must select which exploration strategy to use. We provide a new strategy-agnostic method that treats the situation as a multi-armed bandit problem where the reward signal is the diversity of effects that each strategy produces. We test the method empirically on a simulated planar robotic arm, and establish both that the method is able to discriminate between strategies of dissimilar quality, even when the differences are tenuous, and that the resulting performance is competitive with the best fixed mixture of strategies.
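As a minimal sketch of the idea, a standard UCB1 bandit can arbitrate between exploration strategies when the reward is a diversity score rather than task performance. The `diversity_reward` function below is a hypothetical stand-in for the paper's diversity measure (here, novelty of a scalar effect), not the authors' exact definition:

```python
import math

def select_strategy(counts, rewards, t):
    """UCB1 over exploration strategies: play each strategy once,
    then pick the one with the highest mean reward plus an
    exploration bonus. The reward here is diversity, not task score."""
    for i, n in enumerate(counts):
        if n == 0:
            return i  # try every strategy once first
    return max(range(len(counts)),
               key=lambda i: rewards[i] / counts[i]
                             + math.sqrt(2 * math.log(t) / counts[i]))

def diversity_reward(effect, past_effects, radius=0.1):
    """Hypothetical diversity signal: 1 if the produced (scalar)
    effect lies farther than `radius` from every effect seen so far,
    else 0. A simplified stand-in for the paper's measure."""
    return float(all(abs(effect - e) > radius for e in past_effects))
```

Run in a loop, a strategy that keeps producing novel effects accumulates more reward and is selected more often than one that repeats itself.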
Sustainable Cooperative Coevolution with a Multi-Armed Bandit
This paper proposes a self-adaptation mechanism to manage the resources
allocated to the different species comprising a cooperative coevolutionary
algorithm. The proposed approach relies on a dynamic extension to the
well-known multi-armed bandit framework. At each iteration, the dynamic
multi-armed bandit makes a decision on which species to evolve for a
generation, using the history of progress made by the different species to
guide the decisions. We show experimentally, on a benchmark and a real-world
problem, that evolving the different populations at different paces not only
identifies solutions more rapidly, but also improves the capacity of
cooperative coevolution to solve more complex problems. Comment: Accepted at GECCO 201
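One way to realize such a dynamic bandit is a sliding-window UCB, where each arm is a species and the reward is the fitness improvement observed after evolving that species for one generation; the window keeps the bandit responsive when the species making progress changes. This is an illustrative sketch under those assumptions, not the paper's exact mechanism:

```python
import math
from collections import deque

class DynamicMAB:
    """Sliding-window UCB sketch: arms are species in a cooperative
    coevolutionary algorithm; reward is recent fitness improvement."""

    def __init__(self, n_species, window=20):
        # Only the last `window` rewards per species are kept,
        # so old progress does not dominate the decision.
        self.hist = [deque(maxlen=window) for _ in range(n_species)]
        self.t = 0

    def select(self):
        """Pick the species to evolve for the next generation."""
        self.t += 1
        for i, h in enumerate(self.hist):
            if not h:
                return i  # evolve each species at least once
        return max(range(len(self.hist)),
                   key=lambda i: sum(self.hist[i]) / len(self.hist[i])
                                 + math.sqrt(2 * math.log(self.t)
                                             / len(self.hist[i])))

    def update(self, species, improvement):
        """Record the fitness improvement the chosen species produced."""
        self.hist[species].append(improvement)
```

A species whose recent generations yield larger improvements gets evolved more often, which is the "different paces" behavior the abstract describes.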
Reinforcement Learning and Bandits for Speech and Language Processing: Tutorial, Review and Outlook
In recent years, reinforcement learning and bandits have transformed a wide
range of real-world applications, including healthcare, finance, recommendation
systems, robotics, and, last but not least, speech and natural language
processing. While most speech and language applications of reinforcement
learning algorithms are centered around improving the training of deep neural
networks thanks to their flexible optimization properties, there is still much
ground to explore in utilizing other benefits of reinforcement learning, such as
its reward-driven adaptability, state representations, temporal structures and
generalizability. In this survey, we present an overview of recent advancements
of reinforcement learning and bandits, and discuss how they can be effectively
employed to solve speech and natural language processing problems with models
that are adaptive, interactive and scalable. Comment: To appear in Expert
Systems with Applications. Accompanying INTERSPEECH 2022 Tutorial on the same
topic. Including latest advancements in large language models (LLMs).
Autonomous Drug Design with Multi-Armed Bandits
Recent developments in artificial intelligence and automation support a new
drug design paradigm: autonomous drug design. Under this paradigm, generative
models can provide suggestions on thousands of molecules with specific
properties, and automated laboratories can potentially make, test and analyze
molecules with minimal human supervision. However, since only a limited
number of molecules can be synthesized and tested, an obvious challenge is how
to efficiently select among the provided suggestions in a closed-loop system. We
formulate this task as a stochastic multi-armed bandit problem with multiple
plays, volatile arms and similarity information. To solve this task, we adapt
previous work on multi-armed bandits to this setting, and compare our solution
with random sampling, greedy selection and decaying-epsilon-greedy selection
strategies. According to our simulation results, our approach has the potential
to perform better exploration and exploitation of the chemical space for
autonomous drug design.
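The multiple-plays aspect can be sketched as picking, each round, the k arms (candidate molecules) with the highest upper confidence bounds; the volatile-arms and similarity-information aspects of the paper's setting are omitted here for brevity, so this is an assumption-laden simplification, not the authors' algorithm:

```python
import math

def select_batch(mean, counts, t, k):
    """UCB with multiple plays: return the indices of the k candidate
    molecules with the highest upper confidence bounds for the next
    synthesis round. Unseen arms get an infinite bound, so every
    candidate is tried before any is repeated."""
    def ucb(i):
        if counts[i] == 0:
            return float("inf")
        return mean[i] + math.sqrt(2 * math.log(t) / counts[i])
    return sorted(range(len(mean)), key=ucb, reverse=True)[:k]
```

For example, with one untested molecule and three tested ones, the batch combines the untested arm (exploration) with the best tested arm (exploitation).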
Multi-Armed Bandits for Intelligent Tutoring Systems
We present an approach to Intelligent Tutoring Systems which adaptively
personalizes sequences of learning activities to maximize skills acquired by
students, taking into account the limited time and motivational resources. At a
given point in time, the system proposes to the students the activity which
makes them progress fastest. We introduce two algorithms that rely on the
empirical estimation of learning progress: RiARiT, which uses information
about the difficulty of each exercise, and ZPDES, which uses much less
knowledge about the problem.
The system is based on the combination of three approaches. First, it
leverages recent models of intrinsically motivated learning by transposing them
to active teaching, relying on empirical estimation of learning progress
provided by specific activities to particular students. Second, it uses
state-of-the-art Multi-Armed Bandit (MAB) techniques to efficiently manage the
exploration/exploitation challenge of this optimization process. Third, it
leverages expert knowledge to constrain and bootstrap initial exploration of
the MAB, while requiring only coarse guidance information from the expert and
allowing the system to deal with didactic gaps in its knowledge. The system is
evaluated in a scenario where 7-8 year old schoolchildren learn how to
decompose numbers while manipulating money. Systematic experiments are
presented with simulated students, followed by results of a user study across a
population of 400 schoolchildren.
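The empirical estimation of learning progress that both algorithms rely on can be sketched as the recent trend in a student's success rate. The estimator and the soft-greedy choice below are simplified, hypothetical versions of what ZPDES-style systems use, not the published algorithms:

```python
import random

def learning_progress(recent):
    """Empirical learning progress: success rate on the newer half of
    recent outcomes (1 = correct, 0 = incorrect) minus the success
    rate on the older half. Positive means the student is improving
    on this activity."""
    h = len(recent) // 2
    older, newer = recent[:h], recent[h:]
    return sum(newer) / len(newer) - sum(older) / len(older)

def choose_activity(progress, eps=0.2):
    """Soft-greedy MAB step: usually propose the activity with the
    highest estimated learning progress, occasionally explore
    uniformly to keep the estimates fresh."""
    if random.random() < eps:
        return random.randrange(len(progress))
    return max(range(len(progress)), key=lambda i: progress[i])
```

An activity already mastered (all successes) or far too hard (all failures) yields zero progress, so the bandit steers toward exercises where the student is actually improving.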
Beyond A/B Testing: Sequential Randomization for Developing Interventions in Scaled Digital Learning Environments
Randomized experiments ensure robust causal inference, which is critical to
effective learning analytics research and practice. However, traditional
randomized experiments, like A/B tests, are limited in large-scale digital
learning environments. While traditional experiments can accurately compare two
treatment options, they are less able to inform how to adapt interventions to
continually meet learners' diverse needs. In this work, we introduce a trial
design for developing adaptive interventions in scaled digital learning
environments -- the sequential randomized trial (SRT). With the goal of
improving learner experience and developing interventions that benefit all
learners at all times, SRTs inform how to sequence, time, and personalize
interventions. In this paper, we provide an overview of SRTs, and we illustrate
the advantages they hold compared to traditional experiments. We describe a
novel SRT run in a large-scale data science MOOC. The trial results
contextualize how learner engagement can be addressed through inclusive
culturally targeted reminder emails. We also provide practical advice for
researchers who aim to run their own SRTs to develop adaptive interventions in
scaled digital learning environments.
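The core difference from a one-shot A/B test can be sketched as re-randomizing each learner at every time period, which is what lets an SRT inform sequencing and timing of interventions. This illustrative assignment scheme is an assumption, not the design of the MOOC trial itself:

```python
import random

def sequential_assignment(learners, periods, arms, seed=0):
    """Sequential randomized trial sketch: unlike an A/B test, which
    randomizes each learner once, every learner is re-randomized to
    an intervention arm at every time period, producing a sequence
    of conditions per learner."""
    rng = random.Random(seed)  # fixed seed for a reproducible schedule
    return {learner: [rng.choice(arms) for _ in range(periods)]
            for learner in learners}
```

Each learner's resulting sequence of arms (e.g., which weeks they receive a reminder email) is what allows effects of intervention ordering and timing to be compared, not just the effect of a single assignment.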