Modeling a teacher in a tutorial-like system using Learning Automata
The goal of this paper is to present a novel approach to model the behavior of a Teacher in a Tutorial-like system. In this model, the Teacher is capable of presenting teaching material from a Socratic-type Domain model via multiple-choice questions. Since this knowledge is stored in the Domain model in chapters with different levels of complexity, the Teacher is able to present learning material of varying degrees of difficulty to the Students. In our model, we propose that the Teacher will be able to assist the Students to learn the more difficult material. In order to achieve this, the Teacher provides them with hints that are related to the difficulty of the learning material presented. This enables the Students to handle more complex knowledge and to learn it appropriately. To our knowledge, the findings of this study are novel to the field of intelligent adaptation using Learning Automata (LA). The novelty lies in the fact that the learning system has a strategy by which it can deal with increasingly more complex/difficult Environments (or domains from which the learning has to be achieved). In our approach, the convergence of the Student models (represented by LA) is driven not only by the response of the Environment (Teacher), but also by the hints that are provided by the latter. Our proposed Teacher model has been tested against different benchmark Environments, and the results of these simulations have demonstrated the salient aspects of our model. The main conclusion is that Normal and Below-Normal learners benefited significantly from the hints provided by the Teacher, while the benefits to (brilliant) Fast learners were marginal. This seems to be in line with our subjective understanding of the behavior of real-life Students.
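To make the abstract concrete, the following is a minimal sketch of a variable-structure learning automaton using a linear reward-inaction (L_RI) update against a stochastic multiple-choice Environment. The hint vector and its blending weight are illustrative assumptions for how a Teacher's hint might bias convergence; they are not the paper's actual scheme, and the reward probabilities are hypothetical benchmark values.

```python
import numpy as np

# Sketch of a linear reward-inaction (L_RI) learning automaton.
# The Environment rewards action i (a "correct answer") with probability
# reward_prob[i]. The `hint` vector that nudges the probabilities toward
# favoured actions is an illustrative assumption, not the paper's scheme.

def lri_step(p, action, rewarded, rate=0.05, hint=None, hint_weight=0.0):
    """One L_RI update of the action-probability vector p."""
    if rewarded:  # reward: move probability mass toward the chosen action
        p = p + rate * (np.eye(len(p))[action] - p)
    # inaction on penalty: p is left unchanged
    if hint is not None:  # optional bias toward hinted actions (assumption)
        p = (1 - hint_weight) * p + hint_weight * hint
    return p / p.sum()

rng = np.random.default_rng(0)
reward_prob = np.array([0.2, 0.4, 0.8, 0.3])  # hypothetical Environment
hint = np.array([0.1, 0.1, 0.7, 0.1])         # hypothetical Teacher hint
p = np.full(4, 0.25)                          # uninformed Student model

for _ in range(2000):
    a = rng.choice(4, p=p)
    p = lri_step(p, a, rng.random() < reward_prob[a], hint=hint, hint_weight=0.01)

print(p.round(3))  # probability mass concentrates on the best action
```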
Designing and comparing multiple portfolios of parameter configurations for online algorithm selection
National Research Foundation (NRF) Singapore under its International Research Centres in Singapore Funding Initiative
Learning in Networked Interactions: A Replicator Dynamics Approach
Many real-world scenarios can be modelled as multi-agent systems, where multiple autonomous decision makers interact in a single environment. The complex and dynamic nature of such interactions prevents hand-crafting solutions for all possible scenarios, hence learning is crucial. Studying the dynamics of multi-agent learning is imperative in selecting and tuning the right learning algorithm for the task at hand. So far, analysis of these dynamics has been mainly limited to normal form games, or unstructured populations. However, many multi-agent systems are highly structured, complex networks, with agents only interacting locally. Here, we study the dynamics of such networked interactions, using the well-known replicator dynamics of evolutionary game theory as a model for learning. Different learning algorithms are modelled by altering the replicator equations slightly. In particular, we investigate lenience as an enabler for cooperation. Moreover, we show how well-connected, stubborn agents can influence the learning outcome. Finally, we investigate the impact of structural network properties on the learning outcome, as well as the influence of mutation driven by exploration.
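As a point of reference for the abstract, here is a minimal sketch of two-strategy replicator dynamics with a simple uniform mutation term standing in for exploration. The Stag Hunt payoff matrix, the mutation rate, and the Euler step size are illustrative choices; the networked and lenient variants analysed in the paper are not reproduced here.

```python
import numpy as np

# Sketch of two-strategy replicator dynamics with a uniform mutation term
# modelling exploration. Payoffs (a Stag Hunt) and the mutation rate are
# illustrative; the paper's networked/lenient variants are not shown.

A = np.array([[4.0, 1.0],   # payoff to Stag vs. (Stag, Hare)
              [3.0, 3.0]])  # payoff to Hare vs. (Stag, Hare)

def replicator_mutator_step(x, dt=0.01, mu=0.01):
    """Euler step of dx_i/dt = x_i (f_i - f_avg) + mu (1/n - x_i)."""
    f = A @ x                       # fitness of each pure strategy vs. mix x
    f_avg = x @ f                   # average population fitness
    dx = x * (f - f_avg)            # selection (replicator) term
    dx += mu * (1.0 / len(x) - x)   # uniform mutation toward all strategies
    x = x + dt * dx
    return x / x.sum()

x = np.array([0.3, 0.7])            # initial fractions playing Stag, Hare
for _ in range(5000):
    x = replicator_mutator_step(x)

print(x.round(3))                   # long-run strategy distribution
```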
- …