
    How can exploratory learning with games and simulations within the curriculum be most effectively evaluated?

    There have been few attempts to introduce frameworks that help tutors evaluate which educational games and simulations will be most effective in their particular learning context and subject area. The lack of a dedicated framework has been a significant impediment to the uptake of games and simulations, particularly in formal learning contexts. This paper aims to address this shortcoming by introducing a four-dimensional framework for helping tutors to evaluate the potential of using games- and simulation-based learning in their practice, and to support more critical approaches to this form of learning. The four-dimensional framework is applied to two examples from practice to test its efficacy and to structure critical reflection upon practice.

    A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning

    We present a tutorial on Bayesian optimization, a method for finding the maximum of expensive cost functions. Bayesian optimization employs the Bayesian technique of setting a prior over the objective function and combining it with evidence to obtain a posterior function. This permits a utility-based selection of the next observation to make on the objective function, which must take into account both exploration (sampling from areas of high uncertainty) and exploitation (sampling areas likely to offer improvement over the current best observation). We also present two detailed extensions of Bayesian optimization, with experiments: active user modelling with preferences, and hierarchical reinforcement learning. We close with a discussion of the pros and cons of Bayesian optimization based on our experiences.
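    The loop described above can be made concrete in a few lines. Below is a minimal sketch, assuming a scikit-learn Gaussian-process surrogate, an expected-improvement acquisition function, and a grid search over candidate points; the 1-D objective and all hyperparameters are illustrative and not taken from the tutorial.

```python
# Minimal Bayesian-optimization sketch: GP surrogate + expected improvement.
# The objective `f`, bounds, kernel, and budget below are illustrative only.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def f(x):
    # Stand-in for an expensive black-box objective to be maximized.
    return -np.sin(3 * x) - x**2 + 0.7 * x

def expected_improvement(X, gp, y_best, xi=0.01):
    # Utility of sampling each candidate, balancing exploration and exploitation.
    mu, sigma = gp.predict(X, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    z = (mu - y_best - xi) / sigma
    return (mu - y_best - xi) * norm.cdf(z) + sigma * norm.pdf(z)

rng = np.random.default_rng(0)
X_grid = np.linspace(-2.0, 2.0, 500).reshape(-1, 1)   # candidate observations
X_obs = rng.uniform(-2.0, 2.0, size=(3, 1))           # small initial design
y_obs = f(X_obs).ravel()

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for _ in range(10):                                    # observation budget
    gp.fit(X_obs, y_obs)                               # posterior over the objective
    ei = expected_improvement(X_grid, gp, y_obs.max())
    x_next = X_grid[np.argmax(ei)].reshape(1, 1)       # most promising candidate
    X_obs = np.vstack([X_obs, x_next])
    y_obs = np.append(y_obs, f(x_next).ravel())

print("best x:", X_obs[np.argmax(y_obs)], "best f(x):", y_obs.max())
```

    Each iteration refits the posterior, scores every candidate by its expected improvement over the current best observation, and evaluates the objective at the highest-scoring point, so points with either a high posterior mean (exploitation) or a large posterior standard deviation (exploration) can be selected.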

    Learning to Communicate with Deep Multi-Agent Reinforcement Learning

    We consider the problem of multiple agents sensing and acting in environments with the goal of maximising their shared utility. In these environments, agents must learn communication protocols in order to share the information needed to solve the tasks. Using deep neural networks, we are able to demonstrate end-to-end learning of protocols in complex environments inspired by communication riddles and multi-agent computer vision problems with partial observability. We propose two approaches for learning in these domains: Reinforced Inter-Agent Learning (RIAL) and Differentiable Inter-Agent Learning (DIAL). The former uses deep Q-learning, while the latter exploits the fact that, during learning, agents can backpropagate error derivatives through (noisy) communication channels. Hence, this approach uses centralised learning but decentralised execution. Our experiments introduce new environments for studying the learning of communication protocols and present a set of engineering innovations that are essential for success in these domains.
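    A minimal sketch of the differentiable inter-agent learning idea follows, assuming PyTorch; the toy task (a receiver must recover a bit that only the sender observes), the network sizes, and the channel noise level are illustrative and are not the paper's architecture. The point it shows is that the receiver's loss is backpropagated through a noisy real-valued message into the sender (centralised learning), while at execution time each agent only needs its own network (decentralised execution).

```python
# Sketch of DIAL-style learning: gradients flow through a noisy message channel.
# Toy task, sizes, and noise level are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

sender = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))    # agent 1: obs -> message
receiver = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 2))  # agent 2: message -> action logits

opt = torch.optim.Adam(list(sender.parameters()) + list(receiver.parameters()), lr=1e-2)

for step in range(500):
    obs = torch.randn(32, 4)                        # agent 1's private observation
    target = (obs.sum(dim=1) > 0).long()            # information only agent 1 can see
    msg = sender(obs)                               # real-valued message during training
    noisy_msg = msg + 0.1 * torch.randn_like(msg)   # noisy, differentiable channel
    logits = receiver(noisy_msg)                    # agent 2 acts on the message alone
    loss = nn.functional.cross_entropy(logits, target)
    opt.zero_grad()
    loss.backward()                                 # error derivatives pass through the channel into the sender
    opt.step()

print("final loss:", loss.item())                   # low loss => a usable protocol was learned
```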

    Gravitational Waves from Wobbling Pulsars

    The prospects for detection of gravitational waves from precessing pulsars have been considered by constructing fully relativistic rotating neutron star models and evaluating the expected wave amplitude h from a galactic source. For a "typical" neutron matter equation of state and observed rotation rates, it is shown that moderate wobble angles may render an observable signal from a nearby source once the present generation of interferometric antennas becomes operative.
    Comment: PlainTeX, 7 pp., no figures, IAG/USP Rep. 6
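    For context, a standard small-wobble quadrupole estimate (not the paper's fully relativistic calculation; all symbols and fiducial values below are assumed here) relates the amplitude to the star's oblateness, rotation rate, wobble angle, and distance:

```latex
% Order-of-magnitude quadrupole estimate for a precessing neutron star
% with small wobble angle; symbols and fiducial values are illustrative.
h \sim \frac{G}{c^{4}} \, \frac{(I_3 - I_1)\,\Omega^{2}\,\theta_w}{d}
```

    With illustrative values I_3 - I_1 ~ 10^32 kg m^2 (a deformation of ~10^-6 of I ~ 10^38 kg m^2), Omega ~ 2*pi*100 s^-1, theta_w ~ 0.1, and d ~ 1 kpc, this gives h of order 10^-27, with emission near the rotation frequency (and near twice it for larger wobble angles).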