Best practices for deploying digital games for personal empowerment and social inclusion
Digital games are increasingly used in initiatives to promote personal empowerment and social inclusion (PESI) of disadvantaged groups through learning and participation. There is a lack of knowledge regarding best practices, however. The literature on game-based learning insufficiently addresses the process and context of game-based practice and the diversity of contexts and intermediaries involved in PESI work. This paper takes an important step in addressing this knowledge gap using literature review, case studies, and expert consultation. Based on our findings, we formulate a set of best practices for different stakeholders who wish to set up a project using digital games for PESI. The seven cases examined are projects that represent various application domains of empowerment and inclusion. Case studies were conducted using documentation and interviews, covering background and business case, game format/technology, user groups, usage context, and impact assessment. They provide insight into each case’s strengths and weaknesses, allowing a meta-analysis of the important features and challenges of using digital games for PESI. This analysis was extended and validated through discussion at two expert workshops. Our study shows that a substantial challenge lies in selecting or designing a digital game that strikes a balance between enjoyment, learning and usability for the given use context. The particular needs of the target group and of those who help implement the digital game require a highly specific approach. Projects benefit from letting both intermediaries and target groups contribute to the game design and use context. Furthermore, there is a need for multi-dimensional support to facilitate the use and development of game-based practice. Integrating game use into the operation of formal and informal intermediary support organisations increases the chances of reaching, teaching and empowering those at risk of exclusion.
The teachers, caregivers and counsellors involved in implementing a game-based approach can, in turn, be helped through documentation and training, in combination with structural support.
Thinking Fast and Slow with Deep Learning and Tree Search
Sequential decision making problems, such as structured prediction, robotic
control, and game playing, require a combination of planning policies and
generalisation of those plans. In this paper, we present Expert Iteration
(ExIt), a novel reinforcement learning algorithm which decomposes the problem
into separate planning and generalisation tasks. Planning new policies is
performed by tree search, while a deep neural network generalises those plans.
Subsequently, tree search is improved by using the neural network policy to
guide search, increasing the strength of new plans. In contrast, standard deep
Reinforcement Learning algorithms rely on a neural network not only to
generalise plans, but to discover them too. We show that ExIt outperforms
REINFORCE for training a neural network to play the board game Hex, and our
final tree search agent, trained tabula rasa, defeats MoHex 1.0, the most
recent Olympiad Champion player to be publicly released.
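The Expert Iteration loop described above alternates two steps: an expert (tree search, guided by the current apprentice policy) plans a strong action, and an apprentice (a neural network in the paper) is trained to imitate that action, which in turn strengthens the next round of search. A minimal sketch of that loop, using a toy three-action game and a probability table as a stand-in for both the game of Hex and the neural network (all names and rewards here are illustrative assumptions, not from the paper):

```python
import random

random.seed(0)

# Toy stand-in for a game: three actions with fixed expected rewards.
# (Hypothetical example; the paper's actual domain is the board game Hex.)
REWARDS = {0: 0.1, 1: 0.9, 2: 0.3}

def expert_search(policy, n_sims=50):
    """'Tree search' expert: estimate each action's value by noisy sampling,
    using the apprentice policy as a small prior bonus (as ExIt uses the
    network policy to guide search)."""
    estimates = {}
    for a in REWARDS:
        samples = [REWARDS[a] + random.gauss(0, 0.1) for _ in range(n_sims)]
        estimates[a] = sum(samples) / n_sims + 0.01 * policy[a]
    return max(estimates, key=estimates.get)

def train_apprentice(policy, expert_action, lr=0.5):
    """'Neural network' apprentice: here just a probability table nudged
    toward the expert's chosen action (the imitation/generalisation step)."""
    for a in policy:
        target = 1.0 if a == expert_action else 0.0
        policy[a] += lr * (target - policy[a])

def expert_iteration(n_iters=20):
    policy = {a: 1.0 / len(REWARDS) for a in REWARDS}  # uniform start
    for _ in range(n_iters):
        a_star = expert_search(policy)    # planning step (expert)
        train_apprentice(policy, a_star)  # generalisation step (apprentice)
    return policy

policy = expert_iteration()
best = max(policy, key=policy.get)
```

The key contrast with standard deep RL, as the abstract notes, is that the apprentice never has to discover good actions itself; it only generalises what the search already found.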
Expertise in chess
This chapter provides an overview of research into chess expertise. After an historical background and a brief description of the game and the rating system, it discusses the information processes enabling players to choose good moves, and in particular the trade-offs between knowledge and search. Other topics include blindfold chess, talent, and the role of deliberate practice and tournament experience.
Learning in Repeated Games: Human Versus Machine
While Artificial Intelligence has successfully outperformed humans in complex
combinatorial games (such as chess and checkers), humans have retained their
supremacy in social interactions that require intuition and adaptation, such as
cooperation and coordination games. Despite significant advances in learning
algorithms, most algorithms adapt at time scales which are not relevant for
interactions with humans, and therefore the advances in AI on this front have
remained of a more theoretical nature. This has also hindered the experimental
evaluation of how these algorithms perform against humans, as the length of
experiments needed to evaluate them is beyond what humans are reasonably
expected to endure (max 100 repetitions). This scenario is rapidly changing, as
recent algorithms are able to converge to their functional regimes in shorter
time-scales. Additionally, this shift opens up possibilities for experimental
investigation: where do humans stand compared with these new algorithms? We
evaluate humans experimentally against a representative element of these
fast-converging algorithms. Our results indicate that the performance of at
least one of these algorithms is comparable to, and even exceeds, the
performance of people.