7,221 research outputs found

    How can I produce a digital video artefact to facilitate greater understanding among youth workers of their own learning-to-learn competence?

    In Ireland, youth work is delivered largely in marginalised communities and through non-formal and informal learning methods. Youth workers operate in small, isolated organisations without many of the resources and structures for improving practice that are afforded to larger formal educational establishments. Fundamental to youth work practice is the ability to identify and construct learning experiences for young people in non-traditional learning environments. It is therefore necessary for youth workers to develop a clear understanding of their own learning capacity in order to facilitate learning experiences for young people. In the course of this research, I used technology to enhance and support youth workers' awareness of their own learning capacity by creating a digital video artefact that explores the concept of learning-to-learn. This study presents my understanding of the learning-to-learn competence as I sought to improve my practice as a youth service manager and youth work trainer. The study was conducted using an action research approach. I designed and evaluated the digital media artefact “Lenny’s Quest” in collaboration with staff and trainer colleagues over two cycles of action research, and my research was critiqued and validated throughout this process.

    Deep learning based approaches for imitation learning.

    Imitation learning refers to an agent's ability to mimic a desired behaviour by learning from observations. The field is rapidly gaining attention due to recent advances in computational and communication capabilities, as well as rising demand for intelligent applications. The goal of imitation learning is to describe the desired behaviour by providing demonstrations rather than instructions. This enables agents to learn complex behaviours with general learning methods that require minimal task-specific information. However, imitation learning faces many challenges. The objective of this thesis is to advance the state of the art in imitation learning by adopting deep learning methods to address two major challenges of learning from demonstrations. The first is representing the demonstrations in a manner that is adequate for learning. We propose novel Convolutional Neural Network (CNN) based methods to automatically extract feature representations from raw visual demonstrations and learn to replicate the demonstrated behaviour. This alleviates the need for task-specific feature extraction and provides a general learning process that is adequate for multiple problems. The second challenge is generalizing a policy to situations not seen in the training demonstrations. This is a common problem because demonstrations typically show the best way to perform a task and do not offer any information about recovering from suboptimal actions. Several methods are investigated to improve the agent's generalization ability based on its initial performance. Our contributions in this area are threefold. Firstly, we propose an active data aggregation method that queries the demonstrator in situations of low confidence. Secondly, we investigate combining learning from demonstrations with reinforcement learning: a deep reward shaping method is proposed that learns a potential reward function from demonstrations. Finally, memory architectures in deep neural networks are investigated to provide context to the agent when taking actions; using recurrent neural networks addresses the dependency between the state-action sequences taken by the agent. The experiments are conducted in simulated environments on 2D and 3D navigation tasks that are learned from raw visual data, as well as a 2D soccer simulator. The proposed methods are compared to state-of-the-art deep reinforcement learning methods. The results show that deep learning architectures can learn suitable representations from raw visual data and effectively map them to atomic actions. The proposed methods for addressing generalization show improvements over using supervised learning and reinforcement learning alone. The results are thoroughly analysed to identify the benefits of each approach and the situations in which it is most suitable.
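    The active data-aggregation idea can be sketched as a toy loop in which the learner queries the demonstrator only in states where its own prediction confidence is low. This is a minimal, illustrative sketch; the grid states, confidence threshold, and stand-in expert policy are assumptions, not details from the thesis.

```python
# Active data aggregation: query the demonstrator only in low-confidence states.
from collections import defaultdict

CONF_THRESHOLD = 0.8  # illustrative; below this, the agent asks the expert

def expert_policy(state):
    # Stand-in demonstrator on a toy grid: move right until column 3, then down.
    x, y = state
    return "right" if x < 3 else "down"

class ActiveImitator:
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))  # state -> action -> count
        self.queries = 0

    def confidence(self, state):
        actions = self.counts[state]
        total = sum(actions.values())
        return max(actions.values()) / total if total else 0.0

    def act(self, state):
        if self.confidence(state) < CONF_THRESHOLD:
            action = expert_policy(state)  # query the demonstrator and aggregate
            self.counts[state][action] += 1
            self.queries += 1
            return action
        return max(self.counts[state], key=self.counts[state].get)

agent = ActiveImitator()
# First pass over a row of states: every state is novel, so the expert is queried.
first = [agent.act((x, 0)) for x in range(4)]
# Second pass: the agent now acts from its own aggregated data, with no new queries.
second = [agent.act((x, 0)) for x in range(4)]
print(first == second, agent.queries)
```

    After the first pass seeds the dataset, the agent's confidence in those states exceeds the threshold, so later passes no longer query the demonstrator.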

    Mimicking human player strategies in fighting games using game artificial intelligence techniques

    Fighting videogames (also known as fighting games) are ever growing in popularity and accessibility. The isolated console experiences of 20th-century gaming have been replaced by online gaming services that allow gamers to play with one another from almost anywhere in the world. This gives rise to competitive gaming on a global scale, enabling players to experience fresh play styles and challenges by playing someone new. Fighting games can typically be played either as a single-player experience or against another human player, whether via a network or a traditional multiplayer experience. However, there are two issues with these approaches. First, the single-player offering in many fighting games is regarded as simplistic in design, making the moves by the computer predictable. Second, while playing against other human players can be more varied and challenging, this may not always be achievable due to the logistics involved in setting up such a bout. Game artificial intelligence could provide a solution to both of these issues, allowing a human player's strategy to be learned and then mimicked by the AI fighter. In this thesis, game AI techniques have been researched to provide a means of mimicking human player strategies in strategic fighting games with multiple parameters. Various techniques and their current usages are surveyed, informing the design of two separate solutions to this problem. The first solution relies solely on k-nearest-neighbour classification to identify which move should be executed based on the in-game parameters, resulting in decisions being made at the operational level and fed from the bottom up to the strategic level. The second solution utilises a number of existing artificial intelligence techniques, including data-driven finite state machines, hierarchical clustering and k-nearest-neighbour classification, in an architecture that makes decisions at the strategic level and feeds them from the top down to the operational level, resulting in the execution of moves. This design is underpinned by a novel algorithm to aid the mimicking process, which is used to identify patterns and strategies within data collated during bouts between two human players. Both solutions are evaluated quantitatively and qualitatively. A conclusion summarising the findings, as well as future work, is provided. The conclusions highlight that both solutions are proficient in mimicking human strategies, but each has its own strengths depending on the type of strategy played out by the human. More structured, methodical strategies are better mimicked by the data-driven finite state machine hybrid architecture, whereas the k-nearest-neighbour approach is better suited to tactical approaches, or even random button bashing that does not always conform to a pre-defined strategy.
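    The core decision step of the first solution, k-nearest-neighbour classification over recorded in-game parameters, can be sketched roughly as follows. The feature set and recorded moves below are invented for illustration; the thesis's actual parameters differ.

```python
# k-NN move selection: match the current game state against (parameters, move)
# pairs recorded from a human player and vote among the k closest records.
import math
from collections import Counter

# Illustrative records: (opponent_distance, own_health, opponent_health) -> move
recorded = [
    ((0.5, 90, 80), "punch"),
    ((0.6, 85, 70), "punch"),
    ((3.0, 40, 90), "retreat"),
    ((2.8, 35, 95), "retreat"),
    ((1.5, 60, 60), "kick"),
]

def knn_move(state, k=3):
    # Sort records by Euclidean distance to the current state, then
    # majority-vote among the k nearest recorded moves.
    nearest = sorted(recorded, key=lambda r: math.dist(state, r[0]))[:k]
    votes = Counter(move for _, move in nearest)
    return votes.most_common(1)[0][0]

print(knn_move((0.55, 88, 75)))  # close range, healthy: matches "punch" records
```

    In practice the features would need normalising, since differently scaled parameters (e.g. distance vs. health) otherwise dominate the distance metric.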

    Execution and perception of effector-specific movement deceptions

    As a topic that touches on many aspects of movement execution and perception in sports, research on deception has attracted much attention during the last ten years. However, some important questions remain unresolved, especially concerning the kinematic characteristics of effector-specific movement deceptions that influence an observer's perceptual recognizability. It is still not known how spatiotemporal dissimilarities between movements and/or response time distributions influence this recognizability. Three studies were conducted to answer these questions. To embed the new findings in an applied context, the first study investigated the speed of internal processing in domain-specific and unspecific reaction time (RT) tasks. Beyond speed, the results also showed that motor expertise facilitated the processing of domain-specific responses. The second study examined the kinematic characteristics of effector-specific movement deceptions, showing that expertise in performing such deceptions, as a potential kind of movement mimicry, depends mainly on keeping dissimilarities to non-deceptive movements small. A third, psychophysical study investigated the role of spatiotemporal dissimilarity and response time distribution in the perceptual recognizability of deceptive movements. The results demonstrated that recognizability increases as a function of dissimilarity; however, perceptual performance decreases in the case of early responses. In sum, the findings presented in this dissertation contribute to a deeper understanding of how the execution and perception of effector-specific movement deceptions are linked. On the performer side, they demonstrate that experienced athletes are able to mimic non-deceptive movements while performing effector-specific deceptions; however, this becomes more challenging the closer the movement comes to the visibility of the action outcome. On the observer side, they show that the perceptual discriminability between movements increases as a function of spatiotemporal dissimilarity. However, observers more frequently tend to produce a prediction error when giving an early response, indicating the efficiency of the performed effector-specific movement deceptions.

    Improving Companion AI Behavior in MimicA

    Companion characters are an important aspect of video games and appear in many different genres. Their role is typically to support the player as they progress through the game by helping to complete tasks or assisting in combat. However, these companion characters are often limited in their ability to react dynamically to new situations and fail to properly assist the player. In this paper, we present a solution by improving upon the MimicA framework, which allows companion characters to emulate the human player. The framework takes a learn-by-observation approach, storing the game state whenever the player performs an action. This record is then used by machine learning classifiers to determine what action to take and where it should be done. Because the framework makes few assumptions about the rules of the game and focuses on a single-session experience, it is flexible enough to apply to a variety of different games and requires no prior training data. We have further improved the original MimicA framework by adding feature selection, n-gram analysis, an improved feedback system, a random forest classifier, and a new system for picking a location for actions. In addition, we refactored and updated the original framework to make it easier for game developers to use, along with the game Lord of Towers, which was used as a proof of concept. Further, we created another game, Lord of Caves, to demonstrate the flexibility of the new version of the framework. We validated our work using automated simulations and a user study. In our automated simulations, we found that random forest was a consistently strong performer. Our user study found that our implementation of n-grams was successful and that 19 of 26 participants believed our framework would be useful to a game developer.
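    A rough sketch of the learn-by-observation loop combined with n-gram analysis might look like this. The action names and the prediction rule are illustrative assumptions, not MimicA's actual implementation.

```python
# Learn by observation: log each (state, action) the player performs, build
# n-gram counts over action sequences, and predict the companion's next action
# as the most frequent continuation of the recent action history.
from collections import defaultdict

class CompanionModel:
    def __init__(self, n=2):
        self.n = n
        self.log = []  # observed (state, action) pairs
        self.ngrams = defaultdict(lambda: defaultdict(int))  # history -> next -> count

    def observe(self, state, action):
        self.log.append((state, action))
        # The n actions preceding this one form the history for this n-gram.
        history = tuple(a for _, a in self.log[-self.n - 1:-1])
        if len(history) == self.n:
            self.ngrams[history][action] += 1

    def predict_next(self):
        history = tuple(a for _, a in self.log[-self.n:])
        followups = self.ngrams.get(history)
        if not followups:
            return None
        return max(followups, key=followups.get)

model = CompanionModel(n=2)
# A player who repeatedly chops twice, then builds (states are dummy ints).
for state, action in [(1, "chop"), (2, "chop"), (3, "build"),
                      (4, "chop"), (5, "chop"), (6, "build"),
                      (7, "chop"), (8, "chop")]:
    model.observe(state, action)
print(model.predict_next())  # after two chops, the pattern continues with "build"
```

    A real companion would also classify *where* to act from the logged game states; here only the action-sequence half of the idea is shown.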

    Virtual Reality Games for Motor Rehabilitation

    This paper presents a fuzzy-logic-based method to track user satisfaction without the need for devices to monitor users' physiological conditions. User satisfaction is the key to any product's acceptance; computer applications and video games provide a unique opportunity to tailor the environment to each user to better suit their needs. We have implemented a non-adaptive fuzzy logic model of emotion, based on the emotional component of the Fuzzy Logic Adaptive Model of Emotion (FLAME) proposed by El-Nasr, to estimate player emotion in Unreal Tournament 2004. In this paper we describe the implementation of this system and present the results of one of several play tests. Our research contradicts the current literature, which suggests that physiological measurements are needed. We show that it is possible to use a software-only method to estimate user emotion.
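    The flavour of such a fuzzy model can be illustrated with a toy example: crisp game variables are fuzzified via triangular membership functions and combined with min/max rules. The variables, rule base, and defuzzification below are invented for illustration and are not FLAME's actual rules.

```python
# Toy fuzzy-logic satisfaction estimate from crisp game variables.
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def satisfaction(damage_dealt, damage_taken):
    # Fuzzify the inputs (0..100 scale) into membership degrees.
    dealing_well = tri(damage_dealt, 20, 100, 180)
    taking_badly = tri(damage_taken, 20, 100, 180)
    # Rule 1: dealing damage AND not taking damage -> satisfied
    satisfied = min(dealing_well, 1.0 - taking_badly)
    # Rule 2: taking damage AND not dealing damage -> frustrated
    frustrated = min(taking_badly, 1.0 - dealing_well)
    # Defuzzify to a single score in [0, 1] (simple weighted compromise).
    return round(0.5 + 0.5 * (satisfied - frustrated), 3)

print(satisfaction(90, 10))  # dominating the fight: high satisfaction
print(satisfaction(10, 90))  # being dominated: low satisfaction
```

    A real system would feed in many more game events per tick and run the rule base continuously, but the fuzzify-infer-defuzzify shape is the same.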

    Explaining Aha! moments in artificial agents through IKE-XAI: Implicit Knowledge Extraction for eXplainable AI

    During the learning process, a child develops a mental representation of the task he or she is learning. A machine learning algorithm likewise develops a latent representation of the task it learns. We investigate the development of an artificial agent's knowledge construction through the analysis of its behavior, i.e., its sequences of moves while learning to perform the Tower of Hanoi (TOH) task. The TOH is a well-known task in experimental contexts for studying problem-solving processes and the fundamental processes of children's knowledge construction about their world. We position ourselves in the field of explainable reinforcement learning for developmental robotics, at the crossroads of cognitive modeling and explainable AI. Our main contribution is a three-step methodology named Implicit Knowledge Extraction with eXplainable Artificial Intelligence (IKE-XAI) for extracting the implicit knowledge, in the form of an automaton, encoded by an artificial agent during its learning. We showcase this technique to solve and explain the TOH task when researchers only have access to moves that represent observational behavior, as in human-machine interaction. To extract the agent's acquired knowledge at different stages of its training, our approach combines: first, a Q-learning agent that learns to perform the TOH task; second, a trained recurrent neural network that encodes an implicit representation of the TOH task; and third, an XAI process using a post-hoc implicit rule extraction algorithm to extract finite state automata. We propose using graph representations as visual and explicit explanations of the behavior of the Q-learning agent. Our experiments show that the IKE-XAI approach helps in understanding the development of the Q-learning agent's behavior by providing a global explanation of its knowledge evolution during learning. IKE-XAI also allows researchers to identify the agent's Aha! moment by determining at what point the knowledge representation stabilizes and the agent no longer learns. Funding: Region Bretagne; European Union via the FEDER program; Spanish Government Juan de la Cierva Incorporacion (MCIN/AEI IJC2019-039152-I); Google Research Scholar Grant.
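    The stabilization criterion behind locating an Aha! moment can be sketched on a toy problem: train a tabular Q-learning agent and record the last episode at which its greedy policy changed. The five-state chain below stands in for the TOH state graph, and all parameters are illustrative; the paper's actual pipeline additionally trains an RNN and extracts automata.

```python
# Tabular Q-learning on a 5-state chain (goal = state 4), tracking when the
# greedy policy stops changing as a crude "knowledge stabilization" signal.
import random

random.seed(0)
N_STATES, ACTIONS = 5, ("left", "right")
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2  # learning rate, discount, exploration

def greedy_policy():
    # Greedy action per non-terminal state.
    return tuple(max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1))

stable_since, prev = None, None
for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy action selection.
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[(s, x)])
        s2 = min(s + 1, N_STATES - 1) if a == "right" else max(s - 1, 0)
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        # Standard Q-learning update.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, x)] for x in ACTIONS) - Q[(s, a)])
        s = s2
    pol = greedy_policy()
    if pol != prev:
        stable_since, prev = episode, pol  # policy changed: reset the marker

print(prev)          # converged greedy policy
print(stable_since)  # episode after which the policy stopped changing
```

    In IKE-XAI the stabilization is detected on the extracted automata rather than on the raw policy, but the underlying idea, that learning has ended when the representation stops changing, is the same.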