600 research outputs found

    Virtual Reality Games for Motor Rehabilitation

    Get PDF
    This paper presents a fuzzy logic based method to track user satisfaction without the need for devices that monitor users' physiological conditions. User satisfaction is key to any product's acceptance, and computer applications and video games provide a unique opportunity to tailor the environment to each user to better suit their needs. We have implemented a non-adaptive fuzzy logic model of emotion, based on the emotional component of the Fuzzy Logic Adaptive Model of Emotion (FLAME) proposed by El-Nasr, to estimate player emotion in Unreal Tournament 2004. In this paper we describe the implementation of this system and present the results of one of several play tests. Our research contradicts the current literature, which suggests that physiological measurements are needed: we show that it is possible to use a software-only method to estimate user emotion.
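    A minimal sketch of the idea, under illustrative assumptions: in-game events are scored for desirability against the player's goals using simple fuzzy membership functions, and the appraisals are folded into a decaying satisfaction estimate, with no physiological input. The event names, scores, and fuzzy sets below are stand-ins, not the paper's actual FLAME-based rule base.

        # Software-only, FLAME-style fuzzy appraisal of game events.
        # All event names, desirability scores, and fuzzy sets are assumptions.

        def falling(x, lo, hi):
            """Membership 1 at or below lo, falling linearly to 0 at hi."""
            if x <= lo:
                return 1.0
            if x >= hi:
                return 0.0
            return (hi - x) / (hi - lo)

        def rising(x, lo, hi):
            """Membership 0 at or below lo, rising linearly to 1 at hi."""
            return 1.0 - falling(x, lo, hi)

        class FuzzySatisfactionTracker:
            """Tracks an estimate of player satisfaction from game events alone."""

            # Hypothetical mapping from game events to desirability in [-1, 1].
            EVENT_DESIRABILITY = {
                "frag": 0.8,
                "pickup_powerup": 0.4,
                "death": -0.8,
                "fall_damage": -0.3,
            }

            def __init__(self, decay=0.9):
                self.decay = decay          # how quickly older events fade
                self.satisfaction = 0.5     # neutral starting estimate in [0, 1]

            def observe(self, event):
                d = self.EVENT_DESIRABILITY.get(event, 0.0)
                positive = rising(d, 0.0, 1.0)    # fuzzy "event is desirable"
                negative = falling(d, -1.0, 0.0)  # fuzzy "event is undesirable"
                delta = 0.5 * (positive - negative)
                # Blend the appraisal into the decayed running estimate.
                self.satisfaction = max(0.0, min(
                    1.0,
                    self.decay * self.satisfaction
                    + (1.0 - self.decay) * (0.5 + delta)))
                return self.satisfaction

        tracker = FuzzySatisfactionTracker()
        for e in ["frag", "frag", "death", "pickup_powerup"]:
            print(e, round(tracker.observe(e), 3))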

    A Survey of Deep Learning in Sports Applications: Perception, Comprehension, and Decision

    Full text link
    Deep learning has the potential to revolutionize sports performance, with applications ranging from perception and comprehension to decision. This paper presents a comprehensive survey of deep learning in sports performance, focusing on three main aspects: algorithms, datasets and virtual environments, and challenges. First, we discuss the hierarchical structure of deep learning algorithms in sports performance, which comprises perception, comprehension, and decision, and compare their strengths and weaknesses. Second, we list widely used existing datasets in sports and highlight their characteristics and limitations. Finally, we summarize current challenges and point out future trends of deep learning in sports. Our survey provides valuable reference material for researchers interested in deep learning in sports applications.
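    A schematic sketch of the perception-comprehension-decision hierarchy the survey describes; the stage interfaces and stubbed outputs below are assumptions for illustration, and a real system would plug trained models (pose estimators, action recognizers, planners) into each stage.

        # Three-level hierarchy: perception -> comprehension -> decision.
        # Stage names and interfaces are illustrative, not the survey's taxonomy code.

        from dataclasses import dataclass
        from typing import List

        @dataclass
        class Pose:
            player_id: int
            keypoints: List[tuple]    # (x, y) joint positions from one frame

        def perceive(frame) -> List[Pose]:
            """Perception: extract low-level signals (here, stubbed player poses)."""
            # A real system would run a detection/pose-estimation network here.
            return [Pose(player_id=0, keypoints=[(0.4, 0.5), (0.42, 0.6)])]

        def comprehend(poses: List[Pose]) -> str:
            """Comprehension: turn perceptual output into a semantic description."""
            # A real system might classify actions or tactics from pose sequences.
            return "player 0 is sprinting toward the goal"

        def decide(situation: str) -> str:
            """Decision: recommend an action given the comprehended situation."""
            # A real system might use planning or reinforcement learning here.
            return "shift defensive line left" if "goal" in situation else "hold shape"

        frame = None    # placeholder for a video frame
        print(decide(comprehend(perceive(frame))))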

    3D Sensing Character Simulation using Game Engine Physics

    Get PDF
    Creating visual 3D sensing characters that interact with AI peers and the virtual environment can be a difficult task for those with less experience in using learning algorithms or in creating visual environments for agent-based simulation. In this thesis, the use of game engines was studied as a tool to create and execute visual simulations with 3D sensing characters and to train game-ready bots. The idea was to use the game engine's available tools to create highly visual simulations without requiring much knowledge of modeling or animation, and to integrate external agent-simulation libraries to create sensing characters without requiring expertise in learning algorithms. These sensing characters are 3D humanoid characters that can perform the basic functions of a game character, such as moving, jumping, and interacting, but also have different senses simulated in them. The senses these characters can have include touch using collision detection, vision using ray casts, directional sound, smell, and other imaginable senses. These senses are obtained using different game development techniques available in the game engine and can be used as input for the learning algorithm to help the character learn. This allows agents to be simulated with off-the-shelf algorithms while the game engine is used to visualize them. We explored the use of these tools to create visual bots for games and to teach them how to play until they reach a level where they can serve as adversaries for human players in interactive games. This solution was tested using both reinforcement learning and imitation learning algorithms, to compare how efficient the two learning methods are when used to teach sensing game bots in different game scenarios. These scenarios varied in objective and environment complexity as well as in the number of bots, to assess how each solution behaves in different settings. This document presents related work in the agent simulation and game engine areas, followed by a detailed description of the solution and its implementation, and closes with practical tests and their results.
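    A minimal sketch of how such engine-derived senses could be packed into a single observation vector for an off-the-shelf learning algorithm; the engine queries, ray counts, and names below are placeholders, not the thesis' implementation.

        # Collect touch, ray-cast vision, and directional hearing into one
        # flat observation suitable for an RL or imitation-learning policy.
        # FakeEngine stands in for whatever queries the game engine provides.

        import math
        from typing import List

        class FakeEngine:
            """Stand-in for game-engine queries so the sketch runs on its own."""
            def raycast_distance(self, origin, angle, max_dist=20.0) -> float:
                return max_dist                  # nothing hit in this stub
            def touching(self, character) -> bool:
                return False                     # no collision in this stub
            def sound_direction(self, character) -> float:
                return 0.0                       # radians toward loudest source

        class SensingCharacter:
            def __init__(self, engine, num_vision_rays: int = 8):
                self.engine = engine
                self.num_vision_rays = num_vision_rays
                self.position = (0.0, 0.0)

            def observe(self) -> List[float]:
                """Collect touch, vision, and hearing into one flat observation."""
                angles = [i * 2 * math.pi / self.num_vision_rays
                          for i in range(self.num_vision_rays)]
                rays = [self.engine.raycast_distance(self.position, a)
                        for a in angles]
                touch = 1.0 if self.engine.touching(self) else 0.0
                hearing = self.engine.sound_direction(self)
                return rays + [touch, hearing]

        bot = SensingCharacter(FakeEngine())
        print(bot.observe())    # feed this vector to the learning algorithm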

    Patient-specific simulation for autonomous surgery

    Get PDF
    An Autonomous Robotic Surgical System (ARSS) has to interact with a complex anatomical environment, which deforms and whose properties are often uncertain. Within this context, an ARSS can benefit from the availability of a patient-specific simulation of the anatomy. For example, simulation can provide a safe and controlled environment for the design, testing, and validation of autonomous capabilities. Moreover, it can be used to generate large amounts of patient-specific data that can be exploited to learn models and/or tasks. The aim of this thesis is to investigate the different ways in which simulation can support an ARSS and to propose solutions that favor its adoption in robotic surgery. We first address all the phases needed to create such a simulation, from model choice in the pre-operative phase, based on the available knowledge, to its intra-operative update to compensate for inaccurate parametrization. We propose to rely on deep neural networks trained with synthetic data both to generate a patient-specific model and to design a strategy for updating the model parametrization directly from intra-operative sensor data. Afterwards, we test how simulation can assist the ARSS, both for task learning and during task execution. We show that simulation can be used to efficiently train approaches that require multiple interactions with the environment, compensating for the risk of acquiring data from real surgical robotic systems. Finally, we propose a modular framework for autonomous surgery that includes deliberative functions to handle real anatomical environments with uncertain parameters. The integration of a personalized simulation proves fundamental both for optimal task planning and for enhancing and monitoring real execution. The contributions presented in this thesis have the potential to introduce significant step changes in the development and actual performance of autonomous robotic surgical systems, bringing them closer to applicability in real clinical conditions.
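    A toy sketch of the synthetic-data idea under strong simplifying assumptions: a one-parameter deformation model generates synthetic training pairs, a simple regressor is fit on them pre-operatively, and the fitted model then estimates the parameter from (simulated) intra-operative measurements. The thesis relies on deep neural networks and patient-specific anatomical models; the linear fit and one-dimensional "tissue" here are stand-ins so the example runs on its own.

        # Learn a parameter estimator from synthetic simulation data, then
        # apply it to new (here, also simulated) intra-operative sensor data.

        import numpy as np

        rng = np.random.default_rng(0)

        def simulate_displacements(stiffness, forces):
            """Toy deformation model: displacement = force / stiffness, plus sensor noise."""
            return forces / stiffness + rng.normal(0.0, 0.005, size=forces.shape)

        probe_forces = np.linspace(1.0, 5.0, 8)    # assumed probing forces

        # 1) Pre-operative phase: synthetic training set over plausible stiffness values.
        stiffness_samples = rng.uniform(2.0, 20.0, size=500)
        displacements = np.stack([simulate_displacements(k, probe_forces)
                                  for k in stiffness_samples])

        # Hand-crafted feature (force/displacement) so a linear fit suffices here;
        # the thesis instead trains deep networks that learn such mappings directly.
        features = probe_forces / displacements
        design = np.hstack([features, np.ones((features.shape[0], 1))])
        w, *_ = np.linalg.lstsq(design, stiffness_samples, rcond=None)

        # 2) Intra-operative phase: estimate the patient's stiffness from new sensor data.
        true_stiffness = 12.0
        measured = simulate_displacements(true_stiffness, probe_forces)
        estimate = np.append(probe_forces / measured, 1.0) @ w
        print(f"true stiffness: {true_stiffness:.1f}, estimated: {estimate:.1f}")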