147 research outputs found

    Affective games: a multimodal classification system

    Affective gaming is a relatively new field of research that exploits human emotions to influence gameplay for an enhanced player experience. Changes in a player's psychological state are reflected in their behaviour and physiology, so recognizing such variation is a core element of affective games. Complementary sources of affect offer more reliable recognition, especially in contexts where one modality is partial or unavailable. As multimodal recognition systems, affect-aware games are subject to the practical difficulties faced by traditional trained classifiers. In addition, game-specific challenges in data collection and performance arise while attempting to sustain an acceptable level of immersion. Most existing scenarios employ sensors that restrict freedom of movement, resulting in less realistic experiences. Recent advances offer technology that allows players to communicate more freely and naturally with the game, and even to control it without input devices. However, the affective game industry is still in its infancy and needs to catch up with the life-like level of adaptation already provided by graphics and animation.
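    One common way to realize the complementarity claim above is simple late fusion: each modality emits class probabilities, a modality that drops out is skipped, and the remaining weights are renormalized so recognition degrades gracefully. A minimal sketch (the modality names, weights, and classifier outputs below are illustrative assumptions, not from the paper):

    ```python
    def fuse_predictions(modality_probs, weights):
        """Weighted late fusion of per-modality class probabilities.

        modality_probs : dict mapping modality name -> {class: prob},
                         or None if that sensor is unavailable
        weights        : dict mapping modality name -> fusion weight
        Returns the class with the highest fused score.
        """
        # Skip modalities that reported nothing (e.g., sensor dropout).
        available = {m: p for m, p in modality_probs.items() if p is not None}
        total_w = sum(weights[m] for m in available)
        classes = next(iter(available.values())).keys()
        # Renormalize over the modalities that are actually present.
        fused = {c: sum(weights[m] * available[m][c] for m in available) / total_w
                 for c in classes}
        return max(fused, key=fused.get)
    ```

    With a physiological channel missing, the face and voice channels alone decide; when it returns, its weight is folded back in automatically.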

    Physiologically Modulating Videogames or Simulations which use Motion-Sensing Input Devices

    New types of controllers allow players to make inputs to a video game or simulation by moving the entire controller itself. This capability is typically accomplished using a wireless input device with accelerometers, gyroscopes, and an infrared LED tracking camera. The present invention exploits these wireless motion-sensing technologies to modulate the player's movement inputs to the videogame based upon physiological signals. Such biofeedback-modulated video games train valuable mental skills beyond eye-hand coordination. These psychophysiological training technologies enhance the personal improvement, not just the diversion, of the user.
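    The modulation idea can be sketched as a gain applied to the raw motion input as a function of a normalized physiological arousal index. This is a hypothetical illustration only; the target level, the linear gain rule, and the signal normalization are assumptions, not the patented method:

    ```python
    def modulated_input(raw_motion, arousal, target=0.5, max_gain=1.0):
        """Scale a motion-controller input by how close the player's
        physiological arousal is to a target level.

        raw_motion : controller displacement, normalized to [-1, 1]
        arousal    : physiological index in [0, 1] (e.g., derived from
                     heart rate or skin conductance) -- hypothetical
        target     : arousal level at which the controls are fully
                     responsive
        """
        # Gain falls off linearly as arousal deviates from the target,
        # so maintaining the desired physiological state keeps the
        # game controllable -- the biofeedback loop.
        deviation = abs(arousal - target) / max(target, 1.0 - target)
        gain = max_gain * (1.0 - deviation)
        return max(-1.0, min(1.0, raw_motion * gain))
    ```

    At the target arousal the input passes through unchanged; at the extreme it is suppressed entirely, which is what makes the mental skill trainable.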

    Safety and Feasibility of a First-Person View, Full-Body Interaction Game for Telerehabilitation Post-Stroke

    This study explored the feasibility and safety of pairing the Microsoft Kinect® sensor with the Oculus Rift® Head Mounted Display (HMD) as a telerehabilitation technology platform for persons post-stroke. To test initial safety, fourteen participants without disabilities (age 30 ± 8.8 years) engaged in a game-based task using the Microsoft Kinect® with a first-person view using the Oculus Rift®. These tasks were repeated for five participants post-stroke (age 56 ± 3.0 years). No significant adverse events occurred in either study population. When using the Oculus Rift® HMD, three participants without disabilities reported dizziness and nausea. All of the participants post-stroke required hands-on assistance for balance and fall prevention. The intensive nature of the physical support necessary for this type of interaction limits its application as a telerehabilitation intervention. Given the increasing availability of HMDs for commercial use, it is crucial that the safety of immersive games and technologies for telerehabilitation is fully explored.

    Experimental Effects of Pre-Drive Arousal on Teenage Simulated Driving Performance in the Presence of a Teenage Passenger

    Teenage passengers increase teenage driving risk, but this may be conditional on events and emotions immediately preceding driving. An experimental simulation study evaluated the effect of pre-drive arousal on risky driving in the presence of a confederate teenage passenger. In a two-by-two between-subjects design, participants were randomized to high or low pre-drive arousal and passenger-present or not-present conditions. Prior to the drive, participants played the Nintendo Wii video game Rock Band™. In the high-arousal condition participants stood while playing high-energy Beatles songs; in the low-arousal condition participants sat while playing low-energy Beatles songs. The manipulation produced differences in arousal by group. Group differences in risky driving were in the expected direction, but were not statistically significant at p = .05 on any of the three outcome measures, which included Failed to Stop (failing to stop at signalized intersections in the dilemma zone), Percent Time in Red (in intersections), and Pass Slow Vehicle (electing to pass a slow vehicle).

    Interaction Modalities Used on Serious Games for Upper Limb Rehabilitation: A Systematic Review

    This systematic review aims to analyze the state of the art regarding interaction modalities used in serious games for upper limb rehabilitation. A systematic search was performed in the IEEE Xplore and Web of Science databases. PRISMA and QualSyst protocols were used to filter and assess the articles. Articles had to meet the following inclusion criteria: they must be written in English; be at least four pages in length; use or develop serious games; focus on upper limb rehabilitation; and be published between 2007 and 2017. Of 121 articles initially retrieved, 33 met the inclusion criteria. Three interaction modalities were found: vision systems (42.4%), complementary vision systems (30.3%), and no-vision systems (27.2%). Vision systems and no-vision systems obtained a similar mean QualSyst score (86%), followed by complementary vision systems (85.7%). Almost half of the studies used vision systems as the interaction modality (42.4%), and most of these used the Kinect sensor to collect body movements (48.48%). The shoulder was the most treated body part in the studies (19%). A key limitation of vision systems and complementary vision systems is that device performance might be affected by lighting conditions. A main limitation of no-vision systems is that the range of motion, in angles, of body movements might not be measured accurately. Due to the limited number of studies, fruitful areas for further research could include: serious games focused on finger rehabilitation and trauma injuries, game difficulty adaptation based on the user's muscle strength and posture, and multisensor data fusion for interaction modalities.

    Human Health Engineering Volume II

    In this Special Issue on "Human Health Engineering Volume II", we invited submissions exploring recent contributions to the field of human health engineering, i.e., technology for monitoring the physical or mental health status of individuals in a variety of applications. Contributions could focus on sensors, wearable hardware, algorithms, or integrated monitoring systems. We organized the papers according to their contributions to the main parts of the monitoring and control engineering scheme applied to human health applications: papers focusing on measuring/sensing physiological variables, papers highlighting health-monitoring applications, and examples of control and process management applications for human health. In comparison to biomedical engineering, we envision that the field of human health engineering will also cover applications for healthy humans (e.g., sports, sleep, and stress), and thus contribute not only to the development of technology for curing patients or supporting chronically ill people, but also to more general disease prevention and optimization of human well-being.

    Exploring radar for gesture recognition

    Communication disorders have a notable negative impact on people's lives, leading to isolation, depression, and loss of independence. Over the years, many different approaches to attenuate these problems have been proposed, although most come with noticeable drawbacks. Lack of versatility, intrusive solutions, or the need to carry a device around are some of the problems these solutions encounter. Radar has seen increasing use over the past few years, spreading to areas such as the automotive and health sectors. This technology is non-intrusive, insensitive to changes in environmental conditions such as lighting, and, unlike cameras, does not intrude on the user's privacy. In this dissertation, within the scope of the APH-ALARM project, the author tests radar in a gesture-recognition context to support communication in a bedroom scenario. Here, the user is someone with communication problems, lying in bed and trying to communicate with a family member inside or outside the house. Gestures assist the user in communicating and help express their wants or needs. To recognize the gestures executed by the user, it is necessary to capture the movement. To demonstrate the capabilities of the technology, a proof-of-concept system was implemented that captures the radar data, filters it, and transforms it into images used as input for a gesture classification model. To evaluate the solution, ten repetitions of five arm gestures executed by four people were recorded. A subject-independent solution proved more challenging than a subject-dependent one, in which all datasets but one achieved a median accuracy above 70%, with most exceeding 90%.
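    The capture, filter, image-formation, and classification chain described above can be sketched minimally. The mean-subtraction clutter filter, amplitude image, and nearest-template classifier below are illustrative stand-ins for the dissertation's actual radar processing and learned model:

    ```python
    def remove_static_clutter(frames):
        """Subtract the per-bin mean across frames (a simple high-pass
        filter) so static background reflections drop out and only
        moving targets, such as an arm gesture, remain.

        frames : list of radar frames, each a list of range-bin values
        """
        n = len(frames)
        means = [sum(f[i] for f in frames) / n for i in range(len(frames[0]))]
        return [[v - m for v, m in zip(f, means)] for f in frames]

    def to_image(frames):
        """Stack filtered frames into a 2D image (time x range bins)
        of absolute amplitudes -- the representation fed to the
        gesture classifier."""
        return [[abs(v) for v in f] for f in frames]

    def classify(image, templates):
        """Toy nearest-template classifier, a hypothetical stand-in
        for the image-based gesture model used in the dissertation."""
        def dist(a, b):
            return sum((x - y) ** 2
                       for ra, rb in zip(a, b)
                       for x, y in zip(ra, rb))
        return min(templates, key=lambda label: dist(image, templates[label]))
    ```

    Subject-dependent operation corresponds to building the templates (or training the model) from the same user's recordings; the subject-independent case, where templates come from other users, is the harder setting reported above.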