8,307 research outputs found

    A serious games platform for cognitive rehabilitation with preliminary evaluation

    In recent years, Serious Games have evolved substantially, solving problems in diverse areas. In Cognitive Rehabilitation in particular, Serious Games play a relevant role. Traditional cognitive therapies are often considered repetitive and discouraging for patients; Serious Games can be used to create more dynamic rehabilitation processes, holding patients' attention throughout and motivating them on their road to recovery. This paper reviews Serious Games and user interfaces in the rehabilitation area and details a Serious Games platform for Cognitive Rehabilitation that includes features such as natural and multimodal user interfaces and social features (competition, collaboration, and handicapping), which can help increase patients' motivation during the rehabilitation process. The web platform was tested with healthy subjects. Results of this preliminary evaluation show the motivation and interest of the participants in playing the games. This work has been supported by FCT - Fundação para a Ciência e a Tecnologia within the scope of the projects PEst-UID/CEC/00319/2015 and PEst-UID/CEC/00027/2015. The authors would also like to thank all the volunteers who participated in the study.

    Atelier: assistive technologies for learning, integration and rehabilitation

    A special needs individual is a broad term used to describe a person with a behavioural or emotional disorder, physical disability, or learning disability. Many individuals with special needs are limited in verbal communication, or in many cases are non-verbal, making communication and learning challenging tasks. Additionally, new technology-based forms of communication aren't designed for them, leaving them increasingly isolated in social and educational terms. Fortunately, new forms of interaction do exist, and they enable these users to access knowledge and to interact with others, undertaking activities that would otherwise be impossible. In this project, the technology used is not an end in itself but a way to “drop” the mouse/keyboard paradigm, making use of affordable devices available on the market that can be adopted by people with special needs who are unable to use the traditional forms of interaction, thus assisting them in their education, integration, and rehabilitation activities.

    Deep convolutional and LSTM recurrent neural networks for multimodal wearable activity recognition

    Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing these temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we propose a generic deep framework for activity recognition based on convolutional and LSTM recurrent units, which: (i) is suitable for multimodal wearable sensors; (ii) can perform sensor fusion naturally; (iii) does not require expert knowledge in designing features; and (iv) explicitly models the temporal dynamics of feature activations. We evaluate our framework on two datasets, one of which has been used in a public activity recognition challenge. Our results show that our framework outperforms competing deep non-recurrent networks on the challenge dataset by 4% on average, and outperforms some of the previously reported results by up to 9%. Our results show that the framework can be applied to homogeneous sensor modalities, but can also fuse multimodal sensors to improve performance. We characterise the influence of key architectural hyperparameters on performance to provide insights for their optimisation.
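The preprocessing step implied by frameworks like this one is segmenting the raw multichannel sensor stream into fixed-length, overlapping windows before feeding it to the network. A minimal sketch, assuming illustrative window and step sizes (not the values used in the paper):

```python
# Segment a multichannel sensor recording into fixed-length, overlapping
# windows -- the usual input representation for conv/LSTM activity models.
# The window size and step below are illustrative, not the paper's settings.

def sliding_windows(samples, size, step):
    """samples: sequence of per-timestep readings; returns a list of windows."""
    if size <= 0 or step <= 0:
        raise ValueError("size and step must be positive")
    return [samples[i:i + size]
            for i in range(0, len(samples) - size + 1, step)]

# Example: 10 timesteps of a 3-channel accelerometer signal.
recording = [(t * 0.1, t * 0.2, t * 0.3) for t in range(10)]
windows = sliding_windows(recording, size=4, step=2)
print(len(windows))     # number of overlapping windows produced
print(len(windows[0]))  # timesteps per window
```

Each window (here 4 timesteps of 3 channels) would then be one training example for the convolutional front end, with the LSTM layers modelling dynamics across feature activations within the window.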

    Augmenting Sensorimotor Control Using “Goal-Aware” Vibrotactile Stimulation during Reaching and Manipulation Behaviors

    We describe two sets of experiments that examine the ability of vibrotactile encoding of simple position error and combined object states (calculated from an optimal controller) to enhance performance of reaching and manipulation tasks in healthy human adults. The goal of the first experiment (tracking) was to follow a moving target with a cursor on a computer screen. Visual and/or vibrotactile cues were provided in this experiment, and vibrotactile feedback was redundant with visual feedback in that it did not encode any information above and beyond what was already available via vision. After only 10 minutes of practice using vibrotactile feedback to guide performance, subjects tracked the moving target with response latency and movement accuracy values approaching those observed under visually guided reaching. Unlike previous reports on multisensory enhancement, combining vibrotactile and visual feedback of performance errors conferred neither positive nor negative effects on task performance. In the second experiment (balancing), vibrotactile feedback encoded a corrective motor command as a linear combination of object states (derived from a linear-quadratic regulator implementing a trade-off between kinematic and energetic performance) to teach subjects how to balance a simulated inverted pendulum. Here, the tactile feedback signal differed from visual feedback in that it provided information that was not readily available from visual feedback alone. Immediately after applying this novel “goal-aware” vibrotactile feedback, time to failure was improved by a factor of three. Additionally, the effect of vibrotactile training persisted after the feedback was removed. 
These results suggest that vibrotactile encoding of appropriate combinations of state information may be an effective form of augmented sensory feedback that can be applied, among other purposes, to compensate for lost or compromised proprioception, as commonly observed, for example, in stroke survivors.
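The idea of a corrective motor command formed as a linear combination of object states can be illustrated with a toy simulation: a feedback law u = -(k1·θ + k2·θ̇) balancing an inverted pendulum. The gains here are hand-picked for illustration and are not derived from the paper's linear-quadratic regulator:

```python
import math

# Toy inverted pendulum stabilised by a corrective command that is a
# linear combination of the object states (angle and angular velocity).
# Gains K1, K2 are hand-picked for illustration, not LQR-derived.

G_OVER_L = 9.81     # gravity / pendulum length (1 m pendulum)
K1, K2 = 20.0, 5.0  # feedback gains on angle and angular velocity
DT = 0.01           # integration timestep in seconds

def simulate(theta0, steps):
    theta, omega = theta0, 0.0
    for _ in range(steps):
        u = -(K1 * theta + K2 * omega)                   # corrective command
        omega += (G_OVER_L * math.sin(theta) + u) * DT   # Euler integration
        theta += omega * DT
    return theta

final_angle = simulate(theta0=0.2, steps=400)  # 4 simulated seconds
print(abs(final_angle))  # near zero once the pendulum is balanced
```

In the experiment described above, an analogous scalar command (computed by the optimal controller from the pendulum states) was what the vibrotactile channel encoded for the subject.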

    Affective Medicine: a review of Affective Computing efforts in Medical Informatics

    Background: Affective computing (AC) is concerned with emotional interactions performed with and through computers. It is defined as “computing that relates to, arises from, or deliberately influences emotions”. AC enables investigation and understanding of the relation between human emotions and health, as well as the application of assistive and useful technologies in the medical domain. Objectives: 1) To review the general state of the art in AC and its applications in medicine, and 2) to establish synergies between the research communities of AC and medical informatics. Methods: Aspects related to the human affective state as a determinant of human health are discussed, coupled with an illustration of significant AC research and related literature output. Moreover, affective communication channels are described and their range of application fields is explored through illustrative examples. Results: The presented conferences, European research projects, and research publications illustrate the recent increase of interest in the AC area by the medical community. Tele-home healthcare, ambient intelligence (AmI), ubiquitous monitoring, e-learning, and virtual communities with emotionally expressive characters for elderly or impaired people are a few areas where the potential of AC has been realized and applications have emerged. Conclusions: A number of gaps can potentially be overcome through the synergy of AC and medical informatics. The application of AC technologies parallels the advancement of the existing state of the art and the introduction of new methods. The body of work and projects reviewed in this paper attests to an ambitious and optimistic synergetic future for the affective medicine field.

    Automatic Measurement of Affect in Dimensional and Continuous Spaces: Why, What, and How?

    This paper aims to give a brief overview of the current state of the art in automatic measurement of affect signals in dimensional and continuous spaces (a continuous scale from -1 to +1) by seeking answers to the following questions: i) why has the field shifted towards dimensional and continuous interpretations of affective displays recorded in real-world settings? ii) what are the affect dimensions used, and the affect signals measured? and iii) how has the current automatic measurement technology been developed, and how can we advance the field?
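A dimensional representation maps affect onto continuous axes, typically valence and arousal, each scaled to [-1, +1]. The sketch below places a few categorical labels at illustrative coordinates (rough assumptions, not taken from any validated affect model) and finds the category nearest to a continuous measurement:

```python
import math

# Illustrative valence-arousal coordinates in [-1, +1]^2 for a few
# categorical labels. These placements are rough assumptions for the
# sake of the example, not values from any validated affect model.
AFFECT_SPACE = {
    "happy": (0.8, 0.5),
    "angry": (-0.6, 0.7),
    "sad": (-0.7, -0.4),
    "calm": (0.4, -0.6),
}

def nearest_label(valence, arousal):
    """Return the categorical label closest to a continuous measurement."""
    return min(AFFECT_SPACE,
               key=lambda k: math.dist(AFFECT_SPACE[k], (valence, arousal)))

print(nearest_label(0.7, 0.4))  # a high-valence, moderate-arousal reading
```

The point of the dimensional view is that the continuous (valence, arousal) pair itself is the measurement; collapsing it back to a category, as here, is only one possible downstream use.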
