
    Motor learning and transfer between real and virtual environments in young people with autism spectrum disorder: a prospective randomized cross over controlled trial

    Autism spectrum disorder (ASD) is associated with persistent deficits in social communication and social interaction, including impaired multisensory integration, which might negatively impact cognitive and motor skill performance and hence impair the learning of tasks. Considering that tasks in a virtual environment may provide an engaging adjunct to conventional therapies, we set out to compare motor performance between young people with ASD and a typically developing (TD) control group on coincident timing tasks performed in a Kinect-based environment (no physical contact) and a keyboard-based environment (with physical contact). Using a randomized repeated cross-over controlled trial design, fifty young people with ASD and fifty with TD, matched by age and sex, were divided into subgroups of 25 people that performed the first two phases of the study (acquisition and retention) on the same device, real or virtual, then switched to the other device to repeat the acquisition and retention phases, and finally switched to a touch screen (transfer phase). Results showed that practice in the virtual task was more difficult (producing more errors) but led to better performance in the subsequent practice in the real task, with more pronounced improvement in the ASD group as compared to the TD group. It can be concluded that the ASD group managed to transfer the practice from a virtual to a real environment, indicating that virtual methods may enhance the learning of motor and cognitive skills. Further exploration of this effect across a range of tasks and activities is warranted.
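    The coincident timing paradigm used here can be made concrete with a small sketch. The Python fragment below is a hypothetical illustration of how such a trial might be scored, not the study's actual protocol: the participant must respond at the exact moment a moving stimulus reaches a target, and performance in each phase is summarized as the mean absolute timing error.

```python
# Hypothetical scoring of a coincident-timing task (not the study's
# actual protocol): performance is the error between the moment the
# stimulus reaches the target and the participant's response.

from statistics import mean

def timing_errors(arrival_times, response_times):
    """Signed errors in seconds; negative = early, positive = late."""
    return [r - a for a, r in zip(arrival_times, response_times)]

def phase_score(arrival_times, response_times):
    """Mean absolute timing error for one practice phase (lower is better)."""
    return mean(abs(e) for e in timing_errors(arrival_times, response_times))

# Example: five trials in an acquisition phase (illustrative values).
arrivals  = [1.20, 1.20, 1.20, 1.20, 1.20]   # stimulus reaches target (s)
responses = [1.35, 1.10, 1.24, 1.19, 1.28]   # participant key press (s)
print(f"mean absolute error: {phase_score(arrivals, responses):.3f} s")
```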

    Measuring attention using Microsoft Kinect

    The transfer of knowledge between individuals is increasingly achieved with the aid of interfaces or computerized training applications. However, computer-based training currently lacks the ability to monitor human behavioral changes and respond to them accordingly. This study examines the ability to predict user attention using features of body posture and head pose. Predictive ability is assessed by analysing the relationship between the measured posture features and common objective measures of attention, such as reaction time and reaction time variance. Subjects were asked to participate in a series of sustained attention tasks while aspects of body movement and positioning were recorded using a Microsoft Kinect. Results showed support for identifiable patterns of behavior associated with attention, while also suggesting a complex inter-relationship among the measured features and their susceptibility to environmental conditions.
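    The core analysis idea can be illustrated with a short sketch (not the authors' code): relate a Kinect-derived posture feature to an objective attention measure such as reaction time variance. The feature name and all values below are assumed for illustration.

```python
# Hedged sketch: correlate a posture feature (e.g., mean forward head
# lean per task block) with reaction-time variance, a common objective
# attention measure. All numbers are illustrative assumptions.

import numpy as np

# Hypothetical per-block posture feature: one value per task block.
head_lean_deg = np.array([4.1, 6.3, 9.8, 5.0, 12.4, 7.7])

# Reaction times (ms) recorded within each block.
rt_ms = [
    [312, 305, 330, 298],
    [340, 355, 310, 362],
    [410, 385, 432, 398],
    [320, 333, 301, 315],
    [455, 420, 470, 441],
    [360, 348, 372, 351],
]
rt_var = np.array([np.var(block) for block in rt_ms])  # RT variance per block

# Pearson correlation between the posture feature and RT variability.
r = np.corrcoef(head_lean_deg, rt_var)[0, 1]
print(f"posture vs. RT-variance correlation: r = {r:.2f}")
```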

    Recognising Complex Mental States from Naturalistic Human-Computer Interactions

    New advances in computer vision techniques will revolutionize the way we interact with computers, as they, together with other improvements, will help us build machines that understand us better. The face is the main non-verbal channel for human-human communication and contains valuable information about emotion, mood, and mental state. Affective computing researchers have widely investigated how facial expressions can be used to automatically recognize affect and mental states. Nowadays, physiological signals can be measured by video-based techniques, which can also be utilised for emotion detection. Physiological signals are an important indicator of internal feelings and are more robust against social masking. This thesis focuses on computer vision techniques to detect facial expressions and physiological changes for recognizing non-basic and natural emotions during human-computer interaction. It covers all stages of the research process, from data acquisition to integration and application. Most previous studies focused on acquiring data from prototypic basic emotions acted out under laboratory conditions. To evaluate the proposed method under more practical conditions, two different scenarios were used for data collection. In the first scenario, a set of controlled stimuli was used to trigger the user’s emotion. The second scenario aimed at capturing more naturalistic emotions that might occur during a writing activity; here, the engagement level of the participants, along with other affective states, was the target of the system. For the first time, this thesis explores how video-based physiological measures can be used in affect detection. Video-based measurement of physiological signals is a new technique that needs further improvement before it can be used in practical applications. A machine learning approach is proposed and evaluated to improve the accuracy of heart rate (HR) measurement using an ordinary camera during naturalistic interaction with a computer.
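    The basic principle behind video-based heart rate measurement (remote photoplethysmography) can be sketched briefly; the thesis's machine learning refinement is not reproduced here, and the function below is only a minimal illustration under stated assumptions: blood volume changes subtly modulate skin colour, strongest in the green channel, so the dominant frequency of the mean green signal within a plausible heart rate band approximates HR.

```python
# Minimal rPPG sketch, assuming face-cropped frames are already
# available: estimate heart rate from the dominant frequency of the
# mean green-channel signal within a physiologically plausible band.

import numpy as np

def estimate_hr(frames: np.ndarray, fps: float) -> float:
    """frames: (T, H, W, 3) array of face crops; returns HR in beats/min."""
    green = frames[..., 1].reshape(len(frames), -1).mean(axis=1)
    green = green - green.mean()                      # remove DC offset
    spectrum = np.abs(np.fft.rfft(green))
    freqs = np.fft.rfftfreq(len(green), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)            # 42-240 bpm
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

# Synthetic demo: a 72-bpm pulse embedded in noisy 30-fps "frames".
fps, t = 30.0, np.arange(300) / 30.0
pulse = 2.0 * np.sin(2 * np.pi * 1.2 * t)             # 1.2 Hz = 72 bpm
frames = (128 + pulse[:, None, None, None]
          + np.random.randn(300, 8, 8, 3))
print(f"estimated HR: {estimate_hr(frames, fps):.0f} bpm")
```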

    Multimodality in Online Education: A Comparative Study

    The start of the decade brought with it a grave pandemic and, in response, the movement of education predominantly into the online world. With a surge in the usage of online video conferencing platforms and tools to better gauge student understanding, there needs to be a mechanism to assess whether instructors can grasp the extent to which students understand the subject and how they respond to educational stimuli. Current systems consider only a single cue and lack focus on the educational domain. There is thus a need to measure an all-encompassing, holistic overview of students' reactions to the subject matter. This paper highlights the need for a multimodal approach to affect recognition and its deployment in the online classroom, considering four cues: posture and gesture, facial expression, eye tracking, and verbal recognition. It compares the various machine learning models available for each cue and identifies the most suitable approach given the available dataset and the parameters of classroom footage. A multimodal approach based on weighted majority voting is proposed, combining the most fitting models from this analysis of individual cues based on accuracy, ease of procuring a data corpus, sensitivity, and any major drawbacks.
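    The proposed fusion step, weighted majority voting, admits a compact sketch: each cue-specific model votes for an affective state, and votes are weighted by that model's validation accuracy. The cue names, labels, and accuracies below are illustrative assumptions, not values from the paper.

```python
# Sketch of weighted majority voting across cue-specific classifiers:
# each cue votes for a label, weighted by that model's accuracy.

from collections import defaultdict

def weighted_majority_vote(predictions: dict, weights: dict) -> str:
    """predictions: cue -> predicted label; weights: cue -> accuracy."""
    scores = defaultdict(float)
    for cue, label in predictions.items():
        scores[label] += weights.get(cue, 0.0)
    return max(scores, key=scores.get)

# Illustrative per-cue accuracies and predictions (assumed values).
weights = {"posture": 0.71, "facial": 0.84, "eye": 0.68, "verbal": 0.77}
predictions = {"posture": "engaged", "facial": "confused",
               "eye": "confused", "verbal": "engaged"}
print(weighted_majority_vote(predictions, weights))  # -> "confused"
```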

    interactive Islamic Prayer (iIP)

    Virtual Environments have often been used within the educational domain. This study adopts a Virtual Environment (VE) setting to enhance and develop the physical aspects of teaching the Islamic prayer to primary school children, in comparison with traditional forms of teaching through a prayer book and a prayer video. Interactive teaching software, the interactive Islamic Prayer (iIP), was designed and developed for this purpose; it uses Microsoft's Kinect 360 for Windows to demonstrate the various movements of the prayer in sequence. Through the administration of a number of questionnaires, a quantitative analysis of the participants’ learning experience was carried out, along with details of which approach the participants preferred. The questionnaires also provided a detailed insight into six areas of study from the learners’ perspective across the various learning approaches: comprehension, learning experience, interaction, satisfaction, usability, and achievement. The results revealed a higher degree of interaction within the lesson on prayer when using the iIP compared to the traditional teaching methods, and although some participants were unfamiliar with using the Kinect 360, on the whole they found it fun and educational. The findings also showed that the software was able to address lower-level thinking skills, such as recall and memory: a test of the students’ knowledge of the prayer before and after using the software showed a significant improvement in comparison to the other approaches. Recommendations are given on how to implement this software effectively in the relevant classrooms.
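    The abstract does not describe the iIP's matching logic, but one plausible way Kinect skeletal tracking could verify a prayer posture is sketched below: compare tracked 3D joint positions against a reference pose and accept when every joint lies within a tolerance. Joint names and coordinates are hypothetical.

```python
# Hypothetical pose check using Kinect-style skeletal data (the iIP's
# actual logic is not described in the abstract): a tracked pose
# matches a reference pose if every joint is within a tolerance.

import math

def pose_matches(tracked: dict, reference: dict,
                 tolerance_m: float = 0.15) -> bool:
    """True if each reference joint is within tolerance_m metres."""
    for joint, ref_xyz in reference.items():
        if joint not in tracked:
            return False
        if math.dist(tracked[joint], ref_xyz) > tolerance_m:
            return False
    return True

# Example: a simplified bowing reference using two joints (assumed).
reference = {"head": (0.0, 1.2, 0.5), "spine_base": (0.0, 1.0, 0.0)}
tracked   = {"head": (0.05, 1.25, 0.48), "spine_base": (0.02, 0.98, 0.03)}
print(pose_matches(tracked, reference))  # -> True
```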

    A Framework for Students Profile Detection

    Some of the biggest problems facing Higher Education Institutions are student drop-out and academic disengagement. Physical or psychological disabilities, socio-economic or academic marginalization, and emotional and affective problems are some of the factors that can lead to them. The problem is worsened by the shortage of educational resources that can bridge the communication gap between faculty staff and the affective needs of these students. This dissertation focuses on the development of a framework capable of collecting analytic data from an array of emotions, affects, and behaviours, acquired either through human observation, such as by a teacher in a classroom or a psychologist, or through electronic sensors and automatic analysis software, such as eye tracking devices, emotion detection through facial expression recognition software, automatic gait and posture detection, and others. The framework establishes guidance for compiling the gathered data into an ontology, enabling the extraction of patterns and outliers via machine learning, which assists the profiling of students in critical situations such as disengagement, attention deficit, drop-out, and other sociological issues. Consequently, it is possible to set real-time alerts when these profile conditions are detected, so that appropriate experts can verify the situation and employ effective procedures. The goal is that, by providing insightful real-time cognitive data and facilitating the profiling of students’ problems, a faster personalized response to help the student is enabled, allowing academic performance improvements.
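    The pattern and outlier extraction step admits a small sketch. The framework's ontology and data pipeline are not reproduced here; the fragment below merely assumes per-student feature vectors and uses an isolation forest (one of several plausible outlier detectors) to flag students whose aggregated features deviate from the cohort, which could then trigger an alert for a human expert to review.

```python
# Sketch of the outlier-extraction step under assumed features:
# flag students whose aggregated affect/behaviour profile deviates
# from the cohort, as a trigger for a real-time alert.

import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-student features:
# [attention score, negative-affect ratio, absence rate]
students = np.array([
    [0.82, 0.10, 0.05],
    [0.78, 0.12, 0.08],
    [0.85, 0.08, 0.02],
    [0.80, 0.15, 0.06],
    [0.31, 0.62, 0.47],   # disengaged profile
])

model = IsolationForest(contamination=0.2, random_state=0).fit(students)
flags = model.predict(students)            # -1 = outlier, 1 = inlier
for i, flag in enumerate(flags):
    if flag == -1:
        print(f"alert: student {i} matches a critical profile")
```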