
    Multimodal system for public speaking with real time feedback: a positive computing perspective

    A multimodal system for public speaking with real-time feedback has been developed using the Microsoft Kinect. The system has been developed within the paradigm of positive computing, which focuses on designing for user wellbeing. The system detects body pose, facial expressions and voice. Visual feedback on their speaking performance is displayed to users in real time. Users can view statistics on their utilisation of speaking modalities. The system also has a mentor avatar which appears alongside the user avatar to facilitate user training. An autocue mode allows a user to practise with set text from a chosen speech.

    Practising public speaking: user responses to using a mirror versus a multimodal positive computing system

    A multimodal Positive Computing system with real-time feedback for public speaking has been developed. The system uses the Microsoft Kinect to detect voice, body pose, facial expressions and gestures. It is a real-time system, which gives users feedback on their performance while they are rehearsing a speech. In this study, we wished to compare this system with a traditional method for practising speaking, namely using a mirror. Ten participants practised a speech for sixty seconds using the system and using the mirror. They completed surveys on their experience after each practice session. Data about their performance was recorded while they were speaking. Participants found the system less stressful to use than the mirror, and reported that they were more motivated to use the system in future. We also found that the system made speakers more aware of their body pose, gaze direction and voice.

    Should I trust you? Learning and memory of social interactions in dementia

    Social relevance has an enhancing effect on learning and subsequent memory retrieval. The ability to learn from and remember social interactions may impact on susceptibility to financial exploitation, which is elevated in individuals with dementia. The current study aimed to investigate learning and memory of social interactions, the relationship between performance and financial vulnerability and the neural substrates underpinning performance in 14 Alzheimer's disease (AD) and 20 behavioural-variant frontotemporal dementia (bvFTD) patients and 20 age-matched healthy controls. On a “trust game” task, participants invested virtual money with counterparts who acted either in a trustworthy or untrustworthy manner over repeated interactions. A non-social “lottery” condition was also included. Participants’ learning of trust/distrust responses and subsequent memory for the counterparts and nature of the interactions was assessed. Carer-rated profiles of financial vulnerability were also collected. Relative to controls, both patient groups showed attenuated learning of trust/distrust responses, and lower overall memory for social interactions. Despite poor learning performance, both AD and bvFTD patients showed better memory of social compared to non-social interactions. Importantly, better memory for social interactions was associated with lower financial vulnerability in AD, but not bvFTD. Learning and memory of social interactions was associated with medial temporal and temporoparietal atrophy in AD, whereas a wider network of frontostriatal, insular, fusiform and medial temporal regions was implicated in bvFTD. Our findings suggest that although social relevance influences memory to an extent in both AD and bvFTD, this is associated with vulnerability to financial exploitation in AD only, and is underpinned by changes to different neural substrates. 
Theoretically, these findings provide novel insights into potential mechanisms that give rise to vulnerability in people with dementia, and open avenues for possible interventions.
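    The trust-game paradigm described above can be sketched as a small simulation. All specifics here (the starting stake, the multiplier, the return rule, and the win-stay/lose-shift learning rule) are illustrative assumptions, not the study's actual task settings:

    ```python
    def play_round(invest, trustworthy):
        # Common trust-game convention (an assumption here): the invested
        # amount is tripled; a trustworthy counterpart returns half the pot,
        # an untrustworthy one keeps everything.
        pot = invest * 3
        return pot / 2 if trustworthy else 0.0

    def simulate_learning(trustworthy, n_rounds=10):
        """Simple learner: raise the stake after a profitable round, lower it
        after a loss, modelling gradual learning of trust/distrust responses."""
        invest = 5.0  # starting stake per round (arbitrary units)
        history = []
        for _ in range(n_rounds):
            returned = play_round(invest, trustworthy)
            history.append(invest)
            if returned > invest:
                invest = min(10.0, invest + 1.0)  # trust grows after profit
            else:
                invest = max(0.0, invest - 1.0)   # distrust grows after loss

    # Over repeated interactions, investments climb with a trustworthy
    # counterpart and fall with an untrustworthy one; attenuated learning,
    # as reported for the patient groups, would flatten these trajectories.
        return history
    ```

    In this toy model, intact learning shows up as diverging investment curves for the two counterpart types; the study's behavioural measure is analogous in spirit, not in detail.
    
    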

    INTERACT 2015 Adjunct Proceedings. 15th IFIP TC.13 International Conference on Human-Computer Interaction 14-18 September 2015, Bamberg, Germany

    INTERACT is among the world’s top conferences in Human-Computer Interaction. Starting with the first INTERACT conference in 1990, this conference series has been organised under the aegis of the Technical Committee 13 on Human-Computer Interaction of the UNESCO International Federation for Information Processing (IFIP). This committee aims to develop the science and technology of the interaction between humans and computing devices. The 15th IFIP TC.13 International Conference on Human-Computer Interaction, INTERACT 2015, took place from 14 to 18 September 2015 in Bamberg, Germany. The theme of INTERACT 2015 was "Connection.Tradition.Innovation". This volume presents the Adjunct Proceedings: it contains the position papers of the students of the Doctoral Consortium as well as the position papers of the participants of the various workshops.

    A multimodal positive computing system for public speaking

    A fear of public speaking can have a significant impact on an individual’s success in enterprise and education. This thesis presents a new, multimodal, digital system which enables users to practise their public speaking skills and gives them visual feedback in real time. It has been developed within the paradigm of Positive Computing, an interdisciplinary paradigm for human-computer interaction with themes including competence, self-awareness, stress reduction and autonomy. The term ‘multimodal’ refers to the fact that the system detects multiple speaking modes in the speaker, such as their gestures, voice and eye contact. The user can select whether they want to receive feedback on all speaking modes or a subset of them. The system consists of a Microsoft Kinect 1 connected to a laptop. The Microsoft Kinect, a 3D depth camera, is used to sense the user’s body movements, facial expressions and voice. The user stands in front of the system and speaks. Users can choose to see themselves represented on the screen as either an avatar or live video. Real-time feedback is superimposed on their chosen representation, in proximity to the area it relates to. The purpose of the feedback is to make the user aware of their speaking behaviour. The feedback is non-directive, as the user can choose whether they want to modify their behaviour. This system gives users the potential to develop skill and confidence before speaking in front of a live human audience. Users have reported that they enjoy using the system and can see its benefits. Results and descriptions of all user testing are reported in detail in this thesis.
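    The sense-evaluate-display loop described in this abstract can be sketched minimally as below. The function names (`read_frame`, `feedback_for`), the threshold values, and the cue wording are all hypothetical stand-ins; the actual system reads its sensor streams from the Kinect SDK and renders feedback on an avatar or live video rather than printing it:

    ```python
    import time

    # Hypothetical per-frame sensor readings; a real system would pull these
    # from the Kinect's skeleton, face, and audio streams.
    def read_frame():
        return {"gesture_energy": 0.4, "gaze_on_audience": True, "voice_volume": 0.2}

    # Illustrative thresholds; the real system's values are not given in the abstract.
    THRESHOLDS = {"gesture_energy": 0.3, "voice_volume": 0.25}

    def feedback_for(frame, enabled_modes):
        """Return non-directive cues, only for the modes the user selected."""
        cues = []
        if "gesture" in enabled_modes and frame["gesture_energy"] < THRESHOLDS["gesture_energy"]:
            cues.append("try using your hands more")
        if "voice" in enabled_modes and frame["voice_volume"] < THRESHOLDS["voice_volume"]:
            cues.append("speak up")
        if "gaze" in enabled_modes and not frame["gaze_on_audience"]:
            cues.append("look at the audience")
        return cues

    def run_session(duration_s=1.0, enabled_modes=("gesture", "voice", "gaze")):
        """Main loop: sense each frame, evaluate, and display feedback."""
        end = time.monotonic() + duration_s
        while time.monotonic() < end:
            frame = read_frame()
            for cue in feedback_for(frame, enabled_modes):
                print(cue)  # the real system overlays this on the avatar/video
            time.sleep(0.1)  # placeholder for the sensor's frame rate
    ```

    Letting the user pass a subset of modes to `feedback_for` mirrors the system's non-directive design: feedback is filtered to what the speaker has opted into, never forced.
    
    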

    Effectiveness of Virtual Reality Playback in Public Speaking Training

    ICMI ’20 Companion, October 25–29, 2020, Virtual event, Netherlands.
    This paper discusses factors with positive effects in the playback of virtual reality (VR) presentations in training. To date, the effectiveness of VR public speaking training in both anxiety reduction and skills improvement has been reported. Although videotape playback is an effective component of traditional public speaking training, very few researchers have focused on the effectiveness and possibilities of VR playback. In this research, a VR playback system for public speaking training is proposed, and a pilot experiment is carried out to examine the effects of the virtual agent, immersion and public speaking anxiety level in VR playback.