4 research outputs found

    Physiological Signals Monitoring Assistive Technology in Interaction with Machines to Address Healthy Aging

    This paper addresses the development of age-friendly services and settings for interaction with machines, one of the strategies recommended by the WHO for healthy aging. Mental wellbeing plays an important role in healthy aging, yet over 20% of people aged 60 and above worldwide are affected by mental wellbeing issues. Mental wellbeing problems affect physical health and vice versa, and can lead to severe illness. Life stressors are among the main contributors to mental wellbeing problems, and people in this age group are especially exposed to them, particularly during a pandemic. Early detection of stress and mood swings could support better mental wellbeing, which currently relies mainly on self-reports that are biased and subjective. Moreover, the traditional physiological measure of stress, quantified by cortisol levels, requires laboratory settings. This paper therefore addresses the need for assistive technologies that provide early detection of, and awareness of, experienced stress and suggest suitable actions, supporting mental wellbeing in everyday life without dependence on laboratory settings, for the purpose of healthy aging.

    On the Linguistic and Computational Requirements for Creating Face-to-Face Multimodal Human-Machine Interaction

    In this study, conversations between humans and avatars are analyzed linguistically, organizationally, and structurally, focusing on what is necessary for creating face-to-face multimodal interfaces for machines. We video-recorded thirty-four human-avatar interactions, performed complete linguistic microanalysis on video excerpts, and marked all occurrences of multimodal actions and events. Statistical inferences were applied to the data, allowing us to understand not only how often multimodal actions occur but also how multimodal events are distributed between the speaker (emitter) and the listener (recipient). We also observed the distribution of multimodal occurrences for each modality. The data show evidence that double-loop feedback is established during face-to-face conversation. This led us to propose that knowledge from Conversation Analysis (CA), cognitive science, and Theory of Mind (ToM), among others, should be incorporated into the models used for describing human-machine multimodal interactions. Face-to-face interfaces require a control layer in addition to the multimodal fusion layer. This layer must organize the flow of conversation, integrate the social context into the interaction, and plan 'what' and 'how' to advance the interaction. This higher level is best understood when insights from CA and ToM are incorporated into the interface system.

    Toward a Context-Aware Human–Robot Interaction Framework Based on Cognitive Development
