7 research outputs found

    Analysis of the interaction between elderly people and a simulated virtual coach

    The EMPATHIC project develops and validates new interaction paradigms for personalized virtual coaches (VC) to promote healthy and independent aging. To this end, the work presented in this paper aims to analyze the interaction between the EMPATHIC-VC and its users. One of the goals of the project is to ensure an end-user-driven design, involving senior users from the beginning and during each phase of the project. Thus, the paper focuses on sessions in which seniors interacted with a simulated, Wizard of Oz-driven system. A coaching strategy based on the GROW model was used throughout these sessions to guide the interactions and engage the elderly participants with the goals of the project. In this interaction framework, both the human and the system behavior were analyzed. The way the wizard implements the GROW coaching strategy is a key aspect of the system's behavior during the interaction. The language used by the virtual agent, as well as its physical appearance, are also important cues that were analyzed. Regarding user behavior, vocal communication provides information about the speaker's emotional status, which is closely related to human behavior and can be extracted through speech and language analysis. Likewise, the analysis of facial expressions, gaze and gestures can provide information on non-verbal human communication even when the user is not talking. In addition, in order to engage senior users, their preferences and likes had to be considered. To this end, the effect of the VC on the users was gathered by means of direct questionnaires. 
These analyses have shown a positive and calm behavior of users when interacting with the simulated virtual coach, as well as some difficulties of the system in carrying out the proposed coaching strategy. The research presented in this paper is conducted as part of the EMPATHIC project, which has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement no. 769872.

    Giving robots a voice: human-in-the-loop voice creation and open-ended labeling

    Speech is a natural interface for humans to interact with robots. Yet, aligning a robot’s voice to its appearance is challenging due to the rich vocabulary of both modalities. Previous research has explored a few labels to describe robots and tested them on a limited number of robots and existing voices. Here, we develop a robot-voice creation tool followed by large-scale behavioral human experiments (N=2,505). First, participants collectively tune robotic voices to match 175 robot images using an adaptive human-in-the-loop pipeline. Then, participants describe their impression of the robot or their matched voice using another human-in-the-loop paradigm for open-ended labeling. The elicited taxonomy is then used to rate robot attributes and to predict the best voice for an unseen robot. We offer a web interface to aid engineers in customizing robot voices, demonstrating the synergy between cognitive science and machine learning for engineering tools.

    SAM Device. An IoT device for assisting the elderly

    This work proposes the creation of a physical device to support a companionship service for the elderly (SAM, from the Spanish "Servicio de Acompañamiento a Mayores"), together with a software platform for natural language synthesis, recognition and generation that allows the device to offer a Voice User Interface (VUI) based on speech and natural language. We have named this device the SAM Device, and it will be integrated into a platform for the companionship and care of elderly people (SAM Project) that the research group grupoM. Redes y Middleware at the Universidad de Alicante is developing for the social services of several municipalities, a project in which I also collaborated as an intern. It is an ambitious project involving both hardware and software design, largely based on distributed systems and artificial intelligence (AI) techniques, mainly Natural Language Processing (NLP), offered as Internet or cloud services. The physical device, or SAM Device, is a smart Internet of Things (IoT) device capable of connecting to the Internet over different kinds of communication networks, identifying and handling intervention requests, recognizing spoken natural language commands, holding simple dialogues with elderly users, and responding likewise through voice and natural language. For both the hardware and the software design, care has been taken to keep the proposals open and available to the whole community. The device must be powerful enough to provide the required companionship service, yet lean enough to remain affordable on any budget. 
The project also includes the development of several SAM Device prototypes, as well as the development and deployment of the AI platform for natural language recognition in a distributed environment under a service-provision model. The municipalities, through their social services, selected a group of elderly people to test the device in their own homes, with follow-up by their social and healthcare staff. However, the situation caused by the pandemic delayed this much-desired phase, so I was forced to drop this objective from the scope of the work. Despite this and some other complications caused by COVID, the project was brought to completion: a set of experiments was designed showing that both the SAM Device and the NLP platform perform satisfactorily, and that they could be easily integrated into the whole ecosystem created for elderly companionship (SAM Project).

    Analysis and automatic identification of spontaneous emotions in speech from human-human and human-machine communication

    383 p. This research mainly focuses on improving our understanding of human-human and human-machine interactions by analysing participants' emotional status. For this purpose, we have developed and enhanced Speech Emotion Recognition (SER) systems for both types of interaction in real-life scenarios, with explicit emphasis on the Spanish language. In this framework, we have conducted an in-depth analysis of how humans express emotions through speech when communicating with other persons or machines in actual situations. Thus, we have analysed and studied the way in which emotional information is expressed in a variety of true-to-life environments, which is a crucial aspect for the development of SER systems. This study aimed to comprehensively understand the challenge we wanted to address: identifying emotional information in speech using machine learning technologies. Neural networks have been demonstrated to be adequate tools for identifying events in speech and language. Most of the experiments aimed to make local comparisons between specific aspects; thus, the experimental conditions were tailored to each particular analysis. The experiments across the different articles (from P1 to P19) are hardly comparable, owing to our continuous learning in dealing with the difficult task of identifying emotions in speech. In order to make a fair comparison, additional unpublished results are presented in the Appendix. These experiments were carried out under identical and rigorous conditions. This general comparison offers an overview of the advantages and disadvantages of the different methodologies for the automatic recognition of emotions in speech.

    A Sensing Platform to Monitor Sleep Efficiency

    Sleep plays a fundamental role in human life. Sleep research mainly focuses on understanding sleep patterns, stages and duration. Accurate sleep monitoring can detect early signs of sleep deprivation and insomnia, enabling mechanisms for preventing and overcoming these problems. Recently, sleep monitoring has been achieved using wearable technologies, which can also analyse body movements, but older people can encounter difficulties in using and maintaining these devices. In this paper, we propose an unobtrusive sensing platform able to analyze body movements, infer sleep duration and the awakenings that occurred during the night, and evaluate the sleep efficiency index. To prove the feasibility of the suggested method, we conducted a pilot trial involving several healthy users. The sensors were installed within the bed and, each day, each user completed the Groningen Sleep Quality Scale questionnaire to evaluate their perceived sleep quality. Finally, we show a potential correlation between this perceived evaluation and an objective index, the sleep efficiency.
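The sleep efficiency index mentioned in this abstract is conventionally defined as total sleep time divided by time in bed, expressed as a percentage. A minimal sketch of that conventional formula (the function name, inputs and awakening handling are illustrative assumptions, not the paper's implementation):

```python
def sleep_efficiency(time_in_bed_min: float, awakenings_min: list[float]) -> float:
    """Sleep efficiency index: total sleep time / time in bed, as a percentage.

    time_in_bed_min: minutes between getting into and out of bed.
    awakenings_min: durations (in minutes) of awakenings detected during the night.
    """
    total_sleep = time_in_bed_min - sum(awakenings_min)
    return 100.0 * total_sleep / time_in_bed_min

# Example: 480 min in bed with three awakenings totalling 48 min
print(sleep_efficiency(480, [20, 18, 10]))  # 90.0
```

A platform like the one described would derive the awakening intervals from the bed-mounted movement sensors before applying a formula of this kind.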

    Development and evaluation of a novel virtual agent-based app for patients with colorectal cancer: A mixed methods study

    Background and aim: Information support is an integral part of cancer care, but its provision can be problematic in busy health settings. The aim of this project was to develop and evaluate a health app to facilitate the provision of information support to newly diagnosed patients with colorectal cancer (CRC). Instead of delivering information as text, three animated embodied virtual agents (VAs) were deployed. The VAs were modelled on the patients' treating clinicians (a male oncologist, a female nurse and a female pharmacist) to explore the role of familiarity, which had not been addressed in previous research. Study methods: A multi-stage development process was followed for the app, which was provided to the study participants before the beginning of their treatment. A convergent parallel mixed methods design involving pre- and post-exposure questionnaires (adapted versions of the Toronto Information Needs Questionnaire and the System Usability Scale), app usage data and semi-structured interviews was deployed to evaluate the intervention. Results and discussion: The app was acceptable to the end users and had a good degree of usability (mean System Usability Scale score = 73.89). The information content was appropriate and met patients' demands to a moderate extent; this was because patients used other information sources (e.g., printed material) to address their needs. Incorporating supportive functions, such as a medication calendar, in addition to the information content emerged as an important aspect. The inclusion of VAs was deemed appropriate. The VAs fostered a sense of presence, added trustworthiness to the information content and were perceived as more interactive than reading text. Having a VA represent a familiar clinician was favoured by most users. 
The vast majority of patients perceived the VAs as cartoon figures and suggested that they should be made more realistic in order to give the impression of an exchange with a real person. Natural voices were preferred over synthetic speech. Conclusion: VA-based mHealth interventions are an acceptable way of supporting patients with CRC. Appropriate consideration should be given to the requirements of the intended user audience in order to design acceptable interventions that reflect their needs.
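The mean System Usability Scale score of 73.89 reported above sits on the standard 0–100 SUS scale. A minimal sketch of the conventional SUS scoring procedure (this illustrates the standard published scoring rule, not the authors' own analysis code):

```python
def sus_score(responses: list[int]) -> float:
    """Standard SUS scoring for one respondent: 10 Likert items rated 1-5.

    Odd-numbered items (positively worded) contribute (rating - 1);
    even-numbered items (negatively worded) contribute (5 - rating).
    The summed contributions (range 0-40) are scaled by 2.5 to a 0-100 score.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS expects 10 ratings, each between 1 and 5")
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # 0-based index: even index = odd item
        for i, r in enumerate(responses)
    ]
    return 2.5 * sum(contributions)

# Example: a fairly positive respondent
print(sus_score([4, 2, 4, 2, 5, 1, 4, 2, 4, 2]))  # 80.0
```

The study's mean of 73.89 would be the average of such per-respondent scores, which is typically read as above-average usability (the commonly cited SUS benchmark midpoint is 68).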