325 research outputs found

    Emotional advisor to help children with autism in social communication

    The deficit or impairment in the ability to reason about emotional states is known as mind-blindness. This condition is considered a key inhibitor of social and emotional intelligence in autistic people. Autism is a spectrum of neuro-developmental conditions that affects social functioning and communication and is often accompanied by repetitive behaviours and obsessive interests. Difficulties resulting from mind-blindness include gauging the interest of other parties during conversations, withdrawal from social contact, obliviousness to social cues, indifference to other people's opinions, and difficulty comprehending non-verbal communication. Existing assistive devices and tools mostly serve as remedial tools that provide a learning environment in which autistic children can learn the norms of social behaviour. However, these tools lack the capability to operate in conjunction with real-world situations. We propose an idea that aims to fulfill this need: a portable device able to assist autistic people in communication in real-life situations. We believe this portable device can help narrow the gap between us and the world of autism through assisted communication. In this paper, we present one part of this device, called the Emotional Advisor, which assists autistic children in engaging in meaningful conversations in which people are able to ascertain how they are feeling during communication.

    On the Development of Adaptive and User-Centred Interactive Multimodal Interfaces

    Multimodal systems have attracted increasing attention in recent years, which has made possible important improvements in the technologies for recognition, processing, and generation of multimodal information. However, many issues related to multimodality remain unclear, for example, the principles that make it possible to resemble human-human multimodal communication. This chapter focuses on some of the most important challenges that researchers have recently envisioned for future multimodal interfaces. It also describes current efforts to develop intelligent, adaptive, proactive, portable and affective multimodal interfaces.

    Modeling the user state for context-aware spoken interaction in ambient assisted living

    Ambient Assisted Living (AAL) systems must provide adapted services easily accessible by a wide variety of users. This is only possible if the communication between the user and the system is carried out through an interface that is simple, rapid, effective, and robust. Natural language interfaces such as dialog systems fulfill these requisites, as they are based on a spoken conversation that resembles human communication. In this paper, we enhance systems interacting in AAL domains by incorporating context-aware conversational agents that consider the external context of the interaction and predict the user's state. The user's state is built on the basis of their emotional state and intention, and it is recognized by a module conceived as an intermediate phase between natural language understanding and dialog management in the architecture of the conversational agent. This prediction, carried out for each user turn in the dialog, makes it possible to adapt the system dynamically to the user's needs. We evaluated our proposal by developing a context-aware system adapted to patients suffering from chronic pulmonary diseases, and provide a detailed discussion of the positive influence of our proposal on the success of the interaction, the information and services provided, and the perceived quality. This work was supported in part by Projects MINECO TEC2012-37832-C02-01, CICYT TEC2011-28626-C02-02, CAM CONTEXTS (S2009/TIC-1485).
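    The architecture described above places a user-state module between natural language understanding and dialog management. A minimal sketch of that idea follows; all names (`UserState`, `predict_user_state`, `adapt_prompt`) and the heuristics are illustrative assumptions, not the authors' actual implementation.

    ```python
    # Hedged sketch: a user-state module sitting between NLU and dialog
    # management, as described in the abstract. Names and heuristics are
    # assumptions for illustration only.
    from dataclasses import dataclass

    @dataclass
    class UserState:
        emotion: str    # e.g. "neutral", "distressed"
        intention: str  # e.g. "ask_info", "report_symptom"

    def predict_user_state(nlu_frame: dict) -> UserState:
        """Combine emotion and intention cues from one user turn."""
        emotion = "distressed" if nlu_frame.get("negative_words", 0) > 2 else "neutral"
        intention = nlu_frame.get("intent", "ask_info")
        return UserState(emotion, intention)

    def adapt_prompt(state: UserState) -> str:
        """The dialog manager adapts the next system turn to the user state."""
        if state.emotion == "distressed":
            return "I understand this is difficult. Let's go step by step."
        return "Sure, what would you like to know?"
    ```

    Because the state is re-predicted at every user turn, the dialog manager can change its strategy mid-conversation rather than committing to one fixed style.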

    The role of speech technology in biometrics, forensics and man-machine interface

    Optimism is growing day by day that in the near future our society will witness the Man-Machine Interface (MMI) using voice technology. Computer manufacturers are building voice recognition sub-systems into their new product lines. Although speech-technology-based MMI techniques have been used before, they require gathering and applying deep knowledge of spoken language and performance during electronic machine-based interaction. Biometric recognition refers to a system that is able to identify individuals based on their behavioral and biological characteristics. Following the success of fingerprints in forensic science and law enforcement, and with growing concerns relating to border control, banking access fraud, machine access control and IT security, there has been great interest in the use of fingerprints and other biological traits for automatic recognition. It is not surprising that biometric systems are playing an important role in all areas of our society. Biometric applications include smartphone security, mobile payment, international border control, national citizen registers and reserve facilities. The use of MMI via speech technology, which includes automated speech/speaker recognition and natural language processing, has a significant impact on all existing businesses based on personal computer applications. With the help of powerful and affordable microprocessors and artificial intelligence algorithms, a human being can talk to the machine to drive and control all computer-based applications. Today's applications show a small preview of a rich future for MMI based on voice technology, which will ultimately replace the keyboard and mouse with the microphone for easy access and make the machine more intelligent.

    A mobile virtual character with emotion-aware strategies for human-robot interaction

    Emotions may play an important role in human-robot interaction, especially with social robots. Although the emotion recognition problem has been massively studied, little research has investigated interaction strategies produced in response to inferred emotional states. The work described in this paper consists of conceiving and evaluating a dynamic in which, according to the user's emotional state inferred through facial expression analysis, two distinct interaction strategies are associated with a virtual character. An Android app, whose development is in progress, aggregates the user interface and interactive features. We performed user experiments to evaluate whether the proposed dynamic is effective in producing more natural and empathic interaction. FAPESP (São Paulo State Research Support Foundation) (grant 2014/16862-4).
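    The core of the dynamic above is a mapping from an inferred emotion to one of two interaction strategies. A minimal sketch, assuming illustrative strategy and emotion labels (the paper does not name them in this abstract):

    ```python
    # Hedged sketch of selecting one of two interaction strategies from an
    # emotion inferred via facial-expression analysis. Strategy names and
    # the emotion label sets are assumptions, not the authors' taxonomy.
    EMPATHIC = "empathic"
    TASK_ORIENTED = "task_oriented"

    # Emotions that plausibly warrant an empathic response from the character.
    NEGATIVE_EMOTIONS = {"sadness", "anger", "fear"}

    def select_strategy(inferred_emotion: str) -> str:
        """Map the user's inferred emotional state to a strategy label."""
        return EMPATHIC if inferred_emotion in NEGATIVE_EMOTIONS else TASK_ORIENTED
    ```

    Keeping the mapping explicit like this makes it easy to evaluate, in user experiments, whether switching strategies actually produces more natural interaction.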

    Multimodal interface for an intelligent wheelchair

    Integrated master's thesis. Informatics and Computing Engineering. Universidade do Porto, Faculdade de Engenharia. 201

    Integrating computer vision techniques into a touch pad system

    A key strength of touchpads, such as iPads or Galaxy Tabs, is that they provide portable access to the Internet and to many applications that entertain users and help manage their lives. Integrating computer vision methods into touchpads results in even more powerful devices that enable natural human-computer interaction. This thesis proposes two techniques for incorporating computer vision methods -- one supports touch-based interaction for biomedical image analysis, the other camera-based interaction for music therapy and entertainment. I'mCell is an application for annotating objects in images, for example, cells in phase-contrast microscopy images. MusicTracks recognizes a user's facial expression, captured by the camera of the touchpad, and plays music according to the user's mood. The I'mCell and MusicTracks applications have been implemented for the iPad. Users who experimented with the applications reported them to be convenient because they enable efficient (I'mCell) and enjoyable (MusicTracks) interactions and are easy to use and portable.

    Sensing technologies and machine learning methods for emotion recognition in autism: Systematic review

    Background: Human Emotion Recognition (HER) has been a popular field of study in recent years. Despite the great progress made so far, relatively little attention has been paid to the use of HER in autism. People with autism are known to face problems with daily social communication and with the prototypical interpretation of emotional responses, which are most frequently expressed via facial expressions. This poses significant practical challenges to the application of regular HER systems, which are normally developed for and by neurotypical people. Objective: This study reviews the literature on the use of HER systems in autism, particularly with respect to sensing technologies and machine learning methods, so as to identify existing barriers and possible future directions. Methods: We conducted a systematic review of articles published between January 2011 and June 2023 according to the 2020 PRISMA guidelines. Manuscripts were identified by searching the Web of Science and Scopus databases. Manuscripts were included when they related to emotion recognition, used sensors and machine learning techniques, and involved children, young people, or adults with autism. Results: The search yielded 346 articles. A total of 65 publications met the eligibility criteria and were included in the review. Conclusions: Studies predominantly used facial expression techniques as the emotion recognition method. Consequently, video cameras were the most widely used devices across studies, although a growing trend in the use of physiological sensors has been observed recently. Happiness, sadness, anger, fear, disgust, and surprise were most frequently addressed. Classical supervised machine learning techniques were primarily used, at the expense of unsupervised approaches or more recent deep learning models. Studies focused on autism in a broad sense, but limited efforts have been directed towards more specific disorders of the spectrum.
    Privacy or security issues were seldom addressed, and when they were, only at a rather insufficient level of detail. This research has been partially funded by the Spanish project “Advanced Computing Architectures and Machine Learning-Based Solutions for Complex Problems in Bioinformatics, Biotechnology, and Biomedicine (RTI2018-101674-B-I00)” and the Andalusian project “Integration of heterogeneous biomedical information sources by means of high performance computing. Application to personalized and precision medicine (P20_00163)”. Funding for this research is provided by the EU Horizon 2020 Pharaon project ‘Pilots for Healthy and Active Ageing’ (no. 857188). Moreover, this research has received funding under the REMIND project, Marie Sklodowska-Curie EU Framework for Research and Innovation Horizon 2020 (no. 734355). This research has been partially funded by the BALLADEER project (PROMETEO/2021/088) from the Conselleria de Innovación, Universidades, Ciencia y Sociedad Digital, Generalitat Valenciana. Furthermore, it has been partially funded by the AETHER-UA project (PID2020-112540RB-C43) from the Spanish Ministry of Science and Innovation. This work has also been partially funded by “La Conselleria de Innovación, Universidades, Ciencia y Sociedad Digital”, under the project “Development of an architecture based on machine learning and data mining techniques for the prediction of indicators in the diagnosis and intervention of autism spectrum disorder. AICO/2020/117”. This study was also funded by the Colombian Government through Minciencias grant number 860 “international studies for doctorate”. This research has been partially funded by the Spanish Government through the project PID2021-127275OB-I00, FEDER “Una manera de hacer Europa”. Moreover, this contribution has been supported by the Spanish Institute of Health ISCIII through the DTS21-00047 project. Furthermore, this work was funded by COST Actions “HARMONISATION” (CA20122) and “A Comprehensive Network Against Brain Cancer” (Net4Brain - CA22103). Sandra Amador is granted by the Generalitat Valenciana and the European Social Fund (CIACIF/2022/233).
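    The review's conclusions note that classical supervised machine learning over facial-expression features was the dominant approach. A minimal sketch of one such classical technique, a nearest-centroid classifier over feature vectors (e.g. facial landmark distances); the features and labels here are synthetic, and the method is one illustrative example rather than any specific reviewed system:

    ```python
    # Hedged sketch of a classical supervised pipeline of the kind the
    # review found most common: nearest-centroid classification over
    # facial-expression feature vectors. Data is synthetic/illustrative.
    from math import dist

    def fit_centroids(samples: dict) -> dict:
        """Average the feature vectors of each emotion class."""
        return {
            label: [sum(col) / len(col) for col in zip(*vectors)]
            for label, vectors in samples.items()
        }

    def predict(centroids: dict, x: list) -> str:
        """Assign the emotion whose class centroid is closest to x."""
        return min(centroids, key=lambda label: dist(centroids[label], x))
    ```

    Unlike the deep learning models the review found underused, this kind of classifier needs very little data, which may partly explain its prevalence in small autism-focused studies.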