Ubiquitous Emotion Recognition with Multimodal Mobile Interfaces
In 1997, Rosalind Picard introduced the fundamental concepts of affect recognition. Since then, multimodal interfaces such as brain-computer interfaces (BCIs), RGB and depth cameras, and physiological wearables have been used to study human emotion, along with multimodal facial and physiological data. Much of the work in this field focuses on a single modality to recognize emotion. However, incorporating multimodal data makes a wealth of additional information available for recognizing emotions. With this in mind, the aim of this workshop is to examine current and future research activities and trends in ubiquitous emotion recognition through the fusion of data from various multimodal, mobile devices.
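The workshop abstract does not prescribe a fusion method; as a rough illustration of one common approach, the sketch below shows decision-level (late) fusion, where each modality's recognizer emits a probability distribution over the same emotion labels and the distributions are combined by a weighted average. The modality names, scores, and weights are all hypothetical.

```python
# Minimal sketch of decision-level (late) fusion for multimodal emotion
# recognition. All modality names, probabilities, and weights below are
# illustrative assumptions, not values from the workshop abstract.
import numpy as np

EMOTIONS = ["anger", "happiness", "sadness", "neutral"]

def fuse_predictions(per_modality_probs: dict[str, np.ndarray],
                     weights: dict[str, float]) -> str:
    """Weighted average of per-modality emotion distributions."""
    fused = np.zeros(len(EMOTIONS))
    for modality, probs in per_modality_probs.items():
        fused += weights.get(modality, 1.0) * probs
    fused /= fused.sum()  # renormalize to a proper distribution
    return EMOTIONS[int(np.argmax(fused))]

# Hypothetical outputs from a facial-expression model, a physiological
# wearable model, and an EEG (BCI) model over the same label set.
probs = {
    "face": np.array([0.10, 0.60, 0.10, 0.20]),
    "wearable": np.array([0.15, 0.45, 0.25, 0.15]),
    "eeg": np.array([0.20, 0.50, 0.20, 0.10]),
}
weights = {"face": 0.5, "wearable": 0.3, "eeg": 0.2}
print(fuse_predictions(probs, weights))  # -> "happiness"
```

Late fusion is only one option; feature-level (early) fusion, which concatenates per-modality features before classification, is the usual alternative when modalities are sampled at compatible rates.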
Affective Medicine: a review of Affective Computing efforts in Medical Informatics
Background: Affective computing (AC) is concerned with emotional interactions performed with and through computers. It is defined as "computing that relates to, arises from, or deliberately influences emotions". AC enables investigation and understanding of the relation between human emotions and health, as well as the application of assistive and useful technologies in the medical domain. Objectives: 1) To review the general state of the art in AC and its applications in medicine, and 2) to establish synergies between the research communities of AC and medical informatics. Methods: Aspects related to the human affective state as a determinant of human health are discussed, coupled with an illustration of significant AC research and related literature output. Moreover, affective communication channels are described and their range of application fields is explored through illustrative examples. Results: The presented conferences, European research projects and research publications illustrate the recent increase of interest in the AC area within the medical community. Tele-home healthcare, ambient intelligence (AmI), ubiquitous monitoring, e-learning and virtual communities with emotionally expressive characters for elderly or impaired people are a few of the areas where the potential of AC has been realized and applications have emerged. Conclusions: A number of gaps can potentially be overcome through the synergy of AC and medical informatics. The application of AC technologies parallels the advancement of the existing state of the art and the introduction of new methods. The body of work and projects reviewed in this paper attests to an ambitious and optimistic synergetic future for the field of affective medicine.
Linking recorded data with emotive and adaptive computing in an eHealth environment
Telecare, and particularly lifestyle monitoring, currently relies on the ability to detect and respond to changes in individual behaviour using data derived from sensors around the home. This means that a significant aspect of behaviour, an individual's emotional state, is not accounted for in reaching a conclusion as to the form of response required. The linked concepts of emotive and adaptive computing offer an opportunity to include information about emotional state, and the paper considers how current developments in this area have the potential to be integrated within telecare and other areas of eHealth. In doing so, it examines the development and current state of the art of both emotive and adaptive computing, including their conceptual background, and places them into an overall eHealth context for application and development.
Touch Screen Avatar English Learning System For University Students Learning Simplicity
This paper discusses a touch-screen avatar for an English language learning application. The system combines an avatar acting as an Animated Pedagogical Agent (APA) with a touch-screen application built on current gesture-based computing, an approach with the potential to change how we learn by reducing the number of Information and Communication Technology (ICT) devices used during teaching and learning. The key is the interaction between university students and the touch-screen avatar application, together with learning resources that can be accessed anytime and anywhere (24/7), so that students can study according to their own time preferences and in their own comfort, outside the traditional classroom. Students are provided with an interactive learning tool that follows current trends and can be personalized to their interests. In addition, their performance is monitored and evaluated from a distance, so that their learning process is not disrupted and they do not feel controlled. Students are thus expected to have a lower affective filter, which may enhance the way they learn unconsciously. Keywords: Gesture-Based Computing, Avatar, Portable Learning Tool, Interactivity, Language Learning
Understanding face and eye visibility in front-facing cameras of smartphones used in the wild
Commodity mobile devices are now equipped with high-resolution front-facing cameras, enabling applications in biometrics (e.g., FaceID in the iPhone X), facial expression analysis, and gaze interaction. However, it is unknown how often users hold devices in a way that allows capturing their face or eyes, and how this impacts detection accuracy. We collected 25,726 in-the-wild photos taken from the front-facing camera of smartphones, as well as associated application usage logs. We found that the full face is visible about 29% of the time, and that in most cases the face is only partially visible. Furthermore, we identified an influence of users' current activity; for example, when watching videos, the eyes but not the entire face are visible 75% of the time in our dataset. We found that a state-of-the-art face detection algorithm performs poorly on photos taken from front-facing cameras. We discuss how these findings impact mobile applications that leverage face and eye detection, and derive practical implications to address the limitations of the state of the art.
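The abstract does not name the detector it evaluated. As a rough illustration of the face-versus-eyes-only distinction the study measures, the sketch below uses OpenCV's stock Haar cascades as stand-ins; the image path, thresholds, and three-way labelling are assumptions, not the paper's pipeline.

```python
# Sketch: classify a front-facing camera photo as showing a full face,
# eyes only, or neither. Uses OpenCV's bundled Haar cascades as a
# stand-in detector; not the detector evaluated in the paper.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def visibility(image_path: str) -> str:
    """Return 'full face', 'eyes only', or 'neither' for one photo."""
    gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1,
                                          minNeighbors=5)
    if len(faces) > 0:
        return "full face"
    eyes = eye_cascade.detectMultiScale(gray, scaleFactor=1.1,
                                        minNeighbors=5)
    return "eyes only" if len(eyes) > 0 else "neither"

print(visibility("selfie.jpg"))  # hypothetical input photo
```

Running such a check over a photo log and grouping by the foreground app would reproduce the kind of per-activity visibility statistics the study reports.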
On the Development of Adaptive and User-Centred Interactive Multimodal Interfaces
Multimodal systems have attained increased attention in recent years, which has made possible important improvements in the technologies for recognition, processing, and generation of multimodal information. However, there are still many issues related to multimodality which are not clear, for example, the principles that make it possible to resemble human-human multimodal communication. This chapter focuses on some of the most important challenges that researchers have recently envisioned for future multimodal interfaces. It also describes current efforts to develop intelligent, adaptive, proactive, portable and affective multimodal interfaces.
Multimodal and ubiquitous computing systems: supporting independent-living older users
We document the rationale and design of a multimodal interface to a pervasive/ubiquitous computing system that supports independent living by older people in their own homes. The Millennium Home system involves fitting a resident's home with sensors; these sensors can be used to trigger sequences of interaction with the resident to warn them about dangerous events, or to check if they need external help. We draw lessons from the design process and conclude the paper with implications for the design of multimodal interfaces to ubiquitous systems developed for the elderly and in healthcare, as well as for more general ubiquitous computing applications.
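The abstract gives no implementation detail, but the trigger-and-escalate pattern it describes can be sketched briefly. In the following Python sketch every event name, message, channel, and timing is hypothetical; it is not the Millennium Home system's actual logic.

```python
# Minimal sketch of the escalation pattern described above: a sensor
# event starts a dialogue with the resident and escalates to an external
# alert if no response arrives. All names and timings are hypothetical.
import time

# (channel, message, seconds_to_wait_for_a_reply) -- in a real
# deployment the waits would be minutes, not seconds.
ESCALATION_STEPS = [
    ("voice_prompt", "Are you all right? Please press the OK button.", 2),
    ("loud_alarm", "Please respond: press any button or speak.", 2),
    ("external_alert", "No response from resident; notifying carer.", 0),
]

def handle_event(event: str, resident_responded) -> None:
    """Walk the escalation ladder until the resident responds."""
    print(f"sensor event: {event}")
    for channel, message, wait_s in ESCALATION_STEPS:
        print(f"[{channel}] {message}")
        if channel == "external_alert":
            return  # final step: external help has been requested
        time.sleep(wait_s)  # give the resident time to reply
        if resident_responded():
            print("resident responded; standing down")
            return

# Example: an occupancy sensor that has not reset for unusually long.
handle_event("bathroom_occupied_45min", resident_responded=lambda: False)
```

Escalating from an unobtrusive prompt to an external alert matches the abstract's dual goal of warning residents about dangerous events while only involving outside help when needed.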