
    The State of Speech in HCI: Trends, Themes and Challenges


    Determining what people feel and think when interacting with humans and machines

    Any interactive software program must interpret the user's actions and produce a response that is intelligible and meaningful to the user. In most situations, the user's options are determined by the software and hardware, and the actions that can be carried out are unambiguous: the machine knows what it should do when the user performs an action. In most cases, the user knows what to do by relying on conventions, which he may have learned by reading the instruction manual, by seeing them performed by somebody else, or by modifying a previously learned convention. Some, or most, of the time he simply finds out by trial and error. In user-friendly interfaces, the user knows, without having to read extensive manuals, what is expected of him and how he can get the machine to do what he wants. An intelligent interface is so called because it does not assume this kind of programming of the user by the machine; instead, the machine itself can figure out what the user wants and how he wants it, without the user having to take the trouble of telling the machine in the way the machine dictates, but being able to do so in his own words. Or perhaps without using any words at all, as the machine is able to read off the user's intentions by observing his actions and expressions. Ideally, the machine should be able to determine what the user wants, what he expects, what he hopes will happen, and how he feels.

    Tell Me How You Feel: Designing Emotion-Aware Voicebots to Ease Pandemic Anxiety In Aging Citizens

    Feelings of anxiety and loneliness among the aging population have recently been amplified by COVID-19-related lockdowns. An emotion-aware multimodal bot application combining a voice and visual interface was developed to address this problem in the group of older citizens. The application is novel in that it combines three main modules: information, emotion selection and psychological intervention, with the aim of improving human well-being. A preliminary study with the target group confirmed that multimodality improves usability and that the information module is essential for participating in a psychological intervention. The solution is universal and can also be applied to areas not directly related to the COVID-19 pandemic.
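    To make the three-module structure named in this abstract concrete, here is a minimal sketch of such a sequential voicebot session. All names (SessionState, information_module, run_session, the emotion labels and responses) are hypothetical illustrations under assumed behavior, not the paper's actual design or API.

```python
# Hypothetical sketch of the three-module voicebot pipeline described above:
# information -> emotion selection -> psychological intervention.
from dataclasses import dataclass

@dataclass
class SessionState:
    informed: bool = False          # has the user seen the information module?
    emotion: str | None = None      # emotion picked by the user

def information_module(state: SessionState) -> SessionState:
    # Present pandemic-related information first; the study found this step
    # essential before a psychological intervention.
    print("Here is some information about the current situation...")
    state.informed = True
    return state

def emotion_selection(state: SessionState) -> SessionState:
    # Let the user name how they feel (voice or visual input in the paper;
    # plain text here for simplicity).
    state.emotion = input("Tell me how you feel (e.g. anxious, lonely, calm): ").strip().lower()
    return state

def psychological_intervention(state: SessionState) -> None:
    # Choose a response based on the selected emotion; the mapping below is
    # an invented placeholder.
    responses = {
        "anxious": "Let's try a short breathing exercise together.",
        "lonely": "Would you like to hear about local community activities?",
    }
    print(responses.get(state.emotion, "Thank you for sharing. I'm here to listen."))

def run_session() -> None:
    state = SessionState()
    state = information_module(state)
    state = emotion_selection(state)
    psychological_intervention(state)
```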

    Affective Interaction in Smart Environments

    We present a concept in which the smart environments of the future will be able to provide ubiquitous affective communication: all surfaces become interactive and the furniture displays emotions. In particular, we present a first prototype that allows people to share their emotional states in a natural way. Input is given through facial expressions, and output is displayed in a context-aware multimodal way. Two novel output modalities are presented: a robotic painting that applies the concept of affective communication to informative art, and an RGB lamp that represents emotions while remaining in the user's peripheral attention. An observation study was conducted during an interactive event, and we report our preliminary findings in this paper.
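    As an illustration of the peripheral RGB-lamp modality, here is a minimal sketch mapping a detected emotion label to a lamp colour. The emotion set, the colour assignments, and set_lamp_color are assumptions for illustration, not the prototype's actual design.

```python
# Illustrative sketch: map a detected emotion to an RGB colour for an
# ambient lamp, in the spirit of the paper's peripheral-display idea.
EMOTION_COLORS: dict[str, tuple[int, int, int]] = {
    "joy":     (255, 200, 0),    # warm yellow
    "sadness": (0, 80, 255),     # cool blue
    "anger":   (255, 30, 30),    # red
    "calm":    (80, 255, 120),   # soft green
}
NEUTRAL = (255, 255, 255)        # fallback: plain white light

def emotion_to_color(emotion: str) -> tuple[int, int, int]:
    """Return the lamp colour for an emotion label, defaulting to neutral."""
    return EMOTION_COLORS.get(emotion.lower(), NEUTRAL)

def set_lamp_color(rgb: tuple[int, int, int]) -> None:
    # Stand-in for whatever lamp API the prototype used (e.g. a serial or
    # network call); here we just print the value.
    print(f"Lamp set to RGB{rgb}")

set_lamp_color(emotion_to_color("joy"))  # -> Lamp set to RGB(255, 200, 0)
```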

    Enabling audio-haptics

    This thesis deals with possible solutions to facilitate orientation, navigation and overview in non-visual interfaces and virtual environments with the help of sound in combination with force-feedback haptics.

    The role of edutainment in e-learning: An empirical study.

    Impersonal, non-face-to-face contact and text-based interfaces present major problems for learners in the e-Learning segment, since learners miss out on vital personal interactions and useful feedback messages, as well as on real-time information about their learning performance. This research programme suggests a multimodal approach combined with edutainment, which is expected to improve communication between users and e-Learning systems. This thesis empirically investigates users' effectiveness, efficiency and satisfaction, in order to determine the influence of edutainment (e.g. amusing speech and facial expressions) combined with multimodal metaphors (e.g. speech, earcons, avatars) within e-Learning environments. Besides text, speech, visual and earcon modalities, avatars are incorporated to offer a visual and listening realm in online learning.

    The methodology used for this research project comprises a literature review as well as three experimental platforms. The initial experiment serves as a first step towards investigating the feasibility of completing all the tasks and objectives of the research project outlined above. The remaining two experiments explore further the role of edutainment in enhancing e-Learning user interfaces. The overall challenge is to enhance user-interface usability; to improve the presentation of learning in e-Learning systems; to improve user enjoyment; to enhance interactivity and learning performance; and to contribute to developing guidelines for multimodal involvement in the context of edutainment.

    The results of the experiments presented in this thesis show an improvement in user enjoyment, as measured by satisfaction. In the first experiment, the enjoyment level increased by 11% in the Edutainment (E) platform compared to the Non-edutainment (NE) interface. In the second experiment, the Game-Based Learning (GBL) interface obtained 14% greater enhancement than the Virtual Class (VC) interface and 20.85% more than the Storytelling (ST) interface; in the third experiment, the game incorporating avatars increased by a further 3% compared with the other platforms. In addition, improvements in both user performance and learning retention were detected through effectiveness and efficiency measurements. In the first experiment, the difference between the mean completion times for conditions (E) and (NE) was not significant when tested with a t-test. In the second experiment, the time spent in condition (GBL) was higher by 7-10 seconds than in the other conditions. In the third experiment, the mean times taken by the users in all conditions were comparable, with an average of 22.8%. With regard to effectiveness, the findings of the first experiment showed that the mean number of correct answers for condition (E) was higher by 20% than the mean for condition (NE). In the second experiment, users in condition (GBL) performed better than users in the other conditions: the percentage of correct answers was higher by 20% and by 34.7% in condition (GBL) than in (VC) and (ST), respectively. Finally, a set of empirically derived guidelines was produced for the design of usable multimodal e-Learning and edutainment interfaces.
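    The significance test mentioned in this abstract can be illustrated with a short sketch: an independent-samples t-test comparing task-completion times across two conditions, here via SciPy. The numbers below are invented placeholders, not data from the thesis.

```python
# Illustrative only: an independent-samples t-test comparing completion
# times in two conditions, as the thesis does for (E) vs (NE). The data
# below are made-up placeholders, not results from the study.
from scipy.stats import ttest_ind

times_e  = [41.2, 39.8, 44.1, 40.5, 42.9]   # seconds, Edutainment condition
times_ne = [40.7, 43.0, 41.5, 39.9, 42.2]   # seconds, Non-edutainment condition

t_stat, p_value = ttest_ind(times_e, times_ne)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
# A p-value above 0.05 would match the thesis's finding of no significant
# difference in completion time between (E) and (NE).
```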