2,152 research outputs found

    Virtual Reality Games for Motor Rehabilitation

    This paper presents a fuzzy-logic-based method to track user satisfaction without devices that monitor users' physiological condition. User satisfaction is key to any product's acceptance, and computer applications and video games offer a unique opportunity to tailor the environment to each user's needs. We have implemented a non-adaptive fuzzy logic model of emotion, based on the emotional component of the Fuzzy Logic Adaptive Model of Emotion (FLAME) proposed by El-Nasr, to estimate player emotion in Unreal Tournament 2004. In this paper we describe the implementation of this system and present the results of one of several play tests. Our research contradicts the current literature, which suggests that physiological measurements are needed: we show that a software-only method can estimate user emotion.
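    The appraisal style described above can be sketched with a few fuzzy membership functions. This is a minimal illustration in the spirit of FLAME's event-appraisal component; the membership shapes, emotion labels, and rule pairings below are assumptions for illustration, not the authors' actual model.

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def appraise(desirability, likelihood):
    """Map a fuzzy appraisal of a game event to emotion intensities.

    desirability and likelihood are in [0, 1]; returns a dict of
    emotion -> intensity. Rule pairings are illustrative only.
    """
    desirable   = tri(desirability, 0.5, 1.0, 1.5)   # "event is good for the player"
    undesirable = tri(desirability, -0.5, 0.0, 0.5)  # "event is bad for the player"
    expected    = tri(likelihood, 0.5, 1.0, 1.5)
    unexpected  = tri(likelihood, -0.5, 0.0, 0.5)
    # Mamdani-style min for rule firing strength.
    return {
        "joy":      min(desirable, expected),     # desirable event occurred as expected
        "hope":     min(desirable, unexpected),   # desirable but uncertain prospect
        "fear":     min(undesirable, unexpected), # undesirable uncertain prospect
        "distress": min(undesirable, expected),   # undesirable event confirmed
    }

est = appraise(desirability=0.9, likelihood=0.8)
print(f"joy={est['joy']:.2f}, fear={est['fear']:.2f}")  # joy=0.60, fear=0.00
```

    A software-only system would feed in-game events (kills, deaths, pickups) through appraisals like this instead of reading physiological sensors.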

    Painting the ideal home: using art to express visions of technologically supported independent living for older people in North East England

    This paper describes an investigation into the development of future technological products to support older people in everyday living, conducted through the agency of a community art group. Recent research has identified a number of challenges facing designers seeking to use traditional participatory design approaches to gather technology-requirements data from older people. Here, a project is described that encouraged a group of older people to think creatively about their needs and desires for technological support through the medium of paint. The artistic expression technique described in this article allowed the identification of issues that had also been found by previous research using a range of different techniques. This indicates that the approach shows promise, as it allows information to be gathered in a comfortable and familiar environment, using methods already known to the participants and which they find enjoyable. It provides a complement (or possible alternative) to standard protocols, and has the potential benefit of eliciting even richer information, since the primary task is enjoyable in its own right and is not associated with an interrogative process. Furthermore, it is argued that some of the key risks of traditional approaches are lessened or removed by the naturalistic setting of this approach.

    Theory of Robot Communication: II. Befriending a Robot over Time

    Building on theories of Computer-Mediated Communication (CMC), Human-Robot Interaction, and Media Psychology (i.e. the Theory of Affective Bonding), the current paper proposes an explanation of how, over time, people experience the mediated or simulated aspects of interaction with a social robot. In two simultaneously running loops, a more reflective process is balanced against a more affective process. If human interference is detected behind the machine, Robot-Mediated Communication commences, which largely follows CMC assumptions; if human interference remains undetected, Human-Robot Communication comes into play, treating the robot as an autonomous social actor. The more emotionally aroused a robot user is, the more likely they are to develop an affective relationship with what is actually a machine. The main contribution of this paper is an integration of Computer-Mediated Communication, Human-Robot Communication, and Media Psychology, outlining a full theory of robot communication connected to friendship formation that accounts for communicative features, modes of processing, and psychophysiology.
    Comment: Hoorn, J. F. (2018). Theory of robot communication: II. Befriending a robot over time. arXiv:cs, 2502572(v1), 1-2

    Listening for Sirens: Locating and Classifying Acoustic Alarms in City Scenes

    This paper is about acoustic alarm detection and sound source localisation in an urban scenario. Specifically, we are interested in spotting the presence of horns and sirens of emergency vehicles. In order to obtain a reliable system able to operate robustly despite the presence of traffic noise, which can be copious, unstructured and unpredictable, we propose to treat the spectrograms of incoming stereo signals as images, and apply semantic segmentation, based on a U-Net architecture, to extract the target sound from the background noise. In a multi-task learning scheme, together with signal denoising, we perform acoustic event classification to identify the nature of the alerting sound. Lastly, we use the denoised signals to localise the acoustic source on the horizon plane, regressing the direction of arrival of the sound through a CNN architecture. Our experimental evaluation shows an average classification rate of 94%, and a median absolute error on the localisation of 7.5° when operating on audio frames of 0.5 s, and of 2.5° when operating on frames of 2.5 s. The system offers excellent performance in particularly challenging scenarios, where the noise level is remarkably high.
    Comment: 6 pages, 9 figures
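    The paper regresses the direction of arrival with a CNN; for intuition, the underlying geometry can be sketched with a classical time-difference-of-arrival (TDOA) estimate from stereo cross-correlation. This is a baseline illustration only, not the paper's method; the microphone spacing, sample rate, and helper function names are assumed values.

```python
import math

def doa_from_stereo(left, right, fs, mic_distance, speed_of_sound=343.0):
    """Estimate direction of arrival (degrees off broadside) for a two-microphone
    array: find the inter-channel lag maximising the cross-correlation, convert
    it to a time delay, then invert sin(theta) = tdoa * c / d."""
    n = len(left)
    max_lag = int(mic_distance / speed_of_sound * fs) + 1  # physically possible lags
    best_lag, best_corr = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        corr = sum(left[i] * right[i + lag]
                   for i in range(max(0, -lag), min(n, n - lag)))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    tdoa = best_lag / fs
    # Clamp to [-1, 1] to guard asin against rounding at endfire angles.
    s = max(-1.0, min(1.0, tdoa * speed_of_sound / mic_distance))
    return math.degrees(math.asin(s))

# Synthetic check: a pulse arriving 10 samples later at the right microphone.
fs, d = 48000, 0.2          # assumed sample rate (Hz) and mic spacing (m)
left = [0.0] * 256
left[100] = 1.0
right = [0.0] * 256
right[110] = 1.0            # right channel lags by 10 samples
print(round(doa_from_stereo(left, right, fs, d), 1))  # → 20.9
```

    A learned regressor, as in the paper, can outperform this kind of baseline in noise because it exploits the denoised spectrogram rather than raw waveform correlation.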

    Computational Audiovisual Scene Analysis

    Yan R. Computational Audiovisual Scene Analysis. Bielefeld: Universitätsbibliothek Bielefeld; 2014.
    In most real-world situations, a robot interacts with multiple people, and understanding their dialogues is essential. However, dialogue scene analysis is missing from most existing human-robot interaction systems: either only one speaker can talk with the robot, or each speaker wears an attached microphone or a headset. The goal of Computational AudioVisual Scene Analysis (CAVSA) is therefore to make dialogues between humans and robots more natural and flexible. The CAVSA system is able to learn how many speakers are in the scenario, where the speakers are, and who is currently speaking. CAVSA is a challenging task due to the complexity of dialogue scenarios. First, speakers are unknown in advance, so no database is available for training high-level features beforehand to recognize faces or voices. Second, people can dynamically enter and leave the scene, may move around continuously, and may even change their locations outside the camera field of view. Third, the robot cannot see all the people at the same time, due to its limited camera field of view and head movements. Moreover, a sound may come from a person who stands outside the camera field of view and has never been seen. I will show that the CAVSA system is able to assign words to their corresponding speakers, and that a speaker is recognized again when he leaves and re-enters the scene, or changes his position, even in the presence of a newly appearing person.