    Reliability measurement without limits

    In computational linguistics, a reliability measurement of 0.8 on some statistic such as κ is widely thought to guarantee that hand-coded data is fit for purpose, with lower values suspect. We demonstrate that the main use of such data, machine learning, can tolerate data with low reliability as long as any disagreement among human coders looks like random noise. When it does not, however, data can have a reliability of more than 0.8 and still be unsuitable for use: the disagreement may indicate erroneous patterns that machine learning can learn, and evaluation against test data that contain these same erroneous patterns may lead us to draw wrong conclusions about our machine-learning algorithms. Furthermore, lower reliability values still held as acceptable by many researchers, between 0.67 and 0.8, may even yield inflated performance figures in some circumstances. Although this is a common-sense result, it has implications for how we work that are likely to reach beyond the machine-learning applications we discuss. At the very least, computational linguists should look for any patterns in the disagreement among coders and assess what impact they will have.
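
    For reference, Cohen's κ compares the coders' observed agreement with the agreement expected by chance. The short Python sketch below is not taken from the paper; the two-coder data are invented and serve only to show how the statistic named in the abstract is typically computed.

        # Minimal sketch (assumed example, not from the paper): Cohen's kappa
        # for two coders labelling the same items.
        # kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
        # and p_e is the agreement expected by chance.
        from collections import Counter

        def cohens_kappa(coder_a, coder_b):
            n = len(coder_a)
            p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
            freq_a, freq_b = Counter(coder_a), Counter(coder_b)
            labels = set(freq_a) | set(freq_b)
            p_e = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)
            return (p_o - p_e) / (1 - p_e)

        # Two coders agree on 8 of 10 items; kappa discounts chance agreement.
        a = ["pos", "pos", "neg", "neg", "pos", "neg", "pos", "pos", "neg", "pos"]
        b = ["pos", "pos", "neg", "pos", "pos", "neg", "pos", "neg", "neg", "pos"]
        print(round(cohens_kappa(a, b), 3))  # prints 0.583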

    Determining what people feel and think when interacting with humans and machines

    Any interactive software program must interpret the users’ actions and come up with an appropriate response that is intelligible and meaningful to the user. In most situations, the options of the user are determined by the software and hardware, and the actions that can be carried out are unambiguous. The machine knows what it should do when the user carries out an action. In most cases, the user knows what he has to do by relying on conventions which he may have learned by having had a look at the instruction manual, by having seen them performed by somebody else, or by modifying a previously learned convention. Some, or most, of the time he just finds out by trial and error. In user-friendly interfaces, the user knows, without having to read extensive manuals, what is expected from him and how he can get the machine to do what he wants. An intelligent interface is so called because it does not assume this kind of programming of the user by the machine: the machine itself can figure out what the user wants and how he wants it, without the user having to go to the trouble of telling it to the machine in the way the machine dictates, but being able to do so in his own words. Or perhaps without using any words at all, as the machine is able to read off the intentions of the user by observing his actions and expressions. Ideally, the machine should be able to determine what the user wants, what he expects, what he hopes will happen, and how he feels.

    Building Huys Hengelo in VRML

    In this paper we report on our attempts to rebuild a historical building, ‘Huys Hengelo’, its interior, a farm built next to it and other parts of its environment (including a drawbridge and a gate) using the Virtual Reality Modeling Language (VRML). This castle played an important role in the history of its region. The main issues we deal with in this paper are: the unreliability of available sources, forcing us to show alternatives rather than ‘the building as it was’; the possibility for users to make changes and to experiment with different geographies; animations showing how parts of the wooden buildings were constructed at the time; the interface with the user; and, as the project started as a student project at the request of some local historians and architects, some of our experiences with the co-operation between them and computer science students and researchers.

    An Animation Framework for Continuous Interaction with Reactive Virtual Humans

    We present a complete framework for animation of Reactive Virtual Humans that offers a mixed animation paradigm: control of different body parts switches between keyframe animation, procedural animation and physical simulation, depending on the requirements of the moment. This framework implements novel techniques to support real-time continuous interaction. It is demonstrated on our interactive Virtual Conductor.
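
    The abstract does not give implementation details; the following Python sketch is a hypothetical illustration (all class and method names are invented, not taken from the described framework) of the idea of switching the controller of each body part between keyframe, procedural and physics-based animation from frame to frame.

        # Hypothetical sketch of per-body-part paradigm switching; the design
        # is assumed for illustration only.
        from enum import Enum, auto

        class Paradigm(Enum):
            KEYFRAME = auto()    # play back authored poses
            PROCEDURAL = auto()  # compute a pose from a parametric model
            PHYSICS = auto()     # let a physics simulation drive the part

        class BodyPartController:
            def __init__(self, name, paradigm=Paradigm.KEYFRAME):
                self.name = name
                self.paradigm = paradigm

            def switch(self, paradigm):
                # In a real system the current pose and velocity would be
                # handed over here so the transition stays smooth.
                self.paradigm = paradigm

            def update(self, dt):
                if self.paradigm is Paradigm.KEYFRAME:
                    return f"{self.name}: keyframe pose after {dt:.3f}s"
                if self.paradigm is Paradigm.PROCEDURAL:
                    return f"{self.name}: procedurally generated pose"
                return f"{self.name}: pose taken from the physical simulation"

        # Example: keep the arms on keyframes, but let physics take over the
        # torso after, say, an external push is detected.
        parts = {p: BodyPartController(p) for p in ("left_arm", "right_arm", "torso")}
        parts["torso"].switch(Paradigm.PHYSICS)
        for part in parts.values():
            print(part.update(0.016))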

    Presenting in Virtual Worlds: An Architecture for a 3D Anthropomorphic Presenter

    Multiparty-interaction technology is changing entertainment, education, and training. Deployed examples of such technology include embodied agents and robots that act as a museum guide, a news presenter, a teacher, a receptionist, or someone trying to sell you insurance, homes, or tickets. In all these cases, the embodied agent needs to explain and describe. This article describes the design of a 3D virtual presenter that uses different output channels (including speech and animation of posture, pointing, and involuntary movements) to present and explain. The behavior is scripted and synchronized with a 2D display containing associated text and regions (slides, drawings, and paintings) at which the presenter can point. This article is part of a special issue on interactive entertainment.

    Intelligent multimedia indexing and retrieval through multi-source information extraction and merging

    This paper reports work on automated meta-data creation for multimedia content. The approach results in the generation of a conceptual index of the content which may then be searched via semantic categories instead of keywords. The novelty of the work is to exploit multiple sources of information relating to video content (in this case the rich range of sources covering important sports events). News, commentaries and web reports covering international football games in multiple languages and multiple modalities are analysed and the resultant data merged. This merging process leads to increased accuracy relative to individual sources.
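
    The paper's merging procedure is not described in the abstract; the Python sketch below only illustrates, under assumed data structures and invented example values, why combining extractions from several sources by majority voting can be more accurate than relying on any single source.

        # Hypothetical sketch (not the paper's method): merging slot values
        # extracted from several sources about the same football match by
        # simple majority voting.
        from collections import Counter

        def merge_by_vote(extractions):
            """extractions: list of dicts, one per source, mapping slot -> value."""
            merged = {}
            slots = {slot for ext in extractions for slot in ext}
            for slot in slots:
                votes = Counter(ext[slot] for ext in extractions if slot in ext)
                merged[slot] = votes.most_common(1)[0][0]
            return merged

        # Three sources (e.g. a news wire, a commentary, a web report) disagree
        # on one slot; the merged record keeps the majority value.
        sources = [
            {"home_team": "Ajax", "score": "2-1", "scorer": "Kluivert"},
            {"home_team": "Ajax", "score": "2-1", "scorer": "Litmanen"},
            {"home_team": "Ajax", "score": "2-1", "scorer": "Kluivert"},
        ]
        print(merge_by_vote(sources))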

    Virtual Meeting Rooms: From Observation to Simulation

    Much working time is spent in meetings and, as a consequence, meetings have become the subject of multidisciplinary research. Virtual Meeting Rooms (VMRs) are 3D virtual replicas of meeting rooms, where various modalities such as speech, gaze, distance, gestures and facial expressions can be controlled. This allows VMRs to be used to improve remote meeting participation, to visualize multimedia data and as an instrument for research into social interaction in meetings. This paper describes how these three uses can be realized in a VMR. We describe the process from observation through annotation to simulation, and a model that describes the relations between the annotated features of verbal and non-verbal conversational behavior. As an example of social perception research in the VMR, we describe an experiment to assess human observers’ accuracy in judging head orientation.

    Growing-up hand in hand with robots: Designing and evaluating child-robot interaction from a developmental perspective

    Robots are becoming part of children's care, entertainment, education, social assistance and therapy. A steadily growing body of Human-Robot Interaction (HRI) research shows that child-robot interaction (CRI) holds promise for supporting children's development in novel ways. However, research has shown that technologies that do not take into account children's needs, abilities, interests, and developmental characteristics may have a limited or even negative impact on their physical, cognitive, social, emotional, and moral development. As a result, robotic technology that aims to support children by means of social interaction has to take the developmental perspective into consideration. With this workshop (the third in a series of workshops focusing on CRI research), we aim to bring together researchers to discuss how a developmental perspective plays a role in smart and natural interaction between robots and children. We invite participants to share their experiences with the challenges of taking the developmental perspective in CRI, such as sustaining long-term interactions in the wild and involving children and other stakeholders in the design process. Looking across disciplinary boundaries, we hope to stimulate thought-provoking discussions on epistemology, methods, approaches, techniques, interaction scenarios and design principles focused on supporting children's development through interaction with robotic technology. Our goal is not only the conception and formulation of these outcomes at the workshop itself, but also their establishment and availability to the HRI community in different forms.

    Alertness, movement, and affective behaviour of people with profound intellectual and multiple disabilities (PIMD) on introduction of a playful interactive product: Can we get your attention?

    Background: New technology may stimulate active leisure activities for people with profound intellectual and multiple disabilities (PIMD). We conducted a study of an interactive ball that responded to gross body movement, focus of attention, and vocalisations of users with PIMD. The aim was to increase alertness and body movement and to elicit more expressions of positive affect, or fewer of negative affect. Method: Nine participants with PIMD played during 8–10 sessions. Movement was analysed automatically; alertness and affective behaviour were coded manually. We analysed the last 5 sessions for each participant and compared 15 min of interaction with 15 min of rest. Results: Clearly positive effects were seen for three participants. Effects in the unexpected direction were seen for four participants. No strong effects were found for the remaining three participants. Conclusions: Interactive technologies may provide suitable activities for people with PIMD, but individual differences play an important role.