    An End-to-End Conversational Style Matching Agent

    We present an end-to-end voice-based conversational agent that is able to engage in naturalistic multi-turn dialogue and align with the interlocutor's conversational style. The system uses a series of deep neural network components for speech recognition, dialogue generation, prosodic analysis and speech synthesis to generate language and prosodic expression with qualities that match those of the user. We conducted a user study (N=30) in which participants talked with the agent for 15 to 20 minutes, resulting in over 8 hours of natural interaction data. Users with high-consideration conversational styles reported the agent to be more trustworthy when it matched their conversational style, whereas users with high-involvement conversational styles were indifferent. Finally, we provide design guidelines for multi-turn dialogue interactions using conversational style adaptation.

    Agent oriented AmI engineering


    An Approach to Agent-Based Service Composition and Its Application to Mobile

    This paper describes an architecture model for multiagent systems that was developed in the European project LEAP (Lightweight Extensible Agent Platform). Its main feature is a set of generic services that are implemented independently of the agents and can be installed into the agents by the application developer in a flexible way. Moreover, two applications using this architecture model are described that were also developed within the LEAP project. The application domain is the support of mobile, virtual teams for the German automobile club ADAC and for British Telecommunications.

    Toward alive art

    Electronics is about to change the idea of art, and drastically so. We know this is going to happen - we can feel it. Much less clear to most of us are the hows, whens and whys of the change. In this paper, we will attempt to analyze the mechanisms and dynamics of the coming cultural revolution, focusing on the «artistic space» where the revolution is taking place, on the interactions between the artistic act and the space in which the act takes place, and on the way in which the act modifies the space and the space the act. We briefly discuss the new category of «electronic artists». We then highlight what we see as the logical process connecting the past, the present and our uncertain future. We examine the relationship between art and previous technologies, pointing to the evolutionary, as well as the revolutionary, impact of new means of expression. Against this background we propose a definition for what we call «Alive Art», going on to develop a tentative profile of the performers (the «Alivers»). In the last section, we describe two examples of Alive Artworks, pointing out the central role of what we call the «Alive Art Effect», in which creation appears relatively independent of the artist, so that the artist's unique creative role is not always immediate or directly induced by his or her activity. We emphasize that the artist's activities may result in unpredictable processes more or less free of the artist's will.

    Space languages

    Applications of linguistic principles to potential problems of human and machine communication in space settings are discussed. Variations in language among speakers of different backgrounds, and changes in language forms resulting from new experiences or reduced contact with other groups, need to be considered in the design of intelligent machine systems.

    Meetings and Meeting Modeling in Smart Environments

    In this paper we survey our research on smart meeting rooms and its relevance for augmented reality meeting support and virtual reality generation of meetings, in real time or off-line. The research reported here forms part of the European 5th and 6th framework programme projects Multi-Modal Meeting Manager (M4) and Augmented Multi-party Interaction (AMI). Both projects aim at building a smart meeting environment that is able to collect multimodal captures of the activities and discussions in a meeting room, with the aim of using this information as input to tools that allow real-time support, browsing, retrieval and summarization of meetings. Our aim is to research (semantic) representations of what takes place during meetings in order to allow generation, e.g. in virtual reality, of meeting activities (discussions, presentations, voting, etc.). Being able to do so also allows us to look at tools that provide support during a meeting, and at tools that allow those not able to be physically present during a meeting to take part in a virtual way. This may lead to situations where the differences between real meeting participants, human-controlled virtual participants and (semi-)autonomous virtual participants disappear.