
    Generating multimedia presentations: from plain text to screenplay

    In many Natural Language Generation (NLG) applications, the output is limited to plain text, i.e., a string of words with punctuation and paragraph breaks, but no indications of layout, pictures, or dialogue. In several projects, we have begun to explore NLG applications in which these extra media are brought into play. This paper gives an informal account of what we have learned. For coherence, we focus on the domain of patient information leaflets, and follow an example in which the same content is expressed first in plain text, then in formatted text, then in text with pictures, and finally in a dialogue script that can be performed by two animated agents. We show how the same meaning can be mapped to realisation patterns in different media, and how the expanded options for expressing meaning are related to the perceived style and tone of the presentation. Throughout, we stress that the extra media are not simply added to plain text, but integrated with it: thus the use of formatting, pictures, or dialogue may require radical rewording of the text itself.
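    As a rough illustration of the idea sketched in this abstract, the following toy Python sketch renders one hypothetical content representation as plain text, as formatted text, and as a two-agent dialogue script; the example message, function names, and wording rules are invented here and are not the paper's actual formalism.

```python
# Toy example: one content representation realised in three different media.
# The message, function names, and wording rules are hypothetical.

content = {
    "act": "instruct",
    "steps": ["remove the cap", "shake the inhaler", "breathe in slowly"],
}

def as_plain_text(msg):
    """Realise the content as a single plain-text sentence."""
    return "First " + ", then ".join(msg["steps"]) + "."

def as_formatted_text(msg):
    """Realise the same content as a numbered list; the wording changes with the medium."""
    return "\n".join(f"{i}. {step.capitalize()}." for i, step in enumerate(msg["steps"], 1))

def as_dialogue(msg):
    """Realise the content as a script for two animated agents."""
    lines = []
    for i, step in enumerate(msg["steps"]):
        speaker = "Agent A" if i % 2 == 0 else "Agent B"
        lines.append(f"{speaker}: Now {step}.")
    return "\n".join(lines)

for render in (as_plain_text, as_formatted_text, as_dialogue):
    print(render(content), end="\n\n")
```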

    Multiple Media Interfaces for Music Therapy

    This article describes interfaces (and the supporting technological infrastructure) used to create audiovisual instruments for music therapy. In considering how the multidimensional nature of sound requires multidimensional input control, we propose a model to help designers manage the complex mapping between input devices and multiple media software. We also itemize a research agenda.
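    A minimal sketch of the kind of mapping layer discussed in this abstract, assuming invented device axes and media parameters; the many-to-many routing shown here is illustrative only and is not the article's actual model.

```python
# Hypothetical mapping layer between a multidimensional input device and
# audiovisual parameters; the axis and parameter names are invented.

mapping = {
    "tilt_x":   [("audio", "pitch"), ("visual", "hue")],
    "tilt_y":   [("audio", "volume")],
    "pressure": [("audio", "timbre"), ("visual", "brightness")],
}

def route(sample):
    """Fan each normalised input dimension (0..1) out to its media parameters."""
    out = {"audio": {}, "visual": {}}
    for axis, value in sample.items():
        for medium, param in mapping.get(axis, []):
            out[medium][param] = value
    return out

print(route({"tilt_x": 0.4, "pressure": 0.9}))
# {'audio': {'pitch': 0.4, 'timbre': 0.9}, 'visual': {'hue': 0.4, 'brightness': 0.9}}
```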

    Mixed reality participants in smart meeting rooms and smart home environments

    Human–computer interaction requires modeling of the user. A user profile typically contains preferences, interests, characteristics, and interaction behavior. However, in its multimodal interaction with a smart environment the user also displays, not necessarily consciously, verbal and nonverbal behavior that provides the environment with useful input and feedback. Especially in ambient intelligence environments we encounter situations where the environment supports interaction between the environment, smart objects (e.g., mobile robots, smart furniture) and human participants. It is therefore useful for the profile to contain a physical representation of the user obtained by multimodal capturing techniques. We discuss the modeling and simulation of interacting participants in a virtual meeting room, show how remote meeting participants can take part in meeting activities, and offer some observations on translating these research results to smart home environments.

    GEMINI: A Generic Multi-Modal Natural Interface Framework for Videogames

    In recent years videogame companies have recognized the role of player engagement as a major factor in user experience and enjoyment. This has encouraged greater investment in new types of game controllers such as the WiiMote, Rock Band instruments and the Kinect. However, the native software of these controllers was not originally designed to be used in other game applications. This work addresses this issue by building a middleware framework which maps body poses or voice commands to actions in any game. This not only ensures a more natural and customized user experience but also defines an interoperable virtual controller. In this version of the framework, body poses and voice commands are recognized through the Kinect's built-in cameras and microphones, respectively. The acquired data is then translated into the native interaction scheme in real time using a lightweight method based on spatial restrictions. The system is also prepared to use Nintendo's Wiimote as an auxiliary and unobtrusive gamepad for physically or verbally impractical commands. System validation was performed by analyzing the performance of certain tasks and examining user reports. Both confirmed this approach as a practical and alluring alternative to the game's native interaction scheme. In sum, this framework provides a game-controlling tool that is fully customizable and very flexible, thus expanding the market of game consumers.
    Comment: WorldCIST'13 International Conference
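    A minimal sketch of pose-to-command translation based on spatial restrictions, in the spirit of the framework described above; the joint names, the threshold, and the emitted key event are assumptions, not the framework's actual rules or API.

```python
# Illustrative pose-to-action mapping based on simple spatial restrictions.
# Joint names, the threshold, and the emitted key event are assumptions.

def pose_matches(skeleton, restrictions):
    """A pose matches when every (joint_a, axis, joint_b, min_offset) restriction holds."""
    axes = {"x": 0, "y": 1, "z": 2}
    return all(
        skeleton[a][axes[axis]] - skeleton[b][axes[axis]] >= min_offset
        for a, axis, b, min_offset in restrictions
    )

# "Right hand raised above the head" -> jump command in the game's native scheme.
RAISE_RIGHT_HAND = [("hand_right", "y", "head", 0.10)]

def translate(skeleton):
    if pose_matches(skeleton, RAISE_RIGHT_HAND):
        return "press:SPACE"  # virtual-controller event forwarded to the game
    return None

frame = {"head": (0.0, 1.60, 2.0), "hand_right": (0.3, 1.85, 2.0)}
print(translate(frame))  # -> press:SPACE
```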

    Balancing the power of multimedia information retrieval and usability in designing interactive TV

    Steady progress in the field of multimedia information retrieval (MMIR) promises a useful set of tools that could provide new usage scenarios and features to enhance the user experience in today's digital media applications. In the interactive TV domain, the simplicity of interaction is more crucial than in any other digital media domain and ultimately determines the success or failure of any new application. Thus, when integrating emerging tools like MMIR into interactive TV, the increase in interface complexity and sophistication resulting from these features can easily reduce the application's actual usability. In this paper we describe a design strategy we developed as a result of our effort to balance the power of emerging multimedia information retrieval techniques with the simplicity of the interface in interactive TV. By providing multiple levels of interface sophistication, in increasing order, as a viewer repeatedly presses the same button on their remote control, we provide a layered interface that can accommodate viewers requiring varying degrees of power and simplicity. A series of screenshots from the system we have developed and built illustrates how this is achieved.
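    A minimal sketch of the layered-interface idea, assuming three invented sophistication levels; repeated presses of the same remote button step through increasingly powerful views.

```python
# Sketch of a layered interface: repeated presses of the same remote button
# step through increasingly sophisticated views. Layer names are invented.

LAYERS = ["plain playback", "basic programme search", "MMIR-powered recommendations"]

class RemoteUI:
    def __init__(self):
        self.level = 0  # start at the simplest layer

    def press(self, button):
        if button == "OK":
            # Each repeated press reveals the next level of sophistication,
            # wrapping back to the simplest view at the end.
            self.level = (self.level + 1) % len(LAYERS)
        return LAYERS[self.level]

ui = RemoteUI()
for _ in range(4):
    print(ui.press("OK"))
```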

    Exploring heritage through time and space: Supporting community reflection on the Highland Clearances

    On the two hundredth anniversary of the Kildonan clearances, when people were forcibly removed from their homes, the Timespan Heritage Centre has created a program of community-centred work aimed at challenging preconceptions and encouraging reflection on this important historical process. This paper explores the innovative ways in which virtual world technology has facilitated community engagement, enhanced visualisation and encouraged reflection as part of this program. An installation in which users navigate through a reconstruction of pre-clearance Caen township is controlled through natural gestures and presented on a 300-inch, six-megapixel screen. This environment allows users to experience the past in new ways. The platform has value as an effective way for an educator, artist or hobbyist to create large-scale virtual environments using off-the-shelf hardware and open source software. The result is an exhibit that also serves as a platform for experimentation into innovative ways of community co-creation and co-curation.

    Customized television: Standards compliant advanced digital television

    This correspondence describes a European Union supported collaborative project called CustomTV, based on the premise that future TV sets will provide all sorts of multimedia information and interactivity, as well as manage all such services according to each user's or group of users' preferences/profiles. We have demonstrated the potential of recent standards (MPEG-4 and MPEG-7) to implement such a scenario by building the following services: an advanced EPG, weather forecasting, and stock exchange/flight information.

    QoE-Based Low-Delay Live Streaming Using Throughput Predictions

    Recently, HTTP-based adaptive streaming has become the de facto standard for video streaming over the Internet. It allows clients to dynamically adapt media characteristics to network conditions in order to ensure a high quality of experience, that is, minimize playback interruptions while maximizing video quality with a reasonable number of quality changes. In the case of live streaming, this task becomes particularly challenging due to latency constraints. The challenge further increases if a client uses a wireless network, where the throughput is subject to considerable fluctuations. Consequently, live streams often exhibit latencies of up to 30 seconds. In the present work, we introduce an adaptation algorithm for HTTP-based live streaming called LOLYPOP (Low-Latency Prediction-Based Adaptation) that is designed to operate with a transport latency of a few seconds. To reach this goal, LOLYPOP leverages TCP throughput predictions on multiple time scales, from 1 to 10 seconds, along with an estimate of the prediction error distribution. In addition to satisfying the latency constraint, the algorithm heuristically maximizes the quality of experience by maximizing the average video quality as a function of the number of skipped segments and quality transitions. In order to select an efficient prediction method, we studied the performance of several time series prediction methods in IEEE 802.11 wireless access networks. We evaluated LOLYPOP under a large set of experimental conditions, limiting the transport latency to 3 seconds, against a state-of-the-art adaptation algorithm from the literature called FESTIVE. We observed that the average video quality is up to a factor of 3 higher than with FESTIVE. We also observed that LOLYPOP is able to reach a broader region in the quality of experience space, and thus it is better adjustable to the user profile or service provider requirements.
    Comment: Technical Report TKN-16-001, Telecommunication Networks Group, Technische Universitaet Berlin. This TR updates TR TKN-15-00
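    A simplified sketch of prediction-based quality selection for low-latency live streaming, in the spirit of the abstract above: pick the highest bitrate whose segment is expected to download within the latency budget after discounting the throughput prediction by its error estimate. The bitrate ladder, safety margin, and error model are assumptions and do not reproduce LOLYPOP's actual decision logic.

```python
# Simplified sketch of prediction-based quality selection for low-latency
# live streaming. Bitrate ladder, safety margin, and error model are assumed
# and do not reproduce LOLYPOP's actual decision logic.

BITRATES_KBPS = [400, 800, 1500, 3000, 6000]
SEGMENT_SEC = 1.0  # segment duration

def select_bitrate(predicted_kbps, error_std_kbps, latency_budget_sec):
    """Pick the highest bitrate whose segment should download within the budget,
    after discounting the throughput prediction by one standard deviation."""
    safe_kbps = max(predicted_kbps - error_std_kbps, 0.0)
    for rate in reversed(BITRATES_KBPS):
        download_sec = (rate * SEGMENT_SEC / safe_kbps) if safe_kbps else float("inf")
        if download_sec <= latency_budget_sec:
            return rate
    return BITRATES_KBPS[0]  # fall back to the lowest quality (or skip the segment)

print(select_bitrate(predicted_kbps=4000, error_std_kbps=800, latency_budget_sec=1.5))
# -> 3000
```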