
    Reusable, Interactive, Multilingual Online Avatars

    This paper details a system for delivering reusable, interactive multilingual avatars in online children's games. The development of these avatars is based on the concept of an intelligent media object that can be repurposed across different productions. The system is both language and character independent, allowing content to be reused in a variety of contexts and locales. In the current implementation, the user is provided with an interactive animated robot character that can be dressed with a range of body parts chosen by the user in real time. The robot character reacts to each selection of a new part in a different manner, relative to simple narrative constructs that define a number of scripted responses. Once configured, the robot character subsequently appears as a help avatar throughout the rest of the game. At the time of writing, the system is in beta testing on the My Tiny Planets website to fully assess its effectiveness.

    A VOWEL-STRESS EMOTIONAL SPEECH ANALYSIS METHOD

    The analysis of speech, particularly for emotional content, is an open area of current research. This paper documents the development of a vowel-stress analysis framework for emotional speech, intended to assess the speech assets obtained in terms of their prosodic attributes. Considering different levels of vowel stress provides a means by which the salient points of a signal may be analysed in terms of their overall priority to the listener. The prosodic attributes of these events can thus be assessed in terms of their overall significance, in an effort to categorise the acoustic correlates of emotional speech. Vowel-stress analysis is performed in conjunction with the definition of pitch and intensity contours, alongside other micro-prosodic information relating to voice quality.
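The idea of prioritising salient events by vowel stress can be sketched as follows. This is a minimal illustration, not the paper's method: the stress levels, weightings, normalisation ranges and event structure are all assumptions.

```python
# Hypothetical sketch of ranking prosodic events by vowel-stress level.
# Stress weights and normalisation constants are invented for illustration.

STRESS_WEIGHT = {"primary": 1.0, "secondary": 0.6, "unstressed": 0.2}

def event_priority(event):
    """Combine a vowel's stress level with its pitch and intensity
    peaks into a single salience score (higher = more salient)."""
    w = STRESS_WEIGHT[event["stress"]]
    # Normalise pitch (Hz) and intensity (dB) into rough 0-1 ranges.
    pitch = event["pitch_hz"] / 400.0
    intensity = event["intensity_db"] / 90.0
    return w * (pitch + intensity)

def rank_events(events):
    """Return events sorted from most to least salient."""
    return sorted(events, key=event_priority, reverse=True)

events = [
    {"vowel": "ae", "stress": "primary", "pitch_hz": 220, "intensity_db": 72},
    {"vowel": "ax", "stress": "unstressed", "pitch_hz": 180, "intensity_db": 60},
    {"vowel": "ih", "stress": "secondary", "pitch_hz": 200, "intensity_db": 68},
]
for e in rank_events(events):
    print(e["vowel"], round(event_priority(e), 3))
```

The weighting makes a stressed vowel with modest pitch outrank an unstressed vowel with higher raw values, mirroring the abstract's point that stress determines an event's priority to the listener.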

    LinguaTag: an Emotional Speech Analysis Application

    The analysis of speech, particularly for emotional content, is an open area of current research. Ongoing work has developed an emotional speech corpus for analysis, and defined a vowel-stress method by which this analysis may be performed. This paper documents the development of LinguaTag, an open-source speech analysis application that implements the vowel-stress emotional speech analysis method developed as part of research into the acoustic and linguistic correlates of emotional speech. The analysis output is contained in a file format combining SMIL and SSML markup tags, to facilitate search and retrieval within an emotional speech corpus database. In this manner, analysis performed with LinguaTag aims to combine acoustic, emotional and linguistic descriptors in a single metadata framework.
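A record combining SMIL-style timing with SSML-style prosody markup might look like the sketch below. The element and attribute names are guesses at how the two vocabularies could be combined in one metadata record; they are not LinguaTag's actual output schema.

```python
# Illustrative only: invented element names showing how SMIL timing and
# SSML prosody tags might be combined in a single utterance record.
import xml.etree.ElementTree as ET

def make_record(utterance_id, text, begin_s, end_s, pitch_hz, emotion):
    root = ET.Element("utterance", id=utterance_id, emotion=emotion)
    # SMIL-style timing attributes locate the clip on a timeline.
    audio = ET.SubElement(root, "audio", begin=f"{begin_s}s", end=f"{end_s}s")
    # SSML-style prosody markup carries the acoustic description.
    prosody = ET.SubElement(audio, "prosody", pitch=f"{pitch_hz}Hz")
    prosody.text = text
    return ET.tostring(root, encoding="unicode")

record = make_record("u001", "hello there", 0.0, 1.2, 195, "happy")
print(record)
```

Because the acoustic, emotional and linguistic descriptors live in one XML document, a corpus database can index and query them together, which is the retrieval benefit the abstract describes.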

    Adaptation of the CTH-URL for the ALBAYZIN 2008 competition

    This paper describes the speech synthesis system submitted to the Albayzin 2008 competition. The system follows a classical corpus-based unit-concatenation scheme. Notably, the selection costs were tuned with a method based on genetic algorithms, and no prosody prediction system was used. Two preliminary systems differing in their waveform generation algorithm were built, and the one submitted to the competition was chosen by means of a perceptual test. Peer reviewed. Postprint (published version).
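Tuning selection-cost weights with a genetic algorithm can be sketched minimally as below. The toy fitness function (distance to stand-in target weights, where the real system would use a perceptual or objective score), the GA parameters and the weight vector size are all assumptions for illustration.

```python
# Minimal GA sketch for tuning unit-selection cost weights.
# The fitness function and all parameters are invented stand-ins.
import random

random.seed(0)
TARGET = [0.5, 0.3, 0.2]  # stand-in for weights a perceptual test would favour

def fitness(weights):
    # Toy objective: negative squared distance to the stand-in target.
    return -sum((w - t) ** 2 for w, t in zip(weights, TARGET))

def mutate(weights, scale=0.05):
    return [min(1.0, max(0.0, w + random.gauss(0, scale))) for w in weights]

def crossover(a, b):
    return [random.choice(pair) for pair in zip(a, b)]

def evolve(pop_size=30, generations=60, n_weights=3):
    population = [[random.random() for _ in range(n_weights)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the fitter half as parents (elitism), breed the rest.
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print([round(w, 2) for w in best])
```

In the system described, the fitness signal would come from how well the resulting unit selection sounds, which is why a separate perceptual test was still needed to pick between the two final systems.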

    From Alan01 to AlanOnline - A study of the different characteristics of physical media installations and non-material art

    This thesis is a study of the characteristics of the presentation media of artwork that can exist in physical and non-material form. Physical in this context refers to physical installations, and non-material is used to define artworks where the designer has little or no control over the presentation media, such as online artwork. I have chosen a set of characteristics that I have found central to the topic, and my aim is to discover how these characteristics behave in practice. The key concepts are: technical aspects of the presentation media, human-computer interaction, interface design, space, spatial narrative, collaborative experience, access, exhibition value, immersion, embodiment, real-world objects and metaphors. The set of characteristics is by no means all-encompassing, but a selection that I have arrived at through conversation with colleagues and professionals and through my personal research. It is also scoped to meet the requirements of an MA thesis. The characteristics are discussed in reference to practical examples of artistic productions, and through my own work as a member of the production team that created the Alan01 installation and its non-material counterpart AlanOnline, which are used as a case study for this thesis.

    Integration of the NINOS platform with Twitter in a web client for the autonomous generation of personalised audiovisual material

    In recent years, computer-generated imagery has become vitally important in industrial sectors such as art, video games, film and advertising, among others. The resources used to produce it (2D images, 3D CGI, characters, sets, or sounds) are largely created independently, in the absence of a unifying framework. As a result, the transfer of digital assets between a film and a game, for example, or their reuse in a new production, is currently rare. NINOS is a tool that emerged from a project co-funded by the European Union through the Sixth Framework Programme, and which seeks to address this problem. NINOS enables the automatic generation of videos from a set of pre-designed 3D objects and animations, together with audio files, which are composed and rendered into a three-dimensional scene. The video is created from an XML template whose tag structure relates the characters, sounds, cameras and other audiovisual resources that appear in the scene, as well as the interactions between them. To test the features NINOS promises at first hand, this final-year project builds a system that integrates the automatic generation of animations and rendered environments provided by NINOS with the general functionality of Twitter. The integration stages, through an avatar, a reading of the latest tweets published either by a given user, or containing a given fragment of text, or belonging to a conversation between different users.
    The result of the implementation is collected in a demonstrator: a simple web application whose operation is similar to the real Twitter application, but which can automatically generate a video from the selected tweets, gathering their information in real time, and present it in the application interface through an embedded player.
    Ingeniería de Telecomunicación
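The kind of XML scene template the abstract describes, relating characters, audio, cameras and their interactions, might be consumed as in the sketch below. The tag and attribute names are invented for illustration and are not NINOS's real schema.

```python
# Hedged sketch: a made-up miniature of a NINOS-style scene template.
# Tag names, attributes and file names are illustrative only.
import xml.etree.ElementTree as ET

TEMPLATE = """
<scene name="tweet_reading">
  <character id="avatar" model="robot.mesh">
    <animation name="talk" audio="tweet_tts.wav"/>
  </character>
  <camera id="cam1" target="avatar"/>
</scene>
"""

def describe(template_xml):
    """Walk the template and list the resources a renderer would load."""
    scene = ET.fromstring(template_xml)
    resources = []
    for character in scene.iter("character"):
        resources.append(character.get("model"))
        for anim in character.iter("animation"):
            resources.append(anim.get("audio"))
    return scene.get("name"), resources

name, resources = describe(TEMPLATE)
print(name, resources)  # tweet_reading ['robot.mesh', 'tweet_tts.wav']
```

In the demonstrator, the audio reference would point at speech generated from the tweets fetched in real time, and the rendered result would be served back to the embedded player.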

    Semantics for virtual humans

    The population of virtual worlds with Virtual Humans is increasing rapidly, driven by people who want to create a virtual life parallel to their real one (e.g. Second Life). Evolving technology is steadily providing the elements needed to increase realism within these virtual worlds by creating believable Virtual Humans. However, creating the resources needed to achieve this believability is a difficult task, mainly because of the complexity of the Virtual Human creation process. Even though many resources already exist, reusing them is difficult because not enough information is provided to evaluate whether a model has the characteristics desired for reuse. Additionally, the knowledge involved in the creation of Virtual Humans is neither well known nor well disseminated. There are several different creation techniques, different software components, and several processes to carry out before a Virtual Human is capable of populating a virtual environment. The creation of Virtual Humans involves: a geometrical representation with an internal control structure; motion synthesis with different animation techniques; and higher-level controllers and descriptors to simulate human-like behaviour such as individuality, cognition and interaction capabilities. All these processes require expertise from different fields of knowledge such as mathematics, artificial intelligence, computer graphics and design. Furthermore, there is neither a common framework nor a common understanding of how the elements involved in the creation, development and interaction of Virtual Humans fit together. Therefore, there is a need to describe (1) existing resources, (2) Virtual Humans' composition and features, (3) a creation pipeline and (4) the different levels and fields of knowledge involved.
    This thesis presents an explicit representation of Virtual Humans and their features, providing a conceptual framework of interest to all those involved in the creation and development of these characters. The dissertation focuses on a semantic description of Virtual Humans. Creating such a description involves gathering related knowledge, reaching agreement among experts on the definition of concepts, validating the ontology design, and so on. All these procedures are presented, and an Ontology for Virtual Humans is described in detail, together with the validations that led to the resulting ontology. The goals of creating this ontology are to promote the reusability of existing resources; to create shared knowledge of the creation and composition of Virtual Humans; and to support new research in the fields involved in the development of believable Virtual Humans and virtual environments. Finally, the thesis presents several developments that aim to demonstrate the ontology's usability and reusability. These developments serve in particular to support research on specialised knowledge of Virtual Humans, to populate virtual environments, and to improve the believability of these characters.
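The reuse problem an ontology addresses can be illustrated with a toy triple store: once assets are described semantically, a query can find models with the characteristics needed for reuse. The vocabulary below is invented for illustration (H-Anim levels of articulation are a real humanoid-animation concept, but the terms are not the thesis's actual ontology).

```python
# Toy sketch: Virtual Human assets as subject-predicate-object triples.
# The vh: vocabulary is invented; only the triple-query idea is the point.
TRIPLES = [
    ("vh:Anna", "rdf:type", "vh:VirtualHuman"),
    ("vh:Anna", "vh:hasSkeleton", "vh:H-Anim_LOA2"),
    ("vh:Anna", "vh:hasAnimation", "vh:WalkCycle"),
    ("vh:Bob", "rdf:type", "vh:VirtualHuman"),
    ("vh:Bob", "vh:hasSkeleton", "vh:H-Anim_LOA1"),
]

def query(triples, predicate, obj):
    """Return all subjects matching the pattern (?, predicate, obj)."""
    return [s for s, p, o in triples if p == predicate and o == obj]

# Which characters use an LOA2 skeleton (and so could share its animations)?
print(query(TRIPLES, "vh:hasSkeleton", "vh:H-Anim_LOA2"))  # ['vh:Anna']
```

Without such machine-readable descriptions, answering even this simple compatibility question requires inspecting each model by hand, which is the obstacle to reuse the abstract identifies.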