
    Interactive Sonification Strategies for the Motion and Emotion of Dance Performances

    The Immersive Interactive SOnification Platform (iISoP) is a research platform for the creation of novel multimedia art, as well as exploratory research in the fields of sonification, affective computing, and gesture-based user interfaces. The goal of the iISoP’s dancer sonification system is to “sonify the motion and emotion” of a dance performance via musical auditory display. An additional goal of this dissertation is to develop and evaluate musical strategies for adding a layer of emotional mappings to data sonification. The series of dancer sonification design exercises led to the development of a novel musical sonification framework. The overall design process is divided into three main iterative phases: requirement gathering, prototype generation, and system evaluation. In the first phase, dancers and musicians assisted in a participatory design fashion as domain experts in the field of non-verbal affective communication. Knowledge extraction took the form of semi-structured interviews, stimuli feature evaluation, workshops, and think-aloud protocols. In phase two, the expert dancers and musicians helped create testable stimuli for prototype evaluation. In phase three, system evaluation, experts (dancers, musicians, etc.) and novice participants were recruited to provide subjective feedback from the perspectives of both performer and audience. Based on the results of the iterative design process, a novel sonification framework that translates motion and emotion data into descriptive music is proposed and described.
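
    As an illustration of the kind of mapping such a framework involves, the sketch below translates simple motion and emotion features into musical control values. It is a minimal, hypothetical example, not the iISoP implementation; the feature names, value ranges, and mapping rules are all assumptions.

```python
# Hypothetical sketch of a motion/emotion-to-music mapping; not the iISoP code.
from dataclasses import dataclass

@dataclass
class MotionFrame:
    speed: float      # normalized 0..1, e.g. from tracked joint velocities (assumed)
    expansion: float  # normalized 0..1, openness of the dancer's posture (assumed)

@dataclass
class EmotionEstimate:
    valence: float    # -1 (negative) .. +1 (positive)
    arousal: float    # 0 (calm) .. 1 (excited)

def sonify(frame: MotionFrame, emotion: EmotionEstimate) -> dict:
    """Translate one frame of motion/emotion data into musical control values."""
    tempo_bpm = 60 + 100 * emotion.arousal           # higher arousal -> faster tempo
    mode = "major" if emotion.valence >= 0 else "minor"
    velocity = min(127, int(40 + 87 * frame.speed))  # faster motion -> louder notes
    base_pitch = int(48 + 24 * frame.expansion)      # open posture -> higher register
    return {"tempo_bpm": tempo_bpm, "mode": mode,
            "velocity": velocity, "base_pitch": base_pitch}

print(sonify(MotionFrame(speed=0.8, expansion=0.5),
             EmotionEstimate(valence=0.3, arousal=0.7)))
```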

    Inside the conductor's jacket: analysis, interpretation and musical synthesis of expressive gesture

    Thesis (Ph.D.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2000. Includes bibliographical references (leaves 154-167). We present the design and implementation of the Conductor's Jacket, a unique wearable device that measures physiological and gestural signals, together with the Gesture Construction, a musical software system that interprets these signals and applies them expressively in a musical context. Sixteen sensors have been incorporated into the Conductor's Jacket so as not to encumber or interfere with the gestures of a working orchestra conductor. The Conductor's Jacket system gathers up to sixteen data channels reliably at rates of 3 kHz per channel, and also provides real-time graphical feedback. Unlike many gesture-sensing systems, it not only gathers positional and accelerational data but also senses muscle tension from several locations on each arm. The Conductor's Jacket was used to gather conducting data from six subjects, three professional conductors and three students, during twelve hours of rehearsals and performances. Analyses of the data yielded thirty-five significant features that seem to reflect intuitive and natural gestural tendencies, including context-based hand switching, anticipatory 'flatlining' effects, and correlations between respiration and phrasing. The results indicate that muscle tension and respiration signals reflect several significant and expressive characteristics of a conductor's gestures. From these results we present nine hypotheses about human musical expression, including ideas about efficiency, intentionality, polyphony, signal-to-noise ratios, and musical flow state. Finally, this thesis describes the Gesture Construction, a musical software system that analyzes and performs music in real-time based on the performer's gestures and breathing signals. A bank of software filters extracts several of the features that were found in the conductor study, including beat intensities and the alternation between arms. These features are then used to generate real-time expressive effects by shaping the beats, tempos, articulations, dynamics, and note lengths in a musical score. By Teresa Marrin Nakra.
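
    The abstract mentions extracting features such as beat intensities from muscle-tension signals. The sketch below shows one plausible way to do that for a single EMG channel at the reported 3 kHz rate: rectify, smooth into an envelope, and pick peaks. It is a hedged illustration, not the Gesture Construction filter bank; the window length and threshold are assumptions.

```python
# Hypothetical beat-intensity extraction from one EMG channel; not the thesis code.
import numpy as np

FS = 3000  # samples/s, per the jacket's reported per-channel rate

def emg_envelope(emg: np.ndarray, window_s: float = 0.05) -> np.ndarray:
    """Full-wave rectify the EMG and smooth with a moving average."""
    rectified = np.abs(emg - emg.mean())
    win = int(FS * window_s)
    return np.convolve(rectified, np.ones(win) / win, mode="same")

def beat_intensities(env: np.ndarray, threshold: float) -> list:
    """Return (sample_index, height) for local maxima above threshold."""
    return [(i, float(env[i])) for i in range(1, len(env) - 1)
            if env[i] > threshold and env[i] >= env[i - 1] and env[i] > env[i + 1]]

# Synthetic demo: two bursts of 'muscle activity' in 2 s of noise.
t = np.arange(2 * FS) / FS
signal = 0.02 * np.random.randn(len(t))
for beat_time in (0.5, 1.4):
    mask = np.abs(t - beat_time) < 0.05
    signal[mask] += np.sin(2 * np.pi * 80 * t[mask])

print(beat_intensities(emg_envelope(signal), threshold=0.2))
```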

    (re)new configurations: Beyond the HCI/Art Challenge: Curating re-new 2011


    Re-new - IMAC 2011 Proceedings


    Paralinguistic vocal control of interactive media: how untapped elements of voice might enhance the role of non-speech voice input in the user's experience of multimedia.

    Much interactive media development, especially commercial development, implies the dominance of the visual modality, with sound as a limited supporting channel. The development of multimedia technologies such as augmented reality and virtual reality has further revealed a distinct partiality to visual media. Sound, however, and particularly voice, have many aspects which have yet to be adequately investigated. Exploration of these aspects may show that sound can, in some respects, be superior to graphics in creating immersive and expressive interactive experiences. With this in mind, this thesis investigates the use of non-speech voice characteristics as a complementary input mechanism in controlling multimedia applications. It presents a number of projects that employ the paralinguistic elements of voice as input to interactive media, including both screen-based and physical systems. These projects are used as a means of exploring the factors that seem likely to affect users’ preferences and interaction patterns during non-speech voice control. This exploration forms the basis for an examination of potential roles for paralinguistic voice input. The research includes the conceptual and practical development of the projects and a set of evaluative studies. The work submitted for the Ph.D. comprises practical projects (50 percent) and a written dissertation (50 percent). The thesis aims to advance understanding of how voice can be used both on its own and in combination with other input mechanisms in controlling multimedia applications. It offers a step forward in the attempt to integrate the paralinguistic components of voice as a complementary input mode to speech input applications, in order to create a synergistic combination that might let the strengths of each mode overcome the weaknesses of the other.
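
    To make the idea of paralinguistic input concrete, the sketch below extracts two such elements, loudness (RMS) and pitch (via autocorrelation), from a mono audio frame and maps pitch onto an arbitrary on-screen parameter. This is a hypothetical illustration, not one of the thesis projects; the sample rate, frame size, pitch range, and voicing threshold are assumptions.

```python
# Hypothetical paralinguistic feature extraction; not code from the thesis projects.
import numpy as np

FS = 16000  # assumed sample rate

def loudness_rms(frame: np.ndarray) -> float:
    """Frame loudness as root-mean-square amplitude."""
    return float(np.sqrt(np.mean(frame ** 2)))

def pitch_autocorr(frame: np.ndarray, fmin: float = 80.0, fmax: float = 400.0):
    """Crude fundamental-frequency estimate; returns None if the frame looks unvoiced."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(FS / fmax), int(FS / fmin)
    lag = lo + int(np.argmax(ac[lo:hi]))
    if ac[0] == 0 or ac[lag] < 0.3 * ac[0]:  # weak periodicity -> treat as unvoiced
        return None
    return FS / lag

# Demo: a synthetic 200 Hz 'voiced' frame drives, say, a brush size.
t = np.arange(int(0.05 * FS)) / FS
frame = 0.5 * np.sin(2 * np.pi * 200 * t)
f0 = pitch_autocorr(frame)
if f0 is not None:
    brush_size = np.interp(f0, [80, 400], [1, 50])  # higher pitch -> bigger brush
    print(f"f0 = {f0:.1f} Hz, loudness = {loudness_rms(frame):.2f}, brush = {brush_size:.0f}")
```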

    Music-Based Procedural Content Generation for Games

    Procedural content generation, the automatic creation of content by algorithmic means, is still a relatively recent topic in academic research. There are several motivations for the technique, chief among them reduced memory consumption: procedural generation algorithms can produce content in bulk while occupying orders of magnitude less disk space. The technique is commonly used in games to generate levels, maps, vegetation, and missions, and less commonly to generate or alter the game engine or the behaviour of NPCs (Non-Player Characters). Although most games feature music, it usually serves only to support the game and help create the desired atmosphere. Games that use music as a source of information for creating playable content are still rare, and even in those, the content is often pre-generated and static. New games have begun to differentiate themselves in this respect, using the chosen music to generate varied content automatically. The goal of this dissertation is to develop a game generated entirely procedurally from music segments, so that the levels created from different segments differ significantly, and to draw conclusions about the use of music as a procedural content generator. The game consists of stealth missions in which the player must cross each level using whatever resources they find, without being seen or caught by enemies. The game receives a song or music segment as input and, through analysis of that segment, extracts a set of salient features that distinguish it from other segments. Each level is then generated from these features, giving every mission diversity, above all in how it must be played.
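
    As a concrete illustration of the approach, the sketch below turns a handful of per-segment audio features into parameters for a small stealth level: tempo drives guard count, energy drives resource count, and duration drives level length. It is a hypothetical example under assumed feature names and generation rules, not the dissertation's implementation.

```python
# Hypothetical music-feature-driven level generator; not the dissertation's code.
import random

def generate_level(tempo_bpm: float, energy: float, duration_s: float) -> list:
    """Derive stealth-level parameters from one music segment's features."""
    # Deterministic seed per segment so the same music yields the same level.
    rng = random.Random(hash((round(tempo_bpm), round(energy, 2), round(duration_s))))
    width = max(20, int(duration_s))                # longer segment -> longer level
    height = 15
    n_guards = max(1, int(tempo_bpm / 30))          # faster music -> more guards
    n_resources = max(1, int((1.0 - energy) * 10))  # calmer music -> more pickups
    grid = [["." for _ in range(width)] for _ in range(height)]
    for symbol, count in (("G", n_guards), ("R", n_resources)):
        for _ in range(count):
            grid[rng.randrange(height)][rng.randrange(width)] = symbol
    grid[0][0], grid[-1][-1] = "S", "E"             # player start and level exit
    return ["".join(row) for row in grid]

# Demo: a fast, high-energy segment yields a guard-heavy, resource-poor level.
for row in generate_level(tempo_bpm=150, energy=0.8, duration_s=30):
    print(row)
```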