
    Embodied responses to musical experience detected by human bio-feedback brain features in a geminoid augmented architecture

    This paper presents the conceptual framework for a study of musical experience and the associated architecture centred on Human-Humanoid Interaction (HHI). On the grounds of the theoretical and experimental literature on the biological foundations of music, the grammar of music perception, and the perception and feeling of emotions in music hearing, we argue that music cognition is specific and that it is realized by a cognitive capacity for music consisting of conceptual and affective constituents. We discuss the relationship between these constituents that enables understanding, that is, the extraction of meaning from music at the different levels of the organization of sounds that are felt as bearers of affects and emotions. To account for the way such cognitive mechanisms are realized in music hearing and extended to movements and gestures, we introduce the construct of tensions and the notion of music experience as a cognitive frame. Finally, we describe the principled approach to the design and architecture of a BCI-controlled robotic system that can be employed to map and specify the constituents of the cognitive capacity for music, as well as to simulate their contribution to the understanding of musical meaning in the context of music experience by displaying it through the movements of the Geminoid robot.
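
    The abstract describes the architecture only conceptually. A minimal sketch of the core idea, mapping bio-feedback-derived affective features onto expressive movement parameters for a humanoid, might look like the following. All names (AffectEstimate, GestureCommand, map_affect_to_gesture) and the specific mapping are illustrative assumptions, not the paper's architecture.

```python
from dataclasses import dataclass

@dataclass
class AffectEstimate:
    """Hypothetical affective state decoded from bio-feedback (e.g. EEG) features."""
    valence: float   # -1.0 (negative) .. +1.0 (positive)
    arousal: float   #  0.0 (calm)     ..  1.0 (excited)
    tension: float   #  0.0 (relaxed)  ..  1.0 (tense)

@dataclass
class GestureCommand:
    """Hypothetical movement qualities for an expressive humanoid gesture."""
    amplitude: float   # how large the movement is (0..1)
    speed: float       # how fast it unfolds (0..1)
    smoothness: float  # 1.0 = flowing, 0.0 = abrupt

def map_affect_to_gesture(affect: AffectEstimate) -> GestureCommand:
    """Map a decoded affective state onto movement qualities.

    One plausible convention: arousal drives speed, valence drives amplitude,
    and felt musical tension reduces smoothness (more angular, held gestures).
    """
    return GestureCommand(
        amplitude=0.5 + 0.5 * affect.valence,
        speed=affect.arousal,
        smoothness=1.0 - affect.tension,
    )

if __name__ == "__main__":
    # A tense, highly aroused, mildly positive moment in the music.
    affect = AffectEstimate(valence=0.2, arousal=0.8, tension=0.7)
    print(map_affect_to_gesture(affect))
```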

    Can my robotic home cleaner be happy? Issues about emotional expression in non-bio-inspired robots.

    In many robotic applications the robot body must have a functional shape that cannot include bio-inspired elements, yet it is still important that the robot can express emotions, moods, or a character, to make it acceptable and to engage its users. Dynamic signals from movement can be exploited to provide this expression while the robot is acting to perform its task. A research effort has been started to find general emotion-expression models for actions that could be applied to any kind of robot to obtain believable and easily detectable emotional expressions. Along this path, the need for a unified representation of emotional expression emerged. This paper proposes a framework to define action characteristics that can be used to represent emotions. Guidelines are provided to identify quantitative models and numerical values for parameters, which can be used to design and engineer emotional robot actions. A set of robots having different shapes, movement possibilities, and goals has been implemented following these guidelines. Thanks to the proposed framework, different models of emotional expression can now be compared in a sound way, and the question posed in the title can now be answered in a justified way.
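
    The abstract names the framework but not its parameters. A minimal sketch of the general idea, emotion-dependent modulation of an action's dynamic qualities that any robot shape could realize, is given below. The parameter names and numeric values are illustrative assumptions, not the published model.

```python
from dataclasses import dataclass

@dataclass
class MotionProfile:
    """Hypothetical shape-independent dynamic qualities of a robot action."""
    speed_scale: float      # multiplier on nominal joint/wheel speed
    acceleration: float     # sharpness of starts and stops (0..1)
    path_curvature: float   # 0 = straight, direct paths; 1 = wide, rounded paths
    pause_ratio: float      # fraction of the action spent hesitating

# Illustrative parameter sets; a real model would be fitted from perception studies.
EMOTION_PROFILES = {
    "happy": MotionProfile(speed_scale=1.3, acceleration=0.7, path_curvature=0.6, pause_ratio=0.0),
    "sad":   MotionProfile(speed_scale=0.6, acceleration=0.2, path_curvature=0.3, pause_ratio=0.3),
    "angry": MotionProfile(speed_scale=1.5, acceleration=0.9, path_curvature=0.1, pause_ratio=0.0),
    "calm":  MotionProfile(speed_scale=0.9, acceleration=0.3, path_curvature=0.5, pause_ratio=0.1),
}

def expressive_duration(nominal_duration_s: float, emotion: str) -> float:
    """Rescale a task action's duration according to the emotion's dynamic profile."""
    profile = EMOTION_PROFILES[emotion]
    active = nominal_duration_s / profile.speed_scale
    return active * (1.0 + profile.pause_ratio)

if __name__ == "__main__":
    # The same 4-second cleaning sweep, expressed with different emotions.
    for emotion in EMOTION_PROFILES:
        print(emotion, round(expressive_duration(4.0, emotion), 2), "s")
```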

    Advances in Human-Robot Interaction

    Rapid advances in the field of robotics have made it possible to use robots not just in industrial automation but also in entertainment, rehabilitation, and home service. Since robots will likely affect many aspects of human existence, fundamental questions of human-robot interaction must be formulated and, if at all possible, resolved. Some of these questions are addressed in this collection of papers by leading HRI researchers.

    Cyber-Narrative in Opera: Three Case Studies

    This dissertation looks at three newly composed operas that feature what I call cyber-narratives: works in which the story itself is inextricably linked with digital technologies, such that the characters utilize, interact with, or are affected by digital technologies to such a pervasive extent that the impact of those technologies is thematized within the work. Through an analysis of chat rooms and real-time text communication in Nico Muhly’s Two Boys (2011), artificial intelligence in Søren Nils Eichberg’s Glare (2014), and mind uploading and digital immortality in Tod Machover’s Death and the Powers (2010), a nexus of ideologies surrounding voice, the body, gender, digital anthropology, and cyber-culture is revealed. I consider the interpretive possibilities that emerge when analyzing voice and musical elements in conjunction with cultural references within the libretti, visual design choices in the productions, and directorial decisions in the evolution of each work. I theorize the expressive power of the operatic medium in dramatizing and personifying new forms of technology, while simultaneously exposing how these technologically oriented narratives reinforce and rely upon operatic tropes of the past. Recurring themes of misogyny and objectification of women across all three works are addressed, as is the framing of digital technology as a mechanism of dehumanization. This analysis also focuses on the unique sung and embodied aspect of opera, and on how the human voice shapes concepts of identity, agency, and individuality in the digital age. All three case studies demonstrate how opera gives the cyber-narrative every possible mode of expression to explore the complexities and anxieties of human-machine relationships in the digital era, as all three operas question how the thematized technologies may come to redefine our perception and experience of humanity itself.

    Strategies of authentication in Japanese experimental music

    In this thesis I examine how Japanese experimental musicians relate to demands for authenticity. There is no clear definition of authenticity, and the meaning of the word can vary considerably depending on who uses it and in what situation. Nevertheless, some definitions have had greater impact than others. In short, whatever is for one reason or another perceived as 'real' and 'not copied' is also regarded as authentic. In a musical context the term can be applied to various forms of folk music, because folk music represents the 'real' culture in which it arose; this is often called cultural authenticity. Personal authenticity is achieved when a musician or group of musicians performs music with a personal message. Commercial pop music is therefore perceived by many as lacking authenticity, because its function is largely to generate money for record companies. Since this is one of the main priorities of commercial music, the music is required to follow a range of conventional rules and structures. Experimental music, on the other hand, is often perceived as more authentic than commercial music because it demands, first and foremost, individualism, creativity, and originality. Japan today has an influential experimental music scene, which has received growing attention from corresponding scenes in the West. Many Japanese musicians have gained attention in the West for their untraditional and creative approach to music. At the same time, there are a number of stereotypes and prejudices about the Japanese which, taken to their extreme, would suggest that such a scene could not emerge in Japanese society. Both in the West and in Japan there is a widespread notion that the Japanese are group-oriented, in contrast to the individualists of the West, and that the Japanese are technically skilled but struggle to express their own personality. Perhaps one of the most widespread prejudices is that the Japanese copy the West while themselves being unoriginal and uncreative. Several of these presumed characteristics would make it impossible for Japanese music to be perceived as authentic. In this thesis I attempt to show which strategies Japanese musicians have adopted to counter this problem.

    Application of Intermediate Multi-Agent Systems to Integrated Algorithmic Composition and Expressive Performance of Music

    We investigate the properties of a new Multi-Agent System (MAS) for computer-aided composition called IPCS (pronounced “ipp-siss”), the Intermediate Performance Composition System, which generates expressive performance as part of its compositional process and produces emergent melodic structures by a novel multi-agent process. IPCS consists of a small to medium-sized (2 to 16) collection of agents in which each agent can perform monophonic tunes and learn monophonic tunes from other agents. Each agent has an affective state (an “artificial emotional state”) which affects how it performs music to other agents; e.g. a “happy” agent will perform “happier” music. The agent performance not only involves compositional changes to the music, but also adds smaller changes based on expressive music performance algorithms for humanization. Every agent is initialized with a tune containing the same single note, and over the interaction period longer tunes are built through agent interaction. Agents will only learn tunes performed to them by other agents if the affective content of the tune is similar to their current affective state; learned tunes are concatenated to the end of their current tune. Each agent in the society thus learns its own growing tune during the interaction process. Agents develop “opinions” of the other agents that perform to them, depending on how much the performing agent helps their tunes grow; these opinions affect whom they interact with in the future. IPCS is not a mapping from multi-agent interaction onto musical features, but actually utilizes music for the agents to communicate emotions. In spite of the lack of explicit melodic intelligence in IPCS, the system is shown to generate non-trivial melodic pitch sequences as a result of emotional communication between agents. The melodies also have a hierarchical structure that derives from the emergent social interaction structure of the multi-agent system. The interactive humanizations produce micro-timing and loudness deviations in the melody which are shown to express its hierarchical generative structure without the need for the structural analysis software frequently used in computer music humanization.
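
    The abstract states the interaction rules without giving code. A minimal sketch of that rule set, agents that accept a performed tune only when its affective content is close to their own state, concatenate it to their tune, and update an opinion of the performer, could look like the following. Class and function names, the scalar affect representation, and the similarity threshold are illustrative assumptions, not taken from the IPCS implementation.

```python
import random
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Hypothetical IPCS-style agent: an affective state, a growing tune, and opinions."""
    name: str
    affect: float                                      # scalar stand-in for an affective state, 0..1
    tune: list = field(default_factory=lambda: [60])   # MIDI pitches; every agent starts with one note
    opinions: dict = field(default_factory=dict)       # performer name -> opinion score

    def perform(self) -> tuple:
        """Perform the current tune, coloured by the agent's affect (here: crudely transposed)."""
        shift = round(self.affect * 4)
        return [p + shift for p in self.tune], self.affect

    def listen(self, performer: "Agent") -> None:
        """Learn the performed tune only if its affective content is close to our own state."""
        performed_tune, performed_affect = performer.perform()
        if abs(performed_affect - self.affect) < 0.2:  # similarity threshold (assumed value)
            self.tune.extend(performed_tune)           # learned tunes are concatenated
            gain = len(performed_tune)
        else:
            gain = 0
        # Opinion grows with how much the performer helped this agent's tune grow.
        self.opinions[performer.name] = self.opinions.get(performer.name, 0) + gain

    def pick_partner(self, others: list) -> "Agent":
        """Prefer interacting with agents we hold higher opinions of."""
        weights = [1 + self.opinions.get(o.name, 0) for o in others]
        return random.choices(others, weights=weights, k=1)[0]

if __name__ == "__main__":
    agents = [Agent(f"a{i}", affect=random.random()) for i in range(8)]
    for _ in range(200):                               # interaction period
        listener = random.choice(agents)
        performer = listener.pick_partner([a for a in agents if a is not listener])
        listener.listen(performer)
    longest = max(agents, key=lambda a: len(a.tune))
    print(longest.name, "tune length:", len(longest.tune))
```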