20 research outputs found

    Designing Sound for Social Robots: Advancing Professional Practice through Design Principles

    Full text link
    Sound is one of the core modalities social robots can use to communicate with the humans around them in rich, engaging, and effective ways. While a robot's auditory communication happens predominantly through speech, a growing body of work demonstrates the various ways non-verbal robot sound can affect humans, and researchers have begun to formulate design recommendations that encourage using the medium to its full potential. However, formal strategies for successful robot sound design have so far not emerged: current frameworks and principles are largely untested, and no effort has been made to survey creative robot sound design practice. In this dissertation, I combine creative practice, expert interviews, and human-robot interaction studies to advance our understanding of how designers can best ideate, create, and implement robot sound. In a first step, I map out a design space that combines established sound design frameworks with insights from interviews with robot sound design experts. I then systematically traverse this space across three robot sound design explorations, investigating (i) the effect of artificial movement sound on how robots are perceived, (ii) the benefits of applying compositional theory to robot sound design, and (iii) the role and potential of spatially distributed robot sound. Finally, I implement the designs from the prior chapters in the humanoid robot Diamandini and deploy it as a case study. Based on a synthesis of the data collection and design practice conducted across the thesis, I argue that the creation of robot sound is best guided by four design perspectives: fiction (sound as a means to convey a narrative), composition (sound as its own separate listening experience), plasticity (sound as something that can vary and adapt over time), and space (spatial distribution of sound as a separate communication channel). The conclusion of the thesis presents these four perspectives and proposes eleven design principles across them, supported by detailed examples. This work contributes an extensive body of design principles, process models, and techniques, providing researchers and designers with new tools to enrich the way robots communicate with humans.

    Examining Cognitive Empathy Elements within AI Chatbots for Healthcare Systems

    Get PDF
    Empathy is an essential part of communication in healthcare. It is a multidimensional concept whose two key dimensions, emotional and cognitive empathy, allow clinicians to understand a patient's situation, reasoning, and feelings clearly (Mercer and Reynolds, 2002). As artificial intelligence (AI) is increasingly used in healthcare for routine tasks, accurate diagnoses, and complex treatment plans, it is becoming more crucial to incorporate clinical empathy into patient-facing AI systems. Unless patients perceive that the AI understands their situation, communication between patient and AI may not be sustained effectively. AI may not exhibit genuine emotional empathy at present, but it can exhibit cognitive empathy by communicating that it understands patients' reasoning, perspectives, and points of view. In my dissertation, I examine this issue across three separate lab experiments and one interview study. First, I developed the AI Cognitive Empathy Scale (AICES) and tested all empathy components (emotional and cognitive) together against a control condition in a simulated patient-AI diagnosis scenario. In the second experiment, I tested the empathy components separately against control in different simulated scenarios. From the interview study with first-time mothers, I identified six cognitive empathy elements, two of which had not been reported in the past literature. In the final lab experiment, I tested different cognitive empathy elements separately, based on the interview results, in simulated scenarios to examine which element emerges as the most effective. Finally, I developed a conceptual model of cognitive empathy for patient-AI interaction connecting the past literature and the observations from my studies. Overall, cognitive empathy elements show promise for creating a shared understanding in patient-AI communication that may lead to increased patient satisfaction and willingness to use AI systems for initial diagnosis purposes.

    Analyse de la performance musicale et synthèse sonore rapide

    Get PDF
    This thesis in music computing explores instrumental performance on the one hand and proposes algorithms for fast additive sound synthesis on the other. A survey of research on the analysis of instrumental playing opens the document, covering the various components of musical interpretation and the parameters that can be extracted from a performance. A study of the importance of piano fingering is then presented: building on biomechanical work showing how a pianist's physiology, particularly the hands, shapes performance, piano playing is analysed to highlight the influence of fingering, and a new method for automatic fingering is proposed. Academic saxophone playing is also analysed in order to evaluate the technical level of saxophonists from the evolution of the spectral parameters of their sound, and an automatic method for this evaluation is proposed; the results are based on alto saxophone performances of non-expressive material such as scales, but the same approach can be applied to other instruments. The second part of the thesis studies fast additive synthesis algorithms. We first examine the use of non-linear synthesis techniques, and then present PASS (Polynomial Additive Sound Synthesis), a new method in which polynomials replace sine functions to accelerate sound synthesis; practical implementations show that the method is particularly efficient for low-frequency signals. An application of PASS is finally presented, enabling fast and realistic synthesis of ocean surfaces.
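    The abstract describes PASS only at a high level: polynomials take the place of sine functions so that each partial can be evaluated more cheaply. The following Python sketch illustrates that general idea under assumptions of my own (a cubic Taylor polynomial per short segment, with an arbitrary segment length); it is not the thesis's actual PASS algorithm.

```python
import numpy as np

def poly_sine_segment(freq, phase, t):
    """Approximate sin(2*pi*freq*t + phase) over a short time segment with a
    cubic Taylor polynomial expanded around the segment midpoint."""
    mid = t[len(t) // 2]
    x0 = 2 * np.pi * freq * mid + phase
    dx = 2 * np.pi * freq * (t - mid)
    # sin(x0 + dx) expanded to third order in dx
    return (np.sin(x0) + np.cos(x0) * dx
            - 0.5 * np.sin(x0) * dx**2
            - np.cos(x0) * dx**3 / 6.0)

def additive_poly_synth(partials, duration=1.0, sr=44100, segment=64):
    """Sum several (freq, amp, phase) partials, rendering each one segment by
    segment with the polynomial approximation above (a stand-in for PASS)."""
    n = int(duration * sr)
    t = np.arange(n) / sr
    out = np.zeros(n)
    for freq, amp, phase in partials:
        for start in range(0, n, segment):
            seg = slice(start, min(start + segment, n))
            out[seg] += amp * poly_sine_segment(freq, phase, t[seg])
    return out

# Example: three low-frequency partials, where short segments keep the
# cubic approximation accurate.
signal = additive_poly_synth([(55.0, 1.0, 0.0), (110.0, 0.5, 0.0), (165.0, 0.25, 0.0)])
```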

    Paradoxes of Interactivity

    Get PDF
    Current findings from anthropology, genetics, prehistory, cognitive science and neuroscience indicate that human nature is grounded in a co-evolution of tool use, symbolic communication, social interaction and cultural transmission. Digital information technology has recently entered as a new tool in this co-evolution, and will probably have the strongest impact on shaping the human mind in the near future. A common effort from the humanities, the sciences, art and technology is necessary to understand this ongoing co-evolutionary process. Interactivity is key to understanding the new relationships formed by humans with social robots, as well as the interactive environments and wearables underlying this process. Of special importance for understanding interactivity are human-computer and human-robot interaction, as well as media theory and New Media Art. »Paradoxes of Interactivity« brings together reflections on »interactivity« from different theoretical perspectives, the interplay of science and art, and recent technological developments for artistic applications, especially in the realm of sound.

    Musicianship for Robots with Style

    Get PDF
    In this paper we introduce a System conceived to serve as the "musical brain" of autonomous musical robots or agent-based software simulations of robotic systems. Our research goal is to provide robots with the ability to integrate with the musical culture of their surroundings. In a multi-agent configuration, the System can simulate an environment in which autonomous agents interact with each other as well as with external agents (e.g., robots, human beings or other systems). The main outcome of these interactions is the transformation and development of their musical styles as well as the musical style of the environment in which they live.
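    The abstract does not detail how agent interaction transforms musical styles, so the following is a purely hypothetical sketch of one way such an environment could be modelled: each agent holds a numeric style vector and nudges it toward its interlocutor after every encounter, so styles in the shared environment gradually converge or drift. The class and parameter names are illustrative and are not taken from the paper's System.

```python
import random

class Agent:
    """Toy agent whose 'style' is a list of feature weights
    (e.g., preferred interval sizes, note densities)."""

    def __init__(self, name, style, openness=0.2):
        self.name = name
        self.style = style          # floats describing the current style
        self.openness = openness    # how strongly an interaction changes it

    def interact(self, other):
        # Shift this agent's style a fraction of the way toward the other's.
        for i in range(len(self.style)):
            self.style[i] += self.openness * (other.style[i] - self.style[i])

agents = [Agent("robot", [0.8, 0.1, 0.5]),
          Agent("human", [0.2, 0.9, 0.4]),
          Agent("software", [0.5, 0.5, 0.5])]

for _ in range(50):                 # repeated pairwise encounters
    a, b = random.sample(agents, 2)
    a.interact(b)
    b.interact(a)

for agent in agents:
    print(agent.name, [round(v, 2) for v in agent.style])
```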

    Robotics in Germany and Japan

    Get PDF
    This book presents an intercultural and interdisciplinary framework encompassing current research fields such as Roboethics, Hermeneutics of Technologies, Technology Assessment, Robotics in Japanese Popular Culture, and Music Robots. Contributions on cultural interrelations, technical visions, and essays round out the content of the book.

    Timbral Learning for Musical Robots

    Get PDF
    The tradition of building musical robots and automata is thousands of years old. Despite this rich history, even today musical robots do not play with as much nuance and subtlety as human musicians. In particular, most instruments allow the player to manipulate timbre while playing; if a violinist is told to sustain an E, they will select which string to play it on, how much bow pressure and velocity to use, whether to use the entire bow or only the portion near the tip or the frog, how close to the bridge or fingerboard to contact the string, whether or not to use a mute, and so forth. Each one of these choices affects the resulting timbre, and navigating this timbre space is part of the art of playing the instrument. Nonetheless, this type of timbral nuance has been largely ignored in the design of musical robots. Therefore, this dissertation introduces a suite of techniques that deal with timbral nuance in musical robots. Chapter 1 provides the motivating ideas and introduces Kiki, a robot designed by the author to explore timbral nuance. Chapter 2 provides a long history of musical robots, establishing the under-researched nature of timbral nuance. Chapter 3 is a comprehensive treatment of dynamic timbre production in percussion robots and, using Kiki as a case study, provides a variety of techniques for designing striking mechanisms that produce a range of timbres similar to those produced by human players. Chapter 4 introduces a machine-learning algorithm for recognizing timbres, so that a robot can transcribe timbres played by a human during live performance. Chapter 5 introduces a technique that allows a robot to learn how to produce isolated instances of particular timbres by listening to a human play examples of those timbres. The sixth and final chapter introduces a method that allows a robot to learn the musical context of different timbres; this is done in realtime during interactive improvisation between a human and robot, wherein the robot builds a statistical model of which timbres the human plays in which contexts and uses this to inform its own playing.
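    The final chapter is described as building, in realtime, a statistical model of which timbres the human plays in which contexts. As a minimal illustration of that idea only (not the dissertation's actual algorithm), the sketch below counts context/timbre co-occurrences and samples from them; the context labels and timbre names are hypothetical.

```python
import random
from collections import defaultdict

class TimbreContextModel:
    """Count which timbre labels follow each context, then sample from those
    counts to choose a timbre for the robot's own playing."""

    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, context, timbre):
        # 'context' could be the previous timbre, a beat position, a dynamic
        # level, etc.; here it is just a hashable label.
        self.counts[context][timbre] += 1

    def choose(self, context):
        options = self.counts.get(context)
        if not options:
            return None  # unseen context: caller falls back to a default
        timbres, weights = zip(*options.items())
        return random.choices(timbres, weights=weights)[0]

# Toy usage: learn from a short human phrase, then respond in context.
model = TimbreContextModel()
human_phrase = [("downbeat", "rim"), ("offbeat", "center"), ("downbeat", "rim")]
for context, timbre in human_phrase:
    model.observe(context, timbre)
print(model.choose("downbeat"))  # most likely "rim"
```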

    Développement d'une plate-forme robotisée pour l'étude des instruments de musique à cordes pincées

    Get PDF
    The study of musical instruments involves the study of musicians, of instruments, and of the complex interaction that exists between them. The analysis of musical gestures requires numerous measurements on musicians to extract the relevant parameters needed to model this musician/instrument interaction. In the case of plucked string instruments, the goal is to determine the initial conditions imposed on the string by the plucking mechanism (plectrum, finger) controlled by the musician. How does one obtain all these parameters without disrupting the musician under playing conditions? How can one verify that these parameters are sufficient to describe the initial conditions of the string's vibration and its acoustic signature? A robotic experimental platform has been designed to answer these questions. It can reproduce the gesture of a musician, in particular of a harpist or a harpsichordist. It should be pointed out that the concept of a musical gesture is understood here in a broad sense: the robot can either reproduce the path followed by the musician's finger, or impose the initial conditions resulting from this trajectory, independently of the path followed. The first approach is particularly suited to solving an inverse dynamics problem; one can then calculate, for example, the forces developed by the musician during the execution of a musical excerpt. The second approach is better suited to imposing specific initial conditions on the instrument through study trajectories designed by the experimenter. Correct reproduction of the trajectories requires rejecting disturbances due to the contact between the robot and the instrument; the design of a force sensor integrated into the robot's end effector is a first step toward satisfying this requirement. After describing the design of the robotic platform and validating its precision and repeatability, an example of its use is presented in a study of the harmonization of harpsichord plectra. Harmonization is a complex process of adjustments made to the instrument by the luthier. A model of the plectrum/string interaction that takes into account the geometry of the plectrum, together with experiments performed on a real harpsichord, shows that harmonization affects not only the initial conditions of string vibration but also the feel experienced by the harpsichordist.
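    For orientation, the textbook idealization of a pluck on a string fixed at both ends (a triangular displacement of height d at position p, released from rest) gives modal amplitudes proportional to sin(nπp/L)/n². The sketch below computes those amplitudes and the resulting motion; it is only this standard idealization, not the thesis's plectrum/string model, and all numerical values are illustrative.

```python
import numpy as np

def plucked_string_modes(L, pluck_pos, pluck_disp, n_modes=20):
    """Modal amplitudes of an ideal fixed-fixed string released from a
    triangular shape: a_n = 2*d*L^2 / (n^2*pi^2*p*(L-p)) * sin(n*pi*p/L)."""
    n = np.arange(1, n_modes + 1)
    return (2.0 * pluck_disp * L**2
            / (n**2 * np.pi**2 * pluck_pos * (L - pluck_pos))
            * np.sin(n * np.pi * pluck_pos / L))

def string_motion(L, pluck_pos, pluck_disp, f0, x, t, n_modes=20):
    """Displacement y(x, t) of the ideal, undamped string."""
    a = plucked_string_modes(L, pluck_pos, pluck_disp, n_modes)
    n = np.arange(1, n_modes + 1)
    modes = np.sin(np.outer(n, np.pi * x / L))     # spatial mode shapes
    osc = np.cos(2 * np.pi * f0 * np.outer(n, t))  # harmonic time factors
    return (a[:, None] * osc).T @ modes            # shape (len(t), len(x))

# Example: 0.65 m string plucked 5 cm from one end by 2 mm, f0 = 220 Hz.
x = np.linspace(0, 0.65, 100)
t = np.linspace(0, 0.01, 50)
y = string_motion(0.65, 0.05, 0.002, 220.0, x, t)
```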

    Paradoxes of interactivity: perspectives for media theory, human-computer interaction, and artistic investigations

    Get PDF
    Current findings from anthropology, genetics, prehistory, cognitive science and neuroscience indicate that human nature is grounded in a co-evolution of tool use, symbolic communication, social interaction and cultural transmission. Digital information technology has recently entered as a new tool in this co-evolution, and will probably have the strongest impact on shaping the human mind in the near future. A common effort from the humanities, the sciences, art and technology is necessary to understand this ongoing co-evolutionary process. Interactivity is key to understanding the new relationships formed by humans with social robots, as well as the interactive environments and wearables underlying this process. Of special importance for understanding interactivity are human-computer and human-robot interaction, as well as media theory and New Media Art. "Paradoxes of Interactivity" brings together reflections on "interactivity" from different theoretical perspectives, the interplay of science and art, and recent technological developments for artistic applications, especially in the realm of sound.

    Methodology for the production and delivery of generative music for the personal listener : systems for realtime generative music production

    Get PDF
    This thesis will describe a system for the production of generative music through a specific methodology, and provide an approach for the delivery of this material. The system and body of work will be targeted specifically at the personal listening audience, which is currently the largest consumer of music across all genres and therefore the largest and most applicable market for which to develop such a system. By considering how recorded media compare to concert performance, it is possible to ascertain which attributes of performance may be translated to a generative medium. In addition, an outline of how fixed media have changed the way people listen to music will be considered. By looking at these concepts, an attempt is made to create a system which satisfies society's need for music that is not only commodified and easily approached, but which also closes the qualitative gap between a static delivery medium and concert-based output. This is approached within the context of contemporary classical music. Furthermore, by considering the development and fragmentation of the personal listening audience through technological developments, a methodology for the delivery of generative media to a range of devices will be investigated. A body of musical work will be created which attempts to realise these goals in a qualitative fashion. These works will span the development of the composition methodology and the algorithmic methods covered. A conclusion based on the possibilities of each system with regard to its qualitative output will form the basis for evaluation. As this investigation is situated within the field of music, the musical output and composition methodology will be considered the primary deciding factors of a system's feasibility. The contribution of this research to the field will be a methodology for the composition and production of algorithmic music in realtime, and a feasible method for the delivery of this music to a wide audience.
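    The abstract describes realtime algorithmic composition only in general terms. As a small illustration of one common generative technique of the kind such systems draw on (not the author's actual system), the sketch below trains a first-order Markov chain on a seed phrase of MIDI pitches and samples new material from it; the seed phrase and names are hypothetical.

```python
import random

def build_transition_table(seed_pitches):
    """Map each pitch to the list of pitches that follow it in the seed."""
    table = {}
    for a, b in zip(seed_pitches, seed_pitches[1:]):
        table.setdefault(a, []).append(b)
    return table

def generate(table, start, length):
    """Walk the Markov chain, falling back to any known state if stuck."""
    pitch = start
    out = [pitch]
    for _ in range(length - 1):
        pitch = random.choice(table.get(pitch, list(table)))
        out.append(pitch)
    return out

# MIDI note numbers of a short seed phrase.
seed = [57, 60, 64, 62, 60, 57, 55, 57, 60, 64, 65, 64, 62, 60]
table = build_transition_table(seed)
print(generate(table, 57, 16))
```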