5 research outputs found

    Study on Perception-Action Scheme for Human-Robot Musical Interaction in Wind Instrumental Play

    Degree system: new; Report number: Kō 3337; Degree: Doctor of Engineering; Date conferred: 2011/2/25; Waseda University degree number: Shin 564

    Effect of a human-teacher vs. a robot-teacher on human learning: a pilot study

    Studies of the dynamics of human-robot interaction have increased within the past decade as robots become more integrated into the daily lives of humans. However, much of the research into learning and robotics has focused on methods that would allow robots to learn from humans; very little has been done on how and what, if anything, humans could learn from programmed robots. A between-subjects experiment was conducted in which two groups were compared: one group learned a simple pick-and-place block task via video of a human-teacher, and the other learned the same task via video of a robot-teacher. After being taught the task, the participants performed a 15-minute distracter task and were then timed in their reconstruction of the block configuration. An exit survey asking about their level of comfort learning from robot and computer entities was given upon completion. Results showed no significant difference in the rebuild scores of the two groups, but a marginally significant difference in their rebuild times. Exit survey results, research implications, and future work are discussed.
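
    For illustration, a minimal sketch of the kind of between-subjects comparison the abstract describes, using an independent-samples t-test on rebuild times. The data values and group sizes below are invented placeholders, not the study's data.

```python
# Hypothetical sketch of the between-subjects comparison described above:
# rebuild times for a human-teacher group vs. a robot-teacher group.
# All numbers are invented for illustration only.
from scipy import stats

human_teacher_times = [92, 104, 88, 117, 95, 101, 110, 86]    # seconds (invented)
robot_teacher_times = [108, 121, 99, 130, 112, 118, 125, 104]  # seconds (invented)

# Independent-samples t-test, the standard analysis for two unrelated groups.
t_stat, p_value = stats.ttest_ind(human_teacher_times, robot_teacher_times)

# A p-value just above 0.05 would correspond to the "marginally significant"
# difference in rebuild times reported in the abstract.
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```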

    Robotics in Germany and Japan

    This book presents an intercultural and interdisciplinary framework encompassing current research fields such as Roboethics, Hermeneutics of Technologies, Technology Assessment, Robotics in Japanese Popular Culture, and Music Robots. Contributions on cultural interrelations, technical visions, and essays round out the content of this book.

    Examining Cognitive Empathy Elements within AI Chatbots for Healthcare Systems

    Empathy is an essential part of communication in healthcare. It is a multidimensional concept, and its two key dimensions, emotional and cognitive empathy, allow clinicians to understand a patient’s situation, reasoning, and feelings clearly (Mercer and Reynolds, 2002). As artificial intelligence (AI) is increasingly used in healthcare for routine tasks, accurate diagnoses, and complex treatment plans, it is becoming more crucial to incorporate clinical empathy into patient-facing AI systems. Unless patients perceive that the AI understands their situation, communication between patient and AI may not be sustained effectively. AI may not genuinely exhibit emotional empathy at present, but it can exhibit cognitive empathy by communicating that it understands patients’ reasoning, perspectives, and points of view. In my dissertation, I examine this issue across three separate lab experiments and one interview study. First, I developed the AI Cognitive Empathy Scale (AICES) and tested all empathy components (emotional and cognitive) together against a control condition in a simulated patient-AI diagnosis scenario. In the second experiment, I tested the empathy components separately against control conditions in different simulated scenarios. From the interview study with first-time mothers, I identified six cognitive empathy elements, two of which were absent from the past literature. In the final lab experiment, I tested the cognitive empathy components separately in simulated scenarios, based on the interview results, to examine which element emerges as the most effective. Finally, I developed a conceptual model of cognitive empathy for patient-AI interaction that connects the past literature with the observations from my studies. Overall, cognitive empathy elements show promise for creating a shared understanding in patient-AI communication, which may lead to increased patient satisfaction and willingness to use AI systems for initial diagnosis.
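
    As a rough sketch of how a scale like AICES might be scored: the abstract does not publish the instrument's items or response format, so the item texts and the 5-point Likert scoring below are purely assumptions for illustration.

```python
# Illustrative sketch only: the AI Cognitive Empathy Scale (AICES) is named in
# the abstract, but its items are not given here, so these item texts and the
# 1-5 Likert format are invented assumptions.
items = [
    "The AI understood my reasoning.",
    "The AI acknowledged my perspective.",
    "The AI restated my situation accurately.",
]

def aices_score(responses: list[int]) -> float:
    """Average a participant's 1-5 Likert responses into one scale score."""
    if len(responses) != len(items):
        raise ValueError("one response per scale item expected")
    if not all(1 <= r <= 5 for r in responses):
        raise ValueError("responses must be on a 1-5 Likert scale")
    return sum(responses) / len(responses)

print(aices_score([4, 5, 3]))  # -> 4.0
```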

    Physical modelling meets machine learning: performing music with a virtual string ensemble

    This dissertation describes a new method of computer performance of bowed string instruments (violin, viola, cello) using physical simulations and intelligent feedback control. Computer synthesis of music performed by bowed string instruments is a challenging problem. Unlike instruments whose notes originate with a single discrete excitation (e.g., piano, guitar, drum), bowed string instruments are controlled with a continuous stream of excitations (i.e., the bow scraping against the string). Most existing synthesis methods rely on recorded audio samples, which work quite well for single-excitation instruments but not for continuous-excitation instruments. This work improves the realism of synthesized violin, viola, and cello sound by generating audio through modelling the physical behaviour of the instruments.

    A string's wave equation is decomposed into 40 modes of vibration, which can be acted upon by three forms of external force: a bow scraping against the string, a left-hand finger pressing down, and/or a right-hand finger plucking. The vibration of each string exerts force against the instrument bridge; these forces are summed and convolved with the instrument body's impulse response to create the final audio output. In addition, right-hand haptic output is created from the force of the bow against the string. Physical constants from ten real instruments (five violins, two violas, and three cellos) were measured and used in these simulations. The physical modelling was implemented in a high-performance library capable of simulating audio on a desktop computer one hundred times faster than real time. The program also generates animated video of the instruments being performed.

    To perform music with the physical models, a virtual musician interprets the musical score and generates actions which are then fed into the physical model. The resulting audio and haptic signals are examined with a support vector machine, which adjusts the bow force in order to establish and maintain a good timbre. This intelligent feedback control is trained with human input, but after the initial training is completed the virtual musician performs autonomously. A PID controller adjusts the position of the left-hand finger to correct any flaws in the pitch. Some performance parameters (initial bow force, force correction, and lifting factors) require an initial value for each string and musical dynamic; these are calibrated automatically using the previously trained support vector machines. The timbre judgements are retained after each performance and used to pre-emptively adjust bowing parameters to avoid or mitigate problematic timbre in future performances of the same music.

    The system is capable of playing sheet music with approximately the same ability level as a human music student after two years of training. Owing to the number of instruments measured and the generality of the machine learning, music can be performed with ensembles of up to ten stringed instruments, each with a distinct timbre. This provides a baseline for future work in computer control and expressive music performance of virtual bowed string instruments.
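
    A minimal sketch of the modal synthesis idea described above: each of the string's modes is treated as a damped harmonic oscillator driven by a point force, the modal responses are summed into a bridge force, and that signal is convolved with a body impulse response. The dissertation uses measured constants from real instruments; every numeric value below is an invented placeholder, and the bow gesture and body response are toy stand-ins.

```python
# Sketch of modal string synthesis in the spirit of the abstract: N modes of
# the string's wave equation, each a damped oscillator driven by an external
# force applied at one point. All constants are assumed, not measured values.
import numpy as np

FS = 44100                 # audio sample rate (Hz)
N_MODES = 40               # modes retained from the string's wave equation
F0 = 196.0                 # fundamental frequency (G3, a violin open string)
L = 0.33                   # speaking length of the string (m, assumed)
x_bow = 0.12 * L           # bowing point along the string (assumed)
dt = 1.0 / FS

# Modal state: displacement and velocity of each mode.
disp = np.zeros(N_MODES)
vel = np.zeros(N_MODES)
n = np.arange(1, N_MODES + 1)
omega = 2 * np.pi * F0 * n                     # ideal-string modal frequencies
damping = 2.0 + 0.02 * n**2                    # frequency-dependent loss (assumed)
shape_at_bow = np.sin(n * np.pi * x_bow / L)   # mode shapes at the bow point

def step(external_force: float) -> float:
    """Advance all modes one sample; return the summed force at the bridge."""
    global disp, vel
    # Project the point force onto each mode, then integrate
    # (semi-implicit Euler, adequate for a sketch).
    accel = external_force * shape_at_bow - damping * vel - omega**2 * disp
    vel += accel * dt
    disp += vel * dt
    # Bridge force is proportional to the summed modal slopes at x = L.
    return float(np.sum(disp * n * np.cos(n * np.pi)))

# Drive the string with a crude oscillating bow force and render one second.
bow_force = 0.5 + 0.5 * np.sin(2 * np.pi * 3.0 * np.arange(FS) * dt)  # toy gesture
bridge = np.array([step(f) for f in bow_force])

# Convolving the bridge signal with the instrument body's impulse response
# gives the radiated sound; a decaying noise burst stands in for it here.
body_ir = np.random.randn(256) * np.exp(-np.arange(256) / 40.0)
audio = np.convolve(bridge, body_ir)[:FS]
```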
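
    The pitch-correction loop can be sketched the same way: the abstract states that a PID controller adjusts the left-hand finger position to fix pitch flaws, so the structure below follows the standard PID form, while the gains and the linear pitch model are invented stand-ins for the dissertation's actual detector and tuning.

```python
# Hedged sketch of the pitch-correction loop: a PID controller nudges the
# virtual left-hand finger until the detected pitch matches the target.
class PID:
    def __init__(self, kp: float, ki: float, kd: float):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error: float, dt: float) -> float:
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def detected_pitch(finger_pos: float) -> float:
    # Toy stand-in for a pitch detector: deviation in cents grows linearly
    # with finger error; 0.25 is the "correct" position in this model.
    return 1200.0 * (finger_pos - 0.25)

pid = PID(kp=0.0005, ki=0.0001, kd=0.0)  # gains chosen for the toy model only
finger = 0.24                             # start slightly flat
for _ in range(200):                      # one correction step per control frame
    cents_error = 0.0 - detected_pitch(finger)  # target deviation is 0 cents
    finger += pid.update(cents_error, dt=0.01)
print(f"settled finger position: {finger:.4f}")  # converges toward 0.25
```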