40 research outputs found

    Designing Sound for Social Robots: Advancing Professional Practice through Design Principles

    Sound is one of the core modalities social robots can use to communicate with the humans around them in rich, engaging, and effective ways. While a robot's auditory communication happens predominantly through speech, a growing body of work demonstrates the various ways non-verbal robot sound can affect humans, and researchers have begun to formulate design recommendations that encourage using the medium to its full potential. However, formal strategies for successful robot sound design have so far not emerged: current frameworks and principles are largely untested, and no effort has been made to survey creative robot sound design practice. In this dissertation, I combine creative practice, expert interviews, and human-robot interaction studies to advance our understanding of how designers can best ideate, create, and implement robot sound. In a first step, I map out a design space that combines established sound design frameworks with insights from interviews with robot sound design experts. I then systematically traverse this space across three robot sound design explorations, investigating (i) the effect of artificial movement sound on how robots are perceived, (ii) the benefits of applying compositional theory to robot sound design, and (iii) the role and potential of spatially distributed robot sound. Finally, I implement the designs from the prior chapters on the humanoid robot Diamandini and deploy it as a case study. Based on a synthesis of the data collection and design practice conducted across the thesis, I argue that the creation of robot sound is best guided by four design perspectives: fiction (sound as a means to convey a narrative), composition (sound as its own separate listening experience), plasticity (sound as something that can vary and adapt over time), and space (spatial distribution of sound as a separate communication channel).
The conclusion of the thesis presents these four perspectives and proposes eleven design principles across them, each supported by detailed examples. This work contributes an extensive body of design principles, process models, and techniques, providing researchers and designers with new tools to enrich the way robots communicate with humans.

    Analysis and enhancement of interpersonal coordination using inertial measurement unit solutions

    Today’s mobile communication technologies have increased verbal and text-based communication with other humans, social robots, and intelligent virtual assistants. On the other hand, these technologies reduce face-to-face communication. This social issue is critical because decreasing direct interaction may make it harder to read social and environmental cues, thereby impeding the development of overall social skills. Recently, scientists have studied the importance of nonverbal interpersonal activities to social skills by measuring human behavioral and neurophysiological patterns. These interdisciplinary approaches are in line with the European Union research project “Socializing sensorimotor contingencies” (socSMCs), which aims to improve the capability of social robots and to properly address autism spectrum disorder (ASD). In this context, modelling and benchmarking the social behavior of healthy humans is fundamental to establishing a foundation for research on the emergence and enhancement of interpersonal coordination. In this project, two different experimental settings were categorized depending on the interactants’ distance: distal and proximal settings, between which the structure of the engaged cognitive systems changes and the level of socSMCs differs. As part of the socSMCs project, this dissertation adopted this spatial framework. Additionally, single-sensor solutions were developed to reduce the cost and effort of measuring human behavior, recognizing social behaviors, and enhancing interpersonal coordination. First, algorithms using a head-worn inertial measurement unit (H-IMU) were developed to measure human kinematics as a baseline for social behaviors. The results confirmed that the H-IMU can measure individual gait parameters from head kinematics alone.
Secondly, as a distal sensorimotor contingency, interpersonal rapport was considered with respect to a dynamic structure of three interacting components: positivity, mutual attentiveness, and coordination. The H-IMUs monitored social behavioral events based on the kinematics of head orientation and oscillation while walking and talking, which can contribute to estimating the level of rapport. Finally, in a new collaborative task with the proposed IMU-based tablet application, the results verified the effects of different forms of auditory-motor feedback on the enhancement of interpersonal coordination in a proximal setting. This dissertation has a strongly interdisciplinary character: technological development in the areas of sensor and software engineering had to be applied to, and resolve, issues directly tied to predefined behavioral-science questions in two different settings (distal and proximal). This frame of reference served as a guide in the development of the methods and settings in this dissertation. The proposed IMU-based solutions are also promising for various future applications, given the widespread availability of wearable devices with IMUs.
European Commission/HORIZON2020-FETPROACT-2014/641321/E
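The claim that gait parameters can be recovered from head kinematics alone can be illustrated with a minimal sketch: step cadence estimated by peak-picking the vertical head acceleration. The signal model, sampling rate, and threshold below are illustrative assumptions, not the dissertation's actual H-IMU algorithm.

```python
# Minimal sketch: step cadence from head-worn IMU vertical acceleration.
# The sinusoidal test signal and the fixed threshold are assumptions for
# illustration only, not the algorithm developed in the dissertation.
import math

def step_cadence(accel_z, fs, threshold=0.5):
    """Estimate steps per minute from vertical head acceleration.

    accel_z: vertical acceleration samples (gravity removed)
    fs: sampling frequency in Hz
    threshold: minimum amplitude for a sample to count as a step peak
    """
    steps = 0
    for i in range(1, len(accel_z) - 1):
        # Count a step at each local maximum that rises above the threshold.
        if accel_z[i] > threshold and accel_z[i - 1] < accel_z[i] >= accel_z[i + 1]:
            steps += 1
    duration_min = len(accel_z) / fs / 60.0
    return steps / duration_min

# Synthetic walking signal: 2 steps per second for 10 s, sampled at 100 Hz.
fs = 100
signal = [math.sin(2 * math.pi * 2.0 * t / fs) for t in range(10 * fs)]
print(step_cadence(signal, fs))  # ~120 steps/min
```

In practice a real H-IMU signal would need gravity removal and band-pass filtering before such peak-picking is reliable.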

    BendableSound: An Elastic Multisensory Surface Using Touch-based interactions to Assist Children with Severe Autism During Music Therapy

    Neurological Music Therapy uses live music to improve the sensorimotor regulation of children with severe autism. However, these children often lack musical training, and their impairments limit their interactions with musical instruments. In this paper, we present the co-design work that led to the BendableSound prototype: an elastic multisensory surface that encourages users to practice coordinated movements by touching a fabric to play sounds. We present the results of a formative study with 18 teachers showing that BendableSound was perceived as “usable” and “attractive”. We then present a deployment study with 24 children with severe autism showing that BendableSound is “easy to use” and may have therapeutic benefits for attention and motor development. We propose a set of design insights that could guide the design of natural user interfaces, particularly elastic multisensory surfaces. We close with a discussion and directions for future work.

    Interactive Sonification Strategies for the Motion and Emotion of Dance Performances

    The Immersive Interactive SOnification Platform (iISoP) is a research platform for the creation of novel multimedia art, as well as for exploratory research in the fields of sonification, affective computing, and gesture-based user interfaces. The goal of the iISoP’s dancer sonification system is to “sonify the motion and emotion” of a dance performance via musical auditory display. An additional goal of this dissertation is to develop and evaluate musical strategies for adding a layer of emotional mapping to data sonification. The series of dancer sonification design exercises led to the development of a novel musical sonification framework. The overall design process is divided into three main iterative phases: requirement gathering, prototype generation, and system evaluation. In the first phase, dancers and musicians contributed in a participatory-design fashion as domain experts in the field of non-verbal affective communication. Knowledge extraction took the form of semi-structured interviews, stimuli feature evaluation, workshops, and think-aloud protocols. In phase two, the expert dancers and musicians helped create testable stimuli for prototype evaluation. In phase three, system evaluation, experts (dancers, musicians, etc.) and novice participants were recruited to provide subjective feedback from the perspectives of both performer and audience. Based on the results of the iterative design process, a novel sonification framework that translates motion and emotion data into descriptive music is proposed and described.
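A mapping of the kind such a framework performs can be sketched as a function from normalized motion features to musical parameters. The feature names, ranges, and mapping choices below are illustrative assumptions, not the actual iISoP design.

```python
# Hedged sketch of a motion-to-music parameter mapping, in the spirit of the
# dancer sonification framework described above. All ranges and mappings here
# are illustrative assumptions, not the iISoP system's actual design.

def map_motion_to_music(speed, height, smoothness):
    """Map normalized motion features (each 0.0-1.0) to musical parameters.

    speed      -> tempo in BPM (faster motion, faster music)
    height     -> MIDI pitch (higher gestures, higher notes)
    smoothness -> articulation (smoother motion sustains notes longer)
    """
    tempo_bpm = 60 + speed * 120        # 60-180 BPM
    midi_pitch = int(48 + height * 36)  # C3 (MIDI 48) up to C6 (MIDI 84)
    legato = 0.2 + smoothness * 0.8     # note length as a fraction of a beat
    return {"tempo_bpm": tempo_bpm, "midi_pitch": midi_pitch, "legato": legato}

# A slow, low, fluid gesture maps to a calm, low, sustained musical phrase.
print(map_motion_to_music(0.25, 0.0, 1.0))
```

An emotional layer, as the dissertation proposes, would then modulate these parameters further (e.g. mode or timbre) based on affect recognition rather than raw kinematics.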

    Empathic Effects of Auditory Heartbeats: A Neurophysiological Investigation

    I hypothesized that hearing the heartbeat of another person would affect listeners’ empathic state, and designed an experiment to measure changes in behavior and cardiac neurophysiology. Participants (N = 27) completed modified versions of the Reading the Mind in the Eyes Task (RMET) in different auditory heartbeat conditions (slow, fast, silence, audio-only). For each trial, participants completed two measures of empathic state: cognitive (“What is this person feeling?”) and affective (“How well could you feel what they were feeling?”). I found that the presence of auditory heartbeats (i) changed cognitive empathy and (ii) increased affective empathy, and that these responses depended on the heartbeat tempo. I also analyzed two markers of cardiac neurophysiology: (i) heart rate (HR) and (ii) the heartbeat-evoked potential (HEP). I found that the auditory heartbeat decreased listeners’ HR, with additional effects of tempo and affective empathy. Finally, a frontal component of the HEP was more negative in the 350-500 ms time range, which I attribute to a decrease in cardiac attention (i.e. “interoception”) when listening empathically to the heartbeat of others.
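The first cardiac marker above, heart rate, is conventionally derived from inter-beat (R-R) intervals. A minimal sketch, with illustrative interval values rather than data from this study:

```python
# Hedged sketch: mean heart rate from inter-beat (R-R) intervals, the
# conventional derivation of the HR marker analyzed above. The interval
# values are illustrative, not data from the experiment.

def mean_heart_rate(rr_intervals_s):
    """Mean heart rate in beats per minute from R-R intervals in seconds."""
    mean_rr = sum(rr_intervals_s) / len(rr_intervals_s)
    return 60.0 / mean_rr

# Uniform R-R intervals of 0.8 s correspond to roughly 75 beats per minute.
print(mean_heart_rate([0.8, 0.8, 0.8]))  # ~75 BPM
```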

    Accessibility of Health Data Representations for Older Adults: Challenges and Opportunities for Design

    Health data from consumer off-the-shelf wearable devices are often conveyed to users through visual data representations and analyses. However, these are not always accessible to people with disabilities or to older people because of low vision, cognitive impairments, or literacy issues. Owing to trade-offs between aesthetic emphasis and information overload, real-time user feedback may not be conveyed easily from sensor devices through visual cues such as graphs and text. These difficulties may hinder critical data understanding. Additional auditory and tactile feedback can provide immediate and accessible cues from these wearable devices, but the limitations of existing data representations must be understood first. To avoid higher cognitive and visual overload, auditory and haptic cues can be designed to complement, replace, or reinforce visual cues. In this paper, we outline the challenges in existing data representation and the evidence needed to enhance the accessibility of health information from personal sensing devices used to monitor health parameters such as blood pressure, sleep, activity, heart rate, and more. With innovative and inclusive user feedback, users will be more likely to engage and interact with new devices and their own data.

    Development and evaluation of a haptic framework supporting telerehabilitation robotics and group interaction

    Telerehabilitation robotics has grown remarkably in the past few years. It can provide intensive training to people with special needs remotely while allowing therapists to observe the whole process. Telerehabilitation robotics is a promising way to support routine care: it can help transform face-to-face, one-on-one treatment sessions, which demand intensive human resources and are restricted to a few specialised care centres, into technology-based treatments with less human involvement that are easy to access remotely from anywhere. However, limitations such as network latency, jitter, and internet delay can negatively affect user experience and the quality of the treatment session. Moreover, because all treatments are performed over the internet, the lack of social interaction can reduce patients’ motivation. These limitations make it very difficult to deliver an efficient recovery plan. This thesis developed and evaluated a new framework designed to facilitate telerehabilitation robotics. The framework integrates multiple cutting-edge technologies to generate playful activities that involve group interaction with binaural audio, visual, and haptic feedback and robot interaction in a variety of environments. The research questions were: 1) Can activity mediated by technology motivate and influence the behaviour of users so that they engage in the activity and sustain a good level of motivation? 2) Will working as a group enhance users’ motivation and interaction? 3) Can we transfer a real-life activity involving group interaction to the virtual domain and deliver it reliably via the internet?
There were three goals in this work: the first was to compare people’s behaviours and motivations while doing the task in a group and on their own; the second was to determine whether group interactions in virtual and real environments differ in terms of performance, engagement, and strategy for completing the task; the third was to test the effectiveness of the framework against benchmarks drawn from the socially assistive robotics literature. Three studies were conducted to achieve the first goal: two with healthy participants and one with seven autistic children. The first study observed how people react in a challenging group task, while the other two compared group and individual interactions. The results showed that group interactions were more enjoyable than individual interactions and most likely had more positive effects on user behaviour. This suggests that the group-interaction approach has the potential to motivate individuals to move more and be more active, and could be applied in the future to more serious therapy. A further study measured group-interaction performance in virtual and real environments and identified which aspects influence users’ strategy for dealing with the task. The results of this study helped form a better understanding for predicting a user’s behaviour in a collaborative task. A simulation was run to compare the results generated by the predictor with the real data; it showed that, with an appropriate training method, the predictor can perform very well. This thesis has demonstrated the feasibility of group interaction via the internet using robotic technology, which could benefit people who require social interaction in their treatments (e.g. stroke patients and autistic children) without regular visits to clinical centres.

    Proceedings of the 6th international conference on disability, virtual reality and associated technologies (ICDVRAT 2006)

    The proceedings of the conference.