2,425 research outputs found

    Finding resilience through music for neurodivergent children

    Get PDF
    This research paper presents a collaborative effort to design a music-making tool that blends enjoyment with accessibility, specifically tailored to meet the needs of children with diverse abilities, including those who are neurodivergent and those with varying musical abilities. The study's primary objective is to support children who encounter challenges in learning traditional musical instruments or who have sensory processing issues, and to learn about their experience of using the tool. Additionally, the research explores the potential role of music therapy in this context, with a focus on how the designed tool can serve as a platform for fostering creativity and self-regulation among children. Qualitative research methods, namely participatory design and cooperative inquiry, were employed to iteratively develop and refine different aspects of the music-making tool. Active involvement and feedback from the primary participants, comprising children with diverse abilities and a music therapist, played a central role throughout the tool's development. The findings indicate that children responded positively to the technology, revealing diverse applications in music education, therapy, and play. Furthermore, the study identified opportunities for immediate improvements in the robot's design to enhance its usability and effectiveness in catering to the needs of its users. The collaborative design approach and the integration of music therapy perspectives demonstrate significant potential for advancing inclusive music education, play, and therapeutic interventions for children with diverse abilities.

    Musical morphogenesis - a self-organizing system

    Get PDF
    We feel and apprehend the built environment through our senses and the body's interactive movement. During this process, our mind and body work out solutions and methods of integration and adaptation that enable us to live with and within our surrounding environment. In this paper, we provide an overview of "Musical Morphogenesis", an interactive installation that interacts with the environment and its inhabitants through colour, light, movement and sound. In addition, we intend to take visitors on a sensorial journey to explore the dynamic action of a network of genes during the development of an organism. Finding its roots in the theory of autopoiesis (Maturana & Varela 1980), "Musical Morphogenesis" acts and interacts as a self-producing system. The installation results from a multidisciplinary collaboration across six main scientific disciplines: complex systems, computational biology, music, architecture, robotics, and science communication. During the design and implementation of the installation's components, the specificities of each discipline had to be taken into consideration, resulting in an extremely challenging project.
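
    The abstract does not describe how the gene network is modelled or how it drives the installation's actuators, so the Python sketch below is purely illustrative: a small ring of three mutually repressing genes, integrated with Euler steps, whose simulated expression levels are mapped to hypothetical colour, brightness and pitch parameters. All names, constants and mappings here are assumptions for illustration, not the installation's implementation.

    # Illustrative sketch only: a three-gene repressor ring whose simulated
    # expression levels drive hypothetical light and sound parameters.
    def step(x, dt=0.05, alpha=10.0, n=2.0, decay=1.0):
        """Advance expression levels x = [g0, g1, g2] by one Euler step.
        Each gene is repressed by the previous gene in the ring."""
        return [
            xi + dt * (alpha / (1.0 + x[(i - 1) % 3] ** n) - decay * xi)
            for i, xi in enumerate(x)
        ]

    def to_outputs(x):
        """Map expression levels to hypothetical installation parameters."""
        total = sum(x) or 1.0
        hue = 360.0 * x[0] / total           # colour of the lights (degrees)
        brightness = min(1.0, x[1] / 10.0)   # LED intensity, 0..1
        pitch = 220.0 * 2 ** (x[2] / 10.0)   # oscillator frequency in Hz
        return hue, brightness, pitch

    x = [1.0, 2.0, 3.0]
    for t in range(200):
        x = step(x)
        if t % 50 == 0:
            hue, bri, pitch = to_outputs(x)
            print(f"t={t:3d} hue={hue:6.1f} brightness={bri:.2f} pitch={pitch:6.1f} Hz")

    In the installation itself, visitor interaction would presumably perturb this state in real time; that coupling is omitted from the sketch.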

    Extempore: The design, implementation and application of a cyber-physical programming language

    Get PDF
    There is a long history of experimental and exploratory programming supported by systems that expose interaction through a programming language interface. These live programming systems enable software developers to create, extend, and modify the behaviour of executing software by changing source code without perceptual breaks for recompilation. Such systems have taken many forms, but have generally been limited in their ability to express low-level programming concepts and to generate efficient native machine code. These shortcomings have limited the effectiveness of live programming in domains that require highly efficient numerical processing and explicit memory management. The most general questions addressed by this thesis are what a systems language designed for live programming might look like, and how such a language might influence the development of live programming in performance-sensitive domains requiring real-time support, direct hardware control, or high-performance computing. This thesis answers these questions by exploring the design, implementation and application of Extempore, a new systems programming language designed specifically for live interactive programming.
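
    The abstract characterises live programming as changing the behaviour of executing software without a break for recompilation. Extempore's own xtlang syntax is not shown there, so the Python sketch below illustrates only the general idea in a language-agnostic way: a running loop keeps calling whatever callback is currently installed, and evaluating new source swaps that behaviour in while the process keeps running. Every name here is invented for illustration and says nothing about Extempore's actual API.

    # Minimal illustration of the live-programming idea (not Extempore itself):
    # a running loop calls the current callback, and redefining the callback at
    # runtime changes behaviour without stopping or recompiling the program.
    import threading
    import time

    state = {"tick": lambda t: print(f"t={t}: silence")}

    def run_loop():
        """Simulated real-time process: calls the current callback every 0.5 s."""
        for t in range(10):
            state["tick"](t)   # behaviour is whatever 'tick' currently is
            time.sleep(0.5)

    def live_redefine(src):
        """Stand-in for a live system's editor-to-runtime path: evaluate new
        source and swap it into the running process."""
        scope = {}
        exec(src, scope)
        state["tick"] = scope["tick"]

    threading.Thread(target=run_loop).start()
    time.sleep(2)  # ...the programmer keeps editing while the loop runs...
    live_redefine("def tick(t):\n    print(f't={t}: 440 Hz sine')\n")

    A real live programming system must also address exactly the issues the thesis raises, such as timing guarantees and efficient native code generation, which this toy sketch ignores.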

    between Bound and Abzû

    Get PDF
    Videogames, as an audio-visual medium that makes use of presentation and visual techniques mainly linked to cinema, are distinguished by their focus on interactivity and the relationship between media and user. Interaction is key not only for the image itself but also for the music that accompanies it, and the soundtrack of a videogame only exists if there is an agent that controls the universe, allowing its audition and perception. Over the last decade, however, mainstream videogames have increasingly converged with the visual characteristics of film in terms of image and what is present on the screen; videogames increasingly aim to be more cinematic. The absence or reduction of informative elements on the screen, advances in graphic quality and design, and notions of spatiality and open environments are being integrated and invested in not only by big companies but also by independent studios. Through two case studies, Bound (Plastic Studios 2016) and Abzû (Giant Squid 2016), this paper examines the role of cinematicability and its use as a narrative tool, where music builds an ergodic process of communication, meaning and interactivity. The soundtrack, the game mechanics and the cinematic compose an interactive musical experience in which the user is, at the same time, the interactive and performative agent in the narrative universe.

    What do Collaborations with the Arts Have to Say About Human-Robot Interaction?

    Get PDF
    This is a collection of papers presented at the workshop "What Do Collaborations with the Arts Have to Say About HRI?", held at the 2010 Human-Robot Interaction Conference in Osaka, Japan.

    Designing Sound for Social Robots: Advancing Professional Practice through Design Principles

    Full text link
    Sound is one of the core modalities social robots can use to communicate with the humans around them in rich, engaging, and effective ways. While a robot's auditory communication happens predominantly through speech, a growing body of work demonstrates the various ways non-verbal robot sound can affect humans, and researchers have begun to formulate design recommendations that encourage using the medium to its full potential. However, formal strategies for successful robot sound design have so far not emerged; current frameworks and principles are largely untested, and no effort has been made to survey creative robot sound design practice. In this dissertation, I combine creative practice, expert interviews, and human-robot interaction studies to advance our understanding of how designers can best ideate, create, and implement robot sound. In a first step, I map out a design space that combines established sound design frameworks with insights from interviews with robot sound design experts. I then systematically traverse this space across three robot sound design explorations, investigating (i) the effect of artificial movement sound on how robots are perceived, (ii) the benefits of applying compositional theory to robot sound design, and (iii) the role and potential of spatially distributed robot sound. Finally, I implement the designs from prior chapters on the humanoid robot Diamandini and deploy it as a case study. Based on a synthesis of the data collection and design practice conducted across the thesis, I argue that the creation of robot sound is best guided by four design perspectives: fiction (sound as a means to convey a narrative), composition (sound as its own separate listening experience), plasticity (sound as something that can vary and adapt over time), and space (spatial distribution of sound as a separate communication channel). The conclusion of the thesis presents these four perspectives and proposes eleven design principles across them, supported by detailed examples. This work contributes an extensive body of design principles, process models, and techniques, providing researchers and designers with new tools to enrich the way robots communicate with humans.
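
    As one hedged illustration of the first of these explorations, artificial movement sound, the Python sketch below maps a robot joint's angular velocity to simple amplitude and pitch parameters. The dissertation's actual designs, robot interfaces and parameter mappings are not specified in the abstract; every function name and constant below is an assumption made for illustration only.

    # Hypothetical mapping from joint velocity to a motor-like sound layer.
    def movement_sound_params(joint_velocity_rad_s, max_velocity_rad_s=2.0):
        """Return (amplitude, pitch_hz): faster motion gives a louder,
        higher-pitched sound, clipped to a safe range."""
        speed = min(abs(joint_velocity_rad_s) / max_velocity_rad_s, 1.0)
        amplitude = 0.1 + 0.6 * speed      # keep a quiet floor, avoid clipping
        pitch_hz = 180.0 + 400.0 * speed   # sweep a motor-like register
        return amplitude, pitch_hz

    # Sonify a short velocity trajectory sampled from a hypothetical joint.
    for v in [0.0, 0.4, 1.1, 1.8, 1.2, 0.3, 0.0]:
        amp, hz = movement_sound_params(v)
        print(f"velocity={v:4.1f} rad/s -> amplitude={amp:.2f}, pitch={hz:6.1f} Hz")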

    iCanLearn: A Mobile Application for Creating Flashcards and Social Stories™ for Children with Autism

    Get PDF
    The number of children being diagnosed with Autism Spectrum Disorder (ASD) is on the rise, presenting new challenges for their parents and teachers to overcome. At the same time, mobile computing has been seeping its way into every aspect of our lives in the form of smartphones and tablet computers. It seems only natural to harness the unique medium these devices provide and use it in treatment and intervention for children with autism. This thesis discusses and evaluates iCanLearn, an iOS flashcard app with enough versatility to construct Social Stories™. iCanLearn provides an engaging, individualized learning experience to children with autism on a single device, but the most powerful way to use iCanLearn is by connecting two or more devices together in a teacher-learner relationship. The evaluation results are presented at the end of the thesis.

    Tangible user interfaces and social interaction in children with autism

    Get PDF
    Tangible User Interfaces (TUIs) offer the potential for new modes of social interaction for children with Autism Spectrum Conditions (ASC). Familiar objects embedded with digital technology may help children with autism understand the actions of others by providing feedback that is logical and predictable. Objects that move, play back sound or create sound, thus repeating programmed effects, offer an exciting way for children to investigate objects and their effects. This thesis presents three studies of children with autism interacting with objects augmented with digital technology. Study one looked at Topobo, a construction toy augmented with kinetic memory. Children played with Topobo in groups of three, composed of either Typically Developing (TD) or ASC children. The children were given a construction task, and were also allowed to play with the construction sets with no task. In the task condition, Topobo showed an overall significant effect of more onlooker, cooperative and parallel behaviour, and less solitary behaviour. For ASC children, significantly less solitary and more parallel behaviour was recorded than in other play states. In study two, an Augmented Knights Castle (AKC) playset was presented to children with ASC. The task condition was extended to allow children to configure the playset with sound. In a small sample, a significant effect was found for configuration of the AKC, leading to less solitary and more cooperative behaviour. Compared to non-digital play, the AKC showed a reduction of solitary behaviour because of the augmentation. Qualitative analysis showed further differences in learning phase, user content, behaviour oriented to other children, and system responsiveness. Study three, using tangible musical blocks ('d-touch'), focused on the task. TD and ASC children were presented with a guided or non-guided task in pairs, to isolate the effects of augmentation. Significant effects were found for an increase in cooperative symbolic play in the guided condition, and more solitary functional play was found in the unguided condition. Qualitative analysis highlighted differences in understanding the blocks and their representation, exploratory and expressive play, understanding of shared space, and understanding of the system. These studies suggest that the structure of the task conducted with TUIs may be an important factor in children's use. When the task is undefined, play tends to lose structure and the benefits of TUIs decline. Tangible technology needs to be used in an appropriately structured manner with close coupling (a short distance between the digital housing and the digital effect), and works best when objects are presented in familiar form.

    Towards a digitally conceived physical performance object

    Get PDF
    Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2007. Includes bibliographical references (p. 122-126). In the performing arts, the relationship established between what is seen and what is heard must be experienced to fully appreciate and understand the aesthetics of performance. Actual physical objects such as musical instruments, lights, elements of the set, props, and people provide the visual associations and a tangible reality that can enhance the musical elements of a performance. This thesis proposes that new and artistic physical objects can, in themselves, be designed to perform. It introduces the Chandelier: a kinetic sculpture, a central set piece for a new opera, a new kind of musical instrument, and an object that performs. The piece moves and changes shape through mechanical action and the designed interplay between surfaces and light. It is intended to be interacted with by the musicians and players of the opera. This thesis also explores the design process and evolution of the Chandelier, with the primary objective of realizing a constructible, physical performance object through an authentic and abstruse digital conception. It is a conception not of a static nature, but one that incorporates a dynamic sense of changeable form through coordinated elements of light, mechanics, and sculpture.