Musical Robots For Children With ASD Using A Client-Server Architecture
Presented at the 22nd International Conference on Auditory Display (ICAD-2016). People with Autistic Spectrum Disorders (ASD) are known to have difficulty recognizing and expressing emotions, which affects their social integration. Leveraging and integrating recent advances in interactive robots and music therapy, we have designed musical robots that can facilitate social and emotional interactions of children with ASD. The robots communicate with children with ASD while detecting their emotional states and physical activities, and then generate real-time sonification based on the interaction data. Since we envision the use of multiple robots with children, we have adopted a client-server architecture: each robot and sensing device acts as a terminal, while the sonification server processes all the data and generates harmonized sonification. After describing our goals for the use of sonification, we detail the system architecture and ongoing research scenarios. We believe that the present paper offers a new perspective on the application of sonification to assistive technologies.
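The terminal/server split described in the abstract can be sketched roughly as follows. All names, the emotion-to-mode table, and the tempo formula are illustrative assumptions, not taken from the paper: terminals push readings to a central server, which derives one shared parameter set so the sonification stays harmonized across robots.

```python
from dataclasses import dataclass

# Hypothetical message format: each robot or sensing terminal reports a
# detected emotional state and a normalized physical-activity level.
@dataclass
class TerminalReading:
    terminal_id: str
    emotion: str        # e.g. "happy", "calm", "distressed"
    activity: float     # 0.0 - 1.0

class SonificationServer:
    """Aggregates readings from all terminals and derives one shared
    set of sonification parameters (sketch, not the paper's design)."""

    # Illustrative emotion-to-mode mapping.
    MODES = {"happy": "major", "calm": "pentatonic", "distressed": "minor"}

    def __init__(self):
        self.latest = {}  # terminal_id -> most recent reading

    def receive(self, reading: TerminalReading):
        self.latest[reading.terminal_id] = reading

    def parameters(self):
        if not self.latest:
            return {"tempo_bpm": 80, "mode": "pentatonic"}
        readings = list(self.latest.values())
        mean_activity = sum(r.activity for r in readings) / len(readings)
        # Choose the mode reported by the most terminals.
        emotions = [r.emotion for r in readings]
        dominant = max(set(emotions), key=emotions.count)
        return {
            "tempo_bpm": round(60 + 80 * mean_activity),  # 60-140 BPM
            "mode": self.MODES.get(dominant, "pentatonic"),
        }

server = SonificationServer()
server.receive(TerminalReading("robot-1", "happy", 0.8))
server.receive(TerminalReading("robot-2", "happy", 0.4))
print(server.parameters())  # {'tempo_bpm': 108, 'mode': 'major'}
```

Keeping only the latest reading per terminal means a slow or dropped terminal cannot stall the shared output, which is one common reason to centralize the mapping step in such an architecture.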
Tele-media-art: web-based inclusive teaching of body expression
International conference held in Olhão, Algarve, 26-28 April 2018. The Tele-Media-Art project aims to improve online distance learning and artistic teaching, applied to two test scenarios: the doctorate in digital art-media and the lifelong learning course "the experience of diversity". It exploits multimodal telepresence facilities encompassing diverse visual, auditory and sensory channels, as well as rich forms of gestural/body interaction. To this end, a telepresence system was developed and installed at Palácio Ceia in Lisbon, Portugal, headquarters of the Portuguese Open University, from which methodologies of artistic teaching in a mixed regime, face-to-face and online distance, can be offered that are inclusive to blind and partially sighted students. This system has already been tested with a group of subjects, including blind people. Although positive results were achieved, more development and further tests will be carried out in the future. This project was financed by the Calouste Gulbenkian Foundation under grant number 142793.
Safe and Sound: Proceedings of the 27th Annual International Conference on Auditory Display
Complete proceedings of the 27th International Conference on Auditory Display (ICAD2022), held as an online virtual conference, June 24-27, 2022.
A Networked Hybrid Interface for Audience Sonification and Machine Learning
Lick the Toad is an ongoing project developed as a web-based interface that runs in modern browsers. It provides a custom-made platform to collect user data from mobile devices such as smartphones and tablets. The system offers a tool for interactive collective sonification supporting networked music performance. It can be used in various contexts, such as an on-site installation or for the distribution of raw data to live coding performances, making it a versatile component for an array of creative practices. Of these, live coding, one of the author's artistic approaches to creating live performances, is demonstrated in this article, which highlights and elaborates on the technical and musical aspects of the approach. The final sections outline the system as a tool for live coding performances and cover a series of potential interactions, whether integrating the audience or using the system independently.
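As a rough illustration of the kind of collective mobile-data sonification described (the scale, the mapping, and the function name are hypothetical assumptions, not taken from the project), touch coordinates collected from an audience member's phone could be quantized to notes of a shared scale so that many participants' raw data combine musically:

```python
# Semitone offsets of a major-pentatonic scale within one octave.
PENTATONIC = [0, 2, 4, 7, 9]

def touch_to_midi(x: float, y: float, base_note: int = 48) -> int:
    """Map normalized screen coordinates (0.0-1.0) to a MIDI note:
    y selects the octave, x selects the scale degree.
    Illustrative sketch only, not Lick the Toad's actual mapping."""
    octave = min(int(y * 3), 2)              # three octaves, clamped
    degree = PENTATONIC[min(int(x * 5), 4)]  # five degrees, clamped
    return base_note + 12 * octave + degree

print(touch_to_midi(0.0, 0.0))  # 48 (C3, lowest degree)
print(touch_to_midi(0.9, 0.5))  # 69 (A4)
```

Quantizing to a fixed scale is a common design choice in audience sonification: it guarantees the aggregate output stays consonant no matter how many uncoordinated participants send data.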
SONEX: An Evaluation Exchange Framework for Reproducible Sonification
Degara N, Nagel F, Hermann T. SONEX: An Evaluation Exchange Framework for Reproducible Sonification. In: Strumiłło P, Bujacz M, Popielata M, eds. Proceedings of the 19th International Conference on Auditory Displays. Lodz, Poland: Lodz University of Technology Press; 2013: 167-174. After 18 ICAD conferences, Auditory Display has become a mature research community. However, robust evaluation and scientific comparison of sonification methods are often neglected by auditory display researchers. At the last ICAD 2012 conference, only one paper out of 53 made a statistical comparison of several sonification methods, and still no comparison with other state-of-the-art algorithms was provided. In this paper, we review established standards in other communities and transfer them to derive recommendations and best practices for auditory display research. We describe SonEX (Sonification Evaluation eXchange), a community-based framework for the formal evaluation of sonification methods. The goals, challenges and architecture of this evaluation platform are discussed. In addition, a simple example of a task definition following the SonEX guidelines is introduced. This paper aims to start a vivid discussion towards the establishment of thorough scientific methodologies for auditory display research and the definition of standardized sonification tasks.
Sonification of Network Traffic Flow for Monitoring and Situational Awareness
Maintaining situational awareness of what is happening within a network is challenging, not least because the behaviour happens within computers and communications networks, but also because data traffic speeds and volumes are beyond human ability to process. Visualisation is widely used to present information about the dynamics of network traffic. Although it provides operators with an overall view and specific information about particular traffic or attacks on the network, it often fails to represent the events in an understandable way. Visualisations require visual attention and so are not well suited to continuous monitoring scenarios in which network administrators must carry out other tasks. Situational awareness is critical and essential for decision-making in the domain of computer network monitoring, where it is vital to be able to identify and recognise network environment behaviours. Here we present SoNSTAR (Sonification of Networks for SiTuational AwaReness), a real-time sonification system to be used in the monitoring of computer networks to support the situational awareness of network administrators. SoNSTAR provides an auditory representation of all the TCP/IP protocol traffic within a network based on the different traffic flows between network hosts. SoNSTAR raises situational awareness levels for computer network defence by allowing operators to achieve better understanding and performance while imposing less workload compared to visual techniques. SoNSTAR identifies the features of network traffic flows by inspecting the status flags of TCP/IP packet headers and mapping traffic events to recorded sounds to generate a soundscape representing the real-time status of the network traffic environment. Listening to the soundscape allows the administrator to recognise anomalous behaviour quickly and without having to continuously watch a computer screen. Comment: 17 pages, 7 figures, plus supplemental material in a GitHub repository.
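The flag-inspection idea can be sketched as follows. The flag bit values are the standard TCP ones (RFC 793), but the event-to-sound table and the sample file names are illustrative assumptions, not SoNSTAR's actual mappings:

```python
# TCP flag bit positions within the header's flag byte (RFC 793).
TCP_FLAGS = {"FIN": 0x01, "SYN": 0x02, "RST": 0x04,
             "PSH": 0x08, "ACK": 0x10, "URG": 0x20}

# Illustrative mapping from flag combinations (traffic events) to
# recorded natural sounds that build the soundscape.
SOUND_EVENTS = {
    frozenset({"SYN"}): "water_drop.wav",         # new connection attempt
    frozenset({"SYN", "ACK"}): "bird_chirp.wav",  # handshake reply
    frozenset({"FIN", "ACK"}): "leaf_rustle.wav", # orderly close
    frozenset({"RST"}): "twig_snap.wav",          # abrupt reset
}

def flags_of(flag_byte: int) -> frozenset:
    """Decode the set flag names from a packet's flag byte."""
    return frozenset(name for name, bit in TCP_FLAGS.items()
                     if flag_byte & bit)

def sound_for(flag_byte: int) -> str:
    """Map a packet's TCP flags to a soundscape sample, or "" (silence)
    for unmapped combinations such as ordinary data segments."""
    return SOUND_EVENTS.get(flags_of(flag_byte), "")

print(sound_for(0x02))  # SYN       -> water_drop.wav
print(sound_for(0x12))  # SYN+ACK   -> bird_chirp.wav
```

Leaving routine combinations such as PSH+ACK silent (or mapped to a quiet ambient layer) is what keeps anomalies, e.g. a flood of bare SYNs or RSTs, audible as a sudden change in the soundscape's texture.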