264 research outputs found

    Perceptual sound field synthesis concept for music presentation

    A perceptual sound field synthesis approach for music is presented. Its signal processing implements critical bands, the precedence effect and integration times of the auditory system by technical means, as well as the radiation characteristics of musical instruments. Furthermore, interaural coherence, masking and auditory scene analysis principles are considered. As a result, the conceptualized sound field synthesis system creates a natural, spatial sound impression for listeners in an extended listening area, even with a low number of loudspeakers. A novel technique, the “precedence fade”, together with the interaural cues provided by the sound field synthesis approach, allows for precise and robust localization. Simulations and a listening test provide a proof of concept. The method is particularly robust for signals with impulsive attacks and long quasi-stationary phases, as is the case for many instrumental sounds. It is compatible with many loudspeaker setups, from 5.1 and 22.2 to ambisonics systems and loudspeaker arrays for wave front synthesis. The perceptual sound field synthesis approach is an alternative both to physically centered wave field synthesis concepts and to conventional, perceptually motivated stereophonic sound, and benefits from both paradigms.
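The “precedence fade” itself is not specified in the abstract, but the precedence effect it exploits is standard: when the same signal reaches the listener from two loudspeakers a few milliseconds apart, localization locks to the leading one. A minimal sketch of applying such an inter-speaker delay (the function name and 2 ms default are illustrative, not taken from the paper):

```python
import numpy as np

def apply_precedence_delay(signal, delay_ms=2.0, sample_rate=48000):
    """Delay a secondary loudspeaker feed relative to the leading one.

    With lead/lag delays of roughly 1-30 ms, the auditory system
    localizes the source at the leading loudspeaker (precedence effect).
    Both outputs are zero-padded to equal length.
    """
    delay_samples = int(round(delay_ms * sample_rate / 1000.0))
    lagging = np.concatenate([np.zeros(delay_samples), signal])
    leading = np.concatenate([signal, np.zeros(delay_samples)])
    return leading, lagging

# Example: a short click (an impulsive attack, as favoured by the method)
click = np.zeros(480)
click[0] = 1.0
lead, lag = apply_precedence_delay(click, delay_ms=2.0)
```

At 48 kHz, the 2 ms delay shifts the lagging feed by 96 samples, which is enough for localization to follow the leading loudspeaker without audible echo.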

    Multi-Listener Auditory Displays

    This thesis investigates how team-working principles can be applied to Auditory Displays (AD). During this work it was established that the level of collaboration and team work within the AD community was low, and that the community would benefit from an enhanced collaborative approach. Increased use of collaborative techniques will benefit the AD community by improving quality, knowledge transfer, synergy, and innovation. The reader is introduced to a novel approach to collaborative AD entitled Multi-Listener Auditory Displays (MLAD). This work focused upon two areas of MLAD: distributed AD teams and virtual AD teams. A distributed AD team is a team of participants who work upon a common task at different times and in different locations. The distributed approach was found to work effectively when designing ADs for large-scale data sets such as those found in big data. A virtual AD team is a group of participants who work upon a common task simultaneously but in separate locations, assisted by computer technology such as video conferencing and email. The virtual AD team approach was found to work well by enabling a geographically spread team to work together more effectively. Two pilot studies are included: SonicSETI, an example of a distributed AD team, in which a remote group of listeners have background white noise playing and use passive listening to detect anomalous candidate signals; and a geographically diverse virtual AD team that collaborates through electronic technology on an auditory display which sonifies a database of red wine measurements. A workshop focused upon ensemble auditory displays was also organised at a conference with a group of co-located participants.

    Safe and Sound: Proceedings of the 27th Annual International Conference on Auditory Display

    Complete proceedings of the 27th International Conference on Auditory Display (ICAD2022), June 24-27, 2022, held as an online virtual conference.

    Gait sonification for rehabilitation: adjusting gait patterns by acoustic transformation of kinematic data

    Auditory feedback has emerged as an effective tool for enhancing motor learning in both sport and rehabilitation. Since it requires less attention than visual feedback and hardly affects the visually dominated orientation in space, it can be used safely and effectively during natural locomotion such as walking. One method for generating acoustic movement feedback is the direct mapping of kinematic data to sound (movement sonification). Using this method in orthopedic gait rehabilitation could make an important contribution to the prevention of falls and secondary diseases, reducing not only the individual suffering of patients but also medical treatment costs. To determine the possible applications of movement sonification in gait rehabilitation, a new gait sonification method based on inertial sensor technology was developed for this work. Against the background of current scientific findings on sensorimotor function, feedback methods, and gait analysis, three studies published in scientific journals are presented in this thesis. The first study shows the applicability and acceptance of the feedback method in patients undergoing inpatient rehabilitation after unilateral total hip arthroplasty, and reveals the direct effect of gait sonification on the patients’ gait pattern during ten gait training sessions. The second study examines the immediate after-effect of gait sonification on the kinematics of the same patient group at four measurement points after gait training. In this context, a significant influence of sonification on the patients’ gait pattern was shown, which, however, did not match the expected effects. In view of this finding, a third study analyzed the effect of one specific sound parameter of gait sonification, loudness, on the gait of healthy persons. An effect of asymmetric loudness on ground contact time was detected. 
Considering this cause-effect relationship can be one component in improving gait sonification in rehabilitation. Overall, the feasibility and effectiveness of movement sonification in the gait rehabilitation of patients after unilateral hip arthroplasty becomes evident. The findings thus illustrate the potential of the method to efficiently support orthopedic gait rehabilitation in the future. On the basis of the results presented, this potential can be exploited in particular by an adequate mapping of movement to sound, a systematic modification of selected sound parameters, and a target-group-specific selection of the gait sonification mode. In addition to a detailed investigation of these three factors, an optimization and refinement of gait analysis in patients after arthroplasty using inertial sensor technology will be beneficial in the future.
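The abstract describes movement sonification as a direct mapping of kinematic data to sound but does not give the thesis’ actual mapping. A hypothetical sketch of one such mapping, joint angle to pitch (the angle and frequency ranges are assumptions for illustration, not values from the thesis):

```python
def angle_to_pitch(angle_deg, angle_range=(-30.0, 60.0), freq_range=(220.0, 880.0)):
    """Linearly map a joint angle (degrees) to an output frequency (Hz).

    Hypothetical parameter mapping: the angle is clamped to angle_range
    and interpolated linearly into freq_range.
    """
    a_lo, a_hi = angle_range
    f_lo, f_hi = freq_range
    # Normalize to [0, 1] with clamping, then interpolate
    t = min(max((angle_deg - a_lo) / (a_hi - a_lo), 0.0), 1.0)
    return f_lo + t * (f_hi - f_lo)
```

Streaming inertial-sensor angles through a mapping like this, sample by sample, yields the continuous real-time acoustic feedback the studies rely on.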

    Spatial auditory display for acoustics and music collections

    This thesis explores how audio can be better incorporated into how people access information, and does so by developing approaches for creating three-dimensional audio environments with low processing demands. This is done by investigating three research questions. Mobile applications have processor and memory requirements that restrict the number of concurrent static or moving sound sources that can be rendered with binaural audio. Is there a more efficient approach that is as perceptually accurate as the traditional method? This thesis concludes that virtual Ambisonics is an efficient and accurate means to render a binaural auditory display consisting of noise signals placed on the horizontal plane without head tracking. Virtual Ambisonics is then more efficient than convolution of HRTFs if more than two sound sources are concurrently rendered, or if movement of the sources or head tracking is implemented. Complex acoustics models require significant amounts of memory and processing. If the memory and processor loads for a model are too large for a particular device, that model cannot be interactive in real time. What steps can be taken to allow a complex room model to be interactive by using less memory and decreasing the computational load? This thesis presents a new reverberation model based on hybrid reverberation which uses a collection of B-format IRs. A new metric for determining the mixing time of a room is developed, and interpolation between early reflections is investigated. Though hybrid reverberation typically uses a recursive filter such as an FDN for the late reverberation, an average late reverberation tail is instead synthesised for convolution reverberation. Commercial interfaces for music search and discovery use little aural information even though the information being sought is audio. How can audio be used in interfaces for music search and discovery? 
This thesis looks at 20 interfaces and determines that several themes emerge from past interfaces. These include using a two- or three-dimensional space to explore a music collection, allowing concurrent playback of multiple sources, and tools such as auras to control how much information is presented. A new interface, the amblr, is developed because virtual two-dimensional spaces populated by music have been a common approach, but not yet a perfected one. The amblr is also interpreted as an art installation, which was visited by approximately 1000 people over 5 days. The installation maps the virtual space created by the amblr to a physical space.
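The efficiency argument for virtual Ambisonics can be made concrete: each source is encoded into a shared first-order B-format mix with a few multiplies, while the binaural decode of that mix has a fixed cost regardless of source count, so per-source HRTF convolution loses out beyond roughly two sources. A sketch of the standard first-order horizontal (W, X, Y) encoding equations; the thesis’ actual renderer is not reproduced here:

```python
import math

def encode_fo_bformat(sources):
    """Encode (sample, azimuth_radians) pairs into one first-order
    horizontal B-format mix (W, X, Y).

    Standard FuMa-style weights: W = s / sqrt(2), X = s*cos(az),
    Y = s*sin(az). Mixing cost grows with source count, but the
    binaural decode of the summed mix is fixed-cost.
    """
    W = X = Y = 0.0
    for s, azimuth in sources:
        W += s / math.sqrt(2.0)
        X += s * math.cos(azimuth)
        Y += s * math.sin(azimuth)
    return W, X, Y
```

Adding a tenth source adds only three multiply-accumulates per sample; a per-source HRTF approach would instead add two more full convolutions.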

    Taux: a system for evaluating sound feedback in navigational tasks

    This thesis presents the design and development of an evaluation system for generating audio displays that provide feedback to persons performing navigation tasks. It first establishes the need for such a system by describing existing wayfinding solutions, investigating new electronic location-based methods that have the potential to change these solutions, and examining research on relevant audio information representation techniques. An evaluation system that supports the manipulation of two basic classes of audio display is then described. Based on prior work on wayfinding with audio displays, research questions are developed that investigate the viability of different audio displays. These are used to generate hypotheses and to develop an experiment which evaluates four variations of audio display for wayfinding. Questions are also formulated to evaluate a baseline condition that utilizes visual feedback. An experiment which tests these hypotheses on sighted users is then described. Results from the experiment suggest that spatial audio combined with spoken hints is the best of the spatial audio approaches compared. The results also suggest that muting a varying audio signal while a subject is on course does not improve performance. The system and method are then refined, and a second experiment is conducted with improved displays and an improved experimental methodology. After adding blindfolds for sighted subjects and increasing the difficulty of the navigation tasks by reducing the arrival radius, similar results were observed. Overall, the two experiments demonstrate the viability of the prototyping tool for testing and refining multiple audio display combinations for navigational tasks. The detailed contributions of this work and future research opportunities conclude the thesis.
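A spatial audio display for wayfinding needs, at a minimum, the target’s bearing relative to the user’s current heading to drive the panner. A minimal sketch of that geometry (a hypothetical helper, not code from Taux):

```python
import math

def beacon_azimuth(user_xy, heading_deg, target_xy):
    """Angle of the target relative to the user's facing direction,
    in degrees, wrapped to [-180, 180); 0 means straight ahead,
    positive means to the right. Suitable as input to a spatial
    audio panner for a navigation beacon."""
    dx = target_xy[0] - user_xy[0]
    dy = target_xy[1] - user_xy[1]
    # Compass-style bearing: 0 deg points along +y (north)
    bearing = math.degrees(math.atan2(dx, dy))
    return (bearing - heading_deg + 180.0) % 360.0 - 180.0
```

Muting or attenuating the beacon when `abs(beacon_azimuth(...))` falls below a small threshold corresponds to the on-course muting condition evaluated in the experiments.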

    Developing an interactive overview for non-visual exploration of tabular numerical information

    This thesis investigates the problem of obtaining overview information from complex tabular numerical data sets non-visually. Blind and visually impaired people need to access and analyse numerical data, both in education and in professional occupations. Obtaining an overview is a necessary first step in data analysis, for which current non-visual data accessibility methods offer little support. This thesis describes a new interactive parametric sonification technique called High-Density Sonification (HDS), which facilitates the process of extracting overview information from the data easily and efficiently by rendering multiple data points as single auditory events. Beyond obtaining an overview of the data, experimental studies showed that the capabilities of human auditory perception and cognition to extract meaning from HDS representations could be used to reliably estimate relative arithmetic mean values within large tabular data sets. Following a user-centred design methodology, HDS was implemented as the primary form of overview information display in a multimodal interface called TableVis. This interface supports the active process of interactive data exploration non-visually, making use of proprioception to maintain contextual information during exploration (non-visual focus+context), vibrotactile data annotations (EMA-Tactons) that can be used as external memory aids to prevent high mental workload levels, and speech synthesis to access detailed information on demand. A series of empirical studies was conducted to quantify the performance attained in the exploration of tabular data sets for overview information using TableVis. 
This was done by comparing HDS with the main current non-visual accessibility technique (speech synthesis) and by quantifying the effect of different sizes of data sets on user performance. HDS resulted in better performance than speech, and this performance was not heavily dependent on the size of the data set. In addition, levels of subjective workload during exploration tasks using TableVis were investigated, resulting in the proposal of EMA-Tactons, vibrotactile annotations that the user can add to the data in order to prevent working-memory saturation in the most demanding data exploration scenarios. An experimental evaluation found that EMA-Tactons significantly reduced mental workload in data exploration tasks. Thus, the work described in this thesis provides a basis for the interactive non-visual exploration of numerical data tables across a broad range of sizes: it offers techniques to extract overview information quickly, to perform perceptual estimations of data descriptors (relative arithmetic mean), and to manage demands on mental workload through vibrotactile data annotations, while seamlessly linking with explorations at different levels of detail and preserving spatial data representation metaphors to support collaboration with sighted users.
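HDS renders multiple data points as a single auditory event, which is what lets listeners estimate relative arithmetic means across rows. A simplified sketch of that idea, one pitch per table row tracking the row mean (the frequency range is illustrative; the actual TableVis mappings may differ):

```python
def hds_row_events(table, f_lo=200.0, f_hi=1000.0):
    """Render each row of a numeric table as one auditory event whose
    pitch tracks the row's arithmetic mean relative to the other rows.

    Simplified reading of High-Density Sonification: many cells
    collapse into a single comparable percept per row.
    """
    means = [sum(row) / len(row) for row in table]
    m_lo, m_hi = min(means), max(means)
    span = (m_hi - m_lo) or 1.0  # avoid division by zero for flat tables
    return [f_lo + (m - m_lo) / span * (f_hi - f_lo) for m in means]
```

Sweeping a finger across rows and hearing one event per row is far faster than listening to every cell via speech synthesis, which matches the reported performance advantage of HDS for overview tasks.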

    Creating a real-time movement sonification system for hemiparetic upper limb rehabilitation for survivors of stroke

    Upper limb paresis is a common problem for survivors of stroke, impeding their ability to live independently, and rehabilitation interventions to reduce impairment are highly sought after. Audio-based interventions such as movement sonification may improve rehabilitation outcomes in this application; however, they remain relatively unexplored considering the potential of audio feedback to enhance motor skill learning. Movement sonification is the process of converting movement-associated data to the auditory domain, and it has been proposed as a feasible and effective method for stroke survivors to obtain real-time audio feedback on their movements. To generate real-time audio feedback through movement sonification, a system is required to capture movements, process the data, extract the physical domain of interest, convert it to the auditory domain, and emit the generated audio. No commercial system performs this process for gross upper limb movements, so a new system had to be created. To begin this process, a mapping review of movement sonification systems in the literature was completed. System components reported in the literature were identified, keyword-coded, and grouped to provide an overview of the components used within these systems. From these results, components for new movement sonification systems were chosen based on popularity and applicability, leading to two systems: ‘Soniccup’, which uses an Inertial Measurement Unit, and ‘KinectSon’, which uses an Azure Kinect camera. Both systems were set up to translate position estimates into audio pitch as the output of the sonification process. Both systems were then used in a comparison study with a Vicon Nexus system to establish similarity of positional shape, and therefore similarity of audio output. 
The results indicate that the Soniccup produced a positional shape representative of the movement performed for movements under one second in duration, but performance degraded as movement duration increased. In addition, the Soniccup ran with a system latency of approximately 230 ms, which is beyond the limit of real-time perception. The KinectSon system produced positional shapes similar to the Vicon Nexus system for all movements, with a system latency of approximately 67 ms, which is within the limit of real-time perception. As such, the KinectSon system was judged a good candidate for generating real-time audio feedback, although further testing was required to establish the suitability of the generated audio. To evaluate the feedback as part of usability testing, the KinectSon system was used in an agency study. Volunteers with and without upper-limb impairment performed reaching movements whilst using the KinectSon system and reported the perceived association between the sound generated and the movements performed. For three of the four sonification conditions, a triangular-wave pitch-modulation component was added to distort the sound. Participants associated their movements more strongly with the unmodulated sonification condition than with the modulated conditions, indicating that stroke survivors are able to use the KinectSon system and obtain a sense of agency whilst using it. 
The thesis concludes with a discussion of the findings of the contributing chapters, along with the implications, limitations, and identified future work, within the context of creating a suitable real-time movement sonification system for a large-scale study involving an upper limb rehabilitation intervention.
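Both Soniccup and KinectSon translate position estimates into audio pitch. A hypothetical sketch of such a mapping, using geometric frequency spacing so that equal displacements give equal musical intervals (the position and frequency ranges are assumptions, not the systems’ actual parameters):

```python
def position_to_pitch(y, y_range=(0.0, 0.6), f_lo=220.0, f_hi=880.0):
    """Map a vertical hand position (metres) to a pitch frequency (Hz).

    The position is clamped to y_range, then mapped geometrically so
    each equal step in position produces an equal musical interval.
    """
    y_lo, y_hi = y_range
    t = min(max((y - y_lo) / (y_hi - y_lo), 0.0), 1.0)
    return f_lo * (f_hi / f_lo) ** t
```

With these illustrative values, raising the hand from 0 m to 0.6 m sweeps the pitch smoothly across two octaves; the agency study’s modulated conditions would correspond to adding a triangular-wave deviation on top of this pitch trajectory.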

    Physical contraptions as social interaction catalysts
