
    Sonification of probabilistic feedback through granular synthesis

    We describe a method to improve user feedback, specifically the display of time-varying probabilistic information, through asynchronous granular synthesis. We have applied these techniques to challenging control problems as well as to the sonification of online probabilistic gesture recognition. We are using these displays in mobile, gestural interfaces, where visual display is often impractical.
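    The core idea above can be sketched in a few lines: in asynchronous granular synthesis, grain density is a free parameter, so a time-varying probability can drive the grain rate. The function below is a minimal illustration (the parameter names and rate scaling are assumptions, not the authors' implementation), drawing grain onsets from a Bernoulli approximation of an inhomogeneous Poisson process.

    ```python
    import random

    def grain_onsets(prob_curve, dt=0.01, max_rate=80.0, seed=42):
        """Schedule grain onset times for an asynchronous granular stream.
        prob_curve: probabilities in [0, 1], one sample every dt seconds.
        Grain density (grains/second) scales with the probability, so a
        more certain recognition result is heard as a denser grain cloud."""
        rng = random.Random(seed)
        onsets = []
        for i, p in enumerate(prob_curve):
            # Bernoulli approximation of an inhomogeneous Poisson process
            if rng.random() < max_rate * p * dt:
                onsets.append(round(i * dt, 4))
        return onsets

    # A confident stream (p = 0.9) yields far more grains than a weak one (p = 0.05)
    sparse = grain_onsets([0.05] * 500)
    dense = grain_onsets([0.9] * 500)
    ```

    In a real system the onset list would trigger short enveloped grains in an audio engine; here it only shows the probability-to-density mapping.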

    Safe and Sound: Proceedings of the 27th Annual International Conference on Auditory Display

    Complete proceedings of the 27th International Conference on Auditory Display (ICAD2022), held as an online virtual conference, June 24-27, 2022.

    3D-Sonification for Obstacle Avoidance in Brownout Conditions

    Helicopter brownout is a phenomenon that occurs when making landing approaches in dusty environments, whereby sand or dust particles become swept up in the rotor outwash. Brownout is characterized by partial or total obscuration of the terrain, which degrades the visual cues necessary for hovering and safe landing. Furthermore, the motion of the dust cloud produced during brownout can lead to the pilot experiencing motion cue anomalies such as vection illusions. In this context, the stability and guidance control functions can be intermittently or continuously degraded, potentially leading to undetected surface hazards and obstacles as well as unnoticed drift. Safe and controlled landing in brownout can be achieved using an integrated presentation of LADAR and RADAR imagery and aircraft state symbology. However, though detected by the LADAR and displayed on the sensor image, small obstacles can be difficult to discern from the background, so that changes in obstacle elevation may go unnoticed. Moreover, the pilot workload associated with tracking the displayed symbology is often so high that the pilot cannot give sufficient attention to the LADAR/RADAR image. This paper documents a simulation evaluating the use of 3D auditory cueing for obstacle avoidance in brownout as a replacement for, or complement to, LADAR/RADAR imagery.
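    The simplest ingredient of a spatial obstacle cue is placing a sound at the obstacle's bearing. As a crude stand-in for true 3D audio (which would use HRTFs rather than stereo panning; this sketch is an assumption, not the paper's method), constant-power panning maps azimuth to left/right gains while keeping loudness constant:

    ```python
    import math

    def pan_gains(azimuth_deg):
        """Constant-power stereo panning: azimuth -90 (hard left) to +90
        (hard right) maps to left/right gains with l**2 + r**2 == 1, so
        the cue's loudness stays constant as the obstacle moves across
        the stereo image."""
        theta = (azimuth_deg + 90.0) / 180.0 * (math.pi / 2.0)
        return math.cos(theta), math.sin(theta)

    left, right = pan_gains(30.0)   # obstacle 30 degrees to the pilot's right
    ```

    A full 3D cue would additionally encode elevation and range (e.g. via spectral filtering and level), which panning alone cannot convey.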

    Real-time hallucination simulation and sonification through user-led development of an iPad augmented reality performance

    The simulation of visual hallucinations has multiple applications. The authors present a new approach to hallucination simulation, initially developed for a performance, that proved to have uses for individuals suffering from certain types of hallucinations. The system, originally developed with a focus on the visual symptoms of palinopsia experienced by the lead author, allows real-time visual expression using augmented reality via an iPad. It also allows the hallucinations to be converted into sound through sonification of the visuals. Although no formal experimentation was conducted, the authors report on a number of unsolicited informal responses to the simulator from palinopsia sufferers and the Palinopsia Foundation.

    A survey on hardware and software solutions for multimodal wearable assistive devices targeting the visually impaired

    The market penetration of user-centric assistive devices has rapidly increased in the past decades. Growth in computational power, accessibility, and cognitive device capabilities has been accompanied by significant reductions in weight, size, and price, as a result of which mobile and wearable equipment are becoming part of our everyday life. In this context, a key focus of development has been on rehabilitation engineering and on developing assistive technologies targeting people with various disabilities, including hearing loss, visual impairments, and others. Applications range from simple health monitoring such as sport activity trackers, through medical applications including sensory (e.g. hearing) aids and real-time monitoring of life functions, to task-oriented tools such as navigational devices for the blind. This paper provides an overview of recent trends in software- and hardware-based signal processing relevant to the development of wearable assistive solutions.

    Modelling and validation of synthesis of poly lactic acid using an alternative energy source through a continuous reactive extrusion process

    PLA is one of the most promising bio-compostable and bio-degradable thermoplastic polymers made from renewable sources. PLA is generally produced by ring-opening polymerization (ROP) of lactide using a metallic/bimetallic catalyst (Sn, Zn, or Al) or other organic catalysts in a suitable solvent. In this work, reactive extrusion experiments using stannous octoate Sn(Oct)2 and triphenylphosphine (PPh3) were conducted to perform ROP of lactide. An ultrasound energy source was used to activate and/or boost the polymerization as an alternative energy (AE) source. Ludovic® software, designed for simulation of the extrusion process, had to be modified in order to simulate the reactive extrusion of lactide and the application of an AE source in an extruder. A mathematical model of the ROP of lactide reaction was developed to estimate the kinetics of the polymerization process. The isothermal curves generated by this model were then used by the Ludovic software to simulate the "reactive" extrusion process of ROP of lactide. Results from the experiments and simulations were compared to validate the simulation methodology. It was observed that the application of an AE source boosts the polymerization of lactide monomers. However, it was also observed that the predicted residence time was shorter than the experimental one. There is potentially a case for reducing the residence time distribution (RTD) in Ludovic® due to the 'liquid' monomer flow in the extruder. Although this change in parameters resulted in validation of the simulation, it was concluded that further research is needed to validate this assumption.
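    The shape of such a kinetic model can be illustrated with a deliberately simplified sketch: pseudo-first-order monomer consumption, dM/dt = -k_eff·M, where a multiplicative boost factor stands in for the rate enhancement from the AE (ultrasound) source. The rate law, the boost factor, and all parameter values here are illustrative assumptions, not the paper's actual model.

    ```python
    import math

    def conversion(k, t_end, boost=1.0, dt=0.001, m0=1.0):
        """Explicit-Euler integration of dM/dt = -k_eff * M with
        k_eff = boost * k, where boost > 1 crudely models the rate
        enhancement from the alternative (ultrasound) energy source.
        Returns fractional monomer conversion X = 1 - M/M0."""
        m = m0
        k_eff = boost * k
        steps = int(round(t_end / dt))
        for _ in range(steps):
            m -= k_eff * m * dt
        return 1.0 - m / m0

    # Without boost, this closely tracks the analytic result 1 - exp(-k*t);
    # a boost factor raises conversion at the same residence time.
    ```

    A reactive-extrusion simulator couples such kinetics to the screw's residence time distribution, which is why an over-predicted RTD translates directly into a mismatch with experimental conversion.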

    Program Comprehension Through Sonification

    Background: Comprehension of computer programs is daunting, thanks in part to clutter in the software developer's visual environment and the need for frequent visual context changes. Non-speech sound has been shown to be useful in understanding the behavior of a program as it is running. Aims: This thesis explores whether using sound to help understand the static structure of programs is viable and advantageous. Method: A novel concept for program sonification is introduced. Non-speech sounds indicate characteristics of and relationships among a Java program's classes, interfaces, and methods. A sound mapping is incorporated into a prototype tool consisting of an extension to the Eclipse integrated development environment communicating with the sound engine Csound. Developers examining source code can aurally explore entities outside of the visual context. A rich body of sound techniques provides expanded representational possibilities. Two studies were conducted. In the first, software professionals participated in exploratory sessions to informally validate the sound mapping concept. The second study was a human-subjects experiment to discover whether using the tool and sound mapping improve performance of software comprehension tasks. Twenty-four software professionals and students performed maintenance-oriented tasks on two Java programs with and without sound. Results: Viability is strong for differentiation and characterization of software entities, less so for identification. The results show no overall advantage of using sound in terms of task duration at a 5% level of significance. The results do, however, suggest that sonification can be advantageous under certain conditions. Conclusions: The use of sound in program comprehension shows sufficient promise for continued research. Limitations of the present research include restriction to particular types of comprehension tasks, a single sound mapping, a single programming language, and limited training time. Future work includes experiments and case studies employing a wider set of comprehension tasks, sound mappings in domains other than software, and adding navigational capability for use by the visually impaired.
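    A sound mapping of the kind described above assigns synthesis parameters to program entities. The sketch below is purely illustrative (the entity kinds, base frequencies, and rules are assumptions, not the thesis's actual mapping or its Csound implementation): entity kind sets base pitch, visibility shifts octave, and entity size drives amplitude.

    ```python
    def entity_sound(kind, visibility="public", n_members=0):
        """Map a Java program entity to simple synthesis parameters.
        Illustrative stand-in for a program-sonification sound mapping:
        kind -> base pitch, public visibility -> octave up,
        member count -> amplitude (clamped to 1.0)."""
        base_hz = {"class": 220.0, "interface": 330.0, "method": 440.0}
        if kind not in base_hz:
            raise ValueError("unknown entity kind: %s" % kind)
        freq = base_hz[kind] * (2.0 if visibility == "public" else 1.0)
        amp = min(1.0, 0.3 + 0.05 * n_members)
        return {"freq_hz": freq, "amp": amp}
    ```

    The design question such a mapping must answer, echoed in the thesis's results, is whether the chosen dimensions let listeners not only differentiate entity kinds but also identify specific entities.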