
    A novel sonification approach to support the diagnosis of Alzheimer's dementia

    Alzheimer’s disease is the most common neurodegenerative form of dementia; it steadily worsens and eventually leads to death. Its symptoms include loss of cognitive function and memory decline. Structural and functional imaging methods such as CT, MRI, and PET scans play an essential role in the diagnosis process, being able to identify specific areas of cerebral damage. While the accuracy of these imaging techniques increases over time, the severity assessment of dementia remains challenging and susceptible to cognitive and perceptual errors due to intra-reader variability among physicians. Physicians have not agreed upon a standardized measurement of cell loss for specifically diagnosing dementia in individuals. These limitations have led researchers to look for supportive diagnosis tools that broaden the spectrum of disease characteristics and peculiarities that can be assessed. This work presents a supportive auditory tool to aid in diagnosing patients with different levels of Alzheimer’s. The tool introduces an audible parameter mapped onto three different lobes of the brain. The motivation behind this supportive auditory technique arises from the fact that AD is distinguished by decreased metabolic activity (hypometabolism) in the parietal and temporal lobes of the brain. The diagnosis is then performed by comparing the metabolic activity of the affected lobes to that of lobes not generally affected by AD (i.e., the sensorimotor cortex). Results from the diagnosis process, compared with the ground truth, show that physicians were able to categorize different levels of AD with higher accuracy using the sonification generated in this study than with a standard diagnosis procedure based on visualization alone.
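The abstract describes comparing the metabolic activity of AD-affected lobes against an unaffected reference region and rendering the comparison as an audible parameter. A minimal sketch of such a mapping, with hypothetical uptake values and mapping constants that are not those of the cited study, might look like:

```python
import math

def hypometabolism_ratio(lobe_activity, reference_activity):
    """Ratio of a lobe's metabolic activity to an unaffected reference
    region (e.g. the sensorimotor cortex); values below 1.0 indicate
    hypometabolism, a marker of AD in parietal and temporal lobes."""
    return lobe_activity / reference_activity

def map_to_pitch(ratio, base_hz=440.0, octave_span=1.0):
    """Map the activity ratio to a pitch: a healthy ratio near 1.0 stays
    near base_hz, while hypometabolism lowers the pitch.  A log2 mapping
    makes equal relative changes in activity give equal pitch steps."""
    return base_hz * 2.0 ** (octave_span * math.log2(ratio))

# Hypothetical regional uptake values for one patient (arbitrary units),
# normalised against the sensorimotor cortex as reference.
lobes = {"parietal": 0.62, "temporal": 0.70, "frontal": 0.95}
reference = 1.0

pitches = {name: map_to_pitch(hypometabolism_ratio(activity, reference))
           for name, activity in lobes.items()}
```

Under this mapping, the more hypometabolic a lobe, the lower its tone relative to the 440 Hz reference, so relative lobe damage becomes audible as a pitch interval.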

    Sonification as a Reliable Alternative to Conventional Visual Surgical Navigation

    Despite the undeniable advantages of image-guided surgical assistance systems in terms of accuracy, such systems have not yet fully met surgeons' needs or expectations regarding usability, time efficiency, and their integration into the surgical workflow. On the other hand, perceptual studies have shown that presenting independent but causally correlated information via multimodal feedback involving different sensory modalities can improve task performance. This article investigates an alternative method for computer-assisted surgical navigation, introduces a novel sonification methodology for navigated pedicle screw placement, and discusses advanced solutions based on multisensory feedback. The proposed method comprises a novel sonification solution for alignment tasks in four degrees of freedom based on frequency modulation (FM) synthesis. We compared the resulting accuracy and execution time of the proposed sonification method with visual navigation, which is currently considered the state of the art. We conducted a phantom study in which 17 surgeons executed the pedicle screw placement task in the lumbar spine, guided by either the proposed sonification-based or the traditional visual navigation method. The results demonstrated that the proposed method is as accurate as the state of the art while decreasing the surgeon's need to focus on visual navigation displays, allowing a more natural focus on surgical tools and the targeted anatomy during task execution.
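The abstract names FM synthesis as the basis of the alignment sonification but does not give the mapping. One plausible sketch, with entirely hypothetical error-to-modulation constants (the actual four-degree-of-freedom mapping of the study is not reproduced here), is to let alignment error drive the modulation index, so a perfectly aligned tool yields a pure carrier tone and misalignment produces an audibly rougher sound:

```python
import math

SAMPLE_RATE = 44100  # samples per second

def fm_alignment_tone(distance_err, angle_err, duration=0.5,
                      carrier_hz=440.0, mod_hz=110.0):
    """Generate an FM-synthesis feedback tone whose roughness grows
    with the alignment error: zero error gives a pure sine at
    carrier_hz, larger errors deepen the frequency modulation."""
    # Hypothetical weighting of translational vs. angular error.
    index = 5.0 * distance_err + 3.0 * angle_err
    n = int(SAMPLE_RATE * duration)
    samples = []
    for i in range(n):
        t = i / SAMPLE_RATE
        # Classic FM: the carrier phase is perturbed by a modulator sinusoid.
        phase = (2 * math.pi * carrier_hz * t
                 + index * math.sin(2 * math.pi * mod_hz * t))
        samples.append(math.sin(phase))
    return samples

aligned = fm_alignment_tone(0.0, 0.0)     # pure 440 Hz sine
misaligned = fm_alignment_tone(1.0, 0.5)  # audibly modulated tone
```

In a navigation loop, the tracked screw pose would be compared against the planned trajectory on every update and the two error terms fed into the synthesizer, giving the surgeon continuous auditory guidance without looking at a display.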

    Real-time hallucination simulation and sonification through user-led development of an iPad augmented reality performance

    The simulation of visual hallucinations has multiple applications. The authors present a new approach to hallucination simulation, initially developed for a performance, that proved to have uses for individuals suffering from certain types of hallucinations. The system, originally developed with a focus on the visual symptoms of palinopsia experienced by the lead author, allows real-time visual expression using augmented reality via an iPad. It also allows the hallucinations to be converted into sound through sonification of the visuals. Although no formal experimentation was conducted, the authors report a number of unsolicited informal responses to the simulator from palinopsia sufferers and the Palinopsia Foundation.

    Ways of Guided Listening: Embodied approaches to the design of interactive sonifications

    This thesis presents three use cases for interactive feedback. In each case users interact with a system and receive feedback: the primary source of feedback is visual, while a second source of feedback is offered as sonification. The first use case comprised an interactive sonification system for use by pathologists in the triage stage of cancer diagnostics. Image features derived from computational homology are mapped to a soundscape, with an integrated auditory glance indicating potential regions of interest. The resulting prototype did not meet the requirements of a domain expert. In the second case this thesis presents an interactive sonification plug-in developed for a software package for interactive visualisation of macromolecular complexes. A framework for building different sonification methods in Python and OSC-controlled sound-producing software was established, along with sonification methods and a general sonification plug-in. It received generally positive feedback, but the mapping was deemed not very transparent. From these cases and ideas in the sonification design literature, the Subject-Position-Based Sonification Design Framework (SPBDF) was developed. It explores an alternative conception of design: that working from a frame of reference encompassing a non-expert audience will lead towards sonifications that are more easily understood. A method for the analysis of sonifications according to the framework's criteria is outlined and put into practice to evaluate a range of sonifications. The framework was then evaluated in the third use case, a system providing sonified feedback for an exercise device designed for back pain rehabilitation. Two different sonifications, one designed using SPBDF, were evaluated, indicating that interactive sonification can provide valuable feedback and improve task performance (decreasing the mean speed) when the soundscape employed invokes an appropriate emotional response in the user.

    Immersive analytics for oncology patient cohorts

    This thesis proposes a novel interactive immersive analytics tool and methods to interrogate cancer patient cohorts in an immersive virtual environment, namely Virtual Reality to Observe Oncology data Models (VROOM). The overall objective is to develop an immersive analytics platform that includes a data analytics pipeline from raw gene expression data to immersive visualisation on virtual and augmented reality platforms utilising a game engine; Unity3D has been used to implement the visualisation. Work in this thesis could provide oncologists and clinicians with an interactive visualisation and visual analytics platform that helps them drive their analysis of treatment efficacy and achieve the goal of evidence-based personalised medicine. The thesis integrates the latest discoveries and developments in cancer patient prognosis, immersive technologies, machine learning, decision support systems, and interactive visualisation to form an immersive analytics platform for complex genomic data. The experimental paradigm followed in this thesis is the understanding of transcriptomics in cancer samples: it specifically investigates gene expression data to determine the biological similarity revealed by the transcriptomic profiles of patients' tumour samples, which reveal the active genes in different patients.
    In summary, the thesis contributes: i) a novel immersive analytics platform for patient cohort data interrogation in a similarity space based on the patients' biological and genomic similarity; ii) an effective immersive environment optimisation design based on a usability study of exocentric and egocentric visualisation, together with audio and sound design optimisation; iii) an integration of trusted and familiar 2D biomedical visual analytics methods into the immersive environment; iv) a novel use of game theory as the engine of the decision-making system to support the analytics process, and an application of optimal transport theory in missing data imputation to ensure the preservation of the data distribution; and v) case studies showcasing the real-world application of the visualisation and its effectiveness.

    Models and Analysis of Vocal Emissions for Biomedical Applications

    The Models and Analysis of Vocal Emissions for Biomedical Applications (MAVEBA) workshop came into being in 1999 from the keenly felt need to share know-how, objectives, and results between areas that until then had seemed quite distinct, such as bioengineering, medicine, and singing. MAVEBA deals with all aspects of the study of the human voice, with applications ranging from the neonate to the adult and elderly. Over the years the initial topics have grown and spread into other areas of research such as occupational voice disorders, neurology, rehabilitation, and image and video analysis. MAVEBA takes place every two years in Firenze, Italy.

    Models and Analysis of Vocal Emissions for Biomedical Applications

    The International Workshop on Models and Analysis of Vocal Emissions for Biomedical Applications (MAVEBA) came into being in 1999 from the keenly felt need to share know-how, objectives, and results between areas that until then had seemed quite distinct, such as bioengineering, medicine, and singing. MAVEBA deals with all aspects of the study of the human voice, with applications ranging from the neonate to the adult and elderly. Over the years the initial topics have grown and spread into other areas of research such as occupational voice disorders, neurology, rehabilitation, and image and video analysis. MAVEBA takes place every two years in Firenze, Italy. This edition celebrates twenty years of uninterrupted and successful research in the field of voice analysis.