
    Conditional Generative Data Augmentation for Clinical Audio Datasets

    In this work, we propose a novel data augmentation method for clinical audio datasets based on a conditional Wasserstein Generative Adversarial Network with Gradient Penalty (cWGAN-GP) operating on log-mel spectrograms. To validate our method, we created a clinical audio dataset recorded in a real-world operating room during Total Hip Arthroplasty (THA) procedures, containing typical sounds that resemble the different phases of the intervention. We demonstrate the capability of the proposed method to generate realistic class-conditioned samples from the dataset distribution and show that training with the generated augmented samples outperforms classical audio augmentation methods in terms of classification performance. Performance was evaluated using a ResNet-18 classifier, which shows a mean Macro F1-score improvement of 1.70% in a 5-fold cross-validation experiment using the proposed augmentation method. Because clinical data are often expensive to acquire, the development of realistic and high-quality data augmentation methods is crucial for improving the robustness and generalization capabilities of learning-based algorithms, which is especially important for safety-critical medical applications. The proposed data augmentation method is therefore an important step towards easing the data bottleneck for clinical audio-based machine learning systems.
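    As a concrete illustration of the technique named in the abstract, the following is a minimal sketch of the gradient-penalty term of a class-conditional WGAN-GP critic operating on batches of log-mel spectrograms. It is written in PyTorch; the critic(x, labels) signature, the tensor shapes, and the penalty weight are assumptions for illustration, not the authors' implementation.

    import torch

    def gradient_penalty(critic, real, fake, labels, device="cpu"):
        """Gradient-penalty term of the cWGAN-GP objective (enforces a 1-Lipschitz critic)."""
        batch = real.size(0)
        # One random interpolation coefficient per sample, broadcast over (C, H, W).
        eps = torch.rand(batch, 1, 1, 1, device=device)
        interp = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
        scores = critic(interp, labels)  # class-conditional critic (assumed signature)
        grads = torch.autograd.grad(
            outputs=scores,
            inputs=interp,
            grad_outputs=torch.ones_like(scores),
            create_graph=True,
        )[0].view(batch, -1)
        # Penalize deviation of the per-sample gradient norm from 1.
        return ((grads.norm(2, dim=1) - 1.0) ** 2).mean()

    In a typical training loop this term is added to the critic loss with a fixed weight (commonly 10), e.g. loss_D = fake_scores.mean() - real_scores.mean() + 10 * gradient_penalty(critic, real, fake, labels).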

    Acquisition models in intraoperative positron surface imaging

    PURPOSE: Intraoperative imaging aims at identifying residual tumor during surgery. Positron Surface Imaging (PSI) is one of the solutions to help surgeons better detect the resection margins of brain tumors, leading to improved patient outcomes. The system relies on a tracked freehand beta probe using an ¹⁸F-based radiotracer. Some acquisition models have been proposed in the literature to enhance image quality, but no comparative validation study has been performed for PSI. METHODS: In this study, we investigated the performance of different acquisition models by considering validation criteria and normalized metrics. We proposed a reference-based validation framework to perform the comparative study between acquisition models and a basic method. We estimated the performance of several acquisition models in light of four validation criteria: efficiency, computational speed, spatial accuracy and tumor contrast. RESULTS: The selected acquisition models outperformed the basic method, albeit with the real-time aspect compromised. One acquisition model yielded the best overall performance according to the validation criteria: efficiency (1-Spe: 0.1, Se: 0.94), spatial accuracy (max Dice: 0.77) and tumor contrast (max T/B: 5.2). We also found that, above a minimum sampling-rate threshold, the reconstruction quality does not vary significantly. CONCLUSION: Our method allowed the comparison of different acquisition models and highlighted one of them according to our validation criteria. This novel approach can be extended to 3D datasets for the validation of future acquisition models dedicated to intraoperative guidance in brain surgery.
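    The validation criteria quoted in the abstract (sensitivity/specificity, Dice overlap, and tumor-to-background contrast) are standard metrics; the sketch below shows one way they could be computed from binary masks and a reconstructed activity image. This is an illustrative assumption, not the paper's validation code, and the mask and image names are hypothetical.

    import numpy as np

    def sensitivity_specificity(pred, truth):
        """Se = TP / (TP + FN), Spe = TN / (TN + FP) for boolean detection masks."""
        pred, truth = pred.astype(bool), truth.astype(bool)
        tp = np.logical_and(pred, truth).sum()
        tn = np.logical_and(~pred, ~truth).sum()
        fn = np.logical_and(~pred, truth).sum()
        fp = np.logical_and(pred, ~truth).sum()
        return tp / (tp + fn), tn / (tn + fp)

    def dice(pred, truth):
        """Dice overlap between predicted and reference tumor masks."""
        pred, truth = pred.astype(bool), truth.astype(bool)
        return 2.0 * np.logical_and(pred, truth).sum() / (pred.sum() + truth.sum())

    def tumor_to_background(image, tumor_mask, background_mask):
        """T/B contrast: mean reconstructed activity in the tumor vs. the background."""
        return image[tumor_mask].mean() / image[background_mask].mean()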

    Improved Techniques for the Conditional Generative Augmentation of Clinical Audio Data

    Data augmentation is a valuable tool for the design of deep learning systems, helping to overcome data limitations and stabilize the training process. Especially in the medical domain, where the collection of large-scale data sets is challenging and expensive due to limited access to patient data, relevant environments, and strict regulations, community-curated large-scale public datasets, pretrained models, and advanced data augmentation methods are the main factors for developing reliable systems to improve patient care. However, for the development of medical acoustic sensing systems, an emerging field of research, the community lacks large-scale publicly available data sets and pretrained models. To address the problem of limited data, we propose a conditional generative adversarial network-based augmentation method that is able to synthesize mel spectrograms from a learned data distribution of a source data set. In contrast to previously proposed fully convolutional models, the proposed model implements residual Squeeze-and-Excitation modules in the generator architecture. We show that our method outperforms all classical audio augmentation techniques and previously published generative methods in terms of generated sample quality, and yields a 2.84% Macro F1-score improvement for a classifier trained on the augmented data set, an enhancement of 1.14% over previous work. By analyzing the correlation of intermediate feature spaces, we show that the residual Squeeze-and-Excitation modules help the model reduce redundancy in the latent features. The proposed model therefore advances the state of the art in the augmentation of clinical audio data and improves the data bottleneck for the design of clinical acoustic sensing systems.
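    As a sketch of the architectural component highlighted in the abstract, the block below implements a residual Squeeze-and-Excitation module in PyTorch. Channel counts, kernel sizes, and the reduction ratio are assumptions for illustration and do not reproduce the authors' generator.

    import torch
    import torch.nn as nn

    class ResidualSEBlock(nn.Module):
        """Convolutional block with channel-wise SE gating and a residual connection."""

        def __init__(self, channels: int, reduction: int = 8):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(channels, channels, kernel_size=3, padding=1),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, kernel_size=3, padding=1),
                nn.BatchNorm2d(channels),
            )
            # Squeeze: global average pooling; Excitation: learned per-channel gates.
            self.se = nn.Sequential(
                nn.AdaptiveAvgPool2d(1),
                nn.Conv2d(channels, channels // reduction, kernel_size=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels // reduction, channels, kernel_size=1),
                nn.Sigmoid(),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            y = self.conv(x)
            y = y * self.se(y)        # recalibrate channels with SE weights
            return torch.relu(x + y)  # residual shortcut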

    Sonification as a Reliable Alternative to Conventional Visual Surgical Navigation

    Despite the undeniable advantages of image-guided surgical assistance systems in terms of accuracy, such systems have not yet fully met surgeons' needs or expectations regarding usability, time efficiency, and integration into the surgical workflow. On the other hand, perceptual studies have shown that presenting independent but causally correlated information via multimodal feedback involving different sensory modalities can improve task performance. This article investigates an alternative method for computer-assisted surgical navigation, introduces a novel sonification methodology for navigated pedicle screw placement, and discusses advanced solutions based on multisensory feedback. The proposed method comprises a novel sonification solution for alignment tasks in four degrees of freedom based on frequency modulation (FM) synthesis. We compared the accuracy and execution time of the proposed sonification method with those of visual navigation, which is currently considered the state of the art. We conducted a phantom study in which 17 surgeons executed a pedicle screw placement task in the lumbar spine, guided by either the proposed sonification-based method or the traditional visual navigation method. The results demonstrate that the proposed method is as accurate as the state of the art while reducing the surgeon's need to focus on visual navigation displays, allowing the natural focus to remain on the surgical tools and targeted anatomy during task execution.
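    To make the FM-synthesis idea concrete, the sketch below maps a normalized alignment error to the modulation index of an FM tone, so the sound roughens as the tool drifts off target and settles to a pure carrier when aligned. The error-to-sound mapping, carrier and modulator frequencies, and frame length are assumptions for illustration, not the paper's sonification design.

    import numpy as np

    def fm_tone(error: float, duration: float = 0.2, sr: int = 44100,
                f_carrier: float = 440.0, f_mod: float = 6.0) -> np.ndarray:
        """Return one FM-synthesized audio frame; a larger error gives stronger modulation."""
        t = np.arange(int(duration * sr)) / sr
        index = 10.0 * np.clip(error, 0.0, 1.0)  # modulation index driven by the error
        return np.sin(2 * np.pi * f_carrier * t + index * np.sin(2 * np.pi * f_mod * t))

    # Example: successive frames as the tracked alignment error shrinks to zero.
    frames = [fm_tone(e) for e in (0.8, 0.4, 0.1, 0.0)]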

    Augmented reality meeting table: a novel multi-user interface for architectural design

    Immersive virtual environments have received widespread attention as possible replacements for the media and systems that designers traditionally use, as well as, more generally, as support for collaborative work. Relatively little attention has been given to date, however, to the problem of how to merge immersive virtual environments into real-world work settings, and so to add to the media at the disposal of the designer and the design team rather than to replace them. In this paper we report on a research project in which optical see-through augmented reality displays were developed together with prototype decision-support software for architectural and urban design. We suggest that a critical characteristic of multi-user augmented reality is its ability to generate visualisations from a first-person perspective in which the scale of rendition of the design model follows many of the conventions that designers are used to. Different scales of model appear to allow designers to focus on different aspects of the design under consideration. Augmenting the scene with simulations of pedestrian movement appears to assist both in scale recognition and in moving from a first-person to a third-person understanding of the design. This research project is funded by the European Commission IST program (IST-2000-28559).

    Effect of concurrent vitamin A and iodine deficiencies on the thyroid-pituitary axis in rats

    OBJECTIVE: Deficiencies of vitamin A and iodine are common in many developing countries. Vitamin A deficiency (VAD) may adversely affect thyroid metabolism. The aim of the study was to investigate the effects of concurrent vitamin A and iodine deficiencies on the thyroid-pituitary axis in rats. DESIGN: Weanling rats (n = 56) were fed diets deficient in vitamin A (VAD group), iodine (ID group), or both vitamin A and iodine (VAD + ID group), or a diet sufficient in both vitamin A and iodine (control), for 30 days in a pair-fed design. Serum retinol (SR), thyroid hormones (FT4, TT4, FT3, and TT3), serum thyrotropin (TSH), pituitary TSHβ mRNA expression levels, and thyroid weights were determined at the end of the depletion period. MAIN OUTCOME: Compared to the control and ID groups, SR concentrations were about 35% lower in the VAD and VAD + ID groups (p < 0.001), indicating moderate vitamin A deficiency. Comparing the VAD and control groups, there were no significant differences in TSH, TSHβ mRNA, thyroid weight, or thyroid hormone levels. Compared to the control group, serum TSH, TSHβ mRNA, and thyroid weight were higher (p < 0.05), and FT4 and TT4 were lower (p < 0.001), in the VAD + ID and ID groups. Compared to the ID group, TSH, TSHβ mRNA, and thyroid weight were higher (p < 0.01) and FT4 and TT4 were lower (p < 0.001) in the VAD + ID group. There were no significant differences in TT3 or FT3 concentrations among groups. CONCLUSION: Moderate VAD alone has no measurable effect on the pituitary-thyroid axis. Concurrent ID and VAD produce more severe primary hypothyroidism than ID alone.