Models and Analysis of Vocal Emissions for Biomedical Applications
The MAVEBA Workshop proceedings, published biennially, collect the scientific papers presented as oral and poster contributions during the conference. The main subjects are: the development of theoretical and mechanical models as an aid to the study of the main phonatory dysfunctions, as well as biomedical engineering methods for the analysis of voice signals and images, in support of clinical diagnosis and the classification of vocal pathologies.
Quantifying, Understanding and Predicting Differences Between Planned and Delivered Dose to Organs at Risk in Head & Neck Cancer Patients Undergoing Radical Radiotherapy to Promote Intelligently Targeted Adaptive Radiotherapy
Introduction: Radical radiotherapy (RT) is an effective but toxic treatment for head and neck cancer (HNC). Contemporary radiotherapy techniques sculpt dose to target disease and avoid organs at risk (OARs), but anatomical change during treatment means that the radiation dose actually delivered to the patient, the delivered dose (DA), differs from that anticipated at planning, the planned dose (DP). Modifying the RT plan during treatment, known as adaptive radiotherapy (ART), could mitigate these risks by reducing dose to OARs. However, clinical data to guide patient selection for, and timing of, ART are lacking.
Methods: 337 patients with HNC were recruited to the Cancer Research UK VoxTox study. Demographic, disease, and treatment data were collated, and both DP and DA to OARs were computed from daily megavoltage CT image guidance scans using an open-source deformable image registration package (Elastix). Toxicity data were prospectively collected. Relationships between DP, DA, and late toxicities were investigated with univariate analysis and logistic-regression normal tissue complication probability (NTCP) modelling approaches. A sub-study of VoxTox recruited 18 patients who had MRI scans before RT fractions 1, 6, 16, and 26. Changes in salivary gland volumes and relative apparent diffusion coefficient (ADC) values were measured and related to toxicity events.
Results: Spinal cord dose differences were small and not predicted by weight loss or shape change. Mean DA to all other OARs was higher than DP; factors predicting higher DA included primary disease site, concomitant therapy, shape change, and advanced neck disease. Nine patients (3.7%) saw DA exceed DP by 2 Gy in more than half of the OARs assessed. All of these patients had received bilateral neck RT for N-stage 2b oropharyngeal cancer. Strong uni- and multivariate relationships between OAR dose and toxicity were seen. Differences between DA- and DP-based dose-toxicity models were minimal and not statistically significant. On MRI, both parotid and submandibular glands shrank during treatment, whilst relative ADC rose. Relationships with toxicity were inconclusive.
Conclusions: The small differences between OAR DP and DA mean that DA-based toxicity prediction models confer negligible additional benefit at the population level. Factors such as primary disease sub-site, concomitant systemic therapy, staging, and shape change may help to select the patients who do develop clinically significant dose differences and who would benefit most from ART for toxicity reduction.
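The logistic-regression NTCP modelling mentioned in the abstract can be illustrated with a minimal sketch: toxicity probability is modelled as a logistic function of mean OAR dose, with coefficients fitted by maximum likelihood. This is a generic sketch of the standard logistic NTCP form, not the VoxTox study's actual model; the function names and synthetic data below are illustrative assumptions.

```python
import numpy as np

def ntcp(dose, b0, b1):
    """Logistic NTCP model: toxicity probability as a function of mean OAR dose."""
    return 1.0 / (1.0 + np.exp(-(b0 + b1 * dose)))

def fit_logistic(dose, tox, lr=0.05, n_iter=20000):
    """Fit (b0, b1) by maximising the Bernoulli log-likelihood via gradient ascent.

    dose: array of mean OAR doses (Gy); tox: binary toxicity outcomes (0/1).
    """
    mu, sigma = dose.mean(), dose.std()
    d = (dose - mu) / sigma          # standardise for stable step sizes
    b0, b1 = 0.0, 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(b0 + b1 * d)))
        b0 += lr * np.mean(tox - p)  # gradient of mean log-likelihood w.r.t. b0
        b1 += lr * np.mean((tox - p) * d)
    # map coefficients back to the original dose scale
    return b0 - b1 * mu / sigma, b1 / sigma
```

Fitting such a model separately to DP and DA, then comparing the resulting dose-response curves, is one way to test whether delivered-dose modelling adds predictive value over planned dose, which is the comparison the abstract reports as showing minimal difference.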
Artificial Intelligence for Suicide Assessment using Audiovisual Cues: A Review
Death by suicide is the seventh leading cause of death worldwide. Recent advances in Artificial Intelligence (AI), specifically AI applications in image and voice processing, have created a promising opportunity to revolutionize suicide risk assessment. Subsequently, we have witnessed a fast-growing literature of research that applies AI to extract audiovisual non-verbal cues for mental illness assessment. However, the majority of recent works focus on depression, despite the evident differences between depression and suicidal behavior in both symptoms and non-verbal cues. This paper reviews recent works that study suicidal ideation and suicidal behavior detection through audiovisual feature analysis, mainly analysis of suicidal voice/speech acoustic features and suicidal visual cues. Automatic suicide assessment is a promising research direction that is still in its early stages. Accordingly, there is a lack of large datasets that can be used to train the machine learning and deep learning models proven to be effective in other, similar tasks.
Comment: Manuscript submitted to Artificial Intelligence Reviews (2022).
Multi-modal and multi-dimensional biomedical image data analysis using deep learning
There is a growing need for computational methods and tools for automated, objective, and quantitative analysis of biomedical signal and image data to facilitate disease and treatment monitoring, early diagnosis, and scientific discovery. Recent advances in artificial intelligence and machine learning, particularly in deep learning, have revolutionized computer vision and image analysis for many application areas. While processing of non-biomedical signal, image, and video data using deep learning methods has been very successful, high-stakes biomedical applications present unique challenges that must be addressed, such as different image modalities, limited training data, and the need for explainability and interpretability. In this dissertation, we developed novel, explainable, attention-based deep learning frameworks for objective, automated, and quantitative analysis of biomedical signal, image, and video data. The proposed solutions involve multi-scale signal analysis for oral diadochokinesis studies; an ensemble of deep learning cascades using global soft attention mechanisms for segmentation of meningeal vascular networks in confocal microscopy; spatial attention and spatio-temporal data fusion for detection of rare and short-term video events in laryngeal endoscopy videos; and a novel discrete Fourier transform driven class activation map for explainable AI and weakly-supervised object localization and segmentation for detailed vocal fold motion analysis using laryngeal endoscopy videos. Experiments conducted on the proposed methods showed robust and promising results towards automated, objective, and quantitative analysis of biomedical data, which is of great value for potential early diagnosis and effective monitoring of disease progression or treatment. Includes bibliographical references.
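The soft spatial attention mechanism mentioned in this abstract can be sketched in its simplest form: a softmax over a spatial score map produces per-location weights, which are used to pool a feature map into a single descriptor. This is a generic NumPy sketch of the technique, not the dissertation's actual architecture; the function name and shapes are illustrative assumptions.

```python
import numpy as np

def spatial_soft_attention(features, scores):
    """Soft spatial attention over a feature map.

    features: (H, W, C) feature map; scores: (H, W) relevance scores.
    Returns a (C,) attention-pooled descriptor and the (H, W) attention map,
    where the attention map is a softmax over all H*W spatial locations.
    """
    s = scores - scores.max()                  # subtract max for numerical stability
    attn = np.exp(s) / np.exp(s).sum()         # softmax: weights sum to 1
    pooled = (features * attn[..., None]).sum(axis=(0, 1))
    return pooled, attn
```

Because the weights are differentiable, a network can learn where to attend end-to-end; sharply peaked scores make the pooled descriptor approximate the features at the highest-scoring location, while flat scores reduce it to global average pooling.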