94 research outputs found

    An Update on Cardioprotection A Review of the Latest Adjunctive Therapies to Limit Myocardial Infarction Size in Clinical Trials

    Acute myocardial infarction (AMI) with subsequent left ventricular dysfunction and heart failure continues to be a major cause of morbidity and mortality in the Western world. Rapid advances in the treatment of AMI, mainly through timely reperfusion, have substantially improved outcomes in patients presenting with acute coronary syndrome and particularly ST-segment elevation myocardial infarction. A vast amount of research, both translational and clinical, has been published on various pharmacological and interventional techniques to prevent myocardial cell death during the time of ischemia and subsequent reperfusion. Several methods of cardioprotection have shown the ability to limit myocardial infarction size in clinical trials. Examples of interventional techniques that have proven beneficial are ischemic post-conditioning and remote ischemic per-conditioning, both of which can reduce infarction size. Lowering core body temperature with cold saline infusion and cooling catheters has also been shown to be effective in certain circumstances. The most promising pharmaceutical cardioprotective agents at this time appear to be adenosine, atrial natriuretic peptide, and cyclosporine, with other potentially effective medications in the pipeline. Additional pre-clinical and clinical research is needed to further investigate newer cardioprotective strategies to continue the current trend of improving outcomes following AMI.

    motilitAI: a machine learning framework for automatic prediction of human sperm motility

    In this article, human semen samples from the VISEM dataset are automatically assessed with machine learning methods for their quality with respect to sperm motility. Several regression models are trained to automatically predict the percentage (0–100) of progressive, non-progressive, and immotile spermatozoa. The videos are used for unsupervised tracking, and two different feature extraction methods are applied: custom movement statistics and displacement features. We train multiple neural networks and support vector regression models on the extracted features. Best results are achieved using a linear Support Vector Regressor with an aggregated and quantized representation of the individual displacement features of each sperm cell. Compared to the best submission of the Medico Multimedia for Medicine challenge, which used the same dataset and splits, the mean absolute error (MAE) could be reduced from 8.83 to 7.31. We provide the source code for our experiments on GitHub (code available at: https://github.com/EIHW/motilitAI).
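    The best-performing configuration described above (a linear SVR over an aggregated, quantized representation of per-cell displacement features) can be sketched roughly as follows. The histogram binning, displacement ranges, and synthetic stand-in data are illustrative assumptions of this sketch, not the paper's actual pipeline:

    ```python
    import numpy as np
    from sklearn.svm import LinearSVR
    from sklearn.metrics import mean_absolute_error

    rng = np.random.default_rng(0)

    def aggregate_displacements(per_cell_displacements, n_bins=10):
        """Quantize each cell's displacement magnitudes into a histogram,
        then average the histograms over all tracked cells in a sample."""
        hists = []
        for d in per_cell_displacements:
            hist, _ = np.histogram(d, bins=n_bins, range=(0.0, 50.0), density=True)
            hists.append(hist)
        return np.mean(hists, axis=0)

    # Synthetic stand-in data: 40 video samples, each with 5 tracked cells.
    X = np.array([
        aggregate_displacements([rng.gamma(2.0, 5.0, size=30) for _ in range(5)])
        for _ in range(40)
    ])
    y = rng.uniform(0, 100, size=40)  # e.g. % progressive spermatozoa

    model = LinearSVR(C=1.0, max_iter=10000)
    model.fit(X[:30], y[:30])
    pred = np.clip(model.predict(X[30:]), 0, 100)  # keep predictions in 0-100
    mae = mean_absolute_error(y[30:], pred)
    print(f"MAE on held-out samples: {mae:.2f}")
    ```

    With random labels the MAE is of course meaningless; the sketch only shows the shape of the aggregation-then-regression pipeline.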

    Distinguishing between pre- and post-treatment in the speech of patients with chronic obstructive pulmonary disease

    Chronic obstructive pulmonary disease (COPD) causes lung inflammation and airflow blockage leading to a variety of respiratory symptoms; it is also a leading cause of death and affects millions of individuals around the world. Patients often require treatment and hospitalisation, while no cure is currently available. As COPD predominantly affects the respiratory system, speech and non-linguistic vocalisations present a major avenue for measuring the effect of treatment. In this work, we present results on a new COPD dataset of 20 patients, showing that, by employing personalisation through speaker-level feature normalisation, we can distinguish between pre- and post-treatment speech with an unweighted average recall (UAR) of up to 82 % in (nested) leave-one-speaker-out cross-validation. We further identify the most important features and link them to pathological voice properties, thus enabling an auditory interpretation of treatment effects. Monitoring tools based on such approaches may help objectivise the clinical status of COPD patients and facilitate personalised treatment plans. (Comment: accepted at INTERSPEECH 202)
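    The personalisation step described above (speaker-level feature normalisation inside leave-one-speaker-out cross-validation, scored with UAR) can be illustrated with a minimal sketch. The synthetic data, feature dimensionality, and linear SVM classifier are placeholder assumptions, not the paper's actual features or model:

    ```python
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.metrics import recall_score
    from sklearn.model_selection import LeaveOneGroupOut

    rng = np.random.default_rng(1)

    # Synthetic stand-in: 20 speakers, 10 utterances each, 6 acoustic features;
    # label 0 = pre-treatment, 1 = post-treatment.
    n_speakers, n_utt, n_feat = 20, 10, 6
    X = rng.normal(size=(n_speakers * n_utt, n_feat))
    y = np.tile(np.repeat([0, 1], n_utt // 2), n_speakers)
    groups = np.repeat(np.arange(n_speakers), n_utt)
    X[:, 0] += 0.8 * y  # inject a small treatment effect on one feature

    def normalise_per_speaker(X, groups):
        """z-score every feature within each speaker (personalisation step)."""
        Xn = X.copy()
        for g in np.unique(groups):
            m = groups == g
            Xn[m] = (X[m] - X[m].mean(axis=0)) / (X[m].std(axis=0) + 1e-8)
        return Xn

    Xn = normalise_per_speaker(X, groups)
    preds = np.empty_like(y)
    for train, test in LeaveOneGroupOut().split(Xn, y, groups):
        clf = SVC(kernel="linear").fit(Xn[train], y[train])
        preds[test] = clf.predict(Xn[test])

    uar = recall_score(y, preds, average="macro")  # unweighted average recall
    print(f"LOSO UAR: {uar:.3f}")
    ```

    Note that the paper uses a *nested* leave-one-speaker-out scheme (an inner loop for hyper-parameter selection), which is omitted here for brevity.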

    Zero-shot personalization of speech foundation models for depressed mood monitoring

    The monitoring of depressed mood plays an important role as a diagnostic tool in psychotherapy. An automated analysis of speech can provide a non-invasive measurement of a patient’s affective state. While speech has been shown to be a useful biomarker for depression, existing approaches mostly build population-level models that aim to predict each individual’s diagnosis as a (mostly) static property. Because of inter-individual differences in symptomatology and mood regulation behaviors, these approaches are ill-suited to detect smaller temporal variations in depressed mood. We address this issue by introducing a zero-shot personalization of large speech foundation models. Compared with other personalization strategies, our approach does not require labeled speech samples for enrollment. Instead, it makes use of adapters conditioned on subject-specific metadata. On a longitudinal dataset, we show that the method improves performance compared with a set of suitable baselines. Finally, applying our personalization strategy improves individual-level fairness.
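    A minimal sketch of how an adapter might condition frozen foundation-model features on subject metadata without any enrollment speech. The abstract does not specify the adapter architecture, so the FiLM-style scale-and-shift form, the dimensions, and the metadata encoding below are all assumptions of this illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    d_feat, d_meta, d_hidden = 16, 4, 8

    # Randomly initialised adapter weights (these would be learned in practice).
    W1 = rng.normal(scale=0.1, size=(d_meta, d_hidden))
    W_gamma = rng.normal(scale=0.1, size=(d_hidden, d_feat))
    W_beta = rng.normal(scale=0.1, size=(d_hidden, d_feat))

    def film_adapter(features, metadata):
        """FiLM-style conditioning: scale and shift frozen foundation-model
        features using only subject metadata -- zero-shot, no enrollment audio."""
        h = np.tanh(metadata @ W1)
        gamma = 1.0 + h @ W_gamma   # multiplicative modulation
        beta = h @ W_beta           # additive modulation
        return gamma * features + beta

    features = rng.normal(size=(1, d_feat))      # e.g. a pooled speech embedding
    metadata = np.array([[0.3, 1.0, 0.0, 0.5]])  # e.g. encoded demographic fields
    adapted = film_adapter(features, metadata)
    print(adapted.shape)  # (1, 16)
    ```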


    The influence of pleasant and unpleasant odours on the acoustics of speech

    Olfaction, i.e., the sense of smell, is referred to as the ‘emotional sense’, as it has been shown to elicit affective responses. Yet, its influence on speech production has not been investigated. In this paper, we introduce a novel speech-based smell recognition approach, drawing from the fields of speech emotion recognition and personalised machine learning. In particular, we collected a corpus of 40 female speakers reading 2 short stories while either no scent, an unpleasant odour (fish), or a pleasant odour (peach) is applied through a nose clip. Further, we present a machine learning pipeline for the extraction of data representations, model training, and personalisation of the trained models. In a leave-one-speaker-out cross-validation, our best models trained on state-of-the-art wav2vec features achieve a classification rate of 68 % when distinguishing between speech produced under the influence of negative scent and no applied scent. In addition, we highlight the importance of personalisation approaches, showing that a speaker-based feature normalisation substantially improves performance across the evaluated experiments. In summary, the presented results indicate that odours have a weak, but measurable effect on the acoustics of speech.

    Men Scare Me More: Gender Differences in Social Fear Conditioning in Virtual Reality

    Women develop social anxiety disorder (SAD) nearly twice as often as men. The reason for this difference is still being debated. The present study investigates gender differences and the effect of male versus female agents in low (LSA) and high socially anxious (HSA) participants regarding the acquisition and extinction of social fear in virtual reality (VR). In a social fear conditioning (SFC) paradigm, 60 participants actively approached several agents, some of which were paired with an aversive unconditioned stimulus (US) consisting of a verbal rejection and spitting simulated by an aversive air blast (CS+), while others were presented without a US (CS−). Primary outcome variables were defined for each of the 4 levels of emotional reactions, including experience (fear ratings), psychophysiology (fear-potentiated startle), behavior (avoidance), and cognition (recognition task). Secondary outcome variables were personality traits, contingency ratings, heart rate (HR), and skin conductance response (SCR). As hypothesized, fear ratings for the CS+ increased significantly during acquisition, and the differentiation between CS+ and CS− vanished during extinction. Additionally, women reported higher fear compared to men. Furthermore, a clear difference in the fear-potentiated startle response between male CS+ and CS− agents at the end of acquisition indicates successful SFC to male agents in both groups. Concerning behavior, results exhibited successful SFC in both groups and a generally larger distance to agents in HSA than in LSA participants. Furthermore, HSA women maintained a larger distance to male compared to female agents; no such differences were found for HSA men. Regarding recognition, participants responded with higher sensitivity to agent than to object stimuli, suggesting a higher ability to distinguish the target from the distractor for social cues, which were in focus during SFC. Regarding the secondary physiological outcome variables, we detected an activation in HR response during acquisition, but there were no differences between stimuli or groups. Moreover, we observed a gender difference, but no CS+/CS− difference, in SCR. SFC was successfully induced and extinguished according to the primary outcome variables. VR is an interesting tool for measuring emotional learning processes on different outcome levels with enhanced ecological validity. Future research should further investigate social fear learning mechanisms to develop more efficient treatments for SAD.

    Emotion and themes recognition in music utilising convolutional and recurrent neural networks

    Emotion is an inherent aspect of music, and associations to music can be made via both life experience and specific musical techniques applied by the composer. Computational approaches for music recognition have been well-established in the research community; however, deep approaches have been limited and not yet comparable to conventional approaches. In this study, we present our fusion system of end-to-end convolutional recurrent neural networks (CRNN) and pre-trained convolutional feature extractors for music emotion and theme recognition. We train 9 models and conduct various late fusion experiments. Our best performing model (team name: AugLi) achieves 74.2 % ROC-AUC on the test partition, which is 1.6 percentage points above the baseline system of the MediaEval 2019 Emotion & Themes in Music task.
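    The late-fusion step described above can be sketched as a weighted average of per-model tag probabilities. The toy scores and equal weighting below are illustrative assumptions; the paper experiments with various fusion configurations over its 9 models:

    ```python
    import numpy as np

    def late_fusion(model_probs, weights=None):
        """Weighted average of per-model tag probabilities.
        model_probs: array-like of shape (n_models, n_samples, n_tags)."""
        model_probs = np.asarray(model_probs)
        if weights is None:  # default to equal weighting
            weights = np.full(model_probs.shape[0], 1.0 / model_probs.shape[0])
        weights = np.asarray(weights, dtype=float)
        weights = weights / weights.sum()  # normalise so weights sum to 1
        return np.tensordot(weights, model_probs, axes=1)

    # Two toy models scoring 3 clips over 4 emotion/theme tags.
    p1 = np.array([[0.9, 0.1, 0.4, 0.2],
                   [0.2, 0.8, 0.5, 0.1],
                   [0.3, 0.3, 0.9, 0.6]])
    p2 = np.array([[0.7, 0.2, 0.6, 0.3],
                   [0.1, 0.9, 0.4, 0.2],
                   [0.4, 0.2, 0.8, 0.5]])
    fused = late_fusion([p1, p2])
    print(fused[0])  # [0.8  0.15 0.5  0.25]
    ```

    ROC-AUC is then computed per tag on the fused probabilities; non-uniform weights can be tuned on a development partition.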

    The ACM Multimedia 2023 Computational Paralinguistics Challenge: emotion share & requests

    The ACM Multimedia 2023 Computational Paralinguistics Challenge addresses two different problems for the first time in a research competition under well-defined conditions: in the Emotion Share Sub-Challenge, a regression on speech has to be made; and in the Requests Sub-Challenge, requests and complaints need to be detected. We describe the Sub-Challenges, baseline feature extraction, and classifiers based on the ‘usual’ ComParE features, the auDeep toolkit, and deep feature extraction from pre-trained CNNs using the DeepSpectrum toolkit; in addition, wav2vec2 models are used.