
    Towards Accountable AI: Hybrid Human-Machine Analyses for Characterizing System Failure

    As machine learning systems move from computer-science laboratories into the open world, their accountability becomes a high-priority problem. Accountability requires a deep understanding of system behavior and its failures. Current evaluation methods such as single-score error metrics and confusion matrices provide aggregate views of system performance that hide important shortcomings. Understanding the details of failures is important for identifying pathways for refinement, for communicating the reliability of systems in different settings, and for specifying appropriate human oversight and engagement. Characterizing failures and shortcomings is particularly complex for systems composed of multiple machine-learned components. For such systems, existing evaluation methods have limited expressiveness in describing and explaining the relationships among input content, the internal states of system components, and final output quality. We present Pandora, a set of hybrid human-machine methods and tools for describing and explaining system failures. Pandora leverages both human and system-generated observations to summarize conditions of system malfunction with respect to the input content and system architecture. We share results of a case study with a machine learning pipeline for image captioning that show how detailed performance views can be beneficial for analysis and debugging.
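
The core idea of summarizing malfunction conditions with respect to input content can be sketched as a simple aggregation of per-example failure records. This is only an illustrative sketch; the record fields and condition names below are hypothetical, not Pandora's actual schema.

```python
from collections import defaultdict

def failure_rates_by_condition(records, condition_key):
    """Aggregate the failure rate per value of some input or component
    condition, e.g. a hypothetical 'scene' tag or a detector-confidence bin."""
    stats = defaultdict(lambda: [0, 0])  # condition value -> [failures, total]
    for r in records:
        s = stats[r[condition_key]]
        s[0] += r["failed"]
        s[1] += 1
    return {cond: fails / total for cond, (fails, total) in stats.items()}
```

Sorting the resulting rates surfaces the conditions under which the pipeline fails most often, which is the kind of detailed performance view a single aggregate score hides.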

    Objective dysphonia quantification in vocal fold paralysis: comparing nonlinear with classical measures

    Clinical acoustic voice recording analysis is usually performed using classical perturbation measures, including jitter, shimmer and noise-to-harmonic ratios. However, restrictive mathematical limitations of these measures prevent analysis of severely dysphonic voices. Previous studies of alternative nonlinear random measures addressed a wide variety of vocal pathologies. Here, we analyze a single vocal pathology cohort, testing the performance of these alternative measures alongside classical measures.

We present pre- and post-operative voice analysis in unilateral vocal fold paralysis (UVFP) patients undergoing standard medialisation thyroplasty surgery and in healthy controls, using jitter, shimmer and noise-to-harmonic ratio (NHR), and the nonlinear measures recurrence period density entropy (RPDE), detrended fluctuation analysis (DFA) and correlation dimension. After systematizing the preparative editing of the recordings, we found that the novel measures were more stable, and hence more reliable, than the classical measures on healthy controls.

RPDE and jitter are sensitive to improvements pre- to post-operation. Shimmer, NHR and DFA showed no significant change (p > 0.05). All measures detect statistically significant and clinically important differences between controls and patients, both treated and untreated (p < 0.001, AUC > 0.7). Pre- to post-operation, GRBAS ratings show statistically significant and clinically important improvement in overall dysphonia grade (G) (AUC = 0.946, p < 0.001).

Re-calculating AUCs from other study data, we compare these results in terms of clinical importance. We conclude that, when preparative editing is systematized, nonlinear random measures may be useful tools for monitoring UVFP treatment effectiveness, and may also have applications for other forms of dysphonia.
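
For reference, the classical perturbation measures can be sketched from cycle-level period and amplitude sequences. These are the standard "local" definitions; the exact variants used in the study may differ in detail.

```python
import numpy as np

def jitter_percent(periods):
    """Local jitter: mean absolute difference of consecutive pitch
    periods, relative to the mean period, in percent."""
    periods = np.asarray(periods, dtype=float)
    return 100.0 * np.mean(np.abs(np.diff(periods))) / np.mean(periods)

def shimmer_percent(amplitudes):
    """Local shimmer: same cycle-to-cycle relative variability,
    applied to per-cycle peak amplitudes."""
    amplitudes = np.asarray(amplitudes, dtype=float)
    return 100.0 * np.mean(np.abs(np.diff(amplitudes))) / np.mean(amplitudes)
```

A perfectly periodic voice gives zero jitter and shimmer; the mathematical limitation the abstract mentions arises because severely dysphonic voices have no well-defined cycle sequence to feed these formulas.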

    Towards vocal-behaviour and vocal-health assessment using distributions of acoustic parameters

    Voice disorders at different levels affect those professional categories that use the voice in a sustained way and for prolonged periods of time, the so-called occupational voice users. In-field voice monitoring is needed to investigate voice behaviour and vocal health status during everyday activities and to highlight work-related risk factors. The overall aim of this thesis is to contribute to the identification of tools, procedures and requirements for voice acoustic analysis as an objective measure to prevent voice disorders, but also to assess them and furnish proof of outcomes during voice therapy. The first part of this thesis includes studies on vocal-load-related parameters. Experiments were performed both in the field and in the laboratory. A one-school-year longitudinal study of teachers’ voice use during working hours was performed in high school classrooms using a voice analyzer equipped with a contact sensor; further measurements took place in the semi-anechoic and reverberant rooms of the National Institute of Metrological Research (I.N.Ri.M.) in Torino (Italy) to investigate the effects of very low and excessive reverberation on speech intensity, using both microphones in air and contact sensors. Within this framework, the contributions to the sound pressure level (SPL) uncertainty estimation using different devices were also assessed with proper experiments. Teachers adjusted their voice significantly with noise and reverberation, both at the beginning and at the end of the school year. Moreover, teachers who worked in the worst acoustic conditions showed higher SPLs and a worse vocal health status at the end of the school year. The minimum value of speech SPL was found for teachers in classrooms with a reverberation time of about 0.8 s.
Participants involved in the in-laboratory experiments significantly increased their speech intensity, by about 2.0 dB, in the semi-anechoic room compared with the reverberant room when describing a map. These results relate to the speech monitoring performed with the vocal analyzer, whose uncertainty for SPL differences was estimated at about 1 dB. The second part of this thesis addressed vocal health and voice quality assessment using different speech materials and devices. Experiments were performed in clinics, in collaboration with the Department of Surgical Sciences of Università di Torino (Italy) and the Department of Clinical Science, Intervention and Technology of Karolinska Institutet in Stockholm (Sweden). Individual distributions of Cepstral Peak Prominence Smoothed (CPPS) from voluntary patients and control subjects were investigated in sustained vowels, reading, free speech and vowels excerpted from continuous speech, acquired with microphones in air and contact sensors. The main influence quantities of the estimated cepstral parameters were also identified: the fundamental frequency of the vocalization and the broadband noise superimposed on the signal. In addition, the reliability of CPPS estimation with respect to the frequency content of the vocal spectrum was evaluated, which mainly depends on the bandwidth of the measuring chain used to acquire the vocal signal. For the speech materials acquired with the microphone in air, the 5th percentile was the best statistic of the CPPS distribution for discriminating healthy and unhealthy voices in sustained vowels, while the 95th percentile was the best in both reading and free speech tasks. The discrimination thresholds were 15 dB (95% Confidence Interval, CI, of 0.7 dB) and 18 dB (95% CI of 0.6 dB), respectively, where lower values indicate a high probability of an unhealthy voice.
Preliminary outcomes on vowels excerpted from continuous speech indicated that a CPPS mean value lower than 14 dB designates pathological voices. CPPS distributions were also effective as proof of outcomes after interventions, e.g. voice therapy and phonosurgery. For the speech materials acquired with the electret contact sensor, reasonable discrimination power was only obtained in the case of sustained vowels, where a standard deviation of the CPPS distribution higher than 1.1 dB (95% CI of 0.2 dB) indicates a high probability of an unhealthy voice. Further results indicated that a reliable estimation of CPPS parameters is obtained provided that the frequency content of the spectrum is not lower than 5 kHz; this outcome provides a guideline on the bandwidth of the measuring chain used to acquire the vocal signal.
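
The percentile-based screening rule described above can be sketched directly from a per-frame CPPS distribution. This is a minimal sketch using the point thresholds quoted in the abstract (5th percentile vs 15 dB for sustained vowels, 95th percentile vs 18 dB for reading or free speech); it ignores the confidence intervals and is not the thesis's actual decision procedure.

```python
import numpy as np

def cpps_screen(cpps_frames_db, task="reading"):
    """Screen a voice from its per-frame CPPS values (dB, microphone in
    air). Returns (statistic, flagged), where flagged=True means the
    distribution falls below the task's discrimination threshold,
    i.e. a high probability of an unhealthy voice."""
    x = np.asarray(cpps_frames_db, dtype=float)
    if task == "vowel":
        stat, threshold = np.percentile(x, 5), 15.0   # sustained vowels
    else:
        stat, threshold = np.percentile(x, 95), 18.0  # reading / free speech
    return stat, stat < threshold
```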

    The structural correlates of statistical information processing during speech perception

    The processing of continuous and complex auditory signals such as speech relies on the ability to use statistical cues (e.g. transitional probabilities). In this study, participants heard short auditory sequences composed either of Italian syllables or bird songs and completed a regularity-rating task. Behaviorally, participants were better at differentiating between levels of regularity in the syllable sequences than in the bird song sequences. Inter-individual differences in sensitivity to regularity for speech stimuli were correlated with variations in surface-based cortical thickness (CT). These correlations were found in several cortical areas, including regions previously associated with statistical structure processing (e.g. bilateral superior temporal sulcus, left precentral sulcus and inferior frontal gyrus), as well as other regions (e.g. left insula, bilateral superior frontal gyrus/sulcus and supramarginal gyrus). In all regions, the correlation was positive, suggesting that thicker cortex is related to higher sensitivity to variations in the statistical structure of auditory sequences. Overall, these results suggest that inter-individual differences in CT within a distributed network of cortical regions involved in statistical structure processing, attention and memory are predictive of the ability to detect statistical structure in auditory speech sequences.
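
The statistical cue named above, transitional probability, is simply the conditional probability of the next item given the current one. A minimal sketch over a syllable sequence (the syllables are made up for illustration):

```python
from collections import Counter

def transitional_probabilities(sequence):
    """Estimate P(next | current) for adjacent items in a sequence."""
    pair_counts = Counter(zip(sequence, sequence[1:]))
    first_counts = Counter(sequence[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}
```

In artificial-grammar stimuli, within-word syllable pairs get high transitional probabilities and word boundaries get low ones, which is the regularity signal listeners are thought to track.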

    Learning from disagreement: a survey

    Many tasks in Natural Language Processing (NLP) and Computer Vision (CV) offer evidence that humans disagree, from objective tasks such as part-of-speech tagging to more subjective tasks such as classifying an image or deciding whether a proposition follows from certain premises. While most learning in artificial intelligence (AI) still relies on the assumption that a single (gold) interpretation exists for each item, a growing body of research aims to develop learning methods that do not rely on this assumption. In this survey, we review the evidence for disagreements on NLP and CV tasks, focusing on tasks for which substantial datasets containing this information have been created. We discuss the most popular approaches to training models from datasets containing multiple judgments potentially in disagreement. We systematically compare these different approaches by training them with each of the available datasets, considering several ways to evaluate the resulting models. Finally, we discuss the results in depth, focusing on four key research questions, and assess how the type of evaluation and the characteristics of a dataset determine the answers to these questions. Our results suggest, first of all, that even if we abandon the assumption of a gold standard, it is still essential to reach a consensus on how to evaluate models. This is because the relative performance of the various training methods is critically affected by the chosen form of evaluation. Secondly, we observed a strong dataset effect. With substantial datasets, providing many judgments by high-quality coders for each item, training directly with soft labels achieved better results than training from aggregated or even gold labels. This result holds for both hard and soft evaluation. But when the above conditions do not hold, leveraging both gold and soft labels generally achieved the best results in the hard evaluation.
All datasets and models employed in this paper are freely available as supplementary materials.
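
Training "directly with soft labels", as described above, typically means minimizing cross-entropy against the distribution of annotator judgments rather than a one-hot gold label. A minimal numpy sketch of that loss (an illustration of the general technique, not the survey's actual implementation):

```python
import numpy as np

def soft_label_cross_entropy(logits, soft_targets):
    """Cross-entropy of model logits against soft targets, e.g. the
    normalized annotator vote distribution [2/3, 1/3] when three
    coders split 2-1 between two classes."""
    # numerically stable log-softmax
    z = logits - logits.max(axis=-1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    return -(soft_targets * log_probs).sum(axis=-1).mean()
```

With a one-hot target this reduces to the usual gold-label loss, which is why the two training regimes are directly comparable in the survey's experiments.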

    A Bias-Variance-Covariance Decomposition of Kernel Scores for Generative Models

    Generative models, like large language models, are becoming increasingly relevant in our daily lives, yet a theoretical framework to assess their generalization behavior and uncertainty does not exist. In particular, the problem of uncertainty estimation is commonly solved in an ad-hoc, task-dependent manner; for example, natural language approaches cannot be transferred to image generation. In this paper, we introduce the first bias-variance-covariance decomposition for kernel scores and their associated entropy. We propose unbiased and consistent estimators for each quantity which require only generated samples, not the underlying model itself. As an application, we offer a generalization evaluation of diffusion models and discover how mode collapse of minority groups is a phenomenon contrary to overfitting. Further, we demonstrate that variance and predictive kernel entropy are viable measures of uncertainty for image, audio, and language generation. Specifically, our approach to uncertainty estimation is more predictive of performance on the CoQA and TriviaQA question answering datasets than existing baselines, and can also be applied to closed-source models.
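
To give a flavor of estimation from "generated samples but not the underlying model itself": the expected kernel similarity between two independent model draws, E[k(X, X')], has a standard unbiased U-statistic estimator using only off-diagonal sample pairs. This is a textbook illustration of the sample-only estimation idea, not the paper's actual decomposition.

```python
def pairwise_kernel_mean(samples, kernel):
    """Unbiased U-statistic estimate of E[k(X, X')] for independent
    draws X, X' from the model, averaging kernel values over all
    ordered pairs of distinct samples."""
    n = len(samples)
    total = 0.0
    for i in range(n):
        for j in range(n):
            if i != j:
                total += kernel(samples[i], samples[j])
    return total / (n * (n - 1))
```

Excluding the diagonal (i == j) is what removes the bias: including k(x_i, x_i) would systematically inflate the estimate of similarity between independent draws.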

    Developing implant technologies and evaluating brain-machine interfaces using information theory

    Brain-machine interfaces (BMIs) hold promise for restoring motor function in severely paralyzed individuals. Invasive BMIs are capable of recording signals from individual neurons and typically provide the highest signal-to-noise ratio. Despite many efforts in the scientific community, BMI technology is still not reliable enough for widespread clinical application. The most prominent challenges include biocompatibility, stability, longevity, and the lack of good models for informed signal processing and BMI comparison. To address the problem of low signal quality in chronic probes, the first part of the thesis modifies one such probe design, the Neurotrophic Electrode, by increasing its channel capacity to form a Neurotrophic Array (NA). Specifically, single wires were replaced with stereotrodes and the total number of recording wires was increased. This new array design was tested in a rhesus macaque performing a delayed saccade task. The NA recorded little single-unit spiking activity, and its local field potentials (LFPs) correlated with presented visual stimuli and saccade locations better than did extracted spikes. The second part of the thesis compares the NA to the Utah Array (UA), the only other micro-array approved for chronic implantation in a human brain. The UA recorded significantly more spiking units, which had larger amplitudes than NA spikes, likely due to differences in array geometry and construction. LFPs on the NA electrodes were more correlated with each other than those on the UA, and these correlations negatively impacted the NA's information capacity when considering more than one recording site. The final part of this dissertation applies information theory to develop objective measures of BMI performance. Currently, decoder information transfer rate (ITR) is the most popular BMI information performance metric.
However, it is limited by the selected decoding algorithm and does not represent the full task information embedded in the recorded neural signal. A review of existing methods to estimate ITR is presented, and these methods are interpreted within a BMI context. A novel Gaussian mixture Monte Carlo method is developed to produce good ITR estimates with a low number of trials and a high number of dimensions, as is typical for BMI applications.
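
Decoder ITR is most commonly computed with the classic Wolpaw formula, which assumes N equiprobable targets and uniformly distributed errors; its dependence on decoder accuracy is exactly the limitation noted above. (Whether this particular variant is among the methods reviewed in the thesis is an assumption.)

```python
import math

def wolpaw_itr(n_classes, accuracy, trials_per_minute):
    """Wolpaw ITR in bits/min for an N-target selection task with
    decoder accuracy P: B = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1)),
    scaled by the selection rate. Returns 0 at or below chance."""
    N, P = n_classes, accuracy
    if P <= 1.0 / N:
        return 0.0
    bits = math.log2(N)
    if P < 1.0:
        bits += P * math.log2(P) + (1 - P) * math.log2((1 - P) / (N - 1))
    return bits * trials_per_minute
```

A perfect binary decoder at 60 selections/min yields 60 bits/min; the same channel read through a worse decoder reports fewer bits, even though the neural signal itself is unchanged.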

    Multi-level Uncertain Fatigue Analysis of a Truss under Incomplete Available Information

    We predict the fatigue life of a planar tubular truss when geometrical parameters, material properties, and live loads are non-deterministic. A multi-level uncertainty quantification framework was designed to combine the finite element method with fatigue-induced sequential failure analysis. Due to the incompleteness of the aleatory-type inputs, the maximum entropy principle was applied. Two sensitivity analyses were performed to identify the most influential factors. In terms of variance, the results suggest that the slope of the crack growth rate versus stress intensity factor range curve is the most influential factor for fatigue life. Furthermore, due to the application of the entropy concept, the fatigue crack growth boundaries and fatigue crack size boundaries obtained provide the most unbiased fatigue crack design mapping. These boundaries allow the designer to select the worst-case fatigue scenario, besides being able to predict the crack behavior at a required confidence level.
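
The slope of the crack growth rate versus stress intensity factor range curve is the exponent m of the Paris law, da/dN = C (ΔK)^m. A generic closed-form integration (not the thesis's framework; all parameter values in the test are illustrative) shows why m dominates fatigue life:

```python
import math

def paris_life_cycles(a0, ac, C, m, delta_sigma, Y=1.0):
    """Cycles to grow a crack from a0 to ac under the Paris law
    da/dN = C * (ΔK)^m, with ΔK = Y * Δσ * sqrt(pi * a).
    Closed-form integral of da / (C * ΔK^m); valid for m != 2."""
    assert m != 2, "m = 2 needs the logarithmic form of the integral"
    factor = C * (Y * delta_sigma * math.sqrt(math.pi)) ** m
    exponent = 1.0 - m / 2.0
    return (ac ** exponent - a0 ** exponent) / (factor * exponent)
```

Because Δσ and the crack size enter raised to the power m, small uncertainty in m is amplified into large uncertainty in predicted life, consistent with m emerging as the dominant factor in the variance-based sensitivity analysis.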