
    Bayesian Active Learning for Personalization and Uncertainty Quantification in Cardiac Electrophysiological Model

    Cardiovascular disease is the leading cause of death worldwide. In recent years, high-fidelity personalized models of the heart have shown an increasing capability to supplement clinical cardiology for improved patient-specific diagnosis, prediction, and treatment planning. In addition, they have shown promise to improve scientific understanding of a variety of disease mechanisms. However, model personalization, i.e., estimating the patient-specific tissue properties that serve as the parameters of a physiological model, is challenging. This is because tissue properties, in general, cannot be directly measured and must be estimated from measurements that are only indirectly related to them through a physiological model. Moreover, these unknown tissue properties are heterogeneous and spatially varying throughout the heart volume, presenting the difficulty of high-dimensional (HD) estimation from indirect and limited measurement data. The challenge in model personalization therefore amounts to solving an ill-posed inverse problem in which the unknown parameters are HD and the forward model is a non-linear and computationally expensive physiological model. In this dissertation, we address the above challenge with the following contributions. First, to address the cost of a complex forward model, we propose surrogate modeling of the expensive target function containing the forward model (an objective function in deterministic estimation, or a posterior probability density function in probabilistic estimation) by actively selecting a set of training samples and performing a Bayesian update of the prior over the target function. The efficient and accurate surrogate of the expensive target function obtained in this manner is then used to accelerate either deterministic or probabilistic parameter estimation.
Next, within the framework of Bayesian active learning, we enable active surrogate learning over an HD parameter space with two novel approaches: 1) a multi-scale optimization that adaptively allocates higher resolution to heterogeneous tissue regions and lower resolution to homogeneous tissue regions; and 2) a generative model from a low-dimensional (LD) latent code to HD tissue properties. Both of these approaches are independently developed and tested within a parameter optimization framework. Furthermore, we devise a novel method that utilizes the surrogate pdf learned on an estimated LD parameter space to improve the proposal distribution of Metropolis-Hastings for accelerated sampling of the exact posterior pdf. We evaluate the presented methods on estimating local tissue excitability of a cardiac electrophysiological model in both synthetic and real data experiments. Results demonstrate that the presented methods improve the accuracy and efficiency of patient-specific model parameter estimation in comparison to existing approaches for model personalization.
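The surrogate-improved proposal described above can be illustrated with a minimal independence Metropolis-Hastings sketch: a cheap surrogate density supplies proposals, and the expensive exact posterior is only evaluated in the acceptance step. The toy one-dimensional target and surrogate here (Gaussians with mismatched parameters) are assumptions for illustration, not the dissertation's cardiac model or its learned surrogate.

```python
import math
import random

# Toy "exact" log-posterior: stands in for the expensive forward-model
# evaluation (here a 1-D Gaussian with mean 2, std 1).
def log_post(x):
    return -0.5 * (x - 2.0) ** 2

# Cheap surrogate pdf: a Gaussian with slightly wrong mean/width, standing
# in for a surrogate learned by Bayesian active learning.
SUR_MEAN, SUR_STD = 1.8, 1.2

def log_sur(x):
    return -0.5 * ((x - SUR_MEAN) / SUR_STD) ** 2 - math.log(SUR_STD)

def sample_sur():
    return random.gauss(SUR_MEAN, SUR_STD)

def mh_independence(n_iters, x0=0.0):
    """Independence MH: propose from the surrogate, accept against the exact posterior."""
    x, chain = x0, []
    for _ in range(n_iters):
        y = sample_sur()
        # Acceptance ratio corrects for proposing from the surrogate density.
        log_a = (log_post(y) - log_post(x)) + (log_sur(x) - log_sur(y))
        if math.log(random.random()) < log_a:
            x = y
        chain.append(x)
    return chain

random.seed(0)
chain = mh_independence(5000)
mean = sum(chain[1000:]) / len(chain[1000:])  # posterior mean after burn-in
```

Because proposals come from a density that already approximates the posterior, acceptance rates stay high and far fewer exact-posterior evaluations are wasted than with a naive random-walk proposal.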

    An Analysis of the Effects of Decoding Algorithms on Fairness in Open-Ended Language Generation

    Several prior works have shown that language models (LMs) can generate text containing harmful social biases and stereotypes. While decoding algorithms play a central role in determining the properties of LM-generated text, their impact on the fairness of the generations has not been studied. We present a systematic analysis of the impact of decoding algorithms on LM fairness, and analyze the trade-off between fairness, diversity, and quality. Our experiments with top-p, top-k, and temperature decoding algorithms in open-ended language generation show that fairness across demographic groups changes significantly with changes in the decoding algorithms' hyper-parameters. Notably, decoding algorithms that output more diverse text also output more text with negative sentiment and regard. We present several findings and provide recommendations on standardized reporting of decoding details in fairness evaluations and on optimization of decoding algorithms for fairness alongside quality and diversity.
    Comment: Accepted at IEEE SLT 202
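The three decoding hyper-parameters the abstract studies can be sketched over a toy logit vector; the filters below are standard definitions of top-k, top-p (nucleus), and temperature sampling, not code from the paper, and the example logits are made up.

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def top_k_filter(probs, k):
    """Zero out all but the k highest-probability tokens, then renormalize."""
    ranked = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    keep = set(ranked[:k])
    kept = [p if i in keep else 0.0 for i, p in enumerate(probs)]
    s = sum(kept)
    return [p / s for p in kept]

def top_p_filter(probs, p):
    """Keep the smallest high-probability set whose cumulative mass reaches p."""
    ranked = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    keep, cum = set(), 0.0
    for i in ranked:
        keep.add(i)
        cum += probs[i]
        if cum >= p:
            break
    kept = [probs[i] if i in keep else 0.0 for i in range(len(probs))]
    s = sum(kept)
    return [q / s for q in kept]

def temperature_probs(logits, t):
    """Low t sharpens the distribution; high t flattens it."""
    return softmax([l / t for l in logits])

logits = [2.0, 1.0, 0.5, -1.0]          # hypothetical next-token logits
pk = top_k_filter(softmax(logits), k=2)  # only the 2 most likely tokens survive
pp = top_p_filter(softmax(logits), p=0.9)
```

Varying k, p, or t changes how much probability mass low-likelihood tokens receive, which is exactly the knob the paper links to the fairness/diversity/quality trade-off.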

    Multi-VALUE: A Framework for Cross-Dialectal English NLP

    Dialect differences arising from regional, social, and economic factors cause performance discrepancies for many groups of language technology users. Inclusive and equitable language technology must be dialect invariant, meaning that performance remains constant under dialectal shifts. Current systems often fall short of this ideal since they are designed and tested on a single dialect: Standard American English (SAE). We introduce a suite of resources for evaluating and achieving English dialect invariance. The resource, called Multi-VALUE, is a controllable rule-based translation system spanning 50 English dialects and 189 unique linguistic features. Multi-VALUE maps SAE to synthetic forms of each dialect. First, we use this system to stress test question answering, machine translation, and semantic parsing. The stress tests reveal significant performance disparities for leading models on non-standard dialects. Second, we use this system as a data augmentation technique to improve the dialect robustness of existing systems. Finally, we partner with native speakers of Chicano and Indian English to release new gold-standard variants of the popular CoQA task. To execute the transformation code, run model checkpoints, and download both synthetic and gold-standard dialectal benchmark datasets, see http://value-nlp.org.
    Comment: ACL 202
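A rule-based SAE-to-dialect translation of the kind Multi-VALUE describes can be sketched as a string rewrite: the single rule below (rewriting "n't/not any" negation into negative concord) is purely illustrative of the mechanism and is not taken from Multi-VALUE's actual rule set.

```python
import re

def negative_concord(sentence):
    """Toy dialect-perturbation rule: "isn't any X" -> "isn't no X".

    Illustrates how a rule-based system can map SAE to a synthetic
    dialectal form; real systems apply many such feature rules.
    """
    return re.sub(r"(n't|\bnot) any\b", r"\1 no", sentence)

sae = "There isn't any reason to wait."
synthetic = negative_concord(sae)  # "There isn't no reason to wait."
```

Chaining many such feature rules, each toggled on or off per target dialect, yields the controllable translation used for stress testing and data augmentation.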