
    Advancing probabilistic and causal deep learning in medical image analysis

    The power and flexibility of deep learning have made it an indispensable tool for tackling modern machine learning problems. However, this flexibility comes at the cost of robustness and interpretability, which can lead to undesirable or even harmful outcomes. Deep learning models often fail to generalise to real-world conditions and produce unforeseen errors that hinder wide adoption in safety-critical domains such as healthcare. This thesis presents multiple works that address the reliability problems of deep learning in safety-critical domains by being aware of its vulnerabilities and incorporating more domain knowledge when designing and evaluating our algorithms. We start by showing how close collaboration with domain experts is necessary to achieve good results in a real-world clinical task: the multiclass semantic segmentation of traumatic brain injury (TBI) lesions in head CT. We continue by proposing an algorithm that models spatially coherent aleatoric uncertainty in segmentation tasks by considering the dependencies between pixels. The lack of proper uncertainty quantification is a robustness issue that is ubiquitous in deep learning, and tackling it is of the utmost importance if we want to deploy these systems in the real world. Lastly, we present a general framework for evaluating image counterfactual inference models in the absence of ground-truth counterfactuals. Counterfactuals are extremely useful for reasoning about models and data and for probing models for explanations or mistakes. As a result, their evaluation is critical for improving the interpretability of deep learning models.
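The idea of spatially coherent aleatoric uncertainty can be sketched in a few lines: instead of treating each pixel's logits as independent, one draws them jointly from a multivariate normal with a low-rank covariance, so that noise is correlated across pixels and whole regions flip together between samples. This is only an illustrative toy (random mean and covariance factor standing in for a network's outputs; shapes and rank are arbitrary), not the thesis's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy segmentation problem: H*W pixels, C classes. Logits are drawn from a
# multivariate normal N(mu, P P^T + diag), whose low-rank factor P couples
# pixels, so uncertainty is spatially correlated rather than per-pixel noise.
H, W, C, R = 8, 8, 3, 10            # R = rank of the covariance factor
N = H * W * C                       # flattened logit dimension

mu = rng.normal(size=N)             # mean logits (would come from a network)
P_factor = 0.5 * rng.normal(size=(N, R))  # low-rank covariance factor
diag = 0.1 * np.ones(N)             # independent per-logit noise floor

def sample_segmentation():
    # Reparameterised sample: mu + P z + sqrt(diag) * eps, z and eps standard normal.
    z = rng.normal(size=R)
    eps = rng.normal(size=N)
    logits = mu + P_factor @ z + np.sqrt(diag) * eps
    return logits.reshape(H, W, C).argmax(axis=-1)  # hard label map per sample

samples = [sample_segmentation() for _ in range(5)]
```

Because the correlated term `P_factor @ z` dominates the diagonal noise, successive samples differ in contiguous blobs of pixels, which is the qualitative behaviour one wants from spatially coherent uncertainty.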

    Comparing and Improving the Accuracy of Nonprobability Samples: Profiling Australian Surveys

    There has been a great deal of debate in the survey research community about the accuracy of nonprobability sample surveys. This work aims to provide empirical evidence about the accuracy of nonprobability samples and to investigate the performance of a range of post-survey adjustment approaches (calibration or matching methods) to reduce bias and enhance inference. We use data from five nonprobability online panel surveys and compare their accuracy (pre- and post-survey adjustment) to four probability surveys, including data from a probability online panel. This article adds value to the existing research by assessing methods for causal inference not previously applied for this purpose and demonstrates the value of various types of covariates in mitigating bias in nonprobability online panels. Investigating different post-survey adjustment scenarios based on the availability of auxiliary data, we demonstrate how carefully designed post-survey adjustment can reduce some bias in survey research using nonprobability samples. The results show that the quality of post-survey adjustments is, first and foremost, dependent on the availability of relevant high-quality covariates that come from representative large-scale probability-based survey data and match those in the nonprobability data. Second, we found little difference in the efficiency of different post-survey adjustment methods, and inconsistent evidence on the suitability of 'webographics' and other internet-associated covariates for mitigating bias in nonprobability samples.
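The basic mechanics of the post-survey adjustments discussed above can be illustrated with a minimal post-stratification example: reweight a biased nonprobability sample so that a covariate's weighted share matches a benchmark taken from a probability survey. Everything here is synthetic (the covariate, outcome, and benchmark share are invented for illustration); real adjustments use many covariates and methods such as raking or propensity matching.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic nonprobability sample: binary covariate x (say, heavy internet
# user) is over-represented (80% vs a 50% population benchmark), and the
# outcome y depends on x, so the unadjusted mean of y is biased upward.
n = 1000
x_nonprob = rng.binomial(1, 0.8, n)          # biased sample composition
y = 2.0 * x_nonprob + rng.normal(size=n)     # true population mean of y is 1.0

benchmark_share = 0.5                        # x = 1 share from a probability survey

# Post-stratification weights: benchmark cell share / sample cell share.
share1 = x_nonprob.mean()
w = np.where(x_nonprob == 1,
             benchmark_share / share1,
             (1 - benchmark_share) / (1 - share1))

unadjusted = y.mean()                        # biased estimate
adjusted = np.average(y, weights=w)          # weighted, bias-reduced estimate
```

After weighting, the sample's x-composition matches the benchmark exactly, and the adjusted mean of y moves toward the population value; the abstract's caveat applies here too, since the correction is only as good as the benchmark covariate.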

    Some models are useful, but how do we know which ones? Towards a unified Bayesian model taxonomy

    Probabilistic (Bayesian) modeling has experienced a surge of applications in almost all quantitative sciences and industrial areas. This development is driven by a combination of several factors, including better probabilistic estimation algorithms, flexible software, increased computing power, and a growing awareness of the benefits of probabilistic learning. However, a principled Bayesian model building workflow is far from complete, and many challenges remain. To aid future research and applications of a principled Bayesian workflow, we ask and provide answers for what we perceive as two fundamental questions of Bayesian modeling, namely (a) "What actually is a Bayesian model?" and (b) "What makes a good Bayesian model?". As an answer to the first question, we propose the PAD model taxonomy, which defines four basic kinds of Bayesian models, each representing some combination of the assumed joint distribution of all (known or unknown) variables (P), a posterior approximator (A), and training data (D). As an answer to the second question, we propose ten utility dimensions according to which we can evaluate Bayesian models holistically, namely, (1) causal consistency, (2) parameter recoverability, (3) predictive performance, (4) fairness, (5) structural faithfulness, (6) parsimony, (7) interpretability, (8) convergence, (9) estimation speed, and (10) robustness. Further, we propose two example utility decision trees that describe hierarchies and trade-offs between utilities depending on the inferential goals that drive model building and testing.
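The PAD taxonomy can be made concrete with a small data structure: a model is characterised by which of the three components (P, A, D) it actually fixes. This sketch is purely illustrative (the class name, fields, and `kind` helper are invented here, not code from the paper); it just shows how a model's "kind" falls out of the components that are present.

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class PADModel:
    """Illustrative container for the PAD taxonomy's three components."""
    joint: Optional[Any] = None         # P: joint distribution over all variables
    approximator: Optional[Any] = None  # A: posterior approximator (e.g. MCMC, VI)
    data: Optional[Any] = None          # D: training data

    def kind(self) -> str:
        # A model's kind is the set of components it fixes, e.g. "P", "PA", "PAD".
        return "".join(
            name
            for name, part in (("P", self.joint),
                               ("A", self.approximator),
                               ("D", self.data))
            if part is not None
        )

# A bare joint distribution is a "P" model; adding an approximator and data
# yields the fully specified "PAD" kind.
prior_only = PADModel(joint="p(theta, y)")
fitted = PADModel(joint="p(theta, y)", approximator="NUTS", data=[1.2, 0.7])
```

Under this reading, the four basic kinds in the abstract correspond to which subsets of {P, A, D} are pinned down before inference begins.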

    Framework for real time behavior interpretation from traffic video

    © 2005 IEEE. Video-based surveillance systems have a wide range of applications for traffic monitoring, as they provide more information as compared to other sensors. In this paper, we present a rule-based framework for behavior and activity detection in traffic videos obtained from stationary video cameras. Moving targets are segmented from the images and tracked in real time. These are classified into different categories using a novel Bayesian network approach, which makes use of image features and image-sequence-based tracking results for robust classification. Tracking and classification results are used in a programmed context to analyze behavior. For behavior recognition, two types of interactions have mainly been considered. One is interaction between two or more mobile targets in the field of view (FoV) of the camera. The other is interaction between targets and stationary objects in the environment. The framework is based on two types of a priori information: 1) the contextual information of the camera's FoV, in terms of the different stationary objects in the scene, and 2) sets of predefined behavior scenarios, which need to be analyzed in different contexts. The system can recognize behavior from videos and give a lexical output of the detected behavior. It is also capable of handling uncertainties that arise due to errors in visual signal processing. We demonstrate successful behavior recognition results for pedestrian–vehicle interaction and vehicle–checkpost interactions.
    Kumar, P.; Ranganath, S.; Huang Weimin; Sengupta, K.
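The second interaction type above (mobile target versus stationary object) lends itself to a simple rule-based sketch: flag an interaction when a tracked target stays inside a stationary object's region for enough consecutive frames. This is a minimal illustration under assumed data structures (point tracks and rectangular zones), not the paper's actual system, which combines Bayesian-network classification with richer scenario rules.

```python
# Minimal rule-based interaction check: a target "interacts" with a
# stationary zone if its tracked position lies inside the zone for at
# least min_frames consecutive frames.

def inside(pos, zone):
    (x, y), (x0, y0, x1, y1) = pos, zone
    return x0 <= x <= x1 and y0 <= y <= y1

def detect_interaction(track, zone, min_frames=3):
    """Return True once the track dwells in the zone for min_frames in a row."""
    run = 0
    for pos in track:
        run = run + 1 if inside(pos, zone) else 0
        if run >= min_frames:
            return True
    return False

# Hypothetical vehicle-checkpost scenario: the zone is the checkpost's
# region in the camera's FoV, the track is per-frame vehicle positions.
checkpost = (10, 10, 20, 20)
vehicle_track = [(5, 5), (12, 12), (14, 15), (16, 18), (25, 25)]
passing_track = [(5, 5), (12, 12), (25, 25)]
```

Requiring a consecutive run of in-zone frames is one simple way to absorb single-frame tracking glitches, echoing the paper's point about handling uncertainty from visual-processing errors.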