
    Towards autonomous diagnostic systems with medical imaging

    Get PDF
    Democratizing access to high-quality healthcare has highlighted the need for autonomous diagnostic systems that a non-expert can use. Remote communities, first responders and even deep space explorers will come to rely on medical imaging systems that provide them with point-of-care diagnostic capabilities. This thesis introduces the building blocks that would enable the creation of such a system. First, we present a case study to further motivate the need for, and requirements of, autonomous diagnostic systems. This case study primarily concerns deep space exploration, where astronauts can neither rely on communication with Earth-bound doctors to guide them through a diagnosis nor make the trip back to Earth for treatment. Requirements and possible solutions for the major challenges of such an application are discussed. Moreover, this work describes how a system can explore its perceived environment by developing a multi-agent reinforcement learning method that allows for implicit communication between the agents. Under this regime, agents can share knowledge that benefits them all in achieving their individual tasks. Furthermore, we explore how systems can understand, in a probabilistic way, the 3D properties of objects depicted in 2D. In Part II, this work explores how to reason about the extracted information in a causally enabled manner. A critical view of the applications of causality in medical imaging and its potential uses is provided. The discussion is then narrowed down to estimating possible future outcomes and reasoning about counterfactual outcomes by embedding data on a pseudo-Riemannian manifold and constraining the latent space using the relativistic concept of light cones. By formalizing an approach to estimating counterfactuals, a computationally lighter alternative to the abduction-action-prediction paradigm is presented through the introduction of Deep Twin Networks. Appropriate partial identifiability constraints for categorical variables are derived, and the method is applied to a series of medical tasks involving structured data, images and videos. All methods are evaluated on a wide array of synthetic and real-life tasks that showcase their abilities, often achieving state-of-the-art performance or matching the existing best performance while requiring a fraction of the computational cost.
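
    The twin-network idea named above can be sketched as follows. This is a minimal illustration under assumed settings (a scalar outcome, a binary treatment, arbitrary layer sizes), not the thesis's implementation: it only shows how factual and counterfactual branches can share exogenous noise so counterfactuals are read off in one pass rather than via an explicit abduction-action-prediction loop.

```python
# Minimal sketch (illustrative assumptions, not the thesis code) of the twin-network
# idea behind Deep Twin Networks: the factual and counterfactual branches share the
# same exogenous noise U, so a single forward pass yields both outcomes.
import torch
import torch.nn as nn

class DeepTwinNetwork(nn.Module):
    def __init__(self, x_dim, noise_dim=8, hidden=64):
        super().__init__()
        # Encoder for the exogenous noise U shared by both branches.
        self.noise_enc = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU(),
                                       nn.Linear(hidden, noise_dim))
        # Outcome head: takes covariates, a treatment indicator, and the shared noise.
        self.outcome = nn.Sequential(nn.Linear(x_dim + 1 + noise_dim, hidden),
                                     nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, x, t_factual, t_counterfactual):
        u = self.noise_enc(x)                                    # shared exogenous noise
        yf = self.outcome(torch.cat([x, t_factual, u], dim=-1))  # factual branch
        ycf = self.outcome(torch.cat([x, t_counterfactual, u], dim=-1))  # counterfactual branch
        return yf, ycf

# Training would fit yf to observed outcomes (e.g. MSE or cross-entropy), possibly with
# the partial-identifiability constraints the thesis derives; ycf is then read off directly.
model = DeepTwinNetwork(x_dim=10)
x = torch.randn(32, 10)
t, t_prime = torch.ones(32, 1), torch.zeros(32, 1)
y_factual, y_counterfactual = model(x, t, t_prime)
```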

    The Scientific Impossibility of Plausibility

    Get PDF
    This interdisciplinary Article employs a scientific approach to euthanize any suggestion that plausibility pleading is empirically supportable. In the Twombly and Iqbal decisions in 2007 and 2009, the Supreme Court replaced the liberal notice pleading standard of Conley v. Gibson with a heightened requirement that pleadings must be plausible to survive a motion to dismiss. Unlike previous scholarship, I address plausibility in light of a broader defect plaguing all legal theory: courts are not required to defend their hypotheses or legal theories in the same empirical manner as scientists. For example, lower courts and practitioners alike are forced to assume and accept the existence of the plausibility standard simply because it was conjured by the Supreme Court. Admittedly, a scientific perspective may limit development of the law, but it ensures that judges, scholars, and legal practitioners are practicing a body of law which at least partly reflects the reality and limitations of our physical universe. This Article demonstrates that plausibility pleading is devoid of any connection to that reality. The Article begins with a brief analysis of what the language of Iqbal and Twombly claims plausibility pleading is, followed by a careful examination of the additional subtext in the decisions which explains what plausibility is not. I demonstrate that the most conspicuous and important aspect of this subtext is the significant judicial effort the Twombly Court expended to emphasize the consistency of its decision with the 2002 Swierkiewicz decision, in which a unanimous Supreme Court reaffirmed the previously existing motion to dismiss standard. Next, in accord with the Article’s unique approach, I examine the actual pleadings in the Swierkiewicz case. The analysis of those pleadings reveals the absolute falsity of the Supreme Court’s claim that Twombly is consistent with Swierkiewicz. I explain how the motion to dismiss in Swierkiewicz expressly argued for the application of the identical plausibility standard adopted in Twombly and Iqbal, and I further explain how this is the same standard the Court unanimously rejected seven years earlier in Swierkiewicz as being beyond its power to implement. Using an analogy to Bayesian mathematical theory, the Article demonstrates, despite the Supreme Court’s claim to the contrary, that the plausibility analysis is a probability analysis. I argue that this probability analysis is abhorrent to the constitutionally mandated division of labor between judge and jury in the civil system, and that it represents a radical, normative shift in established pleading standards. The Article next applies modern neuroscientific research discussing limits on human beings’ ability to empathize, and it specifically discusses the existence of a genetic predisposition to bias against phenotypically distinct individuals. I explain how this research dispels the scholarly suggestion that plausibility and its encouragement of “judicial experience and common sense” is a waypoint to a laudable, empathy-based, utopian judicial state. Additionally, the Article demonstrates that the first step in determining plausibility, the separation of law from fact, is widely acknowledged, including by the Supreme Court itself, as an impossible feat. Further, the Article reveals how markedly similar plausibility is to a constitutionally prohibited credibility analysis. Finally, the Article suggests plausibility analysis is a nonsensical amalgam of Federal Rules of Civil Procedure 8, 9(b), 11 and 12. I demonstrate that any pleading deemed not plausible pursuant to Rule 12(b)(6) also violates Rule 11. Further, I show that the pleading standard of Rule 8 is now indistinguishable from, and possibly higher than, Rule 9(b)’s heightened pleading standard.
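
    The Bayesian analogy can be made concrete with Bayes' rule; the display below is a generic illustration with event names chosen for exposition, not the Article's own formalization.

\[
\Pr(\text{liability} \mid \text{facts pleaded}) \;=\; \frac{\Pr(\text{facts pleaded} \mid \text{liability})\,\Pr(\text{liability})}{\Pr(\text{facts pleaded})}
\]

    On this reading, asking whether a complaint is plausible amounts to asking whether such a posterior probability clears an unstated threshold, which is the probability analysis the Article identifies despite the Court's claim to the contrary.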

    Latent Print Examination and Human Factors: Improving the Practice Through a Systems Approach: The Report of the Expert Working Group on Human Factors in Latent Print Analysis

    Get PDF
    Fingerprints have provided a valuable method of personal identification in forensic science and criminal investigations for more than 100 years. Fingerprints left at crime scenes generally are latent prints—unintentional reproductions of the arrangement of ridges on the skin made by the transfer of materials (such as amino acids, proteins, polypeptides, and salts) to a surface. Palms and the soles of feet also have friction ridge skin that can leave latent prints. The examination of a latent print consists of a series of steps involving a comparison of the latent print to a known (or exemplar) print. Courts have accepted latent print evidence for the past century. However, several high-profile cases in the United States and abroad have highlighted the fact that human errors can occur, and litigation and expressions of concern over the evidentiary reliability of latent print examinations and other forensic identification procedures have increased in the last decade. “Human factors” issues can arise in any experience- and judgment-based analytical process such as latent print examination. Inadequate training, extraneous knowledge about the suspects in the case or other matters, poor judgment, health problems, limitations of vision, complex technology, and stress are but a few factors that can contribute to errors. A lack of standards or quality control, poor management, insufficient resources, and substandard working conditions constitute other potentially contributing factors.

    Essays on Bayesian Analysis of Time Varying Economic Patterns

    Get PDF
    __Abstract__ Knowing the history of your topic of interest is important: it teaches what happened in the past, helps to understand the present, and allows one to look ahead to the future. Given my interest in the development of Bayesian econometrics, this thesis starts with a description of its history since the early 1960s. My aim is to quantify the increasing popularity of Bayesian econometrics by performing a data analysis that measures both publication and citation records in major journals. This gives a concrete idea of where Bayesian econometrics came from and in which journals its papers appeared. With this information, one will be able to predict some future patterns. Indeed, the analysis indicates that Bayesian econometrics has a bright future. I also look at how the topics and authors of the papers in the data set are connected to each other using the bibliometric mapping technique. This analysis gives insight into the most important topics examined in the Bayesian econometrics literature. Among these, I find that a topic like unobserved components models and time-varying patterns has shown tremendous progress. Finally, I explore some issues and debates about Bayesian econometrics. Given that the analysis of time-varying patterns has become an important topic, I explore this issue in the following two chapters. The subject of Chapter 3 is twofold. First, I give a basic exposition of the technical issues that a Bayesian econometrician faces in terms of modeling and inference when she is interested in forecasting US real GDP growth with a time-varying parameter model and simulation-based Bayesian inference. Having observed particular time-varying patterns in the level and volatility of the series, I propose a time-varying parameter model that incorporates both level shifts and stochastic volatility components. I further try to explain the GDP growth series using survey data on expectations. Using posterior and predictive analyses, I compare the forecasting performance of several models. The results of this chapter may become an input for more policy-oriented models on growth and stability. In addition to output growth stability, price stability is also an important policy objective. Both households and businesses are interested in the behavior of prices over time and follow the decisions of policymakers in order to make sound decisions. Moreover, policymakers are interested in making inflation forecasts so that they can make sound policy decisions and guide households and businesses. Therefore, inflation forecasting is important for everybody. I deal with this topic in Chapter 4. In this chapter, I explore forecasting of US inflation via the class of New Keynesian Phillips Curve (NKPC) models using original data. I propose various extended versions of the NKPC models and make a comparative study based on posterior and predictive analyses. I also show results from using models that are misspecified and from using survey inflation expectations data. The latter is done since most macroeconomic series do not contain strong data evidence on typical patterns, and using survey data may help strengthen the information in the likelihood. The results indicate that inflation forecasts are better described by the proposed class of extended NKPC models, and this information may be useful for policies such as inflation targeting. Section 1.2 summarizes the contributions of this thesis. Section 1.3 presents an outline of the thesis and summarizes each chapter.
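
    To make the model class of Chapter 3 concrete, here is a minimal simulation sketch of a time-varying parameter model with level shifts and stochastic volatility; the specific equations and parameter values are illustrative assumptions, not the thesis's specification.

```python
# Illustrative simulation of a time-varying level plus stochastic volatility for a growth series y_t:
#   y_t  = mu_t + exp(h_t / 2) * eps_t,   eps_t ~ N(0, 1)
#   mu_t = mu_{t-1} + eta_t,              eta_t ~ N(0, sigma_mu^2)   (level shifts)
#   h_t  = h_{t-1} + xi_t,                xi_t  ~ N(0, sigma_h^2)    (log-volatility)
# All parameter values below are assumptions chosen for illustration.
import numpy as np

rng = np.random.default_rng(0)
T, sigma_mu, sigma_h = 200, 0.05, 0.10

mu = np.cumsum(sigma_mu * rng.standard_normal(T))   # random-walk level
h = np.cumsum(sigma_h * rng.standard_normal(T))     # random-walk log-volatility
y = mu + np.exp(h / 2) * rng.standard_normal(T)     # observed growth series

# Bayesian inference for (mu_t, h_t) would typically proceed by simulation, e.g. Gibbs
# sampling with a forward-filter backward-sampler for mu and an auxiliary-mixture or
# particle step for h; this is what "simulation-based Bayesian inference" refers to.
print(y[:5])
```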

    Popper's Severity of Test

    Full text link

    Auditing Symposium VIII: Proceedings of the 1986 Touche Ross/University of Kansas Symposium on Auditing Problems

    Get PDF
    Discussant's response to On the economics of product differentiation in auditing / Howard R. Osharow; Unresolved issues in classical audit sample evaluations / Donald R. Nichols, Rajendra P. Srivastava, Bart H. Ward; Discussant's response to Unresolved issues in classical audit sample evaluations / Abraham D. Akresh; Under the spreading chestnut tree, accountants' legal liability -- A historical perspective / Paul J. Ostling; Impact of technological events and trends on audit evidence in the year 2000: Phase I / Gary L. Holstrum, Theodore J. Mock, Robert N. West; Discussant's Response to Impact of technological events and trends on audit evidence in the year 2000: Phase I; Is the second standard of fieldwork necessary / Thomas P. Bintinger; Discussant's response to Is the second standard of fieldwork necessary / Andrew D. Bailey; Interim report on the development of an expert system for the auditor's loan loss evaluation / Kirk P. Kelly, Gary S. Ribar, John J. Willingham; Discussant's response to Interim report on the development of an expert system for the auditor's loan loss evaluation / William F. Messier; Work of the Special Investigations Committee / R. K. (Robert Kuhn) Mautz (1915-2002); Discussant's response to Under the spreading chestnut tree, accountants' legal liability -- A historical perspective / Thomas A. Gavin; Assertion based approach to auditing / Donald A. Leslie; Discussant's response to An assertion-based approach to auditing / William L. Felix

    Learning with Low-Quality Data: Multi-View Semi-Supervised Learning with Missing Views

    Get PDF
    The focus of this thesis is on learning approaches for what we call “low-quality data” and, in particular, data in which only small amounts of labeled target data are available. The first part provides background discussion on low-quality data issues, followed by a preliminary study in this area. The remainder of the thesis focuses on a particular scenario: multi-view semi-supervised learning. Multi-view learning generally refers to the case of learning with data that has multiple natural views, or sets of features, associated with it. Multi-view semi-supervised learning methods try to exploit the combination of multiple views along with large amounts of unlabeled data in order to learn better predictive functions when limited labeled data is available. However, a lack of complete view data limits the applicability of multi-view semi-supervised learning to real-world data. Commonly, one data view is readily and cheaply available, but additional views may be costly or only available for some instances. This thesis aims to make multi-view semi-supervised learning approaches more applicable to real-world data, specifically by addressing the issue of missing views through both feature generation and active learning, and by addressing the issue of model selection for semi-supervised learning with limited labeled data. This thesis introduces a unified approach for handling missing view data in multi-view semi-supervised learning tasks, which applies both to data with completely missing additional views and to data missing views only in some instances. The idea is to learn a feature generation function mapping one view to another, with the mapping biased to encourage the generated features to be useful for multi-view semi-supervised learning algorithms. The mapping is then used to fill in views as pre-processing. Unlike previously proposed single-view approaches to multi-view learning, the proposed approach is able to take advantage of additional view data when available, and for the case of partial view presence it is the first feature-generation approach specifically designed to take into account the multi-view semi-supervised learning aspect. The next component of this thesis is the analysis of an active view completion scenario. In some tasks, it is possible to obtain missing view data for a particular instance, but at some associated cost. Recent work has shown that an active selection strategy can be more effective than a random one. In this thesis, a better understanding of active approaches is sought, and it is demonstrated that the effectiveness of an active selection strategy over a random one can depend on the relationship between the views. Finally, an important component of making multi-view semi-supervised learning applicable to real-world data is the task of model selection, an open problem which is often avoided entirely in previous work. For cases of very limited labeled training data, the commonly used cross-validation approach can become ineffective. This thesis introduces a re-training alternative to method-dependent approaches, similar in motivation to cross-validation, that involves generating new training and test data by sampling from the large amount of unlabeled data and estimated conditional probabilities for the labels. The proposed approaches are evaluated on a variety of multi-view semi-supervised learning data sets, and the experimental results demonstrate their efficacy.
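
    As an illustration of the missing-view idea, the sketch below learns a mapping from an always-available view to a sometimes-missing one and uses it to fill in the absent views before multi-view semi-supervised learning. The regressor, its settings, and the toy data are assumptions for exposition; the thesis's approach additionally biases the mapping toward features useful to the downstream multi-view learner.

```python
# Sketch (assumptions throughout, not the thesis algorithm): fill missing rows of view X2
# using a regressor fit on the instances that have both views, then hand the completed
# views to a standard two-view semi-supervised method such as co-training.
import numpy as np
from sklearn.neural_network import MLPRegressor

def complete_views(X1, X2, has_view2):
    """Fill missing rows of X2 from X1 using the instances where both views are present."""
    mapper = MLPRegressor(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
    mapper.fit(X1[has_view2], X2[has_view2])          # learn X1 -> X2 on complete cases
    X2_filled = X2.copy()
    X2_filled[~has_view2] = mapper.predict(X1[~has_view2])
    return X2_filled

# Toy data: 200 instances, view 2 missing for roughly half of them.
rng = np.random.default_rng(0)
X1 = rng.standard_normal((200, 10))
X2 = X1 @ rng.standard_normal((10, 5)) + 0.1 * rng.standard_normal((200, 5))
has_view2 = rng.random(200) < 0.5
X2_completed = complete_views(X1, X2, has_view2)      # pre-processing step before two-view SSL
```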

    Generalization Through the Lens of Learning Dynamics

    Full text link
    A machine learning (ML) system must learn not only to match the output of a target function on a training set, but also to generalize to novel situations in order to yield accurate predictions at deployment. In most practical applications, the user cannot exhaustively enumerate every possible input to the model; strong generalization performance is therefore crucial to the development of ML systems which are performant and reliable enough to be deployed in the real world. While generalization is well understood theoretically in a number of hypothesis classes, the impressive generalization performance of deep neural networks has stymied theoreticians. In deep reinforcement learning (RL), our understanding of generalization is further complicated by the conflict between generalization and stability in widely used RL algorithms. This thesis will provide insight into generalization by studying the learning dynamics of deep neural networks in both supervised and reinforcement learning tasks.
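
    As one hypothetical example of what studying learning dynamics can look like in the supervised setting (an assumption for illustration, not the thesis's actual experiments), the sketch below trains a small network on synthetic data and records per-example correctness over epochs, from which quantities such as forgetting events can be computed.

```python
# Minimal learning-dynamics probe (illustrative assumption, not the thesis's method):
# track which examples the network gets right at each epoch, then count "forgetting
# events" (transitions from correct to incorrect between consecutive epochs).
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(0)
X = torch.tensor(rng.standard_normal((500, 20)), dtype=torch.float32)
y = (X[:, 0] + 0.5 * X[:, 1] > 0).long()              # simple synthetic labels

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

history = []                                          # per-epoch correctness mask
for epoch in range(50):
    opt.zero_grad()
    logits = model(X)
    loss_fn(logits, y).backward()
    opt.step()
    history.append((logits.argmax(dim=1) == y).detach().numpy())

history = np.stack(history)                           # shape: (epochs, examples)
forgetting = np.maximum(history[:-1].astype(int) - history[1:].astype(int), 0).sum(0)
print("examples never forgotten:", int((forgetting == 0).sum()))
```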

    Mathematical Modeling of Stress Management via Decisional Control

    Get PDF
    Engaging the environment through reason, humankind evaluates information, compares it to a standard of desirability, and selects the best option available. Stress is theorized to arise from the perception of survival-related demands on an organism. Cognitive efforts are no mere intellectual exercise when ontologically backed by survival-relevant reward or punishment. This dissertation examines the stressful impact, and the countervailing peaceful impact, of environmental demands on cognitive efforts and of successful cognitive efforts on a person’s day-to-day environment, through mathematical modeling of ‘decisional control’. A modeling approach to clinical considerations is introduced in the first paper, “Clinical Mathematical Psychology”. A general exposition is made of the need for, and value of, mathematical modeling in examining psychological questions wherein complex relations between quantities are expected and observed. Subsequently, two documents are presented that outline an analytical and a computational basis, respectively, for assessing threat and its potential reduction. These two studies are followed by two empirical studies that instantiate the properties of the decisional control model and examine the relation of stress and cognition within the context of psychometric, psychophysiological, and cognition-based dependent measures. Confirming the central hypothesis, results support the validity and reliability of best-option availability Pr(t1) as an index of cumulative situational threat E(t). Strong empirical support also emerges for disproportional obstruction of control by ‘uncertainty’, a lack of both information and control, compared to less obstruction of control by ‘no-choice’, a simple lack of control. Empirical evidence suggests this effect extends beyond a reduction in control to an increase in cognitive efforts even when control is not present. This highlights an existing feature of the decisional control model, Outcome Set Size, an index of efforts at cognitive evaluation of potential encounters regardless of control availability. In addition to these findings, the precise specification of model expectancies and consequent experimental design, the refinement of research tools, and the proposal of an integrative formula linking empirical and theoretical results are unique contributions.
