Augmenting Deep Learning Performance in an Evidential Multiple Classifier System
The main objective of this work is to study the applicability of ensemble methods in the context of deep learning with limited amounts of labeled data. We exploit an ensemble of neural networks derived using Monte Carlo dropout, along with an ensemble of SVM classifiers which owes its effectiveness to the hand-crafted features used as inputs and to an active learning procedure. In order to leverage each classifier's respective strengths, we combine them in an evidential framework, which models specifically their imprecision and uncertainty. The application we consider in order to illustrate the interest of our Multiple Classifier System is pedestrian detection in high-density crowds, which is ideally suited for its difficulty, cost of labeling and intrinsic imprecision of annotation data. We show that the fusion resulting from the effective modeling of uncertainty allows for performance improvement, and at the same time, for a deeper interpretation of the result in terms of commitment of the decision.
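The Monte Carlo dropout ensemble mentioned above can be sketched in a few lines: keep dropout active at inference time, run many stochastic forward passes, and read the spread of the outputs as a model-uncertainty signal. The toy network and weights below are illustrative only, not the authors' detector.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-hidden-layer scorer with fixed random weights (hypothetical
# values, standing in for a trained pedestrian-detection head).
W1 = rng.normal(size=(4, 16))
W2 = rng.normal(size=(16, 1))

def forward(x, drop_p=0.5):
    """One stochastic forward pass; dropout stays ON at inference."""
    h = np.maximum(x @ W1, 0.0)                # ReLU hidden layer
    mask = rng.random(h.shape) > drop_p        # Bernoulli dropout mask
    h = h * mask / (1.0 - drop_p)              # inverted-dropout scaling
    return 1.0 / (1.0 + np.exp(-(h @ W2)))    # sigmoid "pedestrian" score

x = rng.normal(size=(1, 4))                    # one feature vector
scores = np.array([forward(x)[0, 0] for _ in range(100)])

mean, std = scores.mean(), scores.std()        # predictive mean + spread
print(f"score = {mean:.3f} +/- {std:.3f}")     # std acts as uncertainty
```

A large spread across passes flags inputs on which the network is uncertain, which is exactly the signal an evidential combination rule can exploit when arbitrating between classifiers.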
Evidence in Neuroimaging: Towards a Philosophy of Data Analysis
Neuroimaging technology is the most widely used tool to study human cognition. While originally a promising tool for mapping the content of cognitive theories onto the structures of the brain, recently developed tools for the analysis, handling and sharing of data have changed the theoretical landscape of cognitive neuroscience. Even with these advancements, philosophical analyses of evidence in neuroimaging remain skeptical of the promise of neuroimaging technology. These views often treat the analysis techniques used to make sense of data produced in a neuroimaging experiment as a single, monolithic step, attributing the inferential limitations of analysis pipelines to the technology as a whole. Set against neuroscientists' own critical assessment of their methods and the limitations of those methods, this skepticism appears based on a misunderstanding of the role data analysis techniques play in neuroimaging research. My project picks up here, examining how data analysis techniques, such as pattern classification analysis, are used to assess the evidential value of neuroimaging data. The project takes the form of three papers. In the first, I identify the use of multiple data analysis techniques as an important aspect of the data interpretation process that is overlooked by critics. In the second, I develop an account of inferences in neuroimaging research that is sensitive to this use of data analysis techniques, arguing that interpreting neuroimaging data is a process of isolating and explaining a variety of data patterns. In the third, I argue that the development and uptake of new techniques for analyzing data must be accompanied by changes in research practices and standards of evidence if they are to promote knowledge generation.
My approach to this work is both traditionally philosophical, insofar as it involves reading and analyzing the work of philosophers and neuroscientists, and embedded, insofar as most of the research was conducted while attending lab meetings and participating in the work of the scientists whose research is the object of my study.
Predicting the Effectiveness of Medical Interventions
This dissertation explores several conceptual and methodological features of medical science that influence our ability to accurately predict medical effectiveness. Making reliable predictions about the effectiveness of medical treatments is crucial to mitigating death and disease and improving individual and population health, yet generating such predictions is fraught with difficulties. Each chapter deals with a unique challenge to predictions of medical effectiveness.
In Chapter 1, I describe and analyze the principles underlying three prominent approaches to physical disease classification (the etiological, symptom-based, and pathophysiological models) and suggest a broadly pragmatic approach whereby appropriate classifications depend on the goal in question. In line with this, I argue that particular features of the pathophysiological model, such as its focus on disease mechanisms, make it most relevant for predicting medical effectiveness.
Chapter 2 explores the debate between those who argue that statistical evidence is sufficient for inferring medical effectiveness and those who argue that we require both statistical and mechanistic evidence. I focus on the question of how mechanistic and statistical evidence can be integrated. I highlight some of the challenges facing formal techniques, such as Bayesian networks, and use Toulmin's model of argumentation to offer a complementary model of evidence amalgamation, which allows for the systematic integration of statistical and mechanistic evidence.
In Chapter 3, I focus on p-hacking, an application of analytic techniques that may lead to exaggerated experimental results. I use philosophical tools from decision theory to illustrate how severe the effects of p-hacking can be. While it is typically considered epistemically questionable and practically harmful, I appeal to the argument from inductive risk to defend the view that there are some contexts in which p-hacking may be warranted.
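How severe p-hacking can be is easy to see with a small null simulation: if a researcher measures ten outcomes and reports whichever yields the smallest p-value, the nominal 5% false-positive rate inflates dramatically. The setup below (a two-sided z-test with known unit variance, ten hypothetical outcomes) is an illustrative sketch, not an analysis from the dissertation.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(1)

def z_pvalue(sample):
    """Two-sided p-value for H0: mean == 0, known unit variance."""
    z = sample.mean() * sqrt(len(sample))
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

n_trials, n_outcomes, n = 2000, 10, 30
false_pos_single = 0
false_pos_hacked = 0
for _ in range(n_trials):
    data = rng.normal(size=(n_outcomes, n))    # null: no true effect
    ps = [z_pvalue(d) for d in data]
    false_pos_single += ps[0] < 0.05           # pre-registered outcome
    false_pos_hacked += min(ps) < 0.05         # report the "best" outcome

rate_single = false_pos_single / n_trials
rate_hacked = false_pos_hacked / n_trials
print(f"pre-registered false-positive rate: {rate_single:.3f}")  # ~0.05
print(f"p-hacked false-positive rate:       {rate_hacked:.3f}")  # ~0.40
```

With ten independent looks, the chance of at least one spurious "significant" result under the null is roughly 1 - 0.95^10, about 40%, which is what the simulation recovers.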
Chapter 4 draws attention to a particular set of biases plaguing medical research: meta-biases. I argue that biases of this type, such as publication bias and sponsorship bias, lead to exaggerated clinical trial results. I then offer a framework, the bias dynamics model, that corrects for the influence of meta-biases on estimations of medical effectiveness.
In Chapter 5, I argue against the prominent view that AI models are not explainable by showing how four familiar accounts of scientific explanation can be applied to neural networks. The confusion about explaining AI models is due to the conflation of 'explainability', 'understandability', and 'interpretability'. To remedy this, I offer a novel account of AI-interpretability, according to which an interpretation is something one does to an explanation with the explicit aim of producing another, more understandable, explanation.
The Oppenheimer Memorial Trust
Department of History and Philosophy of Science, Cambridge University
Condition Assessment Models for Sewer Pipelines
Underground pipeline systems are complex infrastructure with significant social, environmental, and economic impact, and sewer pipeline networks are an extremely expensive asset. This study aims to develop condition assessment models for sewer pipeline networks. Seventeen factors affecting the condition of the sewer network were considered for gravity pipelines, in addition to the operating pressure for pressurized pipelines. Two methodologies were adopted for model development: the first uses an integrated Fuzzy Analytic Network Process (FANP) and Monte Carlo simulation, and the second uses FANP, fuzzy set theory (FST), and Evidential Reasoning (ER). The models' output is the assessed pipeline condition. To collect the data needed to develop the models, questionnaires were distributed among experts in sewer pipelines in the state of Qatar. In addition, actual data from an existing sewage network in Qatar was used to validate the models' outputs. The "Ground Disturbance" factor was found to be the most influential, followed by the "Location" factor, with weights of 10.6% and 9.3% for gravity pipelines and 8.8% and 8.6% for pressurized pipelines, respectively. The least influential factor was "Length", followed by "Diameter", with weights of 2.2% and 2.5% for gravity pipelines and 2.5% and 2.6% for pressurized pipelines. The developed models assessed the condition of deteriorating sewer pipelines satisfactorily, with an average validity of approximately 85% for the first approach and 86% for the second. The developed models are expected to be a useful tool for decision makers in planning inspections and providing effective rehabilitation of sewer networks.
Acknowledgments: 1) NPRP grant # (NPRP6-357-2-150) from the Qatar National Research Fund (Member of Qatar Foundation); 2) Tarek Zayed, Professor of Civil Engineering at Concordia University, for his support in the analysis; 3) the Public Works Authority of Qatar (ASHGAL) for their support in data collection.
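The Monte Carlo half of the approach can be sketched as follows: propagate expert uncertainty on each factor's grade through a weighted aggregation to get a distribution over the pipeline condition index. The factor names, weights, and triangular spread below are illustrative placeholders, not the study's FANP-derived values.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical factor weights (the study derives these via FANP; the
# numbers here only echo the reported magnitudes for illustration).
weights = {"ground_disturbance": 0.106, "location": 0.093,
           "age": 0.080, "material": 0.070}
w = np.array(list(weights.values()))
w = w / w.sum()                                # normalize to sum to 1

# Each factor graded 1 (good) .. 5 (critical); expert uncertainty is
# modeled as a triangular distribution around the assessed grade.
assessed = np.array([4.0, 3.0, 2.5, 3.5])
samples = rng.triangular(assessed - 0.5, assessed, assessed + 0.5,
                         size=(10_000, len(w)))

condition = samples @ w                        # weighted condition index
lo, hi = np.quantile(condition, [0.05, 0.95])
print(f"mean condition: {condition.mean():.2f}")
print(f"90% interval:   [{lo:.2f}, {hi:.2f}]")
```

The resulting interval, rather than a single grade, is what lets a decision maker rank pipelines by both expected condition and confidence in that estimate.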
Privacy audits and the certified public accountant.
Abstract not available.
Variational Imbalanced Regression: Fair Uncertainty Quantification via Probabilistic Smoothing
Existing regression models tend to fall short in both accuracy and uncertainty estimation when the label distribution is imbalanced. In this paper, we propose a probabilistic deep learning model, dubbed variational imbalanced regression (VIR), which not only performs well in imbalanced regression but naturally produces reasonable uncertainty estimates as a byproduct. Unlike typical variational autoencoders, which assume I.I.D. representations (a data point's representation is not directly affected by other data points), our VIR borrows data with similar regression labels to compute the latent representation's variational distribution. Furthermore, unlike deterministic regression models that produce point estimates, VIR predicts entire normal-inverse-gamma distributions and modulates the associated conjugate distributions to impose probabilistic reweighting on the imbalanced data, thereby providing better uncertainty estimation. Experiments on several real-world datasets show that VIR can outperform state-of-the-art imbalanced regression models in terms of both accuracy and uncertainty estimation. Code will soon be available at https://github.com/Wang-ML-Lab/variational-imbalanced-regression.
Comment: Accepted at NeurIPS 202
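The normal-inverse-gamma (NIG) output mentioned in the abstract is what makes uncertainty a byproduct of prediction: in evidential-regression style, the NIG parameters decompose into aleatoric (data noise) and epistemic (model) variance. The helper below is a generic sketch of that decomposition, with made-up parameter values, not VIR's actual head.

```python
# Reading uncertainty off a predicted normal-inverse-gamma (NIG)
# distribution, in the style of evidential deep regression.
def nig_uncertainty(gamma, nu, alpha, beta):
    """Return (prediction, aleatoric var, epistemic var) from NIG params."""
    assert alpha > 1 and nu > 0, "moments require alpha > 1, nu > 0"
    aleatoric = beta / (alpha - 1)          # expected observation noise
    epistemic = beta / (nu * (alpha - 1))   # variance of the mean itself
    return gamma, aleatoric, epistemic

# A head trained on a rare label region would emit small nu/alpha
# (little "virtual evidence"), inflating epistemic uncertainty:
for nu, alpha in [(10.0, 6.0), (0.5, 1.5)]:
    pred, alea, epi = nig_uncertainty(gamma=2.0, nu=nu, alpha=alpha, beta=1.0)
    print(f"nu={nu:4} alpha={alpha}: pred={pred}, "
          f"aleatoric={alea:.2f}, epistemic={epi:.2f}")
```

Intuitively, reweighting toward rare labels shifts evidence (nu, alpha) into the sparse regions, which is how a model of this kind can keep uncertainty honest under imbalance.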
Integrated Formal Analysis of Time-Triggered Ethernet
We present new results on the verification of the Time-Triggered Ethernet (TTE) clock synchronization protocol. This work extends previous verification of TTE based on model checking. We identify a suboptimal design choice in a compression function used in clock synchronization and propose an improvement. We compare the original design and the improved definition using the SAL model checker.
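To make the role of a compression function concrete, here is a toy fault-tolerant averaging step of the general kind used in clock synchronization (a Welch-Lynch-style midpoint; the actual TTE compression function differs in its details, and this sketch is not the design analyzed in the paper).

```python
# Toy fault-tolerant averaging for clock synchronization: discard the
# f largest and f smallest readings, then take the midpoint of the
# surviving extremes. This tolerates up to f arbitrarily faulty clocks.
def ft_midpoint(deviations, f=1):
    assert len(deviations) > 2 * f, "need more than 2f readings"
    trimmed = sorted(deviations)[f:len(deviations) - f]
    return (trimmed[0] + trimmed[-1]) / 2

# One faulty clock reporting a wild deviation barely moves the result:
print(ft_midpoint([-2, -1, 0, 1, 1000]))   # -> 0.0
```

Properties like "a single faulty reading cannot drag the correction outside the range of correct clocks" are exactly the kind of invariant a model checker such as SAL can verify over all admissible inputs, rather than for one example run.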
Uncertainty-aware Grounded Action Transformation towards Sim-to-Real Transfer for Traffic Signal Control
Traffic signal control (TSC) is a complex and important task that affects the daily lives of millions of people. Reinforcement learning (RL) has shown promising results in optimizing traffic signal control, but current RL-based TSC methods are mainly trained in simulation and suffer from a performance gap between simulation and the real world. In this paper, we propose a simulation-to-real-world (sim-to-real) transfer approach called UGAT, which transfers a policy learned in a simulated environment to a real-world environment by dynamically transforming actions in the simulation, using uncertainty to mitigate the domain gap in transition dynamics. We evaluate our method on a simulated traffic environment and show that it significantly improves the performance of the transferred RL policy in the real world.
Comment: 8 pages, 3 figures
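The uncertainty-gated action transformation described above can be sketched as: predict the real-world outcome of a sim action with a forward-model ensemble, map that outcome back to a sim action via an inverse model, and fall back to the untransformed action when the ensemble disagrees. All names, models, and the threshold below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Sketch of uncertainty-gated grounded action transformation.
def transform_action(action, forward_models, inverse_model, threshold=0.1):
    """Ground a sim action: predict the real-world next state, then ask
    the inverse model which sim action reproduces it; leave the action
    unchanged when the forward ensemble disagrees (high uncertainty)."""
    preds = np.array([m(action) for m in forward_models])
    if preds.std() > threshold:
        return action                   # too uncertain: skip grounding
    return inverse_model(preds.mean())  # grounded (transformed) action

# Toy dynamics: the "real world" scales actions by ~0.8 (ensemble members
# disagree slightly), while the sim's inverse dynamics is the identity.
ensemble = [lambda a, k=k: 0.8 * a + 0.01 * k for k in range(5)]
inverse = lambda s: s

print(transform_action(1.0, ensemble, inverse))
```

Gating on ensemble disagreement is the key design choice: where the learned dynamics are unreliable, the policy's original action is safer than a transformation grounded in a model the agent should not trust.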
Modeling and Analysis of Mixed Synchronous/Asynchronous Systems
Practical safety-critical distributed systems must integrate safety-critical and non-critical data on a common platform. Safety-critical systems almost always consist of isochronous components that have synchronous or asynchronous interfaces with other components, and many of these systems support a mix of the two. This report presents a study on the modeling and analysis of asynchronous, synchronous, and mixed synchronous/asynchronous systems. We build on the SAE Architecture Analysis and Design Language (AADL) to capture architectures for analysis. We present preliminary work targeted at capturing mixed low- and high-criticality data, as well as real-time properties, in a common Model of Computation (MoC). An abstract but representative test specimen system was created as the system to be modeled.