5,070 research outputs found

    Evidence-Based Detection of Pancreatic Cancer

    This study is an effort to develop a tool for early detection of pancreatic cancer using evidential reasoning. An evidential reasoning model predicts the likelihood of an individual developing pancreatic cancer by processing the output of a Support Vector Classifier (SVC) together with other input factors such as smoking history, drinking history, sequencing reads, biopsy location, and family and personal health history. Selected features of the genomic data, along with the mutated gene sequences of pancreatic cancer patients, were obtained from the National Cancer Institute (NCI) Genomic Data Commons (GDC) and used to train the SVC. A prediction accuracy of ~85% with a ROC AUC of 83.4% was achieved. Synthetic data was assembled in different combinations to evaluate the behavior of the evidential reasoning model, and variations in the belief interval of developing pancreatic cancer were observed. When the model is given inputs indicating a heavy smoking history and a family history of cancer, the belief interval shifts toward pancreatic cancer and support for the machine learning model's prediction increases. Likewise, a decrease in the quantity of genetic material and an irregularity in the cellular structure near the pancreas increase support for the machine learning classifier's prediction of pancreatic cancer. This evidence-based approach is an attempt to diagnose pancreatic cancer at a premalignant stage. Future work includes using real sequencing reads, as well as accurate habits and real medical and family histories of individuals, to increase the effectiveness of the evidential reasoning model. Next steps also involve trying different machine learning models to observe their performance on the dataset considered in this study.
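    As a rough illustration of the pipeline this abstract describes, the sketch below trains an SVC and folds its probabilistic output into a Dempster-Shafer belief interval alongside a second, risk-factor-based evidence source. It assumes scikit-learn and NumPy; the synthetic features, discount rates, and mass assignments are invented for illustration and are not the authors' actual implementation.

```python
# Hypothetical sketch: an SVC prediction feeding a Dempster-Shafer belief interval.
# Feature data and mass assignments are illustrative, not from the paper.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for GDC-derived genomic features (e.g., read counts, mutation flags).
X = rng.normal(size=(500, 6))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = SVC(probability=True).fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")
p_cancer = clf.predict_proba(X_test[:1])[0, 1]  # classifier's evidence for one case

# Frame of discernment {cancer, no_cancer}; unassigned mass goes to the
# full frame ("either"), representing ignorance.
def mass_from_prob(p, discount=0.8):
    """Discounted mass function: `discount` scales how much we trust the source."""
    return {"cancer": discount * p, "no_cancer": discount * (1 - p), "either": 1 - discount}

def combine(m1, m2):
    """Dempster's rule of combination for the two-hypothesis frame."""
    # Conflict: one source supports cancer while the other supports no_cancer.
    k = m1["cancer"] * m2["no_cancer"] + m1["no_cancer"] * m2["cancer"]
    c = (m1["cancer"] * m2["cancer"] + m1["cancer"] * m2["either"]
         + m1["either"] * m2["cancer"]) / (1 - k)
    n = (m1["no_cancer"] * m2["no_cancer"] + m1["no_cancer"] * m2["either"]
         + m1["either"] * m2["no_cancer"]) / (1 - k)
    return {"cancer": c, "no_cancer": n, "either": 1 - c - n}

m_svc = mass_from_prob(p_cancer)
m_history = mass_from_prob(0.7)  # heavy smoking + family history, as in the paper's scenario
m = combine(m_svc, m_history)
belief, plausibility = m["cancer"], m["cancer"] + m["either"]
print(f"belief interval for cancer: [{belief:.2f}, {plausibility:.2f}]")
```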

    Watermarks


    Reliability measure assignment to sonar for robust target differentiation

    This article addresses the use of evidential reasoning and majority voting in multi-sensor decision making for target differentiation using sonar sensors. Classification of target primitives, which constitute the basic building blocks of typical surfaces in uncluttered robot environments, is considered. Multiple sonar sensors placed at geographically different sensing sites make decisions about the target type based on their measurement patterns. Their decisions are combined to reach a group decision through Dempster-Shafer evidential reasoning and majority voting. The sensing nodes view the targets at different ranges and angles, so they have different degrees of reliability. Properly accounting for these differing reliabilities has the potential to improve decision making compared to a simple uniform treatment of the sensors. Consistency problems arising in majority voting are addressed with a view to achieving high classification performance. This is done by introducing a preference ordering among the possible target types and assigning reliability measures (which essentially serve as weights) to each decision-making node based on the target range and azimuth estimates it makes and the belief values it assigns to possible target types. The results bring substantial improvement over evidential reasoning and simple majority voting by reducing the target misclassification rate. (C) 2002 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.
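    A minimal sketch of the reliability-weighting idea described above: each sonar node's mass function is discounted by its reliability before Dempster combination, so less reliable nodes contribute more ignorance. The target set, reliability values, and masses below are hypothetical, not the article's data.

```python
# Illustrative sketch of reliability discounting before Dempster combination;
# the reliabilities and masses are invented, not the paper's measurements.
TARGETS = ("plane", "corner", "edge")  # typical sonar target primitives

def discount(mass, reliability):
    """Shafer discounting: scale singleton masses by the node's reliability,
    moving the remainder to ignorance (the full frame, keyed by None)."""
    m = {t: reliability * mass.get(t, 0.0) for t in TARGETS}
    m[None] = 1.0 - sum(m.values())
    return m

def combine(m1, m2):
    """Dempster's rule over singleton hypotheses plus the full frame."""
    conflict = sum(m1[a] * m2[b] for a in TARGETS for b in TARGETS if a != b)
    fused = {t: (m1[t] * m2[t] + m1[t] * m2[None] + m1[None] * m2[t]) / (1 - conflict)
             for t in TARGETS}
    fused[None] = m1[None] * m2[None] / (1 - conflict)
    return fused

# Two sensing nodes: the nearer node (higher reliability) favors "corner".
node_a = discount({"corner": 0.7, "plane": 0.2, "edge": 0.1}, reliability=0.9)
node_b = discount({"plane": 0.5, "corner": 0.3, "edge": 0.2}, reliability=0.4)
fused = combine(node_a, node_b)
print(max(TARGETS, key=fused.get), fused)
```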

    Comparative analysis of different approaches to target differentiation and localization with sonar

    This study compares the performances of different methods for the differentiation and localization of commonly encountered features in indoor environments. Differentiation of such features is of interest for intelligent systems in a variety of applications such as system control based on acoustic signal detection and identification, map building, navigation, obstacle avoidance, and target tracking. Different representations of amplitude and time-of-flight measurement patterns experimentally acquired from a real sonar system are processed. The approaches compared in this study include the target differentiation algorithm, Dempster–Shafer evidential reasoning, different kinds of voting schemes, statistical pattern recognition techniques (k-nearest neighbor classifier, kernel estimator, parameterized density estimator, linear discriminant analysis, and the fuzzy c-means clustering algorithm), and artificial neural networks. The neural networks are trained with different input signal representations obtained using pre-processing techniques such as discrete ordinary and fractional Fourier, Hartley and wavelet transforms, and Kohonen's self-organizing feature map. The use of neural networks trained with the back-propagation algorithm, usually with fractional Fourier transform or wavelet pre-processing, results in near-perfect differentiation, around 85% correct range estimation, and around 95% correct azimuth estimation, which would be satisfactory in a wide range of applications. (C) 2002 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.
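    As a toy illustration of the best-performing configuration reported above (a back-propagation network on transform-domain features), the sketch below uses an ordinary FFT magnitude spectrum as a stand-in for the fractional Fourier and wavelet pre-processing used in the study; the synthetic data and network size are invented.

```python
# Hedged sketch: back-propagation network on Fourier-preprocessed patterns.
# Synthetic stand-in data; not the study's sonar measurements.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
# Fake amplitude / time-of-flight patterns for three target classes,
# shifted per class so they are separable but noisy.
raw = np.vstack([rng.normal(loc=s, scale=0.8, size=(100, 16)) for s in (0, 1, 2)])
labels = np.repeat([0, 1, 2], 100)

# Pre-processing: magnitude spectrum of each measurement pattern.
features = np.abs(np.fft.rfft(raw, axis=1))

mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
print(f"mean differentiation accuracy: {cross_val_score(mlp, features, labels, cv=5).mean():.2f}")
```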

    E-Synthesis: A Bayesian Framework for Causal Assessment in Pharmacosurveillance

    Background: Evidence suggesting adverse drug reactions often emerges unsystematically and unpredictably in the form of anecdotal reports, case series, and survey data. Safety trials and observational studies also provide crucial information regarding the (un)safety of drugs. Hence, integrating multiple types of pharmacovigilance evidence is key to minimising the risks of harm. Methods: In previous work, we began the development of a Bayesian framework for aggregating multiple types of evidence to assess the probability of a putative causal link between drugs and side effects. This framework arose out of a philosophical analysis of the Bradford Hill Guidelines. In this article, we expand the Bayesian framework and add “evidential modulators,” which bear on the assessment of the reliability of incoming study results. The overall framework for evidence synthesis, “E-Synthesis”, is then applied to a case study. Results: Theoretically and computationally, E-Synthesis exploits the coherence of partly or fully independent evidence converging towards the hypothesis of interest (or of conflicting evidence with respect to it) in order to update its posterior probability. Compared with other frameworks for evidence synthesis, our Bayesian model has the unique feature of grounding its inferential machinery in a consolidated theory of hypothesis confirmation (Bayesian epistemology), and of allowing data from heterogeneous sources (cell data, clinical trials, epidemiological studies) and methods (e.g., frequentist hypothesis testing, Bayesian adaptive trials, etc.) to be quantitatively integrated into the same inferential framework. Conclusions: E-Synthesis is highly flexible concerning the allowed input while relying on a consistent computational system that is philosophically and statistically grounded. Furthermore, by introducing evidential modulators, and thereby separating the different dimensions of evidence (strength, relevance, reliability), E-Synthesis allows each to be explicitly tracked when updating causal hypotheses.
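    A minimal sketch of the kind of updating E-Synthesis describes, under the simplifying assumption that each study contributes a likelihood ratio and that its evidential modulator shrinks that ratio toward uninformativeness; the numbers are invented and this is not the paper's actual model.

```python
# Hypothetical sketch of reliability-modulated Bayesian updating of a causal
# hypothesis; likelihood ratios and reliabilities below are invented.
def modulated_lr(lr, reliability):
    """Shrink a study's likelihood ratio toward 1 (uninformative) as its
    reliability decreases -- a stand-in for an 'evidential modulator'."""
    return reliability * lr + (1.0 - reliability) * 1.0

def update(prior, evidence):
    """Sequentially update P(drug causes side effect) on the odds scale."""
    odds = prior / (1.0 - prior)
    for lr, reliability in evidence:
        odds *= modulated_lr(lr, reliability)
    return odds / (1.0 + odds)

# (likelihood ratio, reliability): case series, observational study, safety trial.
evidence = [(2.0, 0.3), (3.5, 0.6), (5.0, 0.9)]
print(f"posterior P(causation): {update(0.05, evidence):.3f}")
```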