120 research outputs found
A Taxonomy of Explainable Bayesian Networks
Artificial Intelligence (AI), and in particular, the explainability thereof,
has gained phenomenal attention over the last few years. Whilst we usually do
not question the decision-making process of these systems in situations where
only the outcome is of interest, we do however pay close attention when these
systems are applied in areas where the decisions directly influence the lives
of humans. In particular, noisy and uncertain observations close to the
decision boundary result in predictions that cannot readily be explained and
may foster mistrust among end-users. This has drawn attention to AI
methods for which the outcomes can be explained. Bayesian networks are
probabilistic graphical models that can be used as a tool to manage
uncertainty. The probabilistic framework of a Bayesian network allows for
explainability in the model, reasoning and evidence. The use of these methods
is mostly ad hoc and not as well organised as explainability methods in the
wider AI research field. As such, we introduce a taxonomy of explainability in
Bayesian networks. We extend the existing categorisation of explainability in
the model, reasoning or evidence to include explanation of decisions. The
explanations obtained from the explainability methods are illustrated by means
of a simple medical diagnostic scenario. The taxonomy introduced in this paper
has the potential not only to help end-users communicate outcomes
efficiently, but also to support their understanding of how and, more
importantly, why certain predictions were made.
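The paper's medical diagnostic scenario is not reproduced in the abstract; as a minimal sketch, a two-node network (Disease → Symptom) with made-up probabilities illustrates the kind of evidence-based reasoning a Bayesian network makes explainable:

```python
# Hypothetical two-node Bayesian network: Disease -> Symptom.
# All probabilities are illustrative, not taken from the paper.
p_d = 0.01      # prior P(Disease)
p_s_d = 0.90    # P(Symptom | Disease)
p_s_nd = 0.10   # P(Symptom | no Disease)

# Evidence: the symptom is observed. Bayes' rule gives the posterior,
# and each term below is an explainable step in the reasoning.
joint_d = p_s_d * p_d            # P(Symptom, Disease)
joint_nd = p_s_nd * (1 - p_d)    # P(Symptom, no Disease)
posterior = joint_d / (joint_d + joint_nd)   # P(Disease | Symptom)
```

Even with a strongly indicative symptom, the low prior keeps the posterior modest (about 0.083 here), which is exactly the kind of outcome end-users ask to have explained.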
Estimating time-to-onset of adverse drug reactions from spontaneous reporting databases.
BACKGROUND: Analyzing time-to-onset of adverse drug reactions from treatment exposure contributes to meeting pharmacovigilance objectives, i.e. identification and prevention. Post-marketing data are available from reporting systems. Times-to-onset from such databases are right-truncated because some patients who were exposed to the drug and who will eventually develop the adverse drug reaction may do so after the time of analysis and are thus not included in the data. Awareness of the methodological developments adapted to right-truncated data is not widespread, and these methods have never been used in pharmacovigilance. We assess the use of appropriate methods, as well as the consequences of not taking right truncation into account (the naïve approach), on parametric maximum likelihood estimation of the time-to-onset distribution. METHODS: Both approaches, naïve or taking right truncation into account, were compared in a simulation study. We used twelve scenarios for the exponential distribution and twenty-four for the Weibull and log-logistic distributions. These scenarios are defined by a set of parameters: the parameters of the time-to-onset distribution, the probability of this distribution falling within an observable values interval, and the sample size. An application to lymphoma reported after anti-TNF-α treatment from the French pharmacovigilance database is presented. RESULTS: The simulation study shows that the bias and the mean squared error can in some instances be unacceptably large when right truncation is not considered, while the truncation-based estimator consistently shows better, and often satisfactory, performance; the gap between the two can be large.
For the real dataset, the estimated expected time-to-onset differs by at least 58 weeks between the two approaches, which is not negligible. This difference is obtained under the Weibull model, for which the estimated probability of the distribution falling within an observable values interval is close to 1. CONCLUSIONS: It is necessary to take right truncation into account when estimating time-to-onset of adverse drug reactions from spontaneous reporting databases.
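The bias the abstract describes can be reproduced in a small simulation. This is a sketch for the exponential case only, with a common truncation time and hypothetical parameter values (the paper's scenarios are more varied): each observed time contributes f(t; λ)/F(τ; λ) to the likelihood instead of f(t; λ) alone.

```python
# Naive vs truncation-aware MLE for an exponential time-to-onset.
# Parameters (true_lam, tau) are illustrative, not the paper's.
import math
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(42)
true_lam, tau = 0.05, 30.0            # true rate (per week), common truncation time
t = rng.exponential(1.0 / true_lam, size=20000)
obs = t[t <= tau]                     # only onsets before the analysis date are reported
n, total = len(obs), obs.sum()

lam_naive = n / total                 # standard exponential MLE, ignores right truncation

def nll(lam):
    # Negative log-likelihood of the truncated sample:
    # sum over i of -[log f(t_i; lam) - log F(tau; lam)]
    return -(n * math.log(lam) - lam * total - n * math.log1p(-math.exp(-lam * tau)))

lam_adj = minimize_scalar(nll, bounds=(1e-4, 1.0), method="bounded").x
```

The naive estimate is biased upward (short onsets are over-represented among reported cases), while the truncation-aware estimate recovers a rate close to the true value.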
Childbirth and consequent atopic disease: emerging evidence on epigenetic effects based on the hygiene and EPIIC hypotheses
Background: In most high- and middle-income countries across the world, at least 1 in 4 women give birth by cesarean
section. Rates of labour induction and augmentation are rising steeply, and in some countries up to 50 % of labouring
women and newborns are given antibiotics. Governments and international agencies are increasingly concerned about
the clinical, economic and psychosocial effects of these interventions.
Discussion: There is emerging evidence that certain intrapartum and early neonatal interventions might affect the
neonatal immune response in the longer term, and perhaps trans-generationally. Two theories lead the debate in this
area. Those aligned with the hygiene (or ‘Old Friends’) hypothesis have examined the effect of gut microbiome colonization
secondary to mode of birth and intrapartum/neonatal pharmacological interventions on immune response and epigenetic
phenomena. Those working with the EPIIC (Epigenetic Impact of Childbirth) hypothesis are concerned with the effects of
eustress and dys-stress on the epigenome, secondary to mode of birth and labour interventions.
Summary: This paper examines the current and emerging findings relating to childbirth and atopic/autoimmune
disease from the perspective of both theories, and proposes an alliance of research effort. This is likely to accelerate
the discovery of important findings arising from both approaches, and to maximize the timely understanding of the
longer-term consequences of childbirth practices.
Targeting Huntington’s disease through histone deacetylases
Huntington’s disease (HD) is a debilitating neurodegenerative condition that places significant burdens on both patients and healthcare systems. Despite extensive research, treatment options for patients with this condition remain limited. Aberrant post-translational modification (PTM) of proteins is emerging as an important element in the pathogenesis of HD. These PTMs include acetylation, phosphorylation, methylation, sumoylation and ubiquitination. Several families of proteins are involved in the regulation of these PTMs. In this review, I discuss the current evidence linking aberrant PTMs and/or aberrant regulation of the cellular machinery regulating these PTMs to HD pathogenesis. Finally, I discuss the evidence suggesting that pharmacologically targeting one of these protein families, the histone deacetylases, may be of potential therapeutic benefit in the treatment of HD.
Psycholinguistic variables matter in odor naming
People from Western societies generally find it difficult to name odors. In trying to explain this, the olfactory literature has proposed several theories that focus heavily on properties of the odor itself but rarely discuss properties of the label used to describe it. However, recent studies show that speakers of languages with dedicated smell lexicons can name odors with relative ease. Has the role of the lexicon been overlooked in the olfactory literature? Word production studies show that properties of the label, such as word frequency and semantic context, influence naming; but this field of research focuses heavily on the visual domain. The current study combines methods from both fields to investigate word production for olfaction in two experiments. In the first experiment, participants named odors whose veridical labels were either high-frequency or low-frequency words in Dutch, and we found that odors with high-frequency labels were named correctly more often. In the second experiment, edibility was used to manipulate semantic context in search of a semantic interference effect, presenting the odors in blocks of edible and inedible odor source objects to half of the participants. While no evidence was found for a semantic interference effect, an effect of word frequency was again present. Our results demonstrate that psycholinguistic variables, such as word frequency, are relevant for olfactory naming and may, in part, explain why it is difficult to name odors in certain languages. Olfactory researchers cannot afford to ignore properties of an odor's label.
Reducing False Alarm Rates in Neonatal Intensive Care: A New Machine Learning Approach
In neonatal intensive care units (NICUs), 87.5% of alarms raised by the monitoring system are false alarms, often caused by the movements of the neonates. Such false alarms are stressful not only for the neonates but also for their parents and caregivers, and may lead to longer response times in genuinely critical situations. The aim of this project was to reduce the rate of false alarms by employing machine learning algorithms (MLAs), which intelligently analyze data stemming from standard physiological monitoring in combination with cerebral oximetry data (from an in-house built device, OxyPrem).
MATERIALS & METHODS
Four popular MLAs were selected to categorize the alarms as false or real: (i) decision tree (DT), (ii) 5-nearest neighbors (5-NN), (iii) naïve Bayes (NB) and (iv) support vector machine (SVM). We acquired and processed monitoring data (median duration (SD): 54.6 (± 6.9) min) of 14 preterm infants (gestational age: 26 6/7 (± 2 5/7) weeks). A hybrid method of filter and wrapper feature selection generated the candidate subset for training these four MLAs.
RESULTS
A high specificity of >99% was achieved by all four approaches. DT showed the highest sensitivity (87%). The cerebral oximetry data improved the classification accuracy.
DISCUSSION & CONCLUSION
Despite the (as yet) small amount of training data, the four MLAs achieved excellent specificity and promising sensitivity. The current sensitivity is nevertheless insufficient since, in the NICU, it is crucial that no real alarms are missed. It will most likely improve as more subjects and data are included in the training of the MLAs, which makes pursuing this approach worthwhile.
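The evaluation setup above can be sketched with scikit-learn (an assumption; the abstract does not name a toolkit). Since the monitoring and oximetry features are not available here, synthetic stand-in features are used, with class 0 as "false alarm" and class 1 as "real alarm"; specificity and sensitivity are computed per model as in the study.

```python
# Sketch: training the four classifiers from the abstract on synthetic
# stand-in data (the real physiological/oximetry features are not public).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix

# Imbalanced synthetic data: most alarms are false (class 0).
X, y = make_classification(n_samples=600, n_features=8, n_informative=5,
                           class_sep=2.0, weights=[0.85, 0.15], random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3,
                                      stratify=y, random_state=0)

models = {"DT": DecisionTreeClassifier(random_state=0),
          "5-NN": KNeighborsClassifier(n_neighbors=5),
          "NB": GaussianNB(),
          "SVM": SVC()}

results = {}
for name, model in models.items():
    model.fit(Xtr, ytr)
    tn, fp, fn, tp = confusion_matrix(yte, model.predict(Xte),
                                      labels=[0, 1]).ravel()
    results[name] = {
        "specificity": tn / (tn + fp),  # false alarms correctly dismissed
        "sensitivity": tp / (tp + fn),  # real alarms correctly kept
    }
```

The key design point mirrored here is the asymmetry of the two metrics: in the NICU, sensitivity (no missed real alarms) matters more than specificity, which is why the study's >99% specificity but 87% sensitivity is reported as promising rather than sufficient.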