Parallel universes to improve the diagnosis of cardiac arrhythmias
We are interested in using parallel universes to learn interpretable models that can subsequently be used to automatically diagnose cardiac arrhythmias. In our study, parallel universes are heterogeneous sources, such as electrocardiograms, blood pressure measurements, and phonocardiograms, that give relevant information about the cardiac state of a patient. To learn interpretable rules, we use an inductive logic programming (ILP) method on a symbolic version of our data. Aggregating the symbolic data coming from all the sources before learning increases both the number of possible relations that can be learned and the richness of the language. We propose a two-step strategy to deal with these dimensionality problems when using ILP. First, rules are learned independently in each universe. Second, the learned rules are used to bias a new learning process on the aggregated data. The results show that this method is much more efficient than learning directly from the aggregated data. Furthermore, the good accuracy results confirm the benefits of using multiple sources when trying to improve the diagnosis of cardiac arrhythmias.
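The two-step strategy can be sketched as follows. This is only an illustrative toy, not the paper's actual ILP system: the mock learner, the predicate names, and the data are all invented, and real ILP hypothesis search is replaced by a crude intersection of predicates.

```python
# Hypothetical sketch of the two-step multisource strategy: rules are
# first learned per universe, then used to bias learning on the
# aggregated data instead of searching the full cross-universe language.

def learn_rules(examples, language):
    """Stand-in for an ILP learner: keep the predicates of the language
    that appear in every positive example (a crude generalisation)."""
    common = set.intersection(*(set(e) for e in examples)) if examples else set()
    return [p for p in language if p in common]

def two_step_strategy(universes, aggregated_examples):
    # Step 1: learn rules independently in each universe.
    local_rules = []
    for language, examples in universes:
        local_rules.extend(learn_rules(examples, language))
    # Step 2: bias the search on the aggregated data, restricting the
    # language to the rules found locally.
    return learn_rules(aggregated_examples, local_rules)

# Invented example: an ECG universe and a blood-pressure universe.
universes = [
    (["p_wave_absent", "qrs_wide"],
     [["p_wave_absent"], ["p_wave_absent", "qrs_wide"]]),
    (["low_pressure", "irregular"],
     [["low_pressure", "irregular"], ["irregular"]]),
]
aggregated = [["p_wave_absent", "irregular", "low_pressure"],
              ["p_wave_absent", "irregular"]]
```

The point of the bias is visible in step 2: only the locally learned predicates are explored on the aggregated data, so the search space no longer grows with the full combined language.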
Learning rules from multisource data for cardiac monitoring
This paper formalises the concept of learning symbolic rules from multisource data in a cardiac monitoring context. Our sources, electrocardiograms and arterial blood pressure measurements, describe cardiac behaviour from different viewpoints. To learn interpretable rules, we use an Inductive Logic Programming (ILP) method. We develop an original strategy to cope with the dimensionality issues caused by using this ILP technique on a rich multisource language. The results show that our method greatly improves the feasibility and the efficiency of the process while remaining accurate. They also confirm the benefits of using multiple sources to improve the diagnosis of cardiac arrhythmias.
Arguments using ontological and causal knowledge (JIAF 2013)
We investigate an approach to reasoning about causes through argumentation. We consider a causal model for a physical system and look for arguments about facts. Some arguments are meant to provide explanations of facts, whereas others challenge these explanations, and so on. At the root of argumentation here are causal links ({A_1, ..., A_n} causes B) and ontological links (o_1 is_a o_2). We present a system that provides a candidate explanation ({A_1, ..., A_n} explains {B_1, ..., B_m}) by resorting to an underlying causal link substantiated with appropriate ontological links. Argumentation is then at work from these various explaining links. A case study is developed: the severe storm Xynthia, which devastated part of France in 2010 with an unaccountably high number of casualties.
Model-Checking an Ecosystem Model for Decision-Aid
This work stems from the idea that timed automata models and model-checking techniques can contribute much in a decision-aid context when dealing with large, interacting qualitative models. In this paper, we focus on two key issues in the interpretation and explanation of behaviour in real-world systems: building the model and exploring it using logic patterns. We illustrate this approach in the ecological domain with the modelling and exploration of a fisheries ecosystem.
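Exploring a qualitative model with a logic pattern can be illustrated on a tiny example. This is a minimal sketch, not the paper's timed-automata tooling: the ecosystem states, the transitions, and the function names are invented, timing is omitted, and the pattern shown is plain reachability ("can the model ever reach a state satisfying the goal?") checked by breadth-first search.

```python
from collections import deque

# Invented qualitative model of a fisheries ecosystem:
# each state maps to the states it can evolve into.
transitions = {
    "healthy_stock": ["moderate_fishing", "overfishing"],
    "moderate_fishing": ["healthy_stock"],
    "overfishing": ["collapsed_stock"],
    "collapsed_stock": [],
}

def satisfies_reachability(model, start, goal):
    """Check the reachability pattern: is some state satisfying
    `goal` reachable from `start`? (breadth-first search)"""
    seen, frontier = {start}, deque([start])
    while frontier:
        state = frontier.popleft()
        if goal(state):
            return True
        for nxt in model.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False
```

In a decision-aid setting, such a query answers questions like "can overfishing lead to a collapsed stock?"; a real model checker additionally handles clocks and returns a witness trace explaining the behaviour.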
From Classification Rules to Action Recommendations
Rule induction has attracted a great deal of attention in Machine Learning and Data Mining. However, generating rules is not an end in itself, because applying them is not straightforward, especially when the number of rules is large. Ideally, the user would ultimately like to use these rules to decide which actions to take; in the literature, this notion is usually referred to as actionability. The contribution of this paper is twofold. First, we propose a survey of the main approaches developed to address actionability, a topic that has received growing attention in the past years; we present a classification of the main research in this area as well as a comparative study of the different approaches. Second, we propose a new framework to address actionability. Our goal is to lighten the burden of analysing a large set of classification rules when the user is confronted with an "unsatisfactory situation" and needs help deciding which actions to take to remedy it. The method consists of comparing the situation to a set of classification rules, using a suitable distance that allows one to suggest action recommendations requiring minimal changes to improve the situation. We propose the DAKAR algorithm for learning action recommendations and present an application to environmental protection. Our experiments show the usefulness of our contribution for action recommendation, but also raise some concerns about the impact of the redundancy of a rule set on the quality of the learned action recommendations.
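The core idea of distance-based action recommendation can be sketched in a few lines. This is an assumption-laden toy in the spirit of the approach, not the DAKAR algorithm itself: the rules, attribute names, and the simple Hamming-style distance are all invented for illustration.

```python
# Invented sketch: given an unsatisfactory situation, find the rule
# leading to an acceptable class whose conditions require the fewest
# attribute changes, and recommend exactly those changes.

def distance(situation, conditions):
    """Number of attributes that must change to satisfy the rule."""
    return sum(1 for attr, val in conditions.items()
               if situation.get(attr) != val)

def recommend(situation, rules):
    """Changes implied by the closest rule predicting a good outcome."""
    best = min(rules, key=lambda r: distance(situation, r["if"]))
    return {attr: val for attr, val in best["if"].items()
            if situation.get(attr) != val}

# Invented environment-protection rules: IF conditions THEN acceptable.
rules = [
    {"if": {"discharge": "low", "treatment": "on"}},
    {"if": {"discharge": "low", "buffer_zone": "wide", "treatment": "off"}},
]
situation = {"discharge": "high", "treatment": "on", "buffer_zone": "narrow"}
```

Here the first rule is one change away (`discharge` must become `low`), so it is preferred over the second, which would require three changes; this also hints at the redundancy concern raised above, since near-duplicate rules can pull the minimisation toward the same recommendation.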
Self-adaptive web intrusion detection system
The evolution of web server contents and the emergence of new kinds of intrusions make it necessary to adapt intrusion detection systems (IDS). Nowadays, adapting an IDS requires manual actions from system administrators, which are tedious and slow to take effect. In this paper, we present a self-adaptive intrusion detection system that relies on a set of local model-based diagnosers. The redundancy of diagnoses is exploited online by a meta-diagnoser to check the consistency of the computed partial diagnoses and to trigger the adaptation of defective diagnoser models (or signatures) in case of inconsistency. The system is applied to intrusion detection on a stream of HTTP requests. Our results show that our system 1) detects intrusion occurrences with high sensitivity and precision, and 2) accurately self-adapts its diagnoser models, thus improving its detection accuracy.
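The meta-diagnosis step can be sketched as follows. This is a deliberately simplified illustration, not the paper's system: the diagnoser names and verdicts are invented, and the consistency check is reduced to majority voting over redundant verdicts, with dissenting diagnosers flagged for adaptation.

```python
from collections import Counter

# Sketch (invented names): local diagnosers each give a verdict on the
# same HTTP request; the meta-diagnoser detects inconsistency among the
# redundant partial diagnoses and flags the dissenting models to adapt.

def meta_diagnose(partial_diagnoses):
    """partial_diagnoses: {diagnoser_name: 'intrusion' | 'normal'}.
    Returns (consensus verdict, diagnosers flagged for adaptation)."""
    counts = Counter(partial_diagnoses.values())
    consensus = counts.most_common(1)[0][0]
    defective = sorted(name for name, verdict in partial_diagnoses.items()
                       if verdict != consensus)
    return consensus, defective
```

A real meta-diagnoser reasons over model-based partial diagnoses rather than bare labels, but the loop is the same: redundancy exposes inconsistency, and inconsistency triggers adaptation of the minority models.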
Exploiting independence in a decentralised and incremental approach of diagnosis
It is now well known that the size of the model is the bottleneck when using model-based approaches to diagnose complex systems. To address this problem, decentralised/distributed approaches have been proposed: the global system model is described through its component models as a set of automata, and the global diagnosis is computed from the component diagnoses (also called local diagnoses). Another problem, far less considered, is the size of the diagnosis itself. It can also be very large, especially when dealing with uncertain observations. This is why we recently proposed to slice the observation flow into temporal windows and to compute the diagnosis incrementally from these diagnosis slices. In this context, we define in this paper two independence properties (transition independence and state independence) and show their relevance for obtaining a tractable representation of the diagnosis. To illustrate the impact on the diagnosis size, experimental results on a toy example are given.
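The slicing idea can be illustrated with a toy incremental computation. This is only a sketch under strong assumptions, not the automata-based method of the paper: faults are reduced to invented event signatures, each temporal window is processed once and then discarded, and a fault is diagnosed when its signature has been covered across the slices.

```python
# Invented sketch of incremental diagnosis over a sliced observation
# flow: each window is consumed in turn, and only the per-fault set of
# still-unexplained events is carried between slices, so no window is
# ever revisited and the full flow is never held in memory at once.

def incremental_diagnosis(observations, window_size, fault_signatures):
    """fault_signatures: {fault: set of events implied by that fault}.
    Returns the faults whose signature is covered by the whole flow."""
    remaining = {f: set(sig) for f, sig in fault_signatures.items()}
    for i in range(0, len(observations), window_size):
        window = set(observations[i:i + window_size])
        for missing in remaining.values():
            missing -= window  # events explained by this temporal slice
    return sorted(f for f, missing in remaining.items() if not missing)

# Invented example: one fault's evidence is spread over two windows.
signatures = {"valve_stuck": {"alarm_a", "alarm_b"},
              "sensor_drift": {"alarm_c"}}
observations = ["alarm_a", "ok", "alarm_b", "ok"]
```

The carried state (`remaining`) plays the role of the diagnosis slice boundary: its size depends on the faults and signatures, not on the length of the observation flow, which is what makes the incremental scheme tractable.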