The future of artificial intelligence in intensive care: moving from predictive to actionable AI
Artificial intelligence (AI) research in the intensive care unit (ICU) mainly focuses on developing models (from linear regression to deep learning) to predict outcomes, such as mortality or sepsis [1, 2]. However, there is another important aspect of AI that is typically not framed as AI (although it may be more worthy of the name): the prediction of patient outcomes or events that would result from different actions, known as causal inference [3, 4]. This aspect of AI is crucial for decision-making in the ICU. To emphasize the importance of causal inference, we propose to refer to any data-driven model used for causal inference tasks as 'actionable AI', as opposed to 'predictive AI', and discuss how these models could provide meaningful decision support in the ICU.
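The distinction the abstract draws between predictive and actionable AI can be illustrated with a minimal sketch (not from the paper; the synthetic "severity" confounder and effect sizes are assumptions for illustration): a naive predictive comparison of treated versus untreated patients is biased by confounding, while a causal estimate that adjusts for the confounder recovers the effect an action would actually have.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# Hypothetical synthetic ICU-style data: illness severity confounds both
# the treatment decision and the outcome (a continuous risk score).
severity = rng.normal(size=n)
treat = (severity + rng.normal(size=n) > 0).astype(float)  # sicker patients treated more often
# True causal effect of treatment on risk is -1.0 (treatment helps).
risk = 2.0 * severity - 1.0 * treat + rng.normal(size=n)

# Predictive view: naive difference in mean risk between treated and untreated.
# This is biased upward because treated patients were sicker to begin with.
naive = risk[treat == 1].mean() - risk[treat == 0].mean()

# Actionable (causal) view: adjust for the confounder, here via linear
# regression; the coefficient on treatment estimates the average treatment effect.
X = np.column_stack([np.ones(n), treat, severity])
beta, *_ = np.linalg.lstsq(X, risk, rcond=None)
adjusted = beta[1]

print(f"naive difference: {naive:+.2f}")   # positive: treatment looks harmful
print(f"adjusted effect:  {adjusted:+.2f}")  # close to the true -1.0
```

The point of the sketch is that only the adjusted quantity answers the decision-relevant question "what happens if we treat this patient?", which is what the authors call actionable AI.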
Ancestral Causal Inference
Constraint-based causal discovery from limited data is a notoriously
difficult challenge due to the many borderline independence test decisions.
Several approaches to improve the reliability of the predictions by exploiting
redundancy in the independence information have been proposed recently. Though
promising, existing approaches can still be greatly improved in terms of
accuracy and scalability. We present a novel method that reduces the
combinatorial explosion of the search space by using a more coarse-grained
representation of causal information, drastically reducing computation time.
Additionally, we propose a method to score causal predictions based on their
confidence. Crucially, our implementation also allows one to easily combine
observational and interventional data and to incorporate various types of
available background knowledge. We prove soundness and asymptotic consistency
of our method and demonstrate that it can outperform the state-of-the-art on
synthetic data, achieving a speedup of several orders of magnitude. We
illustrate its practical feasibility by applying it on a challenging protein
data set.

Comment: In Proceedings of Advances in Neural Information Processing Systems 29 (NIPS 2016).
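The "borderline independence test decisions" the abstract refers to come from constraint-based discovery algorithms (such as PC), which repeatedly test conditional independence between variable pairs. A minimal sketch of one such test, the Fisher-z partial-correlation test on a synthetic chain, is below; this is the generic building block, not the paper's ACI method itself, and the threshold and data are illustrative assumptions.

```python
import numpy as np

def partial_corr(data, i, j, S):
    """Partial correlation of columns i and j given the columns in S,
    computed by inverting the correlation matrix of the involved variables."""
    idx = [i, j] + list(S)
    corr = np.corrcoef(data[:, idx], rowvar=False)
    prec = np.linalg.inv(corr)
    return -prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1])

def ci_test(data, i, j, S, thresh=2.33):
    """Fisher-z conditional independence test; returns True if the data
    are consistent with i independent of j given S at the chosen threshold."""
    n = data.shape[0]
    r = partial_corr(data, i, j, S)
    z = 0.5 * np.log((1 + r) / (1 - r))
    stat = np.sqrt(n - len(S) - 3) * abs(z)
    return stat < thresh

rng = np.random.default_rng(1)
n = 20_000
# Chain X -> Y -> Z: X and Z are marginally dependent,
# but independent once we condition on the mediator Y.
x = rng.normal(size=n)
y = x + 0.5 * rng.normal(size=n)
z = y + 0.5 * rng.normal(size=n)
data = np.column_stack([x, y, z])

print(ci_test(data, 0, 2, []))   # marginal test: X and Z are dependent
print(ci_test(data, 0, 2, [1]))  # tests X independent of Z given Y
```

With limited data, the test statistic for weak dependences lands near the threshold, which is exactly the source of the unreliable borderline decisions that ACI's confidence scoring is designed to mitigate.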
Justifying Information-Geometric Causal Inference
Information Geometric Causal Inference (IGCI) is a new approach to
distinguish between cause and effect for two variables. It is based on an
independence assumption between input distribution and causal mechanism that
can be phrased in terms of orthogonality in information space. We describe two
intuitive reinterpretations of this approach that make IGCI more accessible to
a broader audience.
Moreover, we show that the described independence is related to the
hypothesis that unsupervised learning and semi-supervised learning only work
for predicting the cause from the effect and not vice versa.

Comment: 3 Figures.
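For two variables related by a deterministic nonlinear function, the standard slope-based IGCI estimator decides the causal direction by comparing the expected log-slope in each direction. A minimal sketch under a uniform reference measure follows; the data-generating mechanism (`y = x**3`) and sample size are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def igci_slope(x, y):
    """Slope-based IGCI score C_{X->Y} with a uniform reference measure.
    Both variables are rescaled to [0, 1], then the mean log-slope of the
    empirical map from x to y is computed over x-sorted neighbours."""
    x = (x - x.min()) / (x.max() - x.min())
    y = (y - y.min()) / (y.max() - y.min())
    order = np.argsort(x)
    dx = np.diff(x[order])
    dy = np.diff(y[order])
    keep = (dx != 0) & (dy != 0)  # skip ties to avoid log(0) and division by zero
    return np.mean(np.log(np.abs(dy[keep] / dx[keep])))

rng = np.random.default_rng(2)
x = rng.uniform(size=5_000)  # cause: input distribution independent of the mechanism
y = x ** 3                   # deterministic nonlinear causal mechanism

c_xy = igci_slope(x, y)
c_yx = igci_slope(y, x)
# IGCI infers the direction with the smaller score.
print("inferred direction:", "X -> Y" if c_xy < c_yx else "Y -> X")
```

The independence assumption discussed in the abstract is what makes the two scores asymmetric: when the input distribution carries no information about the mechanism, the score in the true causal direction is systematically smaller.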