Quantum causal models, faithfulness and retrocausality
Wood and Spekkens (2015) argue that any causal model explaining the EPRB
correlations and satisfying no-signalling must also violate the assumption that
the model faithfully reproduces the statistical dependences and
independences---a so-called "fine-tuning" of the causal parameters; this
includes, in particular, retrocausal explanations of the EPRB correlations. I
consider this analysis with a view to enumerating the possible responses an
advocate of retrocausal explanations might propose. I focus on the response of
Näger (2015), who argues that the central ideas of causal explanations can
be saved if one accepts the possibility of a stable fine-tuning of the causal
parameters. I argue that, in light of this view, a violation of faithfulness
does not necessarily rule out retrocausal explanations of the EPRB
correlations, although it certainly constrains such explanations. I conclude by
considering some possible consequences of this type of response for retrocausal
explanations.
A quantum causal discovery algorithm
Finding a causal model for a set of classical variables is now a
well-established task---but what about the quantum equivalent? Even the notion
of a quantum causal model is controversial. Here, we present a causal discovery
algorithm for quantum systems. The input to the algorithm is a process matrix
describing correlations between quantum events. Its output consists of
different levels of information about the underlying causal model. Our
algorithm determines whether the process is causally ordered by grouping the
events into causally-ordered non-signaling sets. It detects whether all
relevant common causes are included in the process, in which case we label the
process Markovian, or whether some causal relations are instead mediated
through an external memory. For a Markovian process, it outputs a causal model, namely the causal
relations and the corresponding mechanisms, represented as quantum states and
channels. Our algorithm provides a first step towards more general methods for
quantum causal discovery.
Comment: 11 pages, 10 figures, revised to match published version
Who Learns Better Bayesian Network Structures: Accuracy and Speed of Structure Learning Algorithms
Three classes of algorithms to learn the structure of Bayesian networks from
data are common in the literature: constraint-based algorithms, which use
conditional independence tests to learn the dependence structure of the data;
score-based algorithms, which use goodness-of-fit scores as objective functions
to maximise; and hybrid algorithms that combine both approaches.
Constraint-based and score-based algorithms have been shown to learn the same
structures when conditional independence and goodness of fit are both assessed
using entropy and the topological ordering of the network is known (Cowell,
2001).
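To make the constraint-based approach concrete, the following is a minimal sketch (not any published implementation) of an entropy-based conditional independence decision of the kind such algorithms rely on: it computes the G statistic, 2N·MI(X;Y), for pairs of binary variables and compares it against the chi-squared critical value for one degree of freedom. The synthetic data-generating model and all names are illustrative assumptions.

```python
import math
import random

random.seed(1)

# Hypothetical toy data: X influences Y; Z is independent of both.
def sample_rows(n=1000):
    rows = []
    for _ in range(n):
        x = random.random() < 0.5
        y = random.random() < (0.8 if x else 0.2)
        z = random.random() < 0.5
        rows.append((int(x), int(y), int(z)))
    return rows

def g_test(data, i, j):
    """G statistic 2 * sum O * ln(O/E) for two binary columns (df = 1).

    Equals 2N times the empirical mutual information in nats.
    """
    n = len(data)
    joint = [[0, 0], [0, 0]]
    for row in data:
        joint[row[i]][row[j]] += 1
    g = 0.0
    for a in (0, 1):
        for b in (0, 1):
            observed = joint[a][b]
            if observed:
                expected = sum(joint[a]) * (joint[0][b] + joint[1][b]) / n
                g += 2 * observed * math.log(observed / expected)
    return g

data = sample_rows()
CRIT = 3.84  # chi-squared critical value, df = 1, alpha = 0.05

# The dependent pair (X, Y) yields a G far above CRIT, so the edge is kept;
# the independent pair (X, Z) typically falls below it, so the edge is removed.
print(g_test(data, 0, 1), g_test(data, 0, 2))
```

A constraint-based algorithm such as PC runs tests of this kind, marginally and then conditioned on growing separating sets, to prune the skeleton before orienting edges.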
In this paper, we investigate how these three classes of algorithms perform
outside the assumptions above in terms of speed and accuracy of network
reconstruction for both discrete and Gaussian Bayesian networks. We approach
this question by recognising that structure learning is defined by the
combination of a statistical criterion and an algorithm that determines how the
criterion is applied to the data. Removing the confounding effect of different
choices for the statistical criterion, we find using both simulated and
real-world complex data that constraint-based algorithms are often less
accurate than score-based algorithms, but are seldom faster (even at large
sample sizes); and that hybrid algorithms are neither faster nor more accurate
than constraint-based algorithms. This suggests that commonly held beliefs on
structure learning in the literature are strongly influenced by the choice of
particular statistical criteria rather than just by the properties of the
algorithms themselves.
Comment: 27 pages, 8 figures
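The score-based approach described above can likewise be sketched in miniature. The following is an illustrative, self-contained example (not the paper's implementation): it exhaustively scores every DAG over three binary variables with BIC and returns the highest-scoring structure. The data-generating model (edges 0→1 and 0→2) and all names are assumptions for illustration; real score-based learners use heuristic search such as hill climbing rather than enumeration.

```python
import itertools
import math
import random

random.seed(0)

# Hypothetical toy data generated from the DAG 0 -> 1, 0 -> 2.
def sample_data(n=2000):
    rows = []
    for _ in range(n):
        a = random.random() < 0.5
        b = random.random() < (0.8 if a else 0.2)
        c = random.random() < (0.7 if a else 0.3)
        rows.append((int(a), int(b), int(c)))
    return rows

def bic(node, parents, data):
    """Local BIC: max log-likelihood minus (k/2) log N for one binary node."""
    n = len(data)
    counts = {}  # parent configuration -> [count(child=0), count(child=1)]
    for row in data:
        cell = counts.setdefault(tuple(row[p] for p in parents), [0, 0])
        cell[row[node]] += 1
    ll = 0.0
    for c0, c1 in counts.values():
        tot = c0 + c1
        for c in (c0, c1):
            if c:
                ll += c * math.log(c / tot)
    k = len(counts)  # one free parameter per observed parent configuration
    return ll - 0.5 * k * math.log(n)

def score(dag, data):
    # BIC decomposes into a sum of per-node local scores.
    return sum(bic(v, ps, data) for v, ps in dag.items())

def is_acyclic(dag):
    # Kahn's algorithm: the DAG is acyclic iff all nodes can be removed.
    indeg = {v: len(ps) for v, ps in dag.items()}
    children = {v: [c for c, ps in dag.items() if v in ps] for v in dag}
    queue = [v for v, d in indeg.items() if d == 0]
    seen = 0
    while queue:
        v = queue.pop()
        seen += 1
        for c in children[v]:
            indeg[c] -= 1
            if indeg[c] == 0:
                queue.append(c)
    return seen == len(dag)

nodes = [0, 1, 2]

def parent_sets(v):
    others = [u for u in nodes if u != v]
    return [s for r in range(len(others) + 1)
            for s in itertools.combinations(others, r)]

data = sample_data()
best_score, best_dag = -math.inf, None
for ps in itertools.product(*(parent_sets(v) for v in nodes)):
    dag = dict(zip(nodes, ps))
    if is_acyclic(dag) and score(dag, data) > best_score:
        best_score, best_dag = score(dag, data), dag

print(best_dag)  # skeleton connects 0-1 and 0-2, with no 1-2 edge
```

Because BIC is score-equivalent, any member of the true structure's Markov equivalence class may be returned, but the recovered skeleton matches the generating model; swapping BIC for a different statistical criterion here is exactly the kind of change the abstract argues can drive the observed performance differences.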