Markov equivalence of marginalized local independence graphs
Symmetric independence relations are often studied using graphical
representations. Ancestral graphs or acyclic directed mixed graphs with
m-separation provide classes of symmetric graphical independence models that
are closed under marginalization. Asymmetric independence relations appear
naturally for multivariate stochastic processes, for instance in terms of local
independence. However, no class of graphs representing such asymmetric
independence relations, which is also closed under marginalization, has been
developed. We develop the theory of directed mixed graphs with μ-separation
and show that this provides a graphical independence model class which is
closed under marginalization and which generalizes previously considered
graphical representations of local independence.
For statistical applications, it is pivotal to characterize graphs that
induce the same independence relations, as such a Markov equivalence class of
graphs is the object that is ultimately identifiable from observational data.
Our main result is that for directed mixed graphs with μ-separation each
Markov equivalence class contains a maximal element which can be constructed
from the independence relations alone. Moreover, we introduce the directed
mixed equivalence graph as the maximal graph with edge markings. This graph
encodes all the information about the edges that is identifiable from the
independence relations, and furthermore it can be computed efficiently from the
maximal graph.
Comment: 49 pages (including supplementary material); updated to add examples and fix a typo
Technology for drilling sidetrack S2 of well No. 41 of the Malodushinskoye oil field
Probabilistic graphical models are currently one of the most commonly used architectures for modelling and reasoning with uncertainty. The most widely used subclass of these models is directed acyclic graphs, also known as Bayesian networks, which are used in a wide range of applications in both research and industry. Directed acyclic graphs do, however, have a major limitation: only asymmetric relationships, namely cause-and-effect relationships, can be modelled between their variables.
A class of probabilistic graphical models that tries to address this shortcoming is chain graphs, which include two types of edges, representing both symmetric and asymmetric relationships between the variables. This allows a wider range of independence models to be modelled, and depending on how the second type of edge is interpreted, we also obtain different so-called chain graph interpretations. Although chain graphs were first introduced in the late eighties, most research on probabilistic graphical models naturally started in the least complex subclasses, such as directed acyclic graphs and undirected graphs, so the field of chain graphs has remained relatively dormant. However, owing to the maturity of the research field of probabilistic graphical models and the rise of more data-driven approaches to system modelling, chain graphs have recently received renewed interest in research.
In this thesis we provide an introduction to chain graphs that incorporates the progress made in the field. More specifically, we study the three chain graph interpretations that exist in the literature in terms of their separation criteria, their possible parametrizations and the intuition behind their edges. In addition, we compare the expressivity of the interpretations in terms of representable independence models, and propose new structure-learning algorithms to learn chain graph models from data.
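The two edge types described above can be made concrete: the undirected edges of a chain graph partition its vertices into chain components, and the directed edges run only between components. A minimal sketch of computing chain components, assuming a simple edge-list representation (all names are illustrative, not from the thesis):

```python
# Illustrative only: a chain graph's undirected part as an edge list.
from collections import defaultdict

def chain_components(vertices, undirected_edges):
    """Return the chain components of a chain graph: the connected
    components of its undirected part. Directed edges of a chain
    graph run only between (not within) these components."""
    adj = defaultdict(set)
    for a, b in undirected_edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, components = set(), []
    for v in vertices:
        if v in seen:
            continue
        stack, comp = [v], set()
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(adj[u] - comp)
        seen |= comp
        components.append(comp)
    return components

# a - b undirected; c and d are singleton components
comps = chain_components(["a", "b", "c", "d"], [("a", "b")])
```

A graph with no undirected edges has only singleton components and is an ordinary DAG, which is one way to see DAGs as the special case the thesis starts from.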
Sequences of regressions and their independences
Ordered sequences of univariate or multivariate regressions provide
statistical models for analysing data from randomized, possibly sequential
interventions, from cohort or multi-wave panel studies, but also from
cross-sectional or retrospective studies. Conditional independences are
captured by what we name regression graphs, provided the generated distribution
shares some properties with a joint Gaussian distribution. Regression graphs
extend purely directed, acyclic graphs by two types of undirected graph, one
type for components of joint responses and the other for components of the
context vector variable. We review the special features and the history of
regression graphs, derive criteria to read all implied independences of a
regression graph, and prove criteria for Markov equivalence, that is, criteria to
judge whether two different graphs imply the same set of independence statements.
Knowledge of Markov equivalence provides alternative interpretations of a given
sequence of regressions, is essential for machine-learning strategies and
permits the use of the simple graphical criteria of regression graphs on graphs for
which the corresponding criteria are in general more complex. Under the known
conditions that a Markov equivalent directed acyclic graph exists for any given
regression graph, we give a polynomial-time algorithm to find one such graph.
Comment: 43 pages with 17 figures. The manuscript is to appear as an invited discussion paper in the journal TES
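The Markov-equivalence question above has a well-known answer in the purely directed special case: two DAGs are Markov equivalent if and only if they have the same skeleton and the same unshielded colliders (the Verma–Pearl criterion). A minimal sketch of that classical check, assuming DAGs given as edge lists of (parent, child) pairs; the regression-graph criteria in the paper are more general:

```python
# Illustrative only: DAGs as edge lists of (parent, child) pairs.
from collections import defaultdict

def skeleton(edges):
    """Undirected version of the graph."""
    return {frozenset(e) for e in edges}

def v_structures(edges):
    """Unshielded colliders a -> c <- b with a, b non-adjacent."""
    parents = defaultdict(set)
    for a, b in edges:
        parents[b].add(a)
    skel = skeleton(edges)
    return {
        (a, c, b)
        for c, ps in parents.items()
        for a in ps
        for b in ps
        if a < b and frozenset((a, b)) not in skel
    }

def markov_equivalent(edges1, edges2):
    """Verma-Pearl criterion: same skeleton, same v-structures."""
    return (skeleton(edges1) == skeleton(edges2)
            and v_structures(edges1) == v_structures(edges2))
```

For example, a -> b and b -> a are equivalent (both imply no independences), while the collider a -> c <- b is not equivalent to any orientation without that v-structure.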
Standard imsets for undirected and chain graphical models
We derive standard imsets for undirected graphical models and chain graphical
models. Standard imsets for undirected graphical models are described in terms
of minimal triangulations for maximal prime subgraphs of the undirected graphs.
For describing standard imsets for chain graphical models, we first define a
triangulation of a chain graph. We then use the triangulation to generalize our
results for the undirected graphs to chain graphs.
Comment: Published at http://dx.doi.org/10.3150/14-BEJ611 in the Bernoulli (http://isi.cbs.nl/bernoulli/) by the International Statistical Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm)
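For comparison, the standard imset of a DAG has a well-known closed form (Studený): u_G = δ_V − δ_∅ − Σ_i (δ_{pa(i)∪{i}} − δ_{pa(i)}); the paper derives analogous descriptions for undirected and chain graphs via triangulations. A minimal sketch of the DAG formula, representing an imset as a map from variable subsets to integer coefficients (representation and names are illustrative):

```python
# Illustrative only: an imset as a map from frozensets to integers.
from collections import defaultdict

def standard_imset(vertices, parents):
    """Standard imset of a DAG (Studeny's formula):
    u_G = d_V - d_{} - sum_i (d_{pa(i) union {i}} - d_{pa(i)}),
    where d_S is the indicator of the subset S."""
    u = defaultdict(int)
    u[frozenset(vertices)] += 1
    u[frozenset()] -= 1
    for i in vertices:
        pa = frozenset(parents.get(i, ()))
        u[pa | {i}] -= 1
        u[pa] += 1
    return {s: c for s, c in u.items() if c != 0}
```

For the complete DAG a -> b the coefficients cancel to the zero imset (the model imposes no independences), while the empty DAG on {a, b} yields the imset encoding a ⊥ b.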
Concepts and a case study for a flexible class of graphical Markov models
With graphical Markov models, one can investigate complex dependences,
summarize some results of statistical analyses with graphs and use these graphs
to understand implications of well-fitting models. The models have a rich
history and form an area that has been intensively studied and developed in
recent years. We give a brief review of the main concepts and describe in more
detail a flexible subclass of models, called traceable regressions. These are
sequences of joint response regressions for which regression graphs permit one
to trace and thereby understand pathways of dependence. We use these methods to
reanalyze and interpret data from a prospective study of child development, now
known as the Mannheim Study of Children at Risk. The two related primary
features concern a child's cognitive and motor development at ages 4.5 and 8
years. Deficits in these features form a sequence of joint responses.
Several possible risks are assessed at the child's birth and when the child reached ages 3 months and 2 years.
Comment: 21 pages, 7 figures, 7 tables; invited, refereed chapter in a book
Graphs for margins of Bayesian networks
Directed acyclic graph (DAG) models, also called Bayesian networks, impose
conditional independence constraints on a multivariate probability
distribution, and are widely used in probabilistic reasoning, machine learning
and causal inference. If latent variables are included in such a model, then
the set of possible marginal distributions over the remaining (observed)
variables is generally complex, and not represented by any DAG. Larger classes
of mixed graphical models, which use multiple edge types, have been introduced
to overcome this; however, these classes do not represent all the models which
can arise as margins of DAGs. In this paper we show that this is because
ordinary mixed graphs are fundamentally insufficiently rich to capture the
variety of marginal models.
We introduce a new class of hyper-graphs, called mDAGs, and a latent
projection operation to obtain an mDAG from the margin of a DAG. We show that
each distinct marginal of a DAG model is represented by at least one mDAG, and
provide graphical results towards characterizing when two such marginal models
are the same. Finally we show that mDAGs correctly capture the marginal
structure of causally-interpreted DAGs under interventions on the observed
variables.
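For ordinary mixed graphs, the latent projection mentioned above reduces to the classical construction: keep a directed edge a -> b when a directed path from a to b passes only through latents, and add a bidirected edge between two observed vertices when a latent common cause reaches both through latents. A minimal sketch under those assumptions (the paper's mDAGs refine this output with hyperedges; names are illustrative):

```python
# Illustrative only: a DAG as vertex and edge lists, with an
# observed subset; output is an ADMG-style pair of edge sets.
from collections import defaultdict

def latent_projection(vertices, edges, observed):
    """Project a DAG with latent vertices onto `observed`."""
    children = defaultdict(set)
    for a, b in edges:
        children[a].add(b)

    def reach_through_latents(src):
        # Observed vertices reachable from src by directed paths
        # whose intermediate vertices are all latent.
        out, seen, stack = set(), set(), list(children[src])
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            if v in observed:
                out.add(v)
            else:
                stack.extend(children[v])
        return out

    directed = {(a, b) for a in observed for b in reach_through_latents(a)}
    bidirected = set()
    for l in set(vertices) - set(observed):
        obs = reach_through_latents(l)
        bidirected |= {frozenset((a, b)) for a in obs for b in obs if a != b}
    return directed, bidirected
```

Projecting the latent common cause x <- l -> y yields only the bidirected edge x <-> y, while x -> l -> y projects to the directed edge x -> y.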
Reasoning about Independence in Probabilistic Models of Relational Data
We extend the theory of d-separation to cases in which data instances are not
independent and identically distributed. We show that applying the rules of
d-separation directly to the structure of probabilistic models of relational
data inaccurately infers conditional independence. We introduce relational
d-separation, a theory for deriving conditional independence facts from
relational models. We provide a new representation, the abstract ground graph,
that enables a sound, complete, and computationally efficient method for
answering d-separation queries about relational models, and we present
empirical results that demonstrate its effectiveness.
Comment: 61 pages; substantial revisions to formalisms, theory, and related work
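For contrast with the relational case, ordinary d-separation on a DAG can be decided by the classical moralization criterion: X is d-separated from Y given Z iff X and Y are disconnected, avoiding Z, in the moral graph of the ancestral set of X ∪ Y ∪ Z. A minimal sketch of that i.i.d. baseline, assuming set-valued arguments and an edge-list DAG (the paper's abstract ground graphs answer the analogous queries for relational models):

```python
# Illustrative only: a DAG as an edge list of (parent, child) pairs;
# xs, ys, zs are disjoint sets of vertices.
from collections import defaultdict

def d_separated(edges, xs, ys, zs):
    """Moralization test for d-separation (Lauritzen et al.)."""
    parents = defaultdict(set)
    for a, b in edges:
        parents[b].add(a)

    # Ancestral set of xs | ys | zs (including those vertices).
    anc, stack = set(), list(xs | ys | zs)
    while stack:
        v = stack.pop()
        if v in anc:
            continue
        anc.add(v)
        stack.extend(parents[v])

    # Moralize: marry co-parents, then drop edge directions.
    und = defaultdict(set)
    for v in anc:
        for p in parents[v]:
            und[p].add(v)
            und[v].add(p)
        for p in parents[v]:
            for q in parents[v]:
                if p != q:
                    und[p].add(q)

    # Undirected search from xs toward ys, blocked by zs.
    seen, stack = set(zs), list(xs - zs)
    while stack:
        v = stack.pop()
        if v in seen:
            continue
        if v in ys:
            return False
        seen.add(v)
        stack.extend(und[v])
    return True
```

On the collider x -> c <- y this correctly reports x independent of y marginally but dependent given c, the behaviour that makes naive application to relational structures unsound.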