Learning and Designing Stochastic Processes from Logical Constraints
Stochastic processes offer a flexible mathematical formalism to model and
reason about systems. Most analysis tools, however, start from the premise
that models are fully specified, so that any parameters controlling the
system's dynamics must be known exactly. As this is seldom the case, many
methods have been devised over the last decade to infer (learn) such parameters
from observations of the state of the system. In this paper, we depart from
this approach by assuming that our observations are qualitative
properties encoded as the satisfaction of linear temporal logic formulae, as
opposed to quantitative observations of the state of the system. An important
feature of this approach is that it naturally unifies the system identification
and the system design problems, where the properties, instead of observations,
represent requirements to be satisfied. We develop a principled statistical
estimation procedure based on maximising the likelihood of the system's
parameters, using recent ideas from statistical machine learning. We
demonstrate the efficacy and broad applicability of our method on a range of
simple but non-trivial examples, including rumour spreading in social networks
and hybrid models of gene regulation.
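A minimal sketch of the core idea above: estimate a model parameter by maximising the likelihood of qualitative observations, i.e. whether a temporal-logic property held in each observed run. Everything concrete here is an illustrative assumption, not the paper's actual method: a toy one-parameter rumour-spreading model, a Monte Carlo estimate of the satisfaction probability, and a hypothetical property "eventually at least 10 of 20 agents have heard the rumour".

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)

def simulate_rumour(beta, n_agents=20, t_max=10.0):
    """Toy Gillespie-style rumour spread with a single spreading rate beta
    (hypothetical model, for illustration only)."""
    spreaders, t = 1, 0.0
    while t < t_max and spreaders < n_agents:
        rate = beta * spreaders * (n_agents - spreaders)
        t += rng.exponential(1.0 / rate)
        if t < t_max:
            spreaders += 1
    return spreaders

def sat_probability(beta, threshold=10, n_runs=200):
    """Monte Carlo estimate of P(property holds | beta), where the qualitative
    property is 'eventually at least `threshold` agents hear the rumour'."""
    hits = sum(simulate_rumour(beta) >= threshold for _ in range(n_runs))
    return (hits + 1) / (n_runs + 2)  # Laplace smoothing keeps the log finite

def neg_log_likelihood(beta, observations):
    # Each observation is a boolean: did the formula hold in that run?
    p = sat_probability(beta)
    return -sum(np.log(p) if sat else np.log(1.0 - p) for sat in observations)

# Qualitative observations from the (hypothetical) real system
obs = [True] * 8 + [False] * 2
res = minimize_scalar(lambda b: neg_log_likelihood(b, obs),
                      bounds=(1e-3, 1.0), method="bounded")
```

The actual paper applies more sophisticated machinery from statistical machine learning to smooth and optimise this noisy likelihood surface; this sketch only shows the shape of the estimation problem.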
Learning Adjustment Sets from Observational and Limited Experimental Data
Estimating causal effects from observational data is not always possible due
to confounding. Identifying a set of appropriate covariates (adjustment set)
and adjusting for their influence can remove confounding bias; however, such a
set is typically not identifiable from observational data alone. Experimental
data do not have confounding bias, but are typically limited in sample size and
can therefore yield imprecise estimates. Furthermore, experimental data often
include a limited set of covariates, and therefore provide limited insight into
the causal structure of the underlying system. In this work we introduce a
method that combines large observational and limited experimental data to
identify adjustment sets and improve the estimation of causal effects. The
method identifies an adjustment set (if possible) by calculating the marginal
likelihood for the experimental data given observationally-derived prior
probabilities of potential adjustment sets. In this way, the method can make
inferences that are not possible using only the conditional dependencies and
independencies in all the observational and experimental data. We show that the
method successfully identifies adjustment sets and improves causal effect
estimation in simulated data, and it can sometimes make additional inferences
when compared to state-of-the-art methods for combining experimental and
observational data.
Comment: 10 pages, 5 figures
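The scoring step described above can be sketched as Bayesian model comparison: each candidate adjustment set gets a prior from the observational analysis, and the experimental data contribute a marginal likelihood. All specifics below are illustrative assumptions rather than the paper's procedure: two hypothetical candidate sets, made-up priors, and a Gaussian linear model whose coefficients are integrated out analytically.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical candidate adjustment sets with priors assumed to come from
# observational structure learning (the sets and numbers are made up)
candidates = {"empty": 0.6, "Z1": 0.4}

# Toy experimental data: treatment T is randomized, Z1 affects outcome Y
n = 50
T = rng.integers(0, 2, n).astype(float)
Z1 = rng.normal(size=n)
Y = 2.0 * T + 1.5 * Z1 + rng.normal(size=n)

def log_marginal_likelihood(X, y, tau2=10.0, sigma2=1.0):
    """Gaussian linear model with N(0, tau2 I) prior on the coefficients;
    integrating them out gives y ~ N(0, sigma2 I + tau2 X X^T)."""
    n = len(y)
    S = sigma2 * np.eye(n) + tau2 * X @ X.T
    _, logdet = np.linalg.slogdet(S)
    return -0.5 * (n * np.log(2 * np.pi) + logdet + y @ np.linalg.solve(S, y))

designs = {
    "empty": np.column_stack([np.ones(n), T]),
    "Z1": np.column_stack([np.ones(n), T, Z1]),
}
# Posterior over adjustment sets: observational prior x experimental evidence
log_post = {k: np.log(candidates[k]) + log_marginal_likelihood(X, Y)
            for k, X in designs.items()}
best = max(log_post, key=log_post.get)
```

Because Z1 explains much of the variance in Y, its marginal likelihood advantage overwhelms the slightly lower prior, so the combined score selects the set containing Z1; with weaker experimental evidence the observational prior would dominate instead.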
bayesvl: Visually Learning the Graphical Structure of Bayesian Networks and Performing MCMC with 'Stan'
The 'bayesvl' R Package