Learning Large-Scale Bayesian Networks with the sparsebn Package
Learning graphical models from data is an important problem with wide
applications, ranging from genomics to the social sciences. Nowadays datasets
often have upwards of thousands---sometimes tens or hundreds of thousands---of
variables and far fewer samples. To meet this challenge, we have developed a
new R package called sparsebn for learning the structure of large, sparse
graphical models with a focus on Bayesian networks. While there are many
existing software packages for this task, this package focuses on the unique
setting of learning large networks from high-dimensional data, possibly with
interventions. As such, the methods provided place a premium on scalability and
consistency in a high-dimensional setting. Furthermore, in the presence of
interventions, the methods implemented here achieve the goal of learning a
causal network from data. Additionally, the sparsebn package is fully
compatible with existing software packages for network analysis.
Comment: To appear in the Journal of Statistical Software, 39 pages, 7 figures
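Any Bayesian-network structure learner, sparsebn included, must return a directed acyclic graph. As a minimal illustration of that invariant (a Python sketch, not part of the sparsebn package, which is written in R), here is a sparse edge-list representation with an acyclicity check via Kahn's algorithm:

```python
from collections import deque

def is_dag(n, edges):
    """Check acyclicity of a directed graph using Kahn's algorithm.

    n     -- number of nodes, labelled 0..n-1
    edges -- list of (parent, child) pairs (a sparse adjacency list)
    """
    children = {v: [] for v in range(n)}
    indegree = [0] * n
    for u, v in edges:
        children[u].append(v)
        indegree[v] += 1
    queue = deque(v for v in range(n) if indegree[v] == 0)
    drained = 0
    while queue:
        u = queue.popleft()
        drained += 1
        for v in children[u]:
            indegree[v] -= 1
            if indegree[v] == 0:
                queue.append(v)
    # Every node can be drained if and only if there is no directed cycle.
    return drained == n

# A sparse 4-node network: 0 -> 1 -> 3 and 0 -> 2 -> 3.
print(is_dag(4, [(0, 1), (1, 3), (0, 2), (2, 3)]))  # True
print(is_dag(3, [(0, 1), (1, 2), (2, 0)]))          # False
```

Sparse edge lists like this scale to the thousands of variables the abstract mentions, since storage grows with the number of edges rather than with the square of the number of nodes.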
Understanding Complex Systems: From Networks to Optimal Higher-Order Models
To better understand the structure and function of complex systems,
researchers often represent direct interactions between components in complex
systems with networks, assuming that indirect influence between distant
components can be modelled by paths. Such network models assume that actual
paths are memoryless. That is, the way a path continues as it passes through a
node does not depend on where it came from. Recent studies of data on actual
paths in complex systems question this assumption and instead indicate that
memory in paths does have considerable impact on central methods in network
science. A growing research community working with so-called higher-order
network models addresses this issue, seeking to take advantage of information
that conventional network representations disregard. Here we summarise the
progress in this area and outline remaining challenges calling for more
research.
Comment: 8 pages, 4 figures
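The memory effect described above can be made concrete by tabulating transitions of different orders from observed paths. The following sketch (a toy illustration under assumed data, not a method from the paper) shows how a first-order, memoryless model loses information that a second-order model retains:

```python
from collections import Counter

def transition_counts(paths):
    """Tabulate first-order and second-order transitions from observed paths."""
    first, second = Counter(), Counter()
    for path in paths:
        for i in range(len(path) - 1):
            first[(path[i], path[i + 1])] += 1
            if i >= 1:
                second[(path[i - 1], path[i], path[i + 1])] += 1
    return first, second

# Two observed paths through node 'b' whose continuations depend on their origin:
paths = [["a", "b", "c"], ["x", "b", "y"]]
first, second = transition_counts(paths)

# First-order (memoryless) model: from 'b', both 'c' and 'y' look equally likely.
print(first[("b", "c")], first[("b", "y")])              # 1 1
# Second-order model: the continuation at 'b' is fully determined by the origin.
print(second[("a", "b", "c")], second[("a", "b", "y")])  # 1 0
```

The first-order counts would predict spurious paths such as a -> b -> y, which never occur in the data; the second-order counts rule them out. This is exactly the kind of disregarded information that higher-order network models aim to exploit.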
A Biologically Informed Hylomorphism
Although contemporary metaphysics has recently undergone a neo-Aristotelian revival wherein dispositions, or capacities, are now commonplace in empirically grounded ontologies, being routinely utilised in theories of causality and modality, a central Aristotelian concept has yet to be given serious attention – the doctrine of hylomorphism. The reason for this is clear: while the Aristotelian ontological distinction between actuality and potentiality has proven to be a fruitful conceptual framework with which to model the operation of the natural world, the distinction between form and matter has yet to similarly earn its keep. In this chapter, I offer a first step toward showing that the hylomorphic framework is up to that task. To do so, I return to the birthplace of that doctrine – the biological realm. Utilising recent advances in developmental biology, I argue that the hylomorphic framework is an empirically adequate and conceptually rich explanatory schema with which to model the nature of organisms.
Stable and unstable attractors in Boolean networks
Boolean networks at the critical point have been a matter of debate for many
years, for example regarding the scaling of the number of attractors with
system size. Recently it was found that this number scales superpolynomially
with system size, contrary to a common earlier expectation of sublinear
scaling. We point out that these results are obtained under deterministic
parallel update, where a large fraction of the attractors are in fact
artifacts of the updating scheme.
This limits the significance of these results for biological systems where
noise is omnipresent. We here take a fresh look at attractors in Boolean
networks with the original motivation of simplified models for biological
systems in mind. We test the stability of attractors with respect to infinitesimal
deviations from synchronous update and find that most attractors found under
parallel update are artifacts arising from the synchronous clocking mode. The
remaining fraction of attractors is stable against fluctuating response
delays. For this subset of stable attractors, we observe sublinear scaling of
the number of attractors with system size.
Comment: extended version, additional figures
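The distinction between genuine and clocking-induced attractors can be seen in a toy network. The sketch below (an illustrative two-node example, not one analysed in the paper) enumerates attractors under deterministic parallel update for the rules x' = NOT y, y' = NOT x:

```python
from itertools import product

def sync_step(state):
    """Parallel (synchronous) update of a 2-node network: x' = not y, y' = not x."""
    x, y = state
    return (not y, not x)

def attractor_from(state, step):
    """Iterate the update map until a state repeats; return the cycle reached."""
    trajectory = []
    while state not in trajectory:
        trajectory.append(state)
        state = step(state)
    return frozenset(trajectory[trajectory.index(state):])

# Exhaustively enumerate attractors from all four initial states.
attractors = {attractor_from(s, sync_step) for s in product([False, True], repeat=2)}
print(len(attractors))  # 3: two fixed points plus one 2-cycle

# The 2-cycle {(F,F), (T,T)} exists only because both nodes flip in lockstep.
# Under asynchronous update, flipping a single node of (F,F) lands on a fixed
# point such as (T,F), so the cycle is an artifact of the synchronous clock.
print(frozenset({(False, False), (True, True)}) in attractors)  # True
```

The two fixed points survive any update order, while the 2-cycle is destroyed by an infinitesimal desynchronisation of the two nodes, mirroring the paper's distinction between stable attractors and artifacts of parallel update.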