Probabilistic Methodology and Techniques for Artefact Conception and Development
The purpose of this paper is to present the state of the art in probabilistic methodology and techniques for artefact conception and development. It is the 8th deliverable of the BIBA (Bayesian Inspired Brain and Artefacts) project. We first present the incompleteness problem as the central difficulty that both living creatures and artefacts have to face: how can they perceive, infer, decide and act efficiently with incomplete and uncertain knowledge? We then introduce a generic probabilistic formalism called Bayesian Programming. This formalism is then used to review the main probabilistic methodology and techniques. The review is organized in three parts: first, the probabilistic models, from Bayesian networks to Kalman filters and from sensor fusion to CAD systems; second, the inference techniques; and finally, the learning, model acquisition and comparison methodologies. We conclude with the perspectives of the BIBA project as they arise from this state of the art.
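The core mechanism the abstract alludes to, answering a probabilistic "question" against a declared joint decomposition, can be illustrated in a few lines. A minimal sketch, not the deliverable's code; the state/reading model and all probabilities below are invented for illustration:

```python
# Minimal sketch, not the deliverable's code: answering the probabilistic
# "question" P(state | reading) from a Bayesian-Programming-style
# decomposition P(state, reading) = P(state) * P(reading | state).
# All probabilities below are invented for illustration.

prior = {"obstacle": 0.3, "free": 0.7}            # P(state)
likelihood = {                                     # P(reading | state)
    "obstacle": {"near": 0.8, "far": 0.2},
    "free":     {"near": 0.1, "far": 0.9},
}

def posterior(reading):
    """Return P(state | reading) by enumerating the joint and normalizing."""
    joint = {s: prior[s] * likelihood[s][reading] for s in prior}
    z = sum(joint.values())                        # marginal P(reading)
    return {s: p / z for s, p in joint.items()}

print(posterior("near"))   # {'obstacle': 0.774..., 'free': 0.225...}
```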
Deep learning systems as complex networks
Thanks to the availability of large-scale digital datasets and massive
amounts of computational power, deep learning algorithms can learn
representations of data by exploiting multiple levels of abstraction. These
machine learning methods have greatly improved the state-of-the-art in many
challenging cognitive tasks, such as visual object recognition, speech
processing, natural language understanding and automatic translation. In
particular, one class of deep learning models, known as deep belief networks,
can discover intricate statistical structure in large data sets in a completely
unsupervised fashion, by learning a generative model of the data using
Hebbian-like learning mechanisms. Although these self-organizing systems can be
conveniently formalized within the framework of statistical mechanics, their
internal functioning remains opaque, because their emergent dynamics cannot be
solved analytically. In this article we propose to study deep belief networks
using techniques commonly employed in the study of complex networks, in order
to gain some insights into the structural and functional properties of the
computational graph resulting from the learning process.
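As a rough illustration of the approach, here is a hedged sketch (using numpy and networkx) of treating one trained layer's weight matrix as a weighted bipartite graph and probing it with standard network measures; the weight matrix and threshold below are stand-ins, not the paper's data:

```python
# Hedged sketch: one layer of a deep belief network viewed as a weighted
# bipartite graph. W stands in for learned RBM weights (here random), and
# the threshold tau is an arbitrary illustrative choice.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.1, size=(100, 64))    # visible x hidden weights (stand-in)
tau = 0.15                                  # keep only strong connections

G = nx.Graph()
G.add_nodes_from((f"v{i}" for i in range(W.shape[0])), bipartite=0)
G.add_nodes_from((f"h{j}" for j in range(W.shape[1])), bipartite=1)
for i in range(W.shape[0]):
    for j in range(W.shape[1]):
        if abs(W[i, j]) > tau:
            G.add_edge(f"v{i}", f"h{j}", weight=abs(W[i, j]))

# Structural properties of the resulting computational graph.
degrees = [d for _, d in G.degree()]
print("edges:", G.number_of_edges(), "mean degree:", float(np.mean(degrees)))
```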
Surface networks
The desire to understand and exploit the structure of continuous surfaces is common to researchers in a range of disciplines. A few examples of the varied surfaces forming an integral part of modern subjects include terrain, population density, surface atmospheric pressure, physico-chemical surfaces, computer graphics, and metrological surfaces. The focus of the work here is a group of data structures called Surface Networks, which abstract 2-dimensional surfaces by storing only the most important (also called fundamental, critical or surface-specific) points and lines in the surfaces. Surface networks are intelligent and “natural” data structures because they store a surface as a framework of “surface” elements, unlike the DEM or TIN data structures. This report presents an overview of previous work and the ideas being developed by the authors of this report. The research on surface networks has fou
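To make the idea of "surface-specific" points concrete, here is an illustrative sketch of the simplest extraction step, finding local peaks and pits on a gridded surface; a full surface network would also recover passes (saddles) and the ridge/channel lines joining them, which this sketch omits:

```python
# Illustrative sketch only: the simplest step of surface-network extraction,
# finding local peaks and pits of a gridded surface by comparing each
# interior cell with its eight neighbours. Passes (saddles) and the
# ridge/channel lines joining the points are omitted here.
import numpy as np

def peaks_and_pits(z):
    """Return boolean masks of strict local maxima and minima of a 2-D array."""
    peaks = np.zeros_like(z, dtype=bool)
    pits = np.zeros_like(z, dtype=bool)
    for i in range(1, z.shape[0] - 1):
        for j in range(1, z.shape[1] - 1):
            nbrs = np.delete(z[i-1:i+2, j-1:j+2].ravel(), 4)  # 8 neighbours
            peaks[i, j] = np.all(z[i, j] > nbrs)
            pits[i, j] = np.all(z[i, j] < nbrs)
    return peaks, pits

# Toy terrain: a single Gaussian bump, so one peak and no interior pit.
x, y = np.meshgrid(np.linspace(-2, 2, 31), np.linspace(-2, 2, 31))
terrain = np.exp(-(x**2 + y**2))
pk, pt = peaks_and_pits(terrain)
print("peaks:", np.argwhere(pk), "pits:", np.argwhere(pt))
```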
Evaluating functional connectivity in alcoholics based on maximal weight matching
EEG-based applications face the challenge of multi-modal integrated analysis. In this paper, a greedy maximal weight matching approach is used to measure functional connectivity in alcoholics datasets with EEG and EOG signals. The major discovery is that the processing of repeated versus unrepeated stimuli in the γ band differs significantly more in control drinkers than in alcoholic subjects. However, the EOGs are always stable during visual tasks, except for a weak wave when subjects make an error response to the stimuli.
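A plain reading of the method is easy to sketch. Assuming edge weights given by absolute pairwise correlations between channels (an assumed proxy, not necessarily the paper's connectivity measure), a greedy maximal weight matching pairs the most strongly coupled channels first:

```python
# Sketch under stated assumptions: channels are nodes, edge weights are
# absolute pairwise correlations (an assumed stand-in for the paper's
# connectivity measure), and the greedy matching pairs the most strongly
# coupled channels first.
import numpy as np

def greedy_max_weight_matching(weights):
    """weights: symmetric (n x n) array; return matched (i, j) pairs."""
    n = weights.shape[0]
    edges = sorted(
        ((weights[i, j], i, j) for i in range(n) for j in range(i + 1, n)),
        reverse=True,                        # heaviest edges first
    )
    matched, pairs = set(), []
    for w, i, j in edges:
        if i not in matched and j not in matched:
            pairs.append((i, j))
            matched.update((i, j))
    return pairs

rng = np.random.default_rng(1)
signals = rng.standard_normal((8, 1000))     # 8 fake channels x 1000 samples
corr = np.abs(np.corrcoef(signals))
np.fill_diagonal(corr, 0.0)
print(greedy_max_weight_matching(corr))
```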
Efficient computational strategies to learn the structure of probabilistic graphical models of cumulative phenomena
Structural learning of Bayesian Networks (BNs) is an NP-hard problem, which is
further complicated by many theoretical issues, such as the I-equivalence among
different structures. In this work, we focus on a specific subclass of BNs,
named Suppes-Bayes Causal Networks (SBCNs), which include specific structural
constraints based on Suppes' probabilistic causation to efficiently model
cumulative phenomena. Here we compare the performance, via extensive
simulations, of various state-of-the-art search strategies, such as local
search techniques and Genetic Algorithms, as well as of distinct regularization
methods. The assessment is performed on a large number of simulated datasets
from topologies with distinct levels of complexity, various sample sizes and
different rates of errors in the data. Among the main results, we show that the
introduction of Suppes' constraints dramatically improves the inference
accuracy, by reducing the solution space and providing a temporal ordering on
the variables. We also report on trade-offs among different search techniques
that can be efficiently employed in distinct experimental settings. This
manuscript is an extended version of the paper "Structural Learning of
Probabilistic Graphical Models of Cumulative Phenomena" presented at the 2018
International Conference on Computational Science.
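The Suppes constraints the abstract refers to are simple to state and check: temporal priority (for cumulative phenomena, a cause should be observed at least as often as its effect) and probability raising. A hedged sketch on toy binary data, not the paper's implementation:

```python
# Hedged sketch of the Suppes constraints used to prune SBCN structures:
# temporal priority, P(c) > P(e) (a common proxy for cumulative phenomena,
# where causes accumulate before their effects), and probability raising,
# P(e | c) > P(e | not c). Data and thresholds are illustrative only.
import numpy as np

def suppes_edge_allowed(data, c, e):
    """data: binary (samples x variables) array; test the conditions for c -> e."""
    p_c, p_e = data[:, c].mean(), data[:, e].mean()
    mask = data[:, c] == 1
    raising = data[mask, e].mean() > data[~mask, e].mean()
    return p_c > p_e and raising

# Toy cumulative data: event 1 can only occur after event 0 has occurred.
rng = np.random.default_rng(2)
x0 = rng.random(500) < 0.6
x1 = x0 & (rng.random(500) < 0.5)
data = np.column_stack([x0, x1]).astype(int)
print(suppes_edge_allowed(data, 0, 1))   # True: 0 -> 1 is admissible
print(suppes_edge_allowed(data, 1, 0))   # False: violates temporal priority
```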
Measuring edge importance: a quantitative analysis of the stochastic shielding approximation for random processes on graphs
Mathematical models of cellular physiological mechanisms often involve random
walks on graphs representing transitions within networks of functional states.
Schmandt and Galán recently introduced a novel stochastic shielding
approximation as a fast, accurate method for generating approximate sample
paths from a finite state Markov process in which only a subset of states are
observable. For example, in ion channel models, such as the Hodgkin-Huxley or
other conductance based neural models, a nerve cell has a population of ion
channels whose states comprise the nodes of a graph, only some of which allow a
transmembrane current to pass. The stochastic shielding approximation consists
of neglecting fluctuations in the dynamics associated with edges in the graph
not directly affecting the observable states. We consider the problem of
finding the optimal complexity reducing mapping from a stochastic process on a
graph to an approximate process on a smaller sample space, as determined by the
choice of a particular linear measurement functional on the graph. The
partitioning of ion channel states into conducting versus nonconducting states
provides a case in point. In addition to establishing that Schmandt and
Galán's approximation is in fact optimal in a specific sense, we use recent
results from random matrix theory to provide heuristic error estimates for the
accuracy of the stochastic shielding approximation for an ensemble of random
graphs. Moreover, we provide a novel quantitative measure of the contribution
of individual transitions within the reaction graph to the accuracy of the
approximate process.
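The shielding idea itself is easy to mimic in a toy model. Below is a rough sketch, not the authors' implementation: a three-state channel simulated with a Langevin-style approximation in which shielding drops the noise term on the edge joining the two non-conducting (unobservable) states; the rates, step size and noise form are all simplifying assumptions:

```python
# Rough sketch, not the authors' implementation: a three-state channel
# (states 0 and 1 non-conducting, state 2 conducting) simulated with a
# Langevin-style approximation carrying one noise term per directed edge.
# "Shielding" drops the noise on the 0<->1 edge, which does not border the
# observable conducting state. Rates, step size and noise form are
# simplifying assumptions.
import numpy as np

rates = {(0, 1): 2.0, (1, 0): 1.0, (1, 2): 1.5, (2, 1): 0.5}
N, dt, steps = 1000, 1e-3, 5000           # channel count, time step, iterations

def simulate(shielded, seed=0):
    rng = np.random.default_rng(seed)
    x = np.array([1.0, 0.0, 0.0])         # occupancy fractions per state
    trace = []
    for _ in range(steps):
        dx = np.zeros(3)
        for (i, j), r in rates.items():
            flux = r * x[i] * dt           # deterministic probability flow
            noise = 0.0
            if not (shielded and {i, j} == {0, 1}):
                noise = np.sqrt(flux / N) * rng.standard_normal()
            dx[i] -= flux + noise
            dx[j] += flux + noise
        x = np.clip(x + dx, 0.0, 1.0)
        trace.append(x[2])                 # observable: conducting fraction
    return np.array(trace)

full, approx = simulate(False), simulate(True)
print("mean |difference| in the observable:", np.abs(full - approx).mean())
```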
- …