Inverse Ising inference with correlated samples
Correlations between two variables of a high-dimensional system can be
indicative of an underlying interaction, but can also result from indirect
effects. Inverse Ising inference is a method to distinguish one from the other.
Essentially, the parameters of the least constrained statistical model are
learned from the observed correlations such that direct interactions can be
separated from indirect correlations. Among many other applications, this
approach has been helpful for protein structure prediction, because residues
which interact in the 3D structure often show correlated substitutions in a
multiple sequence alignment. In this context, samples used for inference are
not independent but share an evolutionary history on a phylogenetic tree. Here,
we discuss the effects of correlations between samples on global inference.
Such correlations could arise due to phylogeny but also via other slow
dynamical processes. We present a simple analytical model to address the
resulting inference biases, and develop an exact method accounting for
background correlations in alignment data by combining phylogenetic modeling
with an adaptive cluster expansion algorithm. We find that popular reweighting
schemes are only marginally effective at removing phylogenetic bias, suggest a
rescaling strategy that yields better results, and provide evidence that our
conclusions carry over to the frequently used mean-field approach to the
inverse Ising problem.

Comment: 18 pages, 6 figures; accepted at New J Phys
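The mean-field route mentioned at the end of the abstract can be sketched in a few lines: reweight the samples, then infer couplings by inverting the connected correlation matrix. This is a minimal illustration on synthetic data; the ±1 samples, the 0.8 similarity threshold, and all sizes are made-up stand-ins, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
# Made-up stand-in data: 2000 binary (+/-1) samples over 5 spins.
samples = rng.choice([-1.0, 1.0], size=(2000, 5))

# Standard sequence reweighting (the scheme the abstract finds only
# marginally effective): weight each sample by 1 / #{similar samples}.
similar = (samples @ samples.T) / samples.shape[1] > 0.8
w = 1.0 / similar.sum(axis=1)
w /= w.sum()

# Weighted connected correlations C_ij = <s_i s_j> - <s_i><s_j>.
means = w @ samples
C = (samples * w[:, None]).T @ samples - np.outer(means, means)

# Naive mean-field inversion: couplings J = -(C^{-1}), off-diagonal part.
J_mf = -np.linalg.inv(C)
np.fill_diagonal(J_mf, 0.0)
```

The abstract's point is that when samples share an evolutionary history, the correlations entering C are biased, and this simple reweighting only partially removes that bias.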
Complex Networks from Classical to Quantum
Recent progress in applying complex network theory to problems in quantum
information has resulted in a beneficial crossover. Complex network methods
have successfully been applied to transport and entanglement models while
information physics is setting the stage for a theory of complex systems with
quantum information-inspired methods. Novel quantum induced effects have been
predicted in random graphs---where edges represent entangled links---and
quantum computer algorithms have been proposed to offer enhancement for several
network problems. Here we review the results at the cutting edge, pinpointing
the similarities and the differences found at the intersection of these two
fields.

Comment: 12 pages, 4 figures, REVTeX 4-1, accepted version
Sparse Randomized Shortest Paths Routing with Tsallis Divergence Regularization
This work elaborates on the important problem of (1) designing optimal
randomized routing policies for reaching a target node t from a source node s
on a weighted directed graph G and (2) defining distance measures between nodes
interpolating between the least cost (based on optimal movements) and the
commute-cost (based on a random walk on G), depending on a temperature
parameter T. To this end, the randomized shortest path formalism (RSP,
[2,99,124]) is rephrased in terms of Tsallis divergence regularization, instead
of Kullback-Leibler divergence. The main consequence of this change is that the
resulting routing policy (local transition probabilities) becomes sparser when
T decreases, therefore inducing a sparse random walk on G converging to the
least-cost directed acyclic graph when T tends to 0. Experimental comparisons
on node clustering and semi-supervised classification tasks show that the
derived dissimilarity measures based on expected routing costs provide
state-of-the-art results. The sparse RSP is therefore a promising model of
movements on a graph, balancing sparse exploitation and exploration in an
optimal way.
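The switch from Kullback-Leibler to Tsallis regularization is what makes the policy sparse: in the quadratic (q = 2) Tsallis case, the regularized local transition probabilities take a sparsemax form, a Euclidean projection onto the probability simplex. The sketch below shows only this local-policy behavior on made-up edge costs, not the full RSP machinery from the paper.

```python
import numpy as np

def sparsemax(z):
    """Tsallis-entropy (q = 2) analogue of softmax: the Euclidean
    projection of z onto the probability simplex."""
    z_sorted = np.sort(z)[::-1]
    k = np.arange(1, len(z) + 1)
    cssv = np.cumsum(z_sorted)
    support = z_sorted + (1.0 - cssv) / k > 0   # coordinates kept nonzero
    k_max = k[support][-1]
    tau = (cssv[k_max - 1] - 1.0) / k_max       # threshold for the projection
    return np.maximum(z - tau, 0.0)

costs = np.array([1.0, 2.0, 3.0])   # hypothetical edge costs out of one node
p_warm = sparsemax(-costs / 5.0)    # high T: mass spread over several edges
p_cold = sparsemax(-costs / 0.01)   # low T: mass collapses on the cheapest edge
```

Unlike the softmax policy of KL-regularized RSP, which keeps every edge probability strictly positive, sparsemax zeroes out expensive edges exactly, which is the sparsity the abstract describes as T decreases.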
Color Textured Image Segmentation Based on Spatial Dependence Using 3D Co-occurrence Matrices and Markov Random Fields
Image segmentation is a primary step in many computer vision tasks. Although many segmentation methods based on either
color or texture have been proposed in the last decades, there have been only a few approaches combining both of these features.
This work presents a new image segmentation method using color texture features extracted from 3D co-occurrence matrices
combined with spatial dependence, which is modeled by a Markov random field. The 3D co-occurrence matrices provide features
that summarize statistical interactions both between pixels and between different color bands, which is not usually accomplished by
other segmentation methods. After a preliminary segmentation of the image into homogeneous regions, the ICM method is
applied only to pixels located in the boundaries between regions, providing a fine segmentation with a reduced computational
cost, since a small portion of the image is considered in the last stage. A set of synthetic and natural color images is used to
show the results obtained by applying the proposed method.
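The inter-band statistics described above can be sketched as a co-occurrence count between two quantized color bands at a spatial offset. This is a minimal illustration only; the 8-level quantization, the (1, 1) offset, and the random toy image are invented choices, not the paper's parameters.

```python
import numpy as np

def cooccurrence_3d(img, band_a, band_b, dr, dc, levels=8):
    """Count co-occurrences of the quantized value of band_a at (r, c)
    with the quantized value of band_b at (r + dr, c + dc).
    Pairing different bands captures inter-band statistics that a
    single-band (2D) co-occurrence matrix misses. Assumes dr, dc >= 0."""
    q = (img.astype(np.int64) * levels) // 256   # quantize 0..255 -> 0..levels-1
    h, w, _ = q.shape
    a = q[:h - dr, :w - dc, band_a].ravel()      # reference pixels
    b = q[dr:, dc:, band_b].ravel()              # offset pixels
    M = np.zeros((levels, levels), dtype=np.int64)
    np.add.at(M, (a, b), 1)                      # unbuffered histogram count
    return M

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(16, 16, 3), dtype=np.uint8)  # toy RGB image
M_rg = cooccurrence_3d(img, band_a=0, band_b=1, dr=1, dc=1)   # red vs. green
```

Texture features such as energy or contrast would then be computed from the normalized matrix M / M.sum(), one matrix per band pair and offset.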
Implementing Bayesian Inference with Neural Networks
Embodied agents, be they animals or robots, acquire information about the world through their senses. They do not, however, simply lose this information once it passes by, but rather process and store it for future use. The most general theory of how an agent can combine stored knowledge with new observations is Bayesian inference. In this dissertation I present a theory of how embodied agents can learn to implement Bayesian inference with neural networks.
By neural network I mean both artificial and biological neural networks, and in my dissertation I address both kinds. On one hand, I develop theory for implementing Bayesian inference in deep generative models, and I show how to train multilayer perceptrons to compute approximate predictions for Bayesian filtering. On the other hand, I show that several models in computational neuroscience are special cases of the general theory that I develop in this dissertation, and I use this theory to model and explain several phenomena in neuroscience. The key contributions of this dissertation can be summarized as follows:
- I develop a class of graphical models called nth-order harmoniums. An nth-order harmonium is an n-tuple of random variables, where the conditional distribution of each variable given all the others is always an element of the same exponential family. I show that harmoniums have a recursive structure which allows them to be analyzed at coarser and finer levels of detail.
- I define a class of harmoniums called rectified harmoniums, which are constrained to have priors which are conjugate to their posteriors. As a consequence of this, rectified harmoniums afford efficient sampling and learning.
- I develop deep harmoniums, which are harmoniums that can be represented by hierarchical, undirected graphs. I extend the theory of rectification to deep harmoniums, and develop a novel algorithm for training deep generative models.
- I show how to implement a variety of optimal and near-optimal Bayes filters by combining the solution to Bayes' rule provided by rectified harmoniums with predictions computed by a recurrent neural network. I then show how to train a neural network to implement Bayesian filtering when the transition and emission distributions are unknown.
- I show how some well-established models of neural activity are special cases of the theory I present in this dissertation, and how these models can be generalized with the theory of rectification.
- I show how the theory that I present can model several neural phenomena including proprioception and gain-field modulation of tuning curves.
- I introduce a library for the programming language Haskell, within which I have implemented all the simulations presented in this dissertation. This library uses concepts from Riemannian geometry to provide a rigorous and efficient environment for implementing complex numerical simulations.
I also use the results presented in this dissertation to argue for the fundamental role of neural computation in embodied cognition. I argue, in other words, that before we will be able to build truly intelligent robots, we will need to truly understand biological brains.
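The Bayesian filtering the dissertation implements with neural networks has a simple exact reference case when states and observations are discrete. The sketch below is the textbook predict-update recursion, not the author's harmonium-based implementation; the two-state transition and emission tables are invented for illustration.

```python
import numpy as np

# Hypothetical two-state system with made-up probability tables.
P_trans = np.array([[0.9, 0.1],   # row i: P(x_t = j | x_{t-1} = i)
                    [0.2, 0.8]])
P_emit = np.array([[0.8, 0.2],    # row i: P(y_t = k | x_t = i)
                   [0.3, 0.7]])

def filter_step(belief, y):
    """One exact predict-update step of Bayesian filtering."""
    predicted = P_trans.T @ belief           # push belief through the dynamics
    posterior = P_emit[:, y] * predicted     # Bayes' rule with observation y
    return posterior / posterior.sum()       # renormalize

belief = np.array([0.5, 0.5])                # uniform prior over the two states
for y in (0, 0, 1):                          # a short toy observation sequence
    belief = filter_step(belief, y)
```

The dissertation's contribution is to learn approximations of exactly this recursion when P_trans and P_emit are unknown and the state space is too large for the exact computation.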
Facial soft tissue segmentation
The importance of the face for socio-ecological interaction places high demands on any surgical intervention on the facial musculo-skeletal system. Bones and soft tissues are of major importance for any facial surgical treatment in order to guarantee an optimal functional and aesthetic result. For this reason, surgeons want to pre-operatively plan, simulate and predict the outcome of the surgery, allowing for shorter operation times and improved quality. Accurate simulation requires exact segmentation knowledge of the facial tissues. Thus, semi-automatic segmentation techniques are required.
This thesis proposes semi-automatic methods for segmentation of the facial soft-tissues, such as muscles, skin and fat, from CT and MRI datasets, using a Markov Random Fields (MRF) framework. Due to image noise, artifacts, weak edges and multiple objects of similar appearance in close proximity, it is difficult to segment the object of interest using image information alone. Segmentations would leak at weak edges into neighboring structures with a similar intensity profile. To overcome this problem, additional shape knowledge is incorporated into the energy function, which can then be minimized using Graph-Cuts (GC). Approaches that incrementally incorporate additional prior shape knowledge are presented. The proposed approaches are not object specific and can be applied to segment any class of objects, be they anatomical or non-anatomical, from medical or non-medical image datasets, whenever a statistical model is available.
In the first approach, a 3D mean shape template is used as the shape prior, which is integrated into the MRF based energy function. Here, the shape knowledge is encoded into the data and smoothness terms of the energy function, constraining the segmented parts to a reasonable shape.
In the second approach, to improve handling of shape variations naturally found in the population, the fixed shape template is replaced by a more robust 3D statistical shape model based on Probabilistic Principal Component Analysis (PPCA). The advantage of PPCA is that it allows the optimal shape to be reconstructed, and the remaining variance of the statistical model to be computed, from partial information. Using an iterative method, the statistical shape model is then refined with image-based cues to better fit the model to the patient's muscle anatomy. These cues are based on the segmented muscle, edge information, and the intensity likelihood of the muscle. Here, a linear shape update mechanism is used to fit the statistical model to the image-based cues.
In the third approach, the shape refinement step is further improved by using a non-linear shape update mechanism in which the vertices of the statistical model's 3D mesh incur a non-linear penalty depending on the remaining variability at each vertex. This provides a more accurate shape update and allows a finer fitting of the statistical model to the image-based cues in areas of high shape variability.
Finally, a unified approach is presented to segment the relevant facial muscles and the remaining facial soft-tissues (skin and fat). One soft-tissue layer is removed at a time: the head is first separated from non-head regions, followed by the skin. In the next step, bones are removed from the dataset, followed by the separation of brain from non-brain regions and the removal of air cavities. Afterwards, facial fat is segmented using the standard Graph-Cuts approach. After the important anatomical structures have been separated, a 3D fixed shape template mesh of the facial muscles is finally used to segment the relevant facial muscles.
The proposed methods are tested on the challenging example of segmenting the masseter muscle. The datasets were noisy, with almost all possessing mild to severe imaging artifacts, such as high-density artifacts caused by dental fillings and implants. Qualitative and quantitative experimental results show that incorporating prior shape knowledge effectively constrains leaking and leads to better segmentation results.
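The kind of MRF energy described in the first approach, a data term plus a smoothness term plus a shape-prior term, can be illustrated on a toy binary problem. The sketch below uses ICM as a simple stand-in for the thesis's Graph-Cut minimization, and every weight, the square mean-shape mask, and the noise level are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: a square "mean shape" mask observed through additive noise.
H, W = 16, 16
shape_prior = np.zeros((H, W))
shape_prior[4:12, 4:12] = 1.0                      # hypothetical mean-shape mask
image = shape_prior + rng.normal(0.0, 0.4, (H, W)) # noisy observation

def local_energy(labels, r, c, lab, beta=1.0, gamma=0.5):
    """Energy of assigning label `lab` at pixel (r, c):
    data term + shape-prior term + Potts smoothness term."""
    data = (image[r, c] - lab) ** 2                 # intensity fit
    shape = gamma * (lab - shape_prior[r, c]) ** 2  # shape-prior penalty
    smooth = 0.0
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        rr, cc = r + dr, c + dc
        if 0 <= rr < H and 0 <= cc < W:
            smooth += beta * (lab != labels[rr, cc])
    return data + shape + smooth

labels = (image > 0.5).astype(int)   # preliminary threshold segmentation
for _ in range(5):                   # ICM sweeps: greedy local minimization
    for r in range(H):
        for c in range(W):
            labels[r, c] = min((0, 1),
                               key=lambda l: local_energy(labels, r, c, l))
```

The shape term plays the role of the thesis's template prior: at weak or noisy edges, it penalizes labels that stray from the expected shape, which is what keeps the segmentation from leaking into neighboring structures.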