
    Structured Bayesian Approximate Inference


    Shape and Topology Constrained Image Segmentation with Stochastic Models

    The central theme of this thesis has been to develop robust algorithms for the task of image segmentation. All segmentation techniques proposed in this thesis are based on sound modeling of the image formation process. This approach to image partitioning enables the derivation of objective functions that make all modeling assumptions explicit. Based on the Parametric Distributional Clustering (PDC) technique, improved variants have been derived that explicitly incorporate topological assumptions into the corresponding cost functions. In this thesis, the questions of robustness and generalizability of segmentation solutions have been addressed empirically, with comprehensive example sets for both problems. It has been shown that the PDC framework is indeed capable of producing highly robust image partitions. In the context of PDC-based segmentation, a probabilistic representation of shape has been constructed. Furthermore, likelihood maps for given objects of interest were derived from the PDC cost function. Interpreting the shape information as a prior for the segmentation task, it has been combined with the likelihoods in a Bayesian setting. The resulting posterior probability for the occurrence of an object of a specified semantic category has been demonstrated to achieve excellent segmentation quality on very hard testbeds of images from the Corel gallery.
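The Bayesian combination described above, in which a shape prior is multiplied by a likelihood derived from the PDC cost function to yield a per-pixel posterior, can be sketched numerically. A minimal illustration in Python, assuming hypothetical likelihood and prior maps rather than the thesis's actual PDC outputs:

```python
import numpy as np

# Hypothetical per-pixel maps on a small grid; in the thesis these would
# come from the PDC cost function (likelihood) and the shape model (prior).
rng = np.random.default_rng(0)
likelihood = rng.uniform(0.01, 1.0, size=(4, 4))   # p(image | object at pixel)
shape_prior = rng.uniform(0.01, 1.0, size=(4, 4))  # p(object at pixel)

# Bayes' rule per pixel: posterior proportional to likelihood * prior,
# normalised against a complementary "background" hypothesis.
background = (1.0 - likelihood) * (1.0 - shape_prior)
posterior = (likelihood * shape_prior) / (likelihood * shape_prior + background)

# Thresholding the posterior gives a binary object/background segmentation.
segmentation = posterior > 0.5
print(segmentation.astype(int))
```

The normalisation against a single background hypothesis is the simplest choice; a multi-class model would normalise over all semantic categories instead.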

    Bayesian Gaussian Process Models: PAC-Bayesian Generalisation Error Bounds and Sparse Approximations

    Non-parametric models and techniques enjoy a growing popularity in the field of machine learning, and among these Bayesian inference for Gaussian process (GP) models has recently received significant attention. We feel that GP priors should be part of the standard toolbox for constructing models relevant to machine learning in the same way as parametric linear models are, and the results in this thesis help to remove some obstacles on the way towards this goal. In the first main chapter, we provide a distribution-free finite-sample bound on the difference between generalisation and empirical (training) error for GP classification methods. While the general theorem (the PAC-Bayesian bound) is not new, we give a much simplified and somewhat generalised derivation and point out the underlying core technique (convex duality) explicitly. Furthermore, the application to GP models is novel (to our knowledge). A central feature of this bound is that its quality depends crucially on task knowledge being encoded faithfully in the model and prior distributions, so there is a mutual benefit between a sharp theoretical guarantee and empirically well-established statistical practices. Extensive simulations on real-world classification tasks indicate an impressive tightness of the bound, whereas many previous bounds for related kernel machines fail to give non-trivial guarantees in this practically relevant regime. In the second main chapter, sparse approximations are developed to address the unfavourable scaling of most GP techniques with large training sets. Due to its high importance in practice, this problem has received a lot of attention recently. We demonstrate the tractability and usefulness of simple greedy forward selection with information-theoretic criteria previously used in active learning (or sequential design), and develop generic schemes for automatic model selection with many (hyper)parameters. We suggest two new generic schemes and evaluate some of their variants on large real-world classification and regression tasks. These schemes and their underlying principles (which are clearly stated and analysed) can be applied to obtain sparse approximations for a wide range of GP models far beyond the special cases we studied here.
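Greedy forward selection with an information-theoretic criterion, as mentioned above, can be sketched as follows. This is a simplified illustration, assuming a unit-variance RBF kernel and using posterior predictive variance as the selection score; it is not the thesis's exact criterion or implementation:

```python
import numpy as np

def rbf(X, Y, ls=1.0):
    # Squared-exponential kernel matrix between row-vector sets X and Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def greedy_select(X, m, noise=0.1):
    """Greedily pick m active points, at each step taking the candidate
    with the largest posterior variance under the current active set
    (an entropy-style score in the spirit of active-learning criteria)."""
    active = []
    for _ in range(m):
        if active:
            Ka = rbf(X[active], X[active]) + noise * np.eye(len(active))
            Kxa = rbf(X, X[active])
            # Posterior variance: k(x,x) - k_xa Ka^{-1} k_ax, with k(x,x) = 1.
            var = 1.0 - np.einsum('ij,ji->i', Kxa, np.linalg.solve(Ka, Kxa.T))
        else:
            var = np.ones(len(X))          # prior variance k(x,x) = 1
        var[active] = -np.inf              # never re-select a chosen point
        active.append(int(np.argmax(var)))
    return active

X = np.linspace(0, 10, 50)[:, None]
chosen = greedy_select(X, m=5)
print(chosen)
```

Because the score is the predictive variance, the selected points spread out to cover the input space, which is the qualitative behaviour such criteria are chosen for.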

    Modules or mean-fields?

    The segregation of neural processing into distinct streams has been interpreted by some as evidence in favour of a modular view of brain function. This implies a set of specialised 'modules', each of which performs a specific kind of computation in isolation from other brain systems, before sharing the result of this operation with other modules. In light of a modern understanding of stochastic non-equilibrium systems, like the brain, a simpler and more parsimonious explanation presents itself. Formulating the evolution of a non-equilibrium steady-state system in terms of its density dynamics reveals that such systems appear on average to perform a gradient ascent on their steady-state density. If this steady state implies a sufficiently sparse conditional independence structure, this endorses a mean-field dynamical formulation, which decomposes the density over all states in a system into the product of marginal probabilities for those states. This factorisation lends the system a modular appearance, in the sense that we can interpret the dynamics of each factor independently. However, the argument here is that it is factorisation, as opposed to modularisation, that gives rise to the functional anatomy of the brain or, indeed, any sentient system. In the following, we briefly review mean-field theory and its applications to stochastic dynamical systems. We then unpack the consequences of this factorisation through simple numerical simulations and highlight the implications for neuronal message passing and the computational architecture of sentience.
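The mean-field factorisation discussed above can be illustrated with a toy example: a joint density over two states is replaced by the product of its marginals, and the KL divergence quantifies the correlations the factorisation discards. A minimal sketch (the numbers are illustrative, not from the paper):

```python
import numpy as np

# A toy joint density over two binary states, p(a, b).
p_joint = np.array([[0.30, 0.10],
                    [0.20, 0.40]])

# Mean-field factorisation: replace p(a, b) by the product of its
# marginals q(a) q(b); each factor can then be interpreted on its own.
q_a = p_joint.sum(axis=1)
q_b = p_joint.sum(axis=0)
q = np.outer(q_a, q_b)

# KL(p || q) measures what the factorisation discards: the correlations.
kl = (p_joint * np.log(p_joint / q)).sum()
print(f"factorised approximation:\n{q}\nKL divergence: {kl:.4f}")
```

When the conditional independence structure really is sparse, the KL term is small and the factorised (modular-looking) description loses almost nothing, which is the crux of the argument above.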

    A multi-wavelength study of the dwarf galaxies NGC 2915 and NGC 1705 : star formation, gas dynamics and dark matter

    Includes bibliographical references (p. 233-242).
    This thesis presents the results of a detailed multi-wavelength study of the nearby blue compact dwarf galaxies NGC 2915 and NGC 1705. The primary data set (nearly 100 hours of on-source data) for each galaxy consists of new observations of the neutral hydrogen (H I) line obtained with the Australia Telescope Compact Array, October 2006 - May 2007. The stellar disk of NGC 1705 is host to an intense star-bursting core which is rapidly depleting the galaxy's central H I reservoir. This galaxy can be used to rigorously test theories of star formation. Detailed studies of the distribution and kinematics of the neutral interstellar medium (ISM) within each galaxy are carried out. A suite of star formation recipes and models is examined for each galaxy to quantify the relationship between the observed star formation activity and the distribution and kinematics of the ISM.

    ANFIS Based Data Rate Prediction For Cognitive Radio

    Intelligence is needed to keep up with the rapid evolution of wireless communications, especially in managing and allocating the scarce radio spectrum in highly varying and disparate modern environments. Cognitive radio systems promise to handle this situation by utilizing intelligent software packages that enrich their transceivers with radio-awareness, adaptability and the capability to learn. A cognitive radio system participates in a continuous process, the “cognition cycle”, during which it adjusts its operating parameters, observes the results and eventually takes action, that is to say, decides to operate in a specific radio configuration (i.e., radio access technology, carrier frequency, modulation type, etc.), expecting to move the radio toward some optimized operational state. In this process, learning mechanisms draw on measurements sensed from the environment, gathered experience and stored knowledge to guide decision making. This thesis introduces and evaluates learning schemes based on the adaptive neuro-fuzzy inference system (ANFIS) for predicting the capabilities (e.g. data rate) that can be achieved by a specific radio configuration in cognitive radio. First, an ANFIS-based scheme is proposed, and the work reported here compares it with previous neural-network-based learning schemes. Cognitive radio is an emergent intelligent technology in which learning schemes are needed to assist its functioning. The ANFIS-based scheme is a strong artificial-intelligence learning method that combines the best features of neural networks and fuzzy logic. Here, ANFIS and neural-network methods assist a cognitive radio system in selecting the best radio configuration to operate in. Performance metrics such as RMSE and the prediction accuracy of ANFIS learning have been used as performance indices.
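The performance indices mentioned above, RMSE and prediction accuracy, can be computed as in the following sketch. The data-rate values are hypothetical, and "mean relative accuracy" is one common definition of prediction accuracy among several:

```python
import numpy as np

# Hypothetical observed vs. predicted data rates (Mbps) for one radio
# configuration; illustrative numbers, not results from the thesis.
observed  = np.array([5.2, 6.1, 4.8, 7.0, 6.5])
predicted = np.array([5.0, 6.4, 4.5, 6.8, 6.9])

# Root-mean-square error between predictions and observations.
rmse = np.sqrt(np.mean((observed - predicted) ** 2))

# Prediction accuracy here taken as one minus the mean relative error.
accuracy = 1.0 - np.mean(np.abs(observed - predicted) / observed)

print(f"RMSE = {rmse:.3f} Mbps, accuracy = {accuracy:.1%}")
```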

    Estimating the Amount of Information Conveyed by a Population of Neurons

    Recent advances in electrophysiological recording technology have allowed for the collection of data from large populations of neurons simultaneously. Yet despite these advances, methods for estimating the amount of information conveyed by multiple neurons have been stymied by the “curse of dimensionality”: as the number of included neurons increases, so too does the dimensionality of the data necessary for such measurements, leading to an exponential and, therefore, intractable increase in the amount of data required for valid measurements. Here we put forth a novel method for estimating the amount of information transmitted by the discharge of a large population of neurons, a method which exploits the little-known fact that (under certain constraints) the Fourier coefficients of variables such as neural spike trains follow a Gaussian distribution. This fact enables an accurate measure of information even with limited data. The method, which we call the Fourier Method, is presented in detail, tested for robustness, and its application is demonstrated with both simulated and real spike trains.
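The central-limit intuition behind the Fourier Method, namely that each non-DC Fourier coefficient of a spike train sums contributions from many time bins and is therefore approximately Gaussian, can be checked empirically. A minimal sketch with simulated Bernoulli spike trains (parameters are illustrative, not from the thesis):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated spike trains: 1000 trials of a 256-bin Bernoulli spike train
# with roughly a 10% firing probability per bin.
trials = rng.random((1000, 256)) < 0.1

# Fourier coefficients of each trial; each non-DC coefficient is a
# weighted sum over many bins, hence approximately Gaussian by the CLT.
coeffs = np.fft.rfft(trials, axis=1)
c = coeffs[:, 20].real                  # one mid-frequency component

# Crude normality check: standardised third and fourth moments should be
# near 0 (skewness) and 3 (kurtosis) for a Gaussian.
z = (c - c.mean()) / c.std()
print(f"skewness = {np.mean(z**3):.3f}, kurtosis = {np.mean(z**4):.3f}")
```

The near-Gaussian marginals are what allow an information estimate from second-order statistics alone, sidestepping the data requirements of a direct histogram-based estimator.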