
    Learning Generative Models with Visual Attention

    Attention has long been proposed by psychologists as important for effectively dealing with the enormous sensory stimulus available in the neocortex. Inspired by visual attention models in computational neuroscience and by the need for object-centric data in generative models, we describe a generative learning framework that uses attentional mechanisms. Attentional mechanisms can propagate signals from a region of interest in a scene to an aligned canonical representation, where generative modeling takes place. By ignoring background clutter, generative models can concentrate their resources on the object of interest. Our model is a proper graphical model in which the 2D similarity transformation is part of the top-down process. A ConvNet is employed to provide good initializations during posterior inference, which is based on Hamiltonian Monte Carlo. Upon learning images of faces, our model can robustly attend to face regions of novel test subjects. More importantly, our model can learn generative models of new faces from a novel dataset of large images where the face locations are not known. Comment: In the proceedings of Neural Information Processing Systems, 201
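
    As a concrete illustration of the attention step described above, the sketch below (not the authors' code) applies a 2D similarity transform, i.e. scale, rotation and translation, to sample an aligned canonical window from a larger image; the function name, parameterisation and 64x64 window size are illustrative assumptions.

        # A sketch of the attention/alignment step, assuming a grey-scale image stored
        # as a 2D NumPy array; names and parameters are illustrative, not the paper's.
        import numpy as np
        from scipy.ndimage import map_coordinates

        def attend(image, scale, angle, tx, ty, window=(64, 64)):
            """Sample a canonical window from `image` under a 2D similarity transform."""
            h, w = window
            ys, xs = np.mgrid[0:h, 0:w]
            ys = ys - (h - 1) / 2.0          # canonical coordinates centred at the origin
            xs = xs - (w - 1) / 2.0
            c, s = np.cos(angle), np.sin(angle)
            rows = scale * (s * xs + c * ys) + ty   # rotate, scale, then translate
            cols = scale * (c * xs - s * ys) + tx   # into image (row, col) coordinates
            # Bilinear sampling; pixels falling outside the image default to 0,
            # so background clutter outside the attended region is ignored.
            return map_coordinates(image, [rows, cols], order=1, mode='constant')

        # Example: extract an aligned 64x64 window around a hypothesised face location.
        big_image = np.random.rand(256, 256)
        canonical = attend(big_image, scale=1.5, angle=0.1, tx=128.0, ty=120.0)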

    Bayesian analysis of ranking data with the constrained Extended Plackett-Luce model

    Multistage ranking models, including the popular Plackett-Luce distribution (PL), rely on the assumption that the ranking process is performed sequentially, by assigning the positions from the top to the bottom one (forward order). A recent contribution to the ranking literature relaxed this assumption with the addition of a discrete-valued reference order parameter, yielding the novel Extended Plackett-Luce model (EPL). Inference on the EPL and its generalization into a finite mixture framework was originally addressed from the frequentist perspective. In this work, we propose Bayesian estimation of the EPL with order constraints on the reference order parameter. The proposed restrictions reflect a meaningful rank assignment process. By combining the restrictions with a data augmentation strategy and the conjugacy of the Gamma prior distribution with the EPL, we facilitate the construction of a tuned joint Metropolis-Hastings algorithm within Gibbs sampling to simulate from the posterior distribution. The Bayesian approach allows us to address more efficiently the inference on the additional discrete-valued parameter and the assessment of its estimation uncertainty. The usefulness of the proposal is illustrated with applications to simulated and real datasets. Comment: 20 pages, 4 figures, 4 tables. arXiv admin note: substantial text overlap with arXiv:1803.0288
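
    To fix notation, the sketch below gives the log-likelihood of a single ranking under the standard forward-order Plackett-Luce model that the EPL extends; it is a minimal illustration with hypothetical names and toy parameters, and it does not implement the latent reference order or the Metropolis-Hastings-within-Gibbs sampler proposed in the paper.

        import numpy as np

        def plackett_luce_loglik(ordering, w):
            """Log-likelihood of one ranking under the forward-order Plackett-Luce model.

            `ordering` lists item indices from the top-ranked to the bottom-ranked
            position; `w` holds the positive item support parameters.
            """
            w = np.asarray(w, dtype=float)
            ll = 0.0
            for t, item in enumerate(ordering):
                remaining = ordering[t:]          # items still unassigned at stage t
                ll += np.log(w[item]) - np.log(w[remaining].sum())
            return ll

        # Example: three items, with item 0 strongly preferred.
        print(plackett_luce_loglik(np.array([0, 2, 1]), w=[5.0, 1.0, 2.0]))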

    An overview of the VRS virtual platform

    This paper provides an overview of the development of the virtual platform within the European Commission funded VRShips-ROPAX (VRS) project. This project is a major collaboration of approximately 40 industrial, regulatory, consultancy and academic partners with the objective of producing two novel platforms. A physical platform will be designed and produced representing a scale model of a novel ROPAX vessel with the following criteria: 2000 passengers, 400 cabins, a 2000 nautical mile range, and a service speed of 38 knots. The aim of the virtual platform is to demonstrate that vessels may be designed to meet these criteria, which was not previously possible using individual tools and conventional design approaches. Achieving this objective requires the integration of design and simulation tools representing the concept, embodiment, detail, production, and operation life-phases into the virtual platform, enabling distributed design activity to be undertaken. The main objectives for the development of the virtual platform are described, followed by a discussion of the techniques chosen to address the objectives, and finally a description of a use-case for the platform. Whilst the focus of the VRS virtual platform was to facilitate the design of ROPAX vessels, the components within the platform are entirely generic and may be applied to the distributed design of any type of vessel, or of other complex made-to-order products.
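
    As a purely hypothetical illustration (not part of the VRS project deliverables), the design criteria quoted above could be captured in a small data structure that distributed design tools check candidate designs against; all names below are assumptions.

        from dataclasses import dataclass

        @dataclass
        class DesignCriteria:
            passengers: int
            cabins: int
            range_nm: float          # range in nautical miles
            service_speed_kn: float  # service speed in knots

        # Target figures quoted in the abstract.
        VRS_ROPAX_TARGET = DesignCriteria(passengers=2000, cabins=400,
                                          range_nm=2000, service_speed_kn=38)

        def meets_target(candidate: DesignCriteria,
                         target: DesignCriteria = VRS_ROPAX_TARGET) -> bool:
            """Return True if a candidate design meets or exceeds every target figure."""
            return (candidate.passengers >= target.passengers
                    and candidate.cabins >= target.cabins
                    and candidate.range_nm >= target.range_nm
                    and candidate.service_speed_kn >= target.service_speed_kn)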

    String and Membrane Gaussian Processes

    In this paper we introduce a novel framework for making exact nonparametric Bayesian inference on latent functions that is particularly suitable for Big Data tasks. Firstly, we introduce a class of stochastic processes we refer to as string Gaussian processes (string GPs), which are not to be mistaken for Gaussian processes operating on text. We construct string GPs so that their finite-dimensional marginals exhibit suitable local conditional independence structures, which allow for scalable, distributed, and flexible nonparametric Bayesian inference, without resorting to approximations, and while ensuring some mild global regularity constraints. Furthermore, string GP priors naturally cope with heterogeneous input data, and the gradient of the learned latent function is readily available for explanatory analysis. Secondly, we provide some theoretical results relating our approach to the standard GP paradigm. In particular, we prove that some string GPs are Gaussian processes, which provides a complementary global perspective on our framework. Finally, we derive a scalable and distributed MCMC scheme for supervised learning tasks under string GP priors. The proposed MCMC scheme has computational time complexity O(N) and memory requirement O(dN), where N is the data size and d the dimension of the input space. We illustrate the efficacy of the proposed approach on several synthetic and real-world datasets, including a dataset with 6 million input points and 8 attributes. Comment: To appear in the Journal of Machine Learning Research (JMLR), Volume 1
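
    The linear-time claim rests on local conditional independence. The sketch below is not the paper's string GP construction; it illustrates the idea with the simplest Markovian special case, a GP with the exponential kernel, where each value conditions only on its predecessor, so a path over N ordered inputs is sampled in O(N). Names and parameters are illustrative.

        import numpy as np

        def sample_markov_gp(times, sigma=1.0, ell=1.0, rng=None):
            """Sequentially sample a zero-mean GP with exponential kernel at sorted times.

            The kernel k(s, t) = sigma**2 * exp(-|s - t| / ell) is Markovian, so each
            value conditions only on its predecessor and the whole path costs O(N).
            """
            rng = np.random.default_rng() if rng is None else rng
            x = np.empty(len(times))
            x[0] = sigma * rng.standard_normal()
            for i in range(1, len(times)):
                rho = np.exp(-(times[i] - times[i - 1]) / ell)  # correlation with previous point
                x[i] = rho * x[i - 1] + sigma * np.sqrt(1.0 - rho**2) * rng.standard_normal()
            return x

        path = sample_markov_gp(np.linspace(0.0, 10.0, 1000))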

    Functional Linear Mixed Models for Irregularly or Sparsely Sampled Data

    We propose an estimation approach to analyse correlated functional data which are observed on unequal grids or even sparsely. The model we use is a functional linear mixed model, a functional analogue of the linear mixed model. Estimation is based on dimension reduction via functional principal component analysis and on mixed model methodology. Our procedure allows the decomposition of the variability in the data as well as the estimation of mean effects of interest, and borrows strength across curves. Confidence bands for mean effects can be constructed conditional on the estimated principal components. We provide R code implementing our approach. The method is motivated by and applied to data from speech production research.
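
    The authors distribute their implementation as R code; as a language-agnostic illustration of the dimension-reduction step only, the sketch below performs functional principal component analysis for curves observed densely on a common grid. The sparse and irregular case handled in the paper requires additional mixed-model smoothing and is not reproduced; all names are illustrative.

        import numpy as np

        def fpca(curves, n_components=3):
            """Functional PCA for curves observed on a common regular grid.

            `curves` is an (n_curves, n_gridpoints) array.  Returns the mean curve,
            the leading eigenfunctions (as columns) and the per-curve scores.
            """
            mean = curves.mean(axis=0)
            centred = curves - mean
            cov = centred.T @ centred / (curves.shape[0] - 1)  # sample covariance on the grid
            eigval, eigvec = np.linalg.eigh(cov)               # eigenvalues in ascending order
            phi = eigvec[:, ::-1][:, :n_components]            # leading eigenfunctions
            scores = centred @ phi                             # principal component scores
            return mean, phi, scores

        # Example: 50 noisy sine-like curves observed on a grid of 100 points.
        grid = np.linspace(0.0, 1.0, 100)
        curves = np.sin(2 * np.pi * grid) + 0.1 * np.random.randn(50, 100)
        mean, phi, scores = fpca(curves)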

    Efficient computational strategies to learn the structure of probabilistic graphical models of cumulative phenomena

    Structural learning of Bayesian Networks (BNs) is an NP-hard problem, which is further complicated by many theoretical issues, such as the I-equivalence among different structures. In this work, we focus on a specific subclass of BNs, named Suppes-Bayes Causal Networks (SBCNs), which include specific structural constraints based on Suppes' probabilistic causation to efficiently model cumulative phenomena. Here we compare the performance, via extensive simulations, of various state-of-the-art search strategies, such as local search techniques and Genetic Algorithms, as well as of distinct regularization methods. The assessment is performed on a large number of simulated datasets from topologies with distinct levels of complexity, various sample sizes and different rates of errors in the data. Among the main results, we show that the introduction of Suppes' constraints dramatically improves the inference accuracy, by reducing the solution space and providing a temporal ordering on the variables. We also report on trade-offs among different search techniques that can be efficiently employed in distinct experimental settings. This manuscript is an extended version of the paper "Structural Learning of Probabilistic Graphical Models of Cumulative Phenomena" presented at the 2018 International Conference on Computational Science.
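
    A minimal sketch of the kind of Suppes-style constraint described above is given below; it is not the paper's code. An arc u -> v is admitted only if u satisfies temporal priority (proxied here by a higher marginal frequency, a common choice for cumulative data) and raises the probability of v; thresholds and names are illustrative assumptions.

        import numpy as np

        def suppes_admissible(data, u, v, eps=1e-9):
            """Check Suppes-style conditions for a candidate arc u -> v on binary data.

            Rows of `data` are samples, columns are events; temporal priority is proxied
            by a higher marginal frequency of the putative cause, and probability raising
            requires P(v | u) > P(v | not u).
            """
            x, y = data[:, u].astype(bool), data[:, v].astype(bool)
            temporal_priority = x.mean() > y.mean()
            p_y_given_x = y[x].mean() if x.any() else 0.0
            p_y_given_not_x = y[~x].mean() if (~x).any() else 0.0
            probability_raising = p_y_given_x > p_y_given_not_x + eps
            return temporal_priority and probability_raising

        # Example: retain only admissible arcs as the reduced search space.
        data = np.random.binomial(1, 0.3, size=(200, 4))
        arcs = [(u, v) for u in range(4) for v in range(4)
                if u != v and suppes_admissible(data, u, v)]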