7,448 research outputs found

    Exploring the structure of a real-time, arbitrary neural artistic stylization network

    In this paper, we present a method which combines the flexibility of the neural algorithm of artistic style with the speed of fast style transfer networks to allow real-time stylization using any content/style image pair. We build upon recent work leveraging conditional instance normalization for multi-style transfer networks by learning to predict the conditional instance normalization parameters directly from a style image. The model is successfully trained on a corpus of roughly 80,000 paintings and is able to generalize to paintings previously unobserved. We demonstrate that the learned embedding space is smooth, contains rich structure, and organizes semantic information associated with paintings in an entirely unsupervised manner. Comment: Accepted as an oral presentation at British Machine Vision Conference (BMVC) 201
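    To make the conditional-instance-normalization idea above concrete, the sketch below shows instance normalization whose scale and shift are predicted from a style image by a small network. It is a minimal PyTorch illustration; the module names, layer sizes, and predictor architecture are invented for this example and are not the paper's implementation.

```python
import torch
import torch.nn as nn

class ConditionalInstanceNorm2d(nn.Module):
    """Instance norm whose affine parameters (gamma, beta) are supplied externally."""
    def __init__(self, num_channels, eps=1e-5):
        super().__init__()
        self.num_channels = num_channels
        self.eps = eps

    def forward(self, x, gamma, beta):
        # Normalize each (sample, channel) feature map, then apply the
        # style-specific scale and shift predicted from the style image.
        mean = x.mean(dim=(2, 3), keepdim=True)
        var = ((x - mean) ** 2).mean(dim=(2, 3), keepdim=True)
        x_hat = (x - mean) / torch.sqrt(var + self.eps)
        return (gamma.view(-1, self.num_channels, 1, 1) * x_hat
                + beta.view(-1, self.num_channels, 1, 1))

class StylePredictor(nn.Module):
    """Maps a style image to (gamma, beta) for one CIN layer (illustrative architecture)."""
    def __init__(self, num_channels):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.to_gamma = nn.Linear(32, num_channels)
        self.to_beta = nn.Linear(32, num_channels)

    def forward(self, style_img):
        s = self.features(style_img)
        return self.to_gamma(s), self.to_beta(s)

# Usage: stylize content activations with parameters predicted from an unseen style image.
cin = ConditionalInstanceNorm2d(64)
predictor = StylePredictor(64)
content_feats = torch.randn(1, 64, 128, 128)   # activations inside a transformer network
style_img = torch.randn(1, 3, 256, 256)
gamma, beta = predictor(style_img)
stylized = cin(content_feats, gamma, beta)
```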

    Data exploration systems for databases

    Data exploration systems apply machine learning techniques, multivariate statistical methods, information theory, and database theory to databases to identify significant relationships among the data and summarize information. The result of applying data exploration systems should be a better understanding of the structure of the data and a perspective on the data that enables an analyst to form hypotheses for interpreting it. This paper argues that data exploration systems need a minimum amount of domain knowledge to guide both the statistical strategy and the interpretation of the resulting patterns discovered by these systems.
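    As a small illustration of the kind of relationship scoring such a system might apply, the sketch below computes the mutual information between two categorical columns with scikit-learn. The toy table and column names are invented, and a real system would scan many column pairs and combine several statistics.

```python
import pandas as pd
from sklearn.metrics import mutual_info_score

# Hypothetical table; a real data exploration system would scan many column pairs.
df = pd.DataFrame({
    "region":  ["north", "north", "south", "south", "east", "east"],
    "product": ["A", "A", "B", "B", "A", "B"],
})

# Score the statistical dependency between two categorical columns.
region_codes = df["region"].astype("category").cat.codes
product_codes = df["product"].astype("category").cat.codes
score = mutual_info_score(region_codes, product_codes)
print(f"mutual information(region, product) = {score:.3f} nats")
```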

    A Very Brief Introduction to Machine Learning With Applications to Communication Systems

    Given the unprecedented availability of data and computing resources, there is widespread renewed interest in applying data-driven machine learning methods to problems for which the development of conventional engineering solutions is challenged by modelling or algorithmic deficiencies. This tutorial-style paper starts by addressing the questions of why and when such techniques can be useful. It then provides a high-level introduction to the basics of supervised and unsupervised learning. For both supervised and unsupervised learning, illustrative applications to communication networks are discussed, distinguishing tasks carried out at the edge and at the cloud segments of the network, at different layers of the protocol stack.
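    The following toy example illustrates the supervised-learning setting in a communication flavour: a classifier is trained on synthetic noisy received symbols to identify which of two hypothetical signalling schemes produced them. The constellation, noise level, and use of logistic regression are assumptions for illustration only, not an application taken from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic "received symbols": two hypothetical signalling schemes plus Gaussian noise.
n = 1000
labels = rng.integers(0, 2, size=n)                      # 0 = scheme A, 1 = scheme B
centres = np.array([[1.0, 0.0], [0.0, 1.0]])             # assumed I/Q means per scheme
symbols = centres[labels] + 0.3 * rng.standard_normal((n, 2))

# Supervised learning: fit on labelled symbols, evaluate on held-out ones.
clf = LogisticRegression().fit(symbols[:800], labels[:800])
print("held-out accuracy:", clf.score(symbols[800:], labels[800:]))
```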

    Modeling Relational Data via Latent Factor Blockmodel

    In this paper we address the problem of modeling relational data, which appear in many applications such as social network analysis, recommender systems and bioinformatics. Previous studies either consider latent-feature-based models while disregarding local structure in the network, or focus exclusively on capturing the local structure of objects via latent blockmodels without coupling it with the latent characteristics of objects. To combine the benefits of the previous work, we propose a novel model that can simultaneously incorporate the effect of latent features and covariates, if any, as well as the effect of latent structure that may exist in the data. To achieve this, we model the relation graph as a function of both latent feature factors and latent cluster memberships of objects, to collectively discover globally predictive intrinsic properties of objects and capture latent block structure in the network to improve prediction performance. We also develop an optimization transfer algorithm based on a generalized EM-style strategy to learn the latent factors. We prove the efficacy of our proposed model through the link prediction and cluster analysis tasks, and extensive experiments on synthetic data and several real-world datasets suggest that our proposed LFBM model outperforms other state-of-the-art approaches in the evaluated tasks. Comment: 10 pages, 12 figure
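    A minimal numpy sketch of the modelling idea follows: the score for a link between two nodes combines a bilinear latent-feature term with a block-interaction term built from soft cluster memberships. The dimensions, parameter names, and the way the two terms are combined are illustrative assumptions; the paper's actual parameterization and EM-style learning procedure are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
n_nodes, n_factors, n_blocks = 6, 4, 2

U = rng.normal(size=(n_nodes, n_factors))            # latent feature factors per node
Z = rng.dirichlet(np.ones(n_blocks), size=n_nodes)   # soft cluster memberships per node
W = rng.normal(size=(n_factors, n_factors))          # feature interaction weights
B = rng.normal(size=(n_blocks, n_blocks))            # block-block interaction strengths

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Link probability combines the latent-feature term and the latent-block term (illustrative).
logits = U @ W @ U.T + Z @ B @ Z.T
link_prob = sigmoid(logits)
print(link_prob.round(2))
```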

    Is Simple Better? Revisiting Non-linear Matrix Factorization for Learning Incomplete Ratings

    Matrix factorization techniques have been widely used as a method for collaborative filtering in recommender systems. In recent times, different variants of deep learning algorithms have been explored in this setting to improve the task of making personalized recommendations from user-item interaction data. The idea that the mapping between the latent user or item factors and the original features is highly nonlinear suggests that classical matrix factorization techniques are no longer sufficient. In this paper, we propose a multilayer nonlinear semi-nonnegative matrix factorization method, with the motivation that user-item interactions can be modeled more accurately using a linear combination of non-linear item features. Firstly, we learn latent factors for representations of users and items with the designed multilayer nonlinear Semi-NMF approach using explicit ratings. Secondly, the architecture built is compared with deep-learning algorithms like the Restricted Boltzmann Machine and state-of-the-art deep matrix factorization techniques. Using both a supervised rating prediction task and unsupervised clustering in the latent item space, we demonstrate that our proposed approach achieves better generalization ability in prediction as well as representation ability comparable to deep matrix factorization in the clustering task. Comment: version
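    The sketch below illustrates the general flavour of a multilayer, nonlinear, semi-NMF-style decomposition: a ratings matrix is approximated by stacked factor matrices with a nonlinearity in between, the item factors are kept non-negative, and only observed ratings contribute to the loss. The toy matrix, layer sizes, and gradient-descent fitting are assumptions for illustration and not the paper's algorithm.

```python
import torch

torch.manual_seed(0)

# Toy user-item rating matrix (0 = missing); the values are invented.
R = torch.tensor([[5., 3., 0., 1.],
                  [4., 0., 0., 1.],
                  [1., 1., 0., 5.],
                  [0., 1., 5., 4.]])
mask = R > 0
n_users, n_items = R.shape
k1, k2 = 3, 2

# Two-layer factorization R ~ U1 @ relu(U2 @ H), with H kept non-negative (semi-NMF flavour).
U1 = torch.randn(n_users, k1, requires_grad=True)
U2 = torch.randn(k1, k2, requires_grad=True)
H_raw = torch.randn(k2, n_items, requires_grad=True)

opt = torch.optim.Adam([U1, U2, H_raw], lr=0.05)
for step in range(500):
    H = torch.nn.functional.softplus(H_raw)      # enforce non-negative item factors
    pred = U1 @ torch.relu(U2 @ H)               # nonlinearity between the two layers
    loss = ((pred - R)[mask] ** 2).mean()        # fit observed ratings only
    opt.zero_grad()
    loss.backward()
    opt.step()

H = torch.nn.functional.softplus(H_raw)
print("reconstruction of the rating matrix:\n", (U1 @ torch.relu(U2 @ H)).round())
```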

    Learning Hierarchical Visual Representations in Deep Neural Networks Using Hierarchical Linguistic Labels

    Modern convolutional neural networks (CNNs) are able to achieve human-level object classification accuracy on specific tasks, and currently outperform competing models in explaining complex human visual representations. However, the categorization problem is posed differently for these networks than for humans: the accuracy of these networks is evaluated by their ability to identify single labels assigned to each image. These labels often cut arbitrarily across natural psychological taxonomies (e.g., dogs are separated into breeds, but never jointly categorized as "dogs"), and bias the resulting representations. By contrast, it is common for children to hear both "dog" and "Dalmatian" to describe the same stimulus, helping to group perceptually disparate objects (e.g., breeds) into a common mental class. In this work, we train CNN classifiers with multiple labels for each image that correspond to different levels of abstraction, and use this framework to reproduce classic patterns that appear in human generalization behavior. Comment: 6 pages, 4 figures, 1 table. Accepted as a paper to the 40th Annual Meeting of the Cognitive Science Society (CogSci 2018
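    A minimal PyTorch sketch of training with labels at two levels of abstraction follows: one head predicts a fine-grained label and another a superordinate label, and the two cross-entropy losses are summed. The tiny backbone and the label counts are illustrative; the paper's networks and label hierarchy are not reproduced.

```python
import torch
import torch.nn as nn

# Two levels of abstraction per image, e.g. fine = breed, coarse = "dog"/"cat" (illustrative).
N_FINE, N_COARSE = 10, 2

class TwoLevelClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.fine_head = nn.Linear(16, N_FINE)
        self.coarse_head = nn.Linear(16, N_COARSE)

    def forward(self, x):
        h = self.backbone(x)
        return self.fine_head(h), self.coarse_head(h)

model = TwoLevelClassifier()
criterion = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 32, 32)
fine_labels = torch.randint(0, N_FINE, (8,))
coarse_labels = torch.randint(0, N_COARSE, (8,))

fine_logits, coarse_logits = model(images)
# Supervise both levels of abstraction at once, as the abstract describes.
loss = criterion(fine_logits, fine_labels) + criterion(coarse_logits, coarse_labels)
loss.backward()
```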

    Discriminative conditional restricted Boltzmann machine for discrete choice and latent variable modelling

    Conventional methods of estimating latent behaviour generally use attitudinal questions, which are subjective, and these survey questions may not always be available. We hypothesize that an alternative approach can be used for latent variable estimation through undirected graphical models, for instance non-parametric artificial neural networks. In this study, we explore the use of generative non-parametric modelling methods to estimate latent variables from the prior choice distribution without the conventional use of measurement indicators. A restricted Boltzmann machine is used to represent latent behaviour factors by analyzing the relationship information between the observed choices and explanatory variables. The algorithm is adapted for latent behaviour analysis in a discrete choice scenario, and we use a graphical approach to evaluate and understand the semantic meaning of estimated parameter vector values. We illustrate our methodology on a financial instrument choice dataset and perform statistical analysis on parameter sensitivity and stability. Our findings show that, through non-parametric statistical tests, we can extract useful latent information on the behaviour of latent constructs through machine learning methods, and that these constructs have a strong and significant influence on the choice process. Furthermore, our modelling framework shows robustness to input variability through sampling and validation.
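    The sketch below shows, in simplified form, a restricted Boltzmann machine update conditioned on explanatory variables, with hidden units standing in for latent behaviour factors and a one-hot visible vector standing in for the observed choice. The dimensions, the CD-1 style update, and the omission of bias updates are illustrative simplifications, not the paper's estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden, n_cond = 4, 3, 5   # choice alternatives, latent factors, explanatory variables

W = 0.01 * rng.standard_normal((n_visible, n_hidden))   # choice-to-factor weights
A = 0.01 * rng.standard_normal((n_cond, n_hidden))      # covariate-to-factor weights
b_v = np.zeros(n_visible)
b_h = np.zeros(n_hidden)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, x, lr=0.05):
    """One contrastive-divergence step on a single (choice, covariates) observation."""
    h0 = sigmoid(v0 @ W + x @ A + b_h)                   # latent factors given observed choice
    v1 = sigmoid(h0 @ W.T + b_v)                         # reconstructed choice probabilities
    h1 = sigmoid(v1 @ W + x @ A + b_h)
    return lr * (np.outer(v0, h0) - np.outer(v1, h1))    # gradient estimate for W only

v = np.array([1., 0., 0., 0.])          # one-hot observed choice (illustrative)
x = rng.standard_normal(n_cond)         # explanatory variables for this individual
W += cd1_update(v, x)
```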

    A Model of Consistent Node Types in Signed Directed Social Networks

    Signed directed social networks, in which the relationships between users can be either positive (indicating relations such as trust) or negative (indicating relations such as distrust), are increasingly common. Thus the interplay between positive and negative relationships in such networks has become an important research topic. Most recent investigations focus upon edge sign inference using structural balance theory or social status theory. Neither of these two theories, however, can explain an observed edge sign well when the two nodes connected by this edge do not share a common neighbor (e.g., common friend). In this paper we develop a novel approach to handle this situation by applying a new model for node types. Initially, we analyze the local node structure in a fully observed signed directed network, inferring underlying node types. The sign of an edge between two nodes must be consistent with their types; this explains edge signs well even when there are no common neighbors. We show, moreover, that our approach can be extended to incorporate directed triads, when they exist, just as in models based upon structural balance or social status theory. We compute Bayesian node types within empirical studies based upon partially observed Wikipedia, Slashdot, and Epinions networks in which the largest network (Epinions) has 119K nodes and 841K edges. Our approach yields better performance than state-of-the-art approaches for these three signed directed networks. Comment: To appear in the IEEE/ACM International Conference on Advances in Social Network Analysis and Mining (ASONAM), 201
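    To illustrate the core consistency idea, the sketch below scores the sign of a directed edge from the two endpoints' node types alone, without any common neighbour: each node carries a tendency to give and to receive positive links, and a naive combination of the two yields an edge-sign probability. The node types and the averaging rule are invented for illustration and are not the paper's Bayesian model.

```python
# Hypothetical node types: each node's tendency to give and to receive positive edges.
node_types = {
    "alice": {"p_pos_out": 0.9, "p_pos_in": 0.8},
    "bob":   {"p_pos_out": 0.2, "p_pos_in": 0.3},
}

def edge_sign_probability(src, dst):
    """Naive score for P(edge src -> dst is positive), using only the endpoints' types."""
    return 0.5 * (node_types[src]["p_pos_out"] + node_types[dst]["p_pos_in"])

p = edge_sign_probability("alice", "bob")
predicted_sign = +1 if p >= 0.5 else -1
print(f"P(sign(alice -> bob) = +1) ~= {p:.2f}, predicted sign: {predicted_sign:+d}")
```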