
    Inducing Language Networks from Continuous Space Word Representations

    Recent advancements in unsupervised feature learning have produced powerful latent representations of words. However, it is still not clear what makes one representation better than another, or how the ideal representation can be learned. Understanding the structure of the learned latent spaces is key to any future advancement in unsupervised learning. In this work, we introduce a new view of continuous space word representations as language networks. We explore two techniques for creating language networks from learned features, inducing them for two popular word representation methods and examining the properties of the resulting networks. We find that the induced networks differ from networks created by other methods, and that they contain meaningful community structure.
    Comment: 14 pages
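
    As a hedged illustration of the induction step described above, the sketch below builds a k-nearest-neighbor language network from pretrained word vectors; the dictionary input, cosine similarity, and the choice of k are assumptions for illustration, not necessarily the paper's exact construction.

    # Minimal sketch (assumed setup): induce a language network by linking
    # each word to its k most similar neighbors in the embedding space.
    import numpy as np
    import networkx as nx

    def induce_knn_network(vectors, k=10):
        """vectors: dict mapping word -> 1-D numpy array."""
        words = list(vectors)
        mat = np.stack([vectors[w] for w in words])
        mat /= np.linalg.norm(mat, axis=1, keepdims=True)   # unit-normalize rows
        sims = mat @ mat.T                                  # cosine similarities
        graph = nx.Graph()
        for i, word in enumerate(words):
            # skip position 0 of the descending sort: it is the word itself
            for j in np.argsort(-sims[i])[1:k + 1]:
                graph.add_edge(word, words[j], weight=float(sims[i, j]))
        return graph

    The resulting graph can then be inspected with standard community-detection tools (e.g., networkx's community algorithms) to probe for the kind of community structure the abstract reports.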

    A Deep Network with Visual Text Composition Behavior

    While natural languages are compositional, how state-of-the-art neural models achieve compositionality is still unclear. We propose a deep network which not only achieves competitive accuracy for text classification, but also exhibits compositional behavior. That is, while creating hierarchical representations of a piece of text, such as a sentence, the lower layers of the network distribute their layer-specific attention weights to individual words. In contrast, the higher layers compose meaningful phrases and clauses, whose lengths increase as the network gets deeper, until the full sentence is composed.
    Comment: accepted to ACL201
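
    The layer-specific attention idea can be sketched as follows; the GRU layers, dimensions, and per-layer pooling are assumptions made for illustration, not the authors' exact architecture.

    # Illustrative sketch (not the paper's model): each layer applies its
    # own attention over token states, so lower layers can emphasize single
    # words and higher layers longer spans.
    import torch
    import torch.nn as nn

    class LayerwiseAttention(nn.Module):
        def __init__(self, dim, num_layers=4):
            super().__init__()
            self.layers = nn.ModuleList(nn.GRU(dim, dim, batch_first=True)
                                        for _ in range(num_layers))
            self.scorers = nn.ModuleList(nn.Linear(dim, 1)
                                         for _ in range(num_layers))

        def forward(self, x):                    # x: (batch, seq, dim)
            pooled = []
            for rnn, scorer in zip(self.layers, self.scorers):
                x, _ = rnn(x)                    # layer-specific token states
                attn = torch.softmax(scorer(x).squeeze(-1), dim=-1)
                pooled.append((attn.unsqueeze(-1) * x).sum(dim=1))
            return torch.stack(pooled, dim=1)    # one summary per layer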

    Co-Regularized Deep Representations for Video Summarization

    Compact keyframe-based video summaries are a popular way of generating viewership on video sharing platforms. Yet, creating relevant and compelling summaries for arbitrarily long videos with a small number of keyframes is a challenging task. We propose a comprehensive keyframe-based summarization framework combining deep convolutional neural networks and restricted Boltzmann machines. An original co-regularization scheme is used to discover meaningful subject-scene associations. The resulting multimodal representations are then used to select highly relevant keyframes. A comprehensive user study compares our proposed method to a variety of schemes, including the summarization currently in use by one of the most popular video sharing websites. The results show that our method consistently outperforms the baseline schemes for any given number of keyframes, in terms of both attractiveness and informativeness. The lead is even more significant for smaller summaries.
    Comment: video summarization, deep convolutional neural networks, co-regularized restricted Boltzmann machine
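
    A minimal sketch of what a co-regularization term can look like, assuming paired latent codes from two modality encoders; the L2 agreement penalty and the weighting below are assumptions, not necessarily the paper's original scheme.

    # Hedged sketch: each modality keeps its own training loss, plus a
    # penalty pulling the paired latent codes toward agreement.
    import torch

    def co_regularized_loss(loss_a, loss_b, z_a, z_b, lam=0.1):
        """loss_a/loss_b: per-modality losses; z_a/z_b: paired latent codes."""
        agreement = ((z_a - z_b) ** 2).sum(dim=1).mean()   # pull paired codes together
        return loss_a + loss_b + lam * agreement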

    Exploring Disentanglement with Multilingual and Monolingual VQ-VAE

    This work examines the content and usefulness of disentangled phone and speaker representations from two separately trained VQ-VAE systems: one trained on multilingual data and another trained on monolingual data. We explore the multi- and monolingual models using four small proof-of-concept tasks: copy-synthesis, voice transformation, linguistic code-switching, and content-based privacy masking. From these tasks, we reflect on how disentangled phone and speaker representations can be used to manipulate speech in a meaningful way. Our experiments demonstrate that the VQ representations are suitable for these tasks, including creating new voices by mixing speaker representations together. We also present our novel technique to conceal the content of targeted words within an utterance by manipulating phone VQ codes, while retaining speaker identity and intelligibility of surrounding words. Finally, we discuss recommendations for further increasing the viability of disentangled representations.
    Comment: Accepted to Speech Synthesis Workshop 2021 (SSW11)
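
    The two manipulations highlighted above, voice mixing and content masking, can be sketched as operations on the encoder outputs of a trained VQ-VAE; the function names, array layout, and interpolation scheme below are illustrative assumptions, not the authors' API.

    # Hedged sketch of the manipulations on assumed VQ-VAE encoder outputs.
    import numpy as np

    def mix_speakers(spk_a, spk_b, alpha=0.5):
        # create a "new voice" by interpolating two speaker representations
        return alpha * spk_a + (1.0 - alpha) * spk_b

    def mask_content(phone_codes, start, end, filler_code):
        # conceal targeted words by overwriting their phone VQ codes while
        # the speaker code (and hence identity) stays untouched
        masked = phone_codes.copy()
        masked[start:end] = filler_code
        return masked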

    Evaluating the Robustness of Self-Supervised Learning in Medical Imaging

    Self-supervision has been demonstrated to be an effective learning strategy when training target tasks on small annotated datasets. While current research focuses on creating novel pretext tasks to learn meaningful and reusable representations for the target task, these efforts obtain only marginal performance gains compared to fully-supervised learning. Meanwhile, little attention has been given to the robustness of networks trained in a self-supervised manner. In this work, we demonstrate that networks trained via self-supervised learning have superior robustness and generalizability compared to fully-supervised learning in the context of medical imaging. Our experiments on pneumonia detection in X-rays and multi-organ segmentation in CT yield consistent results, exposing the hidden benefits of self-supervision for learning robust feature representations.
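
    As a generic illustration of the pretrain-then-fine-tune recipe evaluated above, the sketch below uses rotation prediction as the pretext task; the paper does not prescribe this particular task, and the encoder/head objects are assumed, so treat this as an example of the general strategy only.

    # Illustrative self-supervision recipe (assumed pretext task): pretrain
    # an encoder to predict image rotations on unlabeled data, then
    # fine-tune it on the small annotated target set.
    import torch
    import torch.nn as nn

    def rotation_batch(images):                      # images: (b, c, h, w)
        labels = torch.randint(0, 4, (images.size(0),))
        rotated = torch.stack([torch.rot90(img, int(k), dims=(-2, -1))
                               for img, k in zip(images, labels)])
        return rotated, labels                       # predict the rotation class

    def pretrain_step(encoder, head, images, opt):
        x, y = rotation_batch(images)
        loss = nn.functional.cross_entropy(head(encoder(x)), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
        return loss.item()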

    Tensor Networks for Big Data Analytics and Large-Scale Optimization Problems

    In this paper we review basic and emerging models and associated algorithms for large-scale tensor networks, especially Tensor Train (TT) decompositions, using novel mathematical and graphical representations. We discuss the concept of tensorization (i.e., creating very high-order tensors from lower-order original data) and the super compression of data achieved via quantized tensor train (QTT) networks. The purpose of tensorization and quantization is to achieve, via low-rank tensor approximations, "super" compression and a meaningful, compact representation of structured data. The main objective of this paper is to show how tensor networks can be used to solve a wide class of big data optimization problems (that are far from tractable by classical numerical methods) by applying tensorization, performing all operations using relatively small matrices and tensors, and iteratively applying optimized, approximate tensor contractions. Keywords: tensor networks, tensor train (TT) decompositions, matrix product states (MPS), matrix product operators (MPO), basic tensor operations, tensorization, distributed representation of data, optimization problems for very large-scale problems: generalized eigenvalue decomposition (GEVD), PCA/SVD, canonical correlation analysis (CCA).
    Comment: arXiv admin note: text overlap with arXiv:1403.204
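
    A minimal sketch of tensorization followed by a TT-SVD-style decomposition, assuming NumPy; the fixed rank cap and reshaping scheme below are illustrative simplifications, not the full algorithms the paper surveys.

    # Sketch: reshape a long vector into a high-order tensor (tensorization),
    # then split it into low-rank TT cores via sequential truncated SVDs.
    import numpy as np

    def tt_decompose(vector, dims, max_rank=8):
        tensor = vector.reshape(dims)        # tensorization: vector -> high-order tensor
        cores, r_prev = [], 1
        rest = tensor.reshape(r_prev * dims[0], -1)
        for k, d in enumerate(dims[:-1]):
            u, s, vt = np.linalg.svd(rest, full_matrices=False)
            r = min(max_rank, len(s))        # truncate to the rank cap
            cores.append(u[:, :r].reshape(r_prev, d, r))   # TT core G_k
            rest = (s[:r, None] * vt[:r])
            r_prev = r
            rest = rest.reshape(r_prev * dims[k + 1], -1)
        cores.append(rest.reshape(r_prev, dims[-1], 1))    # last core
        return cores

    For QTT-style super compression, a length-1024 vector would be tensorized with dims = (2,) * 10 before decomposition, so that all subsequent operations act on small third-order cores rather than the original data.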

    Interactive, tree-based graph visualization

    We introduce an interactive graph visualization scheme that allows users to explore graphs by viewing them as a sequence of spanning trees, rather than as the entire graph all at once. The user determines which spanning trees are displayed by selecting a vertex from the graph to be the root. Our main contributions are a graph drawing algorithm that generates meaningful representations of graphs using extracted spanning trees, and a graph animation algorithm for creating smooth, continuous transitions between graph drawings. We conduct experiments to measure how well our algorithms visualize graphs and compare them to another visualization scheme.
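
    A minimal sketch of the root-selection step, assuming networkx; BFS from the chosen root is one plausible way to extract a spanning tree, and the paper's actual extraction and drawing algorithms may differ.

    # Sketch: given a user-selected root, extract a spanning tree to display.
    import networkx as nx

    def spanning_tree_view(graph, root):
        tree = nx.bfs_tree(graph, root)            # shortest-hop spanning tree from root
        depths = dict(nx.shortest_path_length(graph, root))
        return tree, depths                        # tree to draw, depth per vertex

    Re-running this with a different root yields the next spanning tree in the sequence, which is where the animated transitions between drawings would come in.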

    Evolution of Representations. From Basic Life to Self-Representation and Self-Consciousness

    The notion of representation is at the foundation of the cognitive sciences and is used in theories of mind and consciousness. Other notions, like 'embodiment', 'intentionality', 'guidance theory', or 'biosemantics', have been associated with the notion of representation to introduce its functional aspect. We propose here that a conception of 'usage-related' representation eases its positioning in an evolutionary context and opens new areas of investigation toward self-representation and self-consciousness. The subject is presented in five parts.

    Following an overall presentation, the first part introduces a usage-related representation as information managed by a system subject to a constraint that has to be satisfied. We consider that such a system can generate meaningful information by comparing its constraint to received information (Menant 2003). We define a representation as being made of the received information and of the meaningful information. This approach allows the representation to be grounded both within and outside the system.

    The second part introduces the two types of representations we focus on for living organisms: representations of conspecifics and auto-representation, the latter being defined without using a notion of self-representation. Both types of representations existed in our pre-human ancestors, who can be compared to today's great apes.

    In the third part, we use the capacity for intersubjectivity as identified in group life, associated with the presence of mirror neurons in the organisms. Mirror neurons were discovered in the 1990s (Rizzolatti et al. 1996; Gallese et al. 1996). The level of intersubjectivity that can be attributed to non-human primates in relation to mirror neurons is currently a subject of debate (Decety 2003). We consider that a limited intersubjectivity between pre-human primates made possible a merger of both types of representations.

    The fourth part proposes that such a merger of representations feeds the auto-representation with the meanings associated with the representations of conspecifics, namely the meanings associated with an entity perceived as existing in the environment. We propose that auto-representation carrying these new meanings makes up the first elements of self-representation. Intersubjectivity has allowed auto-representation to evolve into self-representation, avoiding the homunculus risk.

    The fifth part continues other presentations (Menant 2004, 2005) on the possible evolution of self-representation into self-consciousness. We propose that identification with suffering or endangered conspecifics increased anxiety, and that the tools used to limit this anxiety (the development of empathy, imitation, language, and group life) provided positive feedback on intersubjectivity and created an evolutionary engine for the organism. Other outcomes have also been possible. This approach roots consciousness in emotions. The evolutionary scenario proposed here does not explicitly address the question of phenomenal consciousness (Block 1995); that question is to be addressed later with the help of this scenario. The conclusion lists the points introduced here with their possible continuations.