
    The Catalog Problem: Deep Learning Methods for Transforming Sets into Sequences of Clusters

    The titular Catalog Problem refers to predicting a varying number of ordered clusters from sets of any cardinality. This task arises in many diverse areas, ranging from medical triage, through multi-channel signal analysis for petroleum exploration, to product catalog structure prediction. This thesis focuses on the latter, which exemplifies a number of challenges inherent to ordered clustering: learning variable cluster constraints, exhibiting relational reasoning, and managing combinatorial complexity. All of these present unique challenges for neural networks, combining elements of set representation, neural clustering, and permutation learning.

    In order to approach the Catalog Problem, a curated dataset of over ten thousand real-world product catalogs comprising more than one million product offers is provided. Additionally, a library for generating simpler, synthetic catalog structures is presented. These and other datasets form the foundation of the included work, allowing for a quantitative comparison of the proposed methods' ability to address the underlying challenge. In particular, the synthetic datasets enable the assessment of the models' capacity to learn higher-order compositional and structural rules.

    Two novel neural methods are proposed to tackle the Catalog Problem: a set encoding module designed to enhance the network's ability to condition the prediction on the entirety of the input set, and a larger architecture for inferring an input-dependent number of diverse, ordered partitional clusters with an added cardinality prediction module. Both result in improved performance on the presented datasets, with the latter being the only neural method fulfilling all requirements inherent to addressing the Catalog Problem.
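
    As a rough illustration of the two ingredients named above (not the thesis architecture), the sketch below pairs a permutation-invariant set encoder, in the spirit of Deep Sets, with a cardinality head that predicts the number of clusters. All module names, dimensions, and the sum-pooling choice are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class SetEncoderWithCardinality(nn.Module):
    """Hypothetical sketch: condition predictions on the whole input set
    and predict how many ordered clusters the catalog should contain."""
    def __init__(self, in_dim=64, hidden=128, max_clusters=20):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden))
        self.rho = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU())
        self.cardinality_head = nn.Linear(hidden, max_clusters)  # cluster-count logits

    def forward(self, x):                   # x: (batch, set_size, in_dim)
        pooled = self.phi(x).sum(dim=1)     # sum pooling keeps permutation invariance
        z = self.rho(pooled)                # representation of the entire set
        return z, self.cardinality_head(z)

encoder = SetEncoderWithCardinality()
z, k_logits = encoder(torch.randn(2, 10, 64))  # two sets of ten offers each
```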

    Generating Effective Sentence Representations: Deep Learning and Reinforcement Learning Approaches

    Natural language processing (NLP) is one of the most important technologies of the information age, and understanding complex language utterances is a crucial part of artificial intelligence. Many natural language applications are powered by machine learning models performing a large variety of underlying tasks. Recently, deep learning approaches have obtained very high performance across many NLP tasks. To achieve this level of performance, it is crucial for computers to have an appropriate representation of sentences. The tasks addressed in this thesis are best approached with shallow semantic representations: vectors embedded in a semantic space. We present a variety of novel deep learning approaches to NLP for generating effective sentence representations in this space. These semantic representations can be either general or task-specific; we focus on learning task-specific sentence representations, where the tasks often overlap considerably. We design a set of general-purpose and task-specific sentence encoders combining word-level semantic knowledge with word- and sentence-level syntactic information. For the former, we perform an intelligent amalgamation of word vectors using modern deep learning modules. For the latter, we use word-level knowledge, such as parts of speech, spelling, and suffix features, and sentence-level information drawn from natural language parse trees, which provide the hierarchical structure of a sentence together with the grammatical relations between its words. Further expertise is added with reinforcement learning, which guides a machine learning model through a reward-penalty game. Rather than striving for good performance alone, we always try to design models that are more transparent and explainable, and we provide an intuitive explanation of each model's design and of how it makes its decisions. Our extensive experiments show that these models achieve performance competitive with the currently available state-of-the-art generalized and task-specific sentence encoders. All but one of the tasks deal with English-language texts; the multilingual semantic similarity task required creating a multilingual corpus, for which we provide a novel semi-supervised approach for creating artificial negative samples when only positive samples are present.
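
    To make the fusion of word-level semantic and syntactic signals concrete, here is a minimal sketch of one common design along the lines the abstract describes: word embeddings concatenated with part-of-speech embeddings, fed to a BiLSTM whose states are pooled into a fixed-size sentence vector. Every name and dimension here is an assumption, not the thesis's encoder.

```python
import torch
import torch.nn as nn

class SyntaxAwareSentenceEncoder(nn.Module):
    """Illustrative sketch: combine word vectors with POS-tag embeddings."""
    def __init__(self, vocab=10000, n_pos=45, word_dim=300, pos_dim=32, hidden=256):
        super().__init__()
        self.word_emb = nn.Embedding(vocab, word_dim)
        self.pos_emb = nn.Embedding(n_pos, pos_dim)   # word-level syntactic feature
        self.lstm = nn.LSTM(word_dim + pos_dim, hidden,
                            batch_first=True, bidirectional=True)

    def forward(self, words, pos_tags):               # both: (batch, seq_len) int ids
        x = torch.cat([self.word_emb(words), self.pos_emb(pos_tags)], dim=-1)
        out, _ = self.lstm(x)                         # (batch, seq_len, 2 * hidden)
        return out.mean(dim=1)                        # pooled sentence representation

encoder = SyntaxAwareSentenceEncoder()
vec = encoder(torch.randint(0, 10000, (4, 12)), torch.randint(0, 45, (4, 12)))
```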

    AC-SUM-GAN: Connecting Actor-Critic and Generative Adversarial Networks for Unsupervised Video Summarization

    This paper presents a new method for unsupervised video summarization. The proposed architecture embeds an Actor-Critic model into a Generative Adversarial Network and formulates the selection of important video fragments (those that will form the summary) as a sequence generation task. The Actor and the Critic take part in a game that incrementally leads to the selection of the video key-fragments, and their choices at each step of the game result in a set of rewards from the Discriminator. The designed training workflow allows the Actor and the Critic to discover a space of actions and automatically learn a policy for key-fragment selection. Moreover, the introduced criterion for choosing the best model after training ends enables the automatic selection of proper values for training-process parameters that are not learned from the data (such as the regularization factor σ). Experimental evaluation on two benchmark datasets (SumMe and TVSum) demonstrates that the proposed AC-SUM-GAN model performs consistently well and achieves state-of-the-art results among unsupervised methods, results that are also competitive with supervised methods.
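
    A compressed, hypothetical sketch of the reward loop described above: the Actor samples a fragment-selection action, a Discriminator score serves as the reward, and the Critic's value estimate turns that reward into an advantage for the policy update. The callables and the discriminator(state, action) signature are assumptions; the actual AC-SUM-GAN training workflow contains considerably more machinery.

```python
import torch

def actor_critic_step(actor, critic, discriminator, state, optimizer):
    # Actor proposes a distribution over candidate key-fragments.
    probs = actor(state)
    dist = torch.distributions.Categorical(probs)
    action = dist.sample()                    # pick one fragment this step
    reward = discriminator(state, action)     # Discriminator score acts as reward
    value = critic(state)                     # learned baseline estimate
    advantage = (reward - value).detach()
    policy_loss = -dist.log_prob(action) * advantage
    value_loss = (reward.detach() - value).pow(2)
    optimizer.zero_grad()
    (policy_loss + value_loss).mean().backward()
    optimizer.step()
```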

    Integrating Deep Contextualized Word Embeddings into Text Summarization Systems

    This thesis applies deep learning techniques to one of the hardest problems in natural language processing: automatic text summarization. Given a text corpus, the goal is to generate a summary capable of distilling and compressing the information of the entire source text. Early approaches tried to capture the meaning of a text through human-written rules. After this symbolic, rule-based era, statistical approaches took over. In recent years, deep learning has positively impacted every area of natural language processing, including automatic summarization. In this work, pointer-generator models [See et al., 2017] are used in combination with pre-trained deep contextualized word embeddings [Peters et al., 2018]. The approach is evaluated on the two largest summarization datasets currently available: the CNN/Daily Mail dataset and the Newsroom dataset. The CNN/Daily Mail dataset was built from the Question Answering dataset published by DeepMind [Hermann et al., 2015] by concatenating the highlight sentences of each news story into multi-sentence summaries. The Newsroom dataset [Grusky et al., 2018] is instead the first dataset explicitly constructed for automatic summarization; it comprises one million article-summary pairs with varying degrees of extractiveness/abstractiveness at different compression ratios. The approach is evaluated on the test sets using the Recall-Oriented Understudy for Gisting Evaluation (ROUGE) metric, and it yields a substantial performance gain on the Newsroom dataset, reaching state-of-the-art ROUGE-1 and competitive ROUGE-2 and ROUGE-L scores.
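
    The pointer-generator mixture at the heart of this approach is documented in See et al. (2017): a generation probability p_gen interpolates between the decoder's vocabulary distribution and a copy distribution obtained from attention over the source. The sketch below is a minimal rendering of that final distribution, not the thesis code; tensor names and shapes are illustrative.

```python
import torch

def final_distribution(p_gen, vocab_dist, attention, src_ids, vocab_size):
    """p_gen: (batch, 1); vocab_dist: (batch, vocab_size);
    attention: (batch, src_len); src_ids: (batch, src_len) int64 token ids."""
    generate = p_gen * vocab_dist                   # generate from the vocabulary
    copy = torch.zeros(vocab_dist.size(0), vocab_size)
    copy.scatter_add_(1, src_ids, (1.0 - p_gen) * attention)  # copy via attention
    return generate + copy                          # P(w) as in See et al. (2017)
```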