2,265 research outputs found
Semantic bottleneck for computer vision tasks
This paper introduces a novel method for the representation of images that is semantic by nature, addressing the question of the intelligibility of computations in computer vision tasks. More specifically, we propose to introduce what we call a semantic bottleneck into the processing pipeline: a crossing point at which the representation of the image is expressed entirely in natural language, while retaining the efficiency of numerical representations. We show that our approach is able to generate semantic representations that give state-of-the-art results on semantic content-based image retrieval and also perform very well on image classification tasks. Intelligibility is evaluated through user-centered experiments for failure detection.
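To make the idea concrete, here is a minimal sketch of such a pipeline in Python; the interfaces (`image_to_text`, `text_to_vector`) are placeholders for whatever captioning or attribute predictor and text encoder one chooses, not the paper's implementation.

```python
# Hypothetical sketch of a semantic-bottleneck pipeline; the interfaces are
# placeholders, not the paper's implementation.
from typing import List, Tuple

def image_to_text(image) -> List[str]:
    """Predict natural-language descriptors (captions, attributes) for an image.
    Stand-in for any image-to-text model."""
    raise NotImplementedError

def text_to_vector(descriptors: List[str]) -> List[float]:
    """Re-encode the textual bottleneck numerically so downstream tasks keep
    the efficiency of vector representations. Stand-in for any text encoder."""
    raise NotImplementedError

def represent(image) -> Tuple[List[str], List[float]]:
    # The crossing point: everything downstream sees passes through text,
    # so a human can inspect exactly what the representation encodes.
    descriptors = image_to_text(image)
    return descriptors, text_to_vector(descriptors)
```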
Exponential Family Embeddings
Word embeddings are a powerful approach for capturing semantic similarity among terms in a vocabulary. Exponential family embeddings extend the idea of word embeddings to other types of high-dimensional data. Exponential family embeddings have three ingredients: embeddings as latent variables, a predefined conditioning set for each observation called the context, and a conditional likelihood from the exponential family. The embeddings are inferred with a scalable algorithm. This thesis highlights three advantages of the exponential family embeddings model class: (A) The approximations used for existing methods such as word2vec can be understood as a biased stochastic gradient procedure on a specific type of exponential family embedding model, the Bernoulli embedding. (B) By choosing different likelihoods from the exponential family we can generalize the task of learning distributed representations to different application domains. For example, we can learn embeddings of grocery items from shopping data, embeddings of movies from click data, or embeddings of neurons from recordings of zebrafish brains. On all three applications, we find exponential family embedding models to be more effective than other types of dimensionality reduction. They better reconstruct held-out data and find interesting qualitative structure. (C) Finally, the probabilistic modeling perspective allows us to incorporate structure and domain knowledge in the embedding space. We develop models for studying how language varies over time, differs between related groups of data, and how word usage differs between languages. Key to the success of these methods is that the embeddings share statistical information through hierarchical priors or neural networks. We demonstrate the benefits of this approach in empirical studies of Senate speeches, scientific abstracts, and shopping baskets.
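As a rough illustration of ingredient (A), the following sketch runs one stochastic gradient step for a Bernoulli embedding with a logistic link; vocabulary size, dimensionality and learning rate are arbitrary, and the thesis's actual inference algorithm may differ.

```python
import numpy as np

# Bernoulli embedding sketch: a binary observation x for word i is modeled as
# Bernoulli with natural parameter rho[i] . (sum of alpha over context words).
rng = np.random.default_rng(0)
V, K = 1000, 25                            # vocabulary size, embedding dim (arbitrary)
rho = 0.1 * rng.standard_normal((V, K))    # embedding vectors (latent variables)
alpha = 0.1 * rng.standard_normal((V, K))  # context vectors

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sgd_step(i, context, x, lr=0.01):
    """One stochastic gradient step on log p(x | context) for word index i,
    a list of context word indices, and a binary observation x."""
    ctx = alpha[context].sum(axis=0)
    grad = x - sigmoid(rho[i] @ ctx)   # d log-likelihood / d natural parameter
    rho[i] += lr * grad * ctx
    alpha[context] += lr * grad * rho[i]
```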
An Emergent Approach to Text Analysis Based on a Connectionist Model and the Web
In this paper, we present a method to provide proactive assistance in text checking, based on usage relationships between words extracted from the Web. For a given sentence, the method builds a connectionist structure of relationships between word n-grams. This structure is then parameterized by means of an unsupervised and language-agnostic optimization process. Finally, the method provides a representation of the sentence that allows the least prominent usage-based relational patterns to emerge, making it easy to find badly written and unpopular text. The study includes the problem statement and its characterization in the literature, as well as the proposed approach and some experimental results.
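A loose, simplified illustration of the underlying intuition (not the paper's connectionist model): given n-gram usage counts gathered from the Web, the rarest n-grams of a sentence point to its least prominent, potentially badly written spans.

```python
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def least_prominent(sentence, corpus_counts: Counter, n=3, k=2):
    """Return the k rarest n-grams of `sentence` according to `corpus_counts`,
    a Counter of n-gram frequencies assumed to be built from Web data."""
    grams = ngrams(sentence.lower().split(), n)
    return sorted(grams, key=lambda g: corpus_counts.get(g, 0))[:k]
```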
Investigations into the value of labeled and unlabeled data in biomedical entity recognition and word sense disambiguation
Human annotations, especially in highly technical domains, are expensive and time consuming to gather, and can also be erroneous. As a result, we never have sufficiently accurate data to train and evaluate supervised methods. In this thesis, we address this problem by taking a semi-supervised approach to biomedical named entity recognition (NER), and by proposing an inventory-independent evaluation framework for supervised and unsupervised word sense disambiguation. Our contributions are as follows: We introduce a novel graph-based semi-supervised approach to NER and exploit pre-trained contextualized word embeddings in several biomedical NER tasks. We propose a new evaluation framework for word sense disambiguation that permits a fair comparison between supervised methods trained on different sense inventories as well as unsupervised methods without a fixed sense inventory.
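One common graph-based semi-supervised recipe of this flavor, sketched with scikit-learn's LabelPropagation over contextual token embeddings; the input files and graph parameters are assumptions, and the thesis's actual method may differ.

```python
import numpy as np
from sklearn.semi_supervised import LabelPropagation

# Assumed inputs: contextual embeddings for each token and partial labels,
# where -1 marks the unlabeled tokens.
X = np.load("token_embeddings.npy")   # (n_tokens, dim), e.g. from a pretrained encoder
y = np.load("token_labels.npy")       # entity-type ids; -1 for unlabeled tokens

# Build a k-nearest-neighbor graph over the embeddings and propagate the
# few gold labels along it.
model = LabelPropagation(kernel="knn", n_neighbors=10)
model.fit(X, y)
predicted = model.transduction_       # inferred labels for all tokens
```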
Towards A Practically Useful Text Simplification System
While there is a vast amount of text written about nearly any topic, it is often difficult for someone unfamiliar with a specific field to understand. Automated text simplification aims to reduce the complexity of a document, making it more comprehensible to a broader audience. Much of the research in this field has traditionally focused on simplification sub-tasks, such as lexical, syntactic, or sentence-level simplification. However, current systems struggle to consistently produce high-quality simplifications. Phrase-based models tend to make too many poor transformations; on the other hand, recent neural models, while producing grammatical output, often do not make all the changes the original text needs. In this thesis, I discuss novel approaches for improving lexical and sentence-level simplification systems. Regarding sentence simplification models, after noting that encouraging diversity at inference time leads to significant improvements, I take a closer look at the idea of diversity and perform an exhaustive comparison of diverse decoding techniques on other generation tasks. I also discuss the limitations in the framing of current simplification tasks, which prevent these models from being practically useful yet. Thus, I also propose a retrieval-based reformulation of the problem. Specifically, starting with a document, I identify concepts critical to understanding its content, and then retrieve documents relevant to each concept, re-ranking them based on the desired complexity level.
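A hedged sketch of the retrieval-based reformulation's final step: re-ranking retrieved documents toward a desired complexity level. The complexity score below is a crude proxy, not the thesis's metric.

```python
def complexity(text: str) -> float:
    """Crude proxy: longer sentences and longer words read as more complex."""
    words = text.split()
    sents = max(text.count(".") + text.count("!") + text.count("?"), 1)
    avg_sentence_len = len(words) / sents
    avg_word_len = sum(len(w) for w in words) / max(len(words), 1)
    return 0.4 * avg_sentence_len + avg_word_len

def rerank(docs, target: float):
    """Order retrieved documents so those closest to the desired complexity
    level come first."""
    return sorted(docs, key=lambda d: abs(complexity(d) - target))
```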
Unsupervised Methods for Learning and Using Semantics of Natural Language
Teaching the computer to understand language is the major goal in the field of natural language processing. In this thesis we introduce computational methods that aim to extract language structure (e.g. grammar, semantics or syntax) from text, which provides the computer with the information it needs to understand language. During the last decades, scientific efforts and the increase in computational resources have made it possible to come closer to the goal of understanding language. In order to extract language structure, many approaches train the computer on manually created resources. Most of these so-called supervised methods show high performance when applied to similar textual data. However, they perform worse when operating on textual data that differ from the data they were trained on. Whereas training the computer is essential for obtaining reasonable structure from natural language, we want to avoid training the computer using manually created resources.
In this thesis, we present so-called unsupervised methods, which are suited to learning patterns in order to extract structure directly from textual data. These patterns are learned with methods that extract the semantics (meanings) of words and phrases. In comparison to manually built knowledge bases, unsupervised methods are more flexible: they can extract structure from text in different languages or text domains (e.g. finance or medical texts) without requiring manually annotated structure. However, learning structure from text often faces sparsity issues. The reason for this phenomenon is that in language many words occur only a few times. If a word is seen only a few times, no precise information can be extracted from the text in which it occurs. Whereas sparsity issues cannot be solved completely, information about most words can be gained by using large amounts of data.
In the first chapter, we briefly describe how computers can learn to understand language. Afterwards, we present the main contributions, list the publications this thesis is based on, and give an overview of the thesis.
Chapter 2 introduces the terminology used in this thesis and gives background on natural language processing. Then, we characterize the linguistic theory of how humans understand language. Afterwards, we show how the underlying linguistic intuition can be operationalized for computers. Based on this operationalization, we introduce a formalism for representing words and their context. This formalism is used in the following chapters to compute similarities between words.
In Chapter 3 we give a brief description of methods in the field of computational semantics that are targeted at computing similarities between words. All these methods have in common that they extract, from text, a contextual representation for each word. This representation is then used to compute similarities between words. In addition, we present examples of the word similarities computed with these methods.
Segmenting text into topically related units is performed intuitively by humans and helps to extract connections between words in text. We equip the computer with these abilities by introducing a text segmentation algorithm in Chapter 4. This algorithm is based on a statistical topic model, which learns to cluster words into topics solely on the basis of the text. Using the segmentation algorithm, we demonstrate the influence of the parameters provided by the topic model. In addition, our method yields state-of-the-art performance on two datasets.
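In the spirit of this chapter (without claiming to reproduce its algorithm), the sketch below scores candidate segment boundaries by the topical coherence of adjacent sentence windows, given per-word topic assignments from a topic model such as LDA; the window size and scoring are illustrative.

```python
import numpy as np

def topic_vector(topic_ids, n_topics):
    """Normalized histogram of topic assignments for a block of words."""
    v = np.bincount(topic_ids, minlength=n_topics).astype(float)
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def boundary_scores(sentence_topics, n_topics, window=2):
    """Cosine similarity between the topic histograms of the windows before
    and after each candidate boundary; low scores suggest segment breaks.
    `sentence_topics` is a list of integer arrays, one per sentence."""
    scores = []
    for i in range(window, len(sentence_topics) - window + 1):
        left = np.concatenate(sentence_topics[i - window:i])
        right = np.concatenate(sentence_topics[i:i + window])
        scores.append(float(topic_vector(left, n_topics) @ topic_vector(right, n_topics)))
    return scores
```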
In order to represent the meaning of words, we use context information (e.g. neighboring words), which is utilized to compute similarities. Whereas we described methods for word similarity computations in Chapter 3, we introduce a generic symbolic framework in Chapter 5. As we follow a symbolic approach, we do not represent words using dense numeric vectors but use symbols (e.g. neighboring words or syntactic dependency parses) directly. Such a representation is readable by humans and is preferred in sensitive applications, like the medical domain, where the reasons for decisions need to be provided. This framework enables the processing of arbitrarily large data. Furthermore, it is able to compute the most similar words for all words within a text collection, resulting in a distributional thesaurus. We show the influence of various parameters deployed in our framework and examine the impact of different corpora used for computing similarities. Performing computations based on various contextual representations, we obtain the best results when using syntactic dependencies between words within sentences. However, these syntactic dependencies are predicted using a supervised dependency parser, which is trained on language-dependent and human-annotated resources.
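A toy version of the symbolic idea, assuming whitespace-tokenized sentences and neighboring words as context features (the actual framework also supports dependency features and scales to arbitrarily large data, which this sketch does not):

```python
from collections import defaultdict

def context_features(sentences):
    """Map each word to its set of symbolic context features (here: the
    immediate left and right neighbors)."""
    feats = defaultdict(set)
    for toks in sentences:
        for i, w in enumerate(toks):
            if i > 0:
                feats[w].add(("left", toks[i - 1]))
            if i < len(toks) - 1:
                feats[w].add(("right", toks[i + 1]))
    return feats

def thesaurus_entry(word, feats, top=10):
    """Rank all other words by the number of shared context features; the
    overlapping features double as a human-readable explanation."""
    wf = feats.get(word, set())
    shared = {v: len(wf & fv) for v, fv in feats.items() if v != word}
    return sorted(shared.items(), key=lambda kv: -kv[1])[:top]
```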
To avoid such language-specific preprocessing for computing distributional thesauri, we investigate the replacement of language-dependent dependency parsers by language-independent unsupervised parsers in Chapter 6. Evaluating the syntactic dependencies from unsupervised and supervised parses against human-annotated resources reveals that the unsupervised methods are not able to compete with the supervised ones. In this chapter we use the predicted structure of both types of parses as context representation in order to compute word similarities. Then, we evaluate the quality of the similarities, which provides an extrinsic evaluation setup for both unsupervised and supervised dependency parsers. In an evaluation on English text, similarities computed from contextual representations generated with unsupervised parsers do not outperform the similarities computed with context representations extracted from supervised parsers. However, we observe the best results when applying context retrieved by the unsupervised parser for computing distributional thesauri on German text. Furthermore, we demonstrate that our framework is capable of combining different context representations, as we obtain the best performance with a combination of both flavors of syntactic dependencies for both languages.
Most languages are not composed solely of single-worded terms, but also contain many multi-worded terms that form a unit, called multiword expressions. The identification of multiword expressions is particularly important for semantics: e.g. the term New York has a different meaning than its single terms New or York. Whereas most research on semantics avoids handling these expressions, we target the extraction of multiword expressions in Chapter 7. Most previously introduced methods rely on part-of-speech tags and apply a ranking function to rank term sequences according to their multiwordness. Here, we introduce a language-independent and knowledge-free ranking method that uses information from distributional thesauri. In evaluations on English and French textual data, our method achieves the best results in comparison to methods from the literature.
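As a hedged illustration of how a distributional thesaurus can signal multiwordness (an illustrative proxy, not the exact ranking function evaluated in the chapter): a phrase that behaves like a single word tends to have single-word terms among its distributionally most similar entries.

```python
def multiwordness(candidate, dt, top=20):
    """`dt` maps a term to its ranked list of distributionally similar terms
    (assumed to contain both single- and multi-word entries). The fraction of
    single-word neighbors serves as the multiwordness score."""
    neighbors = dt.get(candidate, [])[:top]
    if not neighbors:
        return 0.0
    return sum(1 for n in neighbors if " " not in n) / len(neighbors)

# Candidates scoring highest are kept as multiword expressions:
# ranked = sorted(candidates, key=lambda c: -multiwordness(c, dt))
```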
In Chapter 8 we apply information from distributional thesauri as features for various applications. First, we introduce a general setting for tackling the out-of-vocabulary problem: the inferior performance of supervised methods on words that are not contained in the training data. We alleviate this issue by replacing these unseen words with the most similar known words, extracted from a distributional thesaurus. Using a supervised part-of-speech tagging method, we show substantial improvements in classification performance for out-of-vocabulary words on German and English textual data. The second application is a system for replacing words within a sentence with a word of the same meaning. For this application, the information from a distributional thesaurus provides the highest-scoring features. In the last application, we introduce an algorithm that is capable of detecting the different meanings of a word and grouping them into coarse-grained categories, called supersenses. Generating features by means of supersenses and distributional thesauri yields a performance increase when plugged into a supervised system that recognizes named entities (e.g. names, organizations or locations).
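A sketch of the out-of-vocabulary strategy just described, with assumed interfaces: before tagging, each word unseen in training is swapped for its most similar known word from a distributional thesaurus `dt`.

```python
def replace_oov(tokens, train_vocab, dt):
    """Swap each token unseen in training for its most similar known word
    according to the distributional thesaurus `dt` (term -> ranked similar
    terms); keep the original token if no known neighbor exists."""
    out = []
    for tok in tokens:
        if tok in train_vocab:
            out.append(tok)
        else:
            known = [s for s in dt.get(tok, []) if s in train_vocab]
            out.append(known[0] if known else tok)
    return out
```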
Further directions for using distributional thesauri are presented in Chapter 9. First, we lay out a method that is capable of incorporating background information (e.g. the source of the text collection or sense information) into a distributional thesaurus. Furthermore, we describe an approach to building thesauri for different text domains (e.g. the medical or finance domain) and how they can be combined to achieve both high coverage of domain-specific knowledge and a broad background for the open domain. In the last section we characterize yet another method, suited to enriching existing knowledge bases. All three directions might be further extensions that induce further structure based on textual data.
The last chapter gives a summary of this work: we demonstrate that, without language-dependent knowledge, a computer can learn to extract useful structure from text by using computational semantics. Due to the unsupervised nature of the introduced methods, we are able to extract new structure from raw textual data. This is especially important for languages for which fewer manually created resources are available, as well as for special domains, e.g. medical or finance. We have demonstrated that our methods achieve state-of-the-art performance and have shown their impact by applying the extracted structure in three natural language processing tasks. We have also applied the methods to different languages and large amounts of data. Thus, we have not proposed methods suited to extracting structure for a single language, but methods that are capable of exploring structure for “language” in general.
Lexical simplification for the systematic support of cognitive accessibility guidelines
The Internet has come a long way in recent years, contributing to the proliferation of large volumes of digitally available information. Through user interfaces we can access these contents; however, they are not accessible to everyone. The users mainly affected are people with disabilities, who are already a considerable number, but accessibility barriers affect a wide range of user groups and contexts of use in accessing digital information. Some of these barriers are caused by language inaccessibility, when texts contain long sentences, unusual words and complex linguistic structures. These accessibility barriers directly affect people with cognitive disabilities.
For the purpose of making textual content more accessible, there are initiatives such as the Easy Reading guidelines, the Plain Language guidelines and some of the language-specific Web Content Accessibility Guidelines (WCAG). These guidelines provide documentation, but do not specify methods for meeting the requirements implicit in them in a systematic way. Methods from the Natural Language Processing (NLP) discipline can provide the support needed to achieve compliance with the cognitive accessibility guidelines for language.
The task of text simplification aims at reducing the linguistic complexity of a text from a syntactic and lexical perspective, the latter being the main focus of this Thesis. In this sense, one solution is to identify which words in a text are complex or uncommon and, where such words are found, to provide a more usual and simpler synonym, together with a simple definition, all oriented to people with cognitive disabilities.
With this goal in mind, this Thesis presents the study, analysis, design and development of an architecture, NLP methods, resources and tools for the lexical simplification of texts in Spanish, in a generic domain, in the field of cognitive accessibility. To achieve this, each of the steps present in the lexical simplification process is studied, together with methods for word sense disambiguation. As a contribution, different types of word embeddings are explored and created, supported by traditional and dynamic embedding methods, such as transfer learning methods. In addition, since most NLP methods require data for their operation, a resource in the framework of cognitive accessibility is presented as a contribution.
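As a rough sketch of the kind of lexical simplification pipeline the Thesis develops (names, thresholds and the gensim-style `most_similar` API are assumptions, and the Thesis additionally applies word sense disambiguation): uncommon words are detected by corpus frequency and replaced with more frequent neighbors from an embedding space.

```python
def is_complex(word, freq, threshold=1e-5):
    """Flag words whose relative corpus frequency falls below a threshold
    (the threshold is illustrative)."""
    return freq.get(word, 0.0) < threshold

def simpler_synonym(word, embeddings, freq):
    """Among the nearest embedding neighbors, pick the most frequent word
    that is more frequent than the original; `embeddings.most_similar`
    follows the gensim-style API (an assumption)."""
    candidates = [w for w, _ in embeddings.most_similar(word, topn=10)]
    candidates = [w for w in candidates if freq.get(w, 0.0) > freq.get(word, 0.0)]
    return max(candidates, key=lambda w: freq.get(w, 0.0)) if candidates else None
```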