13 research outputs found

    Code-switching in Irish tweets: a preliminary analysis

    As is the case with many languages, research into code-switching in Modern Irish has, until recently, mainly focused on the spoken language. Online user-generated content (UGC) is less restrictive than traditional written text, allowing for code-switching, and as such provides a new platform for text-based research in this field of study. This paper reports on the annotation of (English) code-switching in a corpus of 1496 Irish tweets and provides a computational analysis of the nature of code-switching among Irish-speaking Twitter users, with a view to providing a basis for future linguistic and socio-linguistic studies.
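    As a rough illustration of the kind of token-level analysis such an annotated corpus enables, the sketch below flags tweets that mix Irish and English tokens using toy wordlists. The wordlists and the tagging rule are illustrative assumptions, not the paper's annotation scheme.

```python
# Minimal sketch: flagging potential English code-switching in Irish tweets
# using simple wordlist lookups. The wordlists and rules are illustrative toys.

IRISH_WORDS = {"agus", "ach", "anois", "go", "maith", "oíche"}    # toy lexicon
ENGLISH_WORDS = {"and", "but", "now", "good", "night", "really"}  # toy lexicon

def tag_tokens(tweet: str) -> list[tuple[str, str]]:
    """Assign a coarse language tag (ga / en / unk) to each token."""
    tags = []
    for token in tweet.lower().split():
        word = token.strip(".,!?#@")
        if word in IRISH_WORDS:
            tags.append((token, "ga"))
        elif word in ENGLISH_WORDS:
            tags.append((token, "en"))
        else:
            tags.append((token, "unk"))
    return tags

def has_code_switch(tweet: str) -> bool:
    """A tweet is flagged when it contains both Irish and English tokens."""
    langs = {lang for _, lang in tag_tokens(tweet)}
    return {"ga", "en"} <= langs

print(has_code_switch("Oíche mhaith agus good night!"))  # True
```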

    Unsupervised coreference resolution by utilizing the most informative relations

    In this paper we present a novel method for unsupervised coreference resolution. We introduce a precision-oriented inference method that scores a candidate entity for a mention based on the most informative mention-pair relation between the given mention-entity pair. We introduce an informativeness score for determining the most precise relation of a mention-entity pair with respect to the coreference decisions. The informativeness score is learned robustly over a few iterations of the expectation-maximization algorithm. The proposed unsupervised system outperforms existing unsupervised methods on all benchmark data sets.
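    A minimal sketch of the precision-oriented inference idea: a candidate entity is scored by the single most informative relation holding between the mention and any mention already in the entity. The relation set and informativeness weights below are illustrative (e.g. as they might look after a few EM iterations), not the paper's learned model.

```python
from dataclasses import dataclass, field

# Hypothetical informativeness weights (higher = more reliable evidence).
INFORMATIVENESS = {"exact_match": 0.95, "head_match": 0.70, "pronoun_compatible": 0.40}

@dataclass
class Mention:
    text: str
    head: str
    is_pronoun: bool = False

@dataclass
class Entity:
    mentions: list = field(default_factory=list)

def relations(m1: Mention, m2: Mention):
    """Yield the relations that hold between two mentions."""
    if m1.text.lower() == m2.text.lower():
        yield "exact_match"
    if m1.head.lower() == m2.head.lower():
        yield "head_match"
    if m1.is_pronoun or m2.is_pronoun:
        yield "pronoun_compatible"

def score(mention: Mention, entity: Entity) -> float:
    """Score = informativeness of the single best relation to any entity mention."""
    return max(
        (INFORMATIVENESS[rel] for other in entity.mentions for rel in relations(mention, other)),
        default=0.0,
    )

def resolve(mention: Mention, entities: list, threshold: float = 0.5) -> None:
    """Attach the mention to its best-scoring entity, or start a new one."""
    best = max(entities, key=lambda e: score(mention, e), default=None)
    if best is not None and score(mention, best) >= threshold:
        best.mentions.append(mention)
    else:
        entities.append(Entity(mentions=[mention]))
```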

    Natural language software registry (second edition)


    Barry Smith an sich

    Festschrift in honor of Barry Smith on the occasion of his 65th birthday. Published as issue 4:4 of the journal Cosmos + Taxis: Studies in Emergent Order and Organization. Includes contributions by Wolfgang Grassl, Nicola Guarino, John T. Kearns, Rudolf Lüthe, Luc Schneider, Peter Simons, Wojciech Żełaniec, and Jan Woleński.

    Learning of a multilingual bitaxonomy of Wikipedia and its application to semantic predicates

    The ability to extract hypernymy information on a large scale is becoming increasingly important in natural language processing, the area of artificial intelligence which deals with the processing and understanding of natural language. While initial studies extracted this type of information from textual corpora by means of lexico-syntactic patterns, over time researchers moved to alternative, more structured sources of knowledge, such as Wikipedia. After the first attempts to extract is-a information from Wikipedia categories, a full line of research gave birth to numerous knowledge bases containing information which, however, is either incomplete or irremediably bound to English. To this end we put forward MultiWiBi, the first approach to the construction of a multilingual bitaxonomy, which exploits the inner connection between Wikipedia pages and Wikipedia categories to induce a wide-coverage and fine-grained integrated taxonomy. A series of experiments shows state-of-the-art results against all taxonomic resources available in the literature, also with respect to two novel measures of comparison. Another dimension where existing resources usually fall short is their degree of multilingualism. While knowledge is typically language-agnostic, current resources are able to extract relevant information only in languages with high-quality tools. In contrast, MultiWiBi does not leave any language behind: we show how to taxonomize Wikipedia in an arbitrary language, in a way that is fully independent of additional resources. At the core of our approach lies the idea that the English version of Wikipedia can be exploited as a pivot to project the taxonomic information extracted from English to any other Wikipedia language, so as to obtain a bitaxonomy in a second, arbitrary language; as a result, not only concepts which have an English equivalent are covered, but also those concepts which are not lexicalized in the source language. We also present the impact of embedding the taxonomized encyclopedic knowledge offered by MultiWiBi into a semantic model of predicates (SPred), which crucially leverages Wikipedia to generalize collections of related noun phrases and infer a probability distribution over expected semantic classes. We applied SPred to a word sense disambiguation task and show that, when MultiWiBi is plugged in to replace an internal component, SPred’s generalization power increases, as well as its precision and recall. Finally, we also published MultiWiBi as linked data, a paradigm which fosters interoperability and interconnection among resources and tools through the publication of data on the Web, and developed a public interface which lets users navigate MultiWiBi’s taxonomic structure in a graphical, engaging manner.
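    The pivot-based projection described above can be illustrated with a small sketch: English is-a edges are carried over to another Wikipedia language through inter-language links. The toy dictionaries below stand in for real Wikipedia link data; this is not the MultiWiBi algorithm itself.

```python
# English page -> English hypernym page (a tiny fragment of a page taxonomy).
EN_HYPERNYMS = {
    "Dublin": "City",
    "City": "Human settlement",
}

# English page -> Italian page, via inter-language links (illustrative).
EN_TO_IT = {
    "Dublin": "Dublino",
    "City": "Città",
    "Human settlement": "Insediamento umano",
}

def project_taxonomy(en_hypernyms: dict, interlanguage: dict) -> dict:
    """Project English is-a edges into the target language wherever both
    endpoints have an inter-language counterpart."""
    projected = {}
    for page, hypernym in en_hypernyms.items():
        if page in interlanguage and hypernym in interlanguage:
            projected[interlanguage[page]] = interlanguage[hypernym]
    return projected

print(project_taxonomy(EN_HYPERNYMS, EN_TO_IT))
# {'Dublino': 'Città', 'Città': 'Insediamento umano'}
```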

    Digital History and Hermeneutics

    For doing history in the digital age, we need to investigate the “digital kitchen” as the place where the “raw” is transformed into the “cooked”. The novel field of digital hermeneutics provides a critical and reflexive frame for digital humanities research through the acquisition of digital literacy and skills. The Doctoral Training Unit “Digital History and Hermeneutics” applies this new digital practice by reflecting on digital tools and methods.

    Deep interactive text prediction and quality estimation in translation interfaces

    The output of automatic translation systems is usually destined for human consumption. In most cases, translators use machine translation (MT) as the first step in the process of creating a fluent translation in a target language given a text in a source language. However, there are many possible ways for translators to interact with MT. The goal of this thesis is to investigate new interactive designs and interfaces for translation. In the first part of the thesis, we present pilot studies which investigate aspects of the interactive translation process, building upon insights from Human-Computer Interaction (HCI) and Translation Studies. We developed HandyCAT, an open-source platform for translation process research, which was used to conduct two user studies: an investigation into interactive machine translation and an evaluation of a novel component for post-editing. We then propose new models for quality estimation (QE) of MT, and new models for estimating the confidence of prefix-based neural interactive MT (IMT) systems. We present a series of experiments using neural sequence models for QE and IMT. We focus upon token-level QE models, which can be used as standalone components or integrated into post-editing pipelines, guiding users in selecting phrases to edit. We introduce a strong recurrent baseline for neural QE, and show how state-of-the-art automatic post-editing (APE) models can be re-purposed for word-level QE. We also propose an auxiliary confidence model, which can be attached to (I)MT systems to use the model’s internal state to estimate confidence about its predictions. The third part of the thesis introduces lexically constrained decoding using grid beam search (GBS), a means of expanding prefix-based interactive translation to general lexical constraints. By integrating lexically constrained decoding with word-level QE, we then suggest a novel interactive design for translation interfaces, and test our hypotheses using simulated editing. The final section focuses upon designing an interface for interactive post-editing, incorporating both GBS and QE. We design components which introduce a new way of interacting with translation models, and test these components in a user study.
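    As one small illustration of how token-level QE output can guide phrase selection in a post-editing interface, the sketch below groups contiguous tokens tagged BAD into spans that an interface could highlight for the translator. The tokens and tags are illustrative; the thesis's neural QE models are not reproduced here.

```python
def bad_spans(tokens: list[str], tags: list[str]) -> list[tuple[int, int]]:
    """Return (start, end) index pairs (end exclusive) of contiguous BAD runs."""
    spans, start = [], None
    for i, tag in enumerate(tags):
        if tag == "BAD" and start is None:
            start = i
        elif tag != "BAD" and start is not None:
            spans.append((start, i))
            start = None
    if start is not None:
        spans.append((start, len(tags)))
    return spans

mt_output = ["the", "cat", "sat", "onto", "of", "the", "mat"]
qe_tags   = ["OK",  "OK",  "OK",  "BAD",  "BAD", "OK",  "OK"]
for start, end in bad_spans(mt_output, qe_tags):
    print("suggest editing:", " ".join(mt_output[start:end]))
# suggest editing: onto of
```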

    Semantic analysis for improved multi-document summarization of text

    An excessive amount of unstructured data is easily accessible in digital format. This information overload places too heavy a burden on society for its analysis and execution needs. Focused (i.e. topic, query, question, category, etc.) multi-document summarization is an information reduction solution which has reached a state of the art that now demands exploring further techniques to model human summarization activity. Such techniques have been mainly extractive, relying on distributional statistics and complex machine learning over corpora in order to perform closely to human summaries. Overall, these techniques are still being used, and the field now needs to move toward more abstractive approaches that model the human way of summarizing. A simple, inexpensive and domain-independent system architecture is created for adding semantic analysis to the summarization process. The proposed system is novel in its use of a new semantic analysis metric to better score sentences for selection into a summary. It also simplifies the semantic processing of sentences to better capture semantically related information, reduce redundancy and reduce complexity. The system is evaluated against participants in the Document Understanding Conference and the later Text Analysis Conference using the ROUGE performance measures of n-gram recall between automated system, human, and gold-standard baseline summaries. The goal was to show that semantic analysis used for summarization can perform well, while remaining simple and inexpensive, without significant loss of recall compared to the foundational baseline system. Current results show improvement over the gold-standard baseline when all factors of this work's semantic analysis technique are used in combination. These factors are the semantic cue words feature and semantic class weighting to determine sentences with important information. The clustering of semantic triples, used to decompose natural-language sentences to their most basic meaning and to select the most important sentences, also added to this improvement. In competition against the gold-standard baseline system on the standardized summarization evaluation metric ROUGE, this work outperforms the baseline system by more than ten ranking positions. This work shows that semantic analysis and lightweight, open-domain techniques have potential. Ph.D., Information Studies, Drexel University, 201
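    As a rough illustration of scoring sentences with semantic cue words and semantic class weights, the sketch below combines both signals into a single sentence score and selects the top sentences. The cue list, class weights, and combination are illustrative assumptions, not the dissertation's actual metric.

```python
CUE_WORDS = {"because", "therefore", "result", "significant"}      # illustrative
CLASS_WEIGHTS = {"PERSON": 1.0, "ORGANIZATION": 0.8, "DATE": 0.5}  # illustrative

def score_sentence(tokens: list[str], entity_classes: list[str]) -> float:
    """Combine cue-word hits and semantic-class weights into one score."""
    cue_score = sum(1.0 for t in tokens if t.lower() in CUE_WORDS)
    class_score = sum(CLASS_WEIGHTS.get(c, 0.0) for c in entity_classes)
    return cue_score + class_score

def summarize(sentences: list[tuple[list[str], list[str]]], budget: int = 2) -> list[list[str]]:
    """Pick the top-scoring sentences as a crude extractive summary."""
    ranked = sorted(sentences, key=lambda s: score_sentence(*s), reverse=True)
    return [tokens for tokens, _ in ranked[:budget]]

sents = [
    (["Profits", "rose", "because", "of", "exports"], ["ORGANIZATION", "DATE"]),
    (["The", "weather", "was", "mild"], []),
]
print(summarize(sents, budget=1))  # [['Profits', 'rose', 'because', 'of', 'exports']]
```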

    Leveraging Semantic Annotations for Event-focused Search & Summarization

    Today, in this Big Data era, overwhelming amounts of textual information across different sources, with a high degree of redundancy, have made it hard for a consumer to retrospect on past events. A plausible solution is to link semantically similar information contained across the different sources to enforce a structure, thereby providing multiple access paths to relevant information. Keeping this larger goal in view, this work uses Wikipedia and online news articles as two prominent yet disparate information sources to address the following three problems:
    • We address a linking problem to connect Wikipedia excerpts to news articles by casting it as an IR task. Our novel approach integrates time, geolocations, and entities with text to identify relevant documents that can be linked to a given excerpt (a minimal sketch follows this abstract).
    • We address an unsupervised extractive multi-document summarization task to generate a fixed-length event digest that facilitates efficient consumption of the information contained within a large set of documents. Our novel approach proposes an ILP for global inference across text, time, geolocations, and entities associated with the event.
    • To estimate the temporal focus of short event descriptions, we present a semi-supervised approach that leverages redundancy within a longitudinal news collection to estimate accurate probabilistic time models.
    Extensive experimental evaluations demonstrate the effectiveness and viability of our proposed approaches towards achieving the larger goal.
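    A minimal sketch of the linking idea in the first point above: candidate news articles are ranked by a weighted combination of text, time, geolocation, and entity overlap with a Wikipedia excerpt. The Document fields, similarity functions, and weights are illustrative assumptions, not the thesis's retrieval model.

```python
from dataclasses import dataclass

@dataclass
class Document:
    words: set
    year: int
    places: set
    entities: set

def jaccard(a: set, b: set) -> float:
    """Simple set-overlap similarity."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def link_score(excerpt: Document, article: Document,
               w_text: float = 0.4, w_time: float = 0.2,
               w_geo: float = 0.2, w_ent: float = 0.2) -> float:
    """Weighted combination of the four evidence sources."""
    time_sim = 1.0 / (1.0 + abs(excerpt.year - article.year))
    return (w_text * jaccard(excerpt.words, article.words)
            + w_time * time_sim
            + w_geo * jaccard(excerpt.places, article.places)
            + w_ent * jaccard(excerpt.entities, article.entities))

def rank_articles(excerpt: Document, articles: list) -> list:
    """Return candidate articles ordered by descending link score."""
    return sorted(articles, key=lambda a: link_score(excerpt, a), reverse=True)
```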

    Enhancing Geospatial Data: Collecting and Visualising User-Generated Content Through Custom Toolkits and Cloud Computing Workflows

    In this thesis we set out the hypothesis that, through the creation of a set of custom toolkits using cloud computing, online user-generated content can be extracted from emerging large-scale data sets, allowing the collection, analysis and visualisation of geospatial data by social scientists. Using a custom-built suite of software, known as the ‘BigDataToolkit’, we examine the need for and use of cloud computing and custom workflows to open up access to existing online data, as well as setting up processes to enable the collection of new data. We examine the use of the toolkit to collect large amounts of data from various online sources, such as Social Media Application Programming Interfaces (APIs) and data stores, and to visualise the data collected in real time. Through the execution of these workflows, this thesis presents an implementation of a smart collector framework that automates the collection process to significantly increase the amount of data that can be obtained from standard API endpoints. Through these interconnected methods and distributed collection workflows, the final system is able to collect and visualise a larger amount of data in real time than the single-system data collection processes used within traditional social media analysis. Aimed at researchers without a core understanding of the intricacies of computer science, this thesis provides a methodology to open up new data sources not only to academics but also to wider participants, allowing the collection of user-generated geographic and textual content en masse. A series of case studies is provided, covering applications from a single researcher collecting data through to collection via the use of televised media. These are examined in terms of the tools created and the opportunities opened up, allowing real-time analysis of data collected via the use of the developed toolkit.
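    As a rough sketch of the distributed-collection idea, the code below runs one worker per API endpoint and funnels records into a single queue for downstream real-time visualisation. The endpoint URLs, authentication header, and JSON response format are illustrative assumptions; this is not the BigDataToolkit implementation.

```python
import json
import queue
import threading
import urllib.request

results: "queue.Queue[dict]" = queue.Queue()

def collect(endpoint: str, api_key: str) -> None:
    """Poll one (hypothetical) API endpoint and push parsed records to the queue."""
    request = urllib.request.Request(endpoint, headers={"Authorization": api_key})
    with urllib.request.urlopen(request) as response:
        for record in json.load(response):   # assumes the endpoint returns a JSON list
            results.put(record)

def run_collectors(endpoints: list, api_key: str) -> None:
    """One thread per endpoint, so a slow endpoint does not block the others."""
    threads = [threading.Thread(target=collect, args=(url, api_key)) for url in endpoints]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

# Downstream, a visualisation loop would drain `results` as records arrive.
```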