80 research outputs found

    Harnessing heterogeneous association in real-world networks

    Real-world networks often contain heterogeneity due to the heterogeneous nature of the world; examples include multi-view social networks, heterogeneous bibliographic networks, and biomedical networks. Ostensibly, the heterogeneity of real-world networks appears as the typed nature of nodes and edges. By considering type information, researchers have shown that typed networks outperform homogeneous networks in a wide variety of downstream applications such as classification, clustering, recommendation, and outlier detection. Beyond this low-level heterogeneity of nodes and edges on the surface, their types also naturally induce higher-level typed network components. In my practice of mining real-world networks, I have found that heterogeneity also prevalently lies in the association across different network components, and such heterogeneous association is often important and intrinsic to the information embodied in the networks. In this dissertation, I investigate the necessity of modeling heterogeneous association in real-world networks and develop methodologies to simultaneously leverage the rich information and accommodate the incompatibility that arises in the presence of heterogeneous association. A series of new models along this line is proposed for specific problems, including learning network embeddings, defining relevance measures, and discovering hypernymy relations, together with a discussion of how the principles reflected by these models can be used in other network mining tasks. The proposed models not only achieve better quantitative results but also uncover the semantics hidden in the heterogeneous association of real-world data.
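As a minimal illustration of the typed networks discussed above (our sketch, not the dissertation's code), the snippet below represents a heterogeneous bibliographic network with typed nodes and edges, and follows a type-constrained path (a metapath such as author-paper-author) to surface one kind of association between typed components; the class name, metapath, and toy data are all assumptions.

```python
from collections import defaultdict

class HeterogeneousGraph:
    """A minimal typed graph: every node and edge carries a type label."""
    def __init__(self):
        self.node_type = {}           # node id -> node type
        self.adj = defaultdict(list)  # node id -> [(neighbor, edge type)]

    def add_node(self, node, ntype):
        self.node_type[node] = ntype

    def add_edge(self, u, v, etype):
        self.adj[u].append((v, etype))
        self.adj[v].append((u, etype))

    def metapath_neighbors(self, start, metapath):
        """Follow a sequence of node types, e.g. ['paper', 'author']."""
        frontier = {start}
        for ntype in metapath:
            frontier = {v for u in frontier
                          for v, _ in self.adj[u]
                          if self.node_type[v] == ntype}
        return frontier

# Example: authors linked through a shared paper (an A-P-A metapath).
g = HeterogeneousGraph()
g.add_node("a1", "author"); g.add_node("a2", "author"); g.add_node("p1", "paper")
g.add_edge("a1", "p1", "writes"); g.add_edge("a2", "p1", "writes")
print(g.metapath_neighbors("a1", ["paper", "author"]))  # {'a1', 'a2'}
```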

    Usage-Driven Unified Model for User Profile and Data Source Profile Extraction

    This thesis addresses a problem related to usage analysis in information retrieval systems: we exploit the history of search queries as the basis of analysis for extracting a profile model. The objective is to characterize the users and data sources that interact in a system, so as to allow different types of comparison (user-to-user, source-to-source, user-to-source). From the study we conducted of existing work on profile models, we concluded that the large majority of contributions are strongly tied to the applications for which they are proposed. As a result, the proposed profile models are not reusable and suffer from several weaknesses: for instance, they do not consider the data source, they lack semantic mechanisms, and they do not address scalability (in terms of complexity). We therefore propose a generic model of user and data-source profiles, with the following characteristics. First, it is generic, able to represent both the user and the data source. Second, it constructs profiles implicitly from histories of search queries. Third, it defines a profile as a set of topics of interest, each topic corresponding to a semantic cluster of keywords extracted by a specific clustering algorithm. Finally, the profile is represented according to the vector space model. The model is composed of several components organized in the form of a framework, in which we assess the complexity of each component. The main components of the framework are:
    • a method for disambiguating keyword queries;
    • a method for semantically representing search query logs in the form of a taxonomy;
    • a clustering algorithm that allows fast and efficient identification of topics of interest as semantic clusters of keywords;
    • a method to identify user and data-source profiles according to the generic model.
    This framework makes it possible, in particular, to perform various tasks related to the usage-based structuring of a distributed environment. As examples of application, the framework is used for the discovery of user communities and for the categorization of data sources. To validate the proposed framework, we conduct a series of experiments on real logs from the search engine AOL Search, which demonstrate the efficiency of the disambiguation method on short queries and show the relation between quality-based clustering and structure-based clustering.
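To make the generic profile model above concrete, here is a minimal sketch (our illustration, not the thesis's implementation) of a profile as a set of keyword clusters projected into the vector space model, with cosine similarity supporting the user-to-user, source-to-source, and user-to-source comparisons it describes; the frequency weighting and toy topics are assumptions.

```python
import math
from collections import Counter

def profile_from_topics(topics):
    """A profile is a set of topics of interest; each topic is a semantic
    cluster of keywords. Projected into the vector space model, a keyword's
    weight here is its frequency across clusters (an assumed scheme)."""
    vec = Counter()
    for cluster in topics:
        vec.update(cluster)
    return vec

def cosine(p, q):
    """Cosine similarity between two profiles in the vector space model."""
    dot = sum(p[k] * q[k] for k in p.keys() & q.keys())
    norm = math.sqrt(sum(v * v for v in p.values())) \
         * math.sqrt(sum(v * v for v in q.values()))
    return dot / norm if norm else 0.0

# User-to-source comparison over a shared keyword space (toy data):
user   = profile_from_topics([["jaguar", "car", "engine"], ["speed", "car"]])
source = profile_from_topics([["car", "engine", "tyre"]])
print(round(cosine(user, source), 3))  # ~0.655
```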

    Semantic vector representations of senses, concepts and entities and their applications in natural language processing

    Representation learning lies at the core of Artificial Intelligence (AI) and Natural Language Processing (NLP). Most recent research has focused on developing representations at the word level. In particular, the representation of words in a vector space has been viewed as one of the most important successes of lexical semantics and NLP in recent years. The generalization power and flexibility of these representations have enabled their integration into a wide variety of text-based applications, where they have proved extremely beneficial. However, these representations are hampered by an important limitation, as they are unable to model different meanings of the same word. In order to deal with this issue, in this thesis we analyze and develop flexible semantic representations of meanings, i.e. senses, concepts and entities. This finer distinction enables us to model semantic information at a deeper level, which in turn is essential for dealing with ambiguity. In addition, we view these (vector) representations as a connecting bridge between lexical resources and textual data, encoding knowledge from both sources. We argue that these sense-level representations, much like word embeddings, constitute a first step for seamlessly integrating explicit knowledge into NLP applications, while focusing on the deeper sense level. Their use not only aims at solving the inherent lexical ambiguity of language, but also represents a first step towards the integration of background knowledge into NLP applications. Multilinguality is another key feature of these representations, as we explore the construction of language-independent and multilingual techniques that can be applied to arbitrary languages, and also across languages. We propose simple unsupervised and supervised frameworks which make use of these vector representations for word sense disambiguation, a key application in natural language understanding, and for other downstream applications such as text categorization and sentiment analysis. Given the nature of the vectors, we also investigate their effectiveness for improving and enriching knowledge bases, by reducing the sense granularity of their sense inventories and extending them with domain labels, hypernyms and collocations.
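A minimal sketch of how sense vectors of the kind described above support disambiguation, assuming pre-computed context-word and sense embeddings in a shared space; the averaging baseline and all names and toy vectors are our assumptions, not the thesis's exact method.

```python
import numpy as np

def disambiguate(context_vectors, sense_vectors):
    """Pick the sense whose vector is closest (by cosine similarity)
    to the context, represented as the average of its word vectors."""
    ctx = np.mean(context_vectors, axis=0)
    ctx = ctx / np.linalg.norm(ctx)
    scores = {sense: float(np.dot(ctx, v / np.linalg.norm(v)))
              for sense, v in sense_vectors.items()}
    return max(scores, key=scores.get)

# Toy 3-d embeddings: the context of "bank" mentions money-related words.
context = [np.array([0.9, 0.1, 0.0]),   # "loan"
           np.array([0.8, 0.2, 0.1])]   # "deposit"
senses = {"bank#finance": np.array([1.0, 0.1, 0.0]),
          "bank#river":   np.array([0.0, 0.1, 1.0])}
print(disambiguate(context, senses))    # -> bank#finance
```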

    Harnessing sense-level information for semantically augmented knowledge extraction

    Nowadays, building accurate computational models for the semantics of language lies at the very core of Natural Language Processing and Artificial Intelligence. A first and foremost step in this respect consists in moving from word-based to sense-based approaches, in which operating explicitly at the level of word senses enables a model to produce more accurate and unambiguous results. At the same time, word senses create a bridge towards structured lexico-semantic resources, where the vast amount of available machine-readable information can help overcome the shortage of annotated data in many languages and domains of knowledge. This latter phenomenon, known as the knowledge acquisition bottleneck, is a crucial problem that hampers the development of large-scale, data-driven approaches for many Natural Language Processing tasks, especially when lexical semantics is directly involved. One of these tasks is Information Extraction, where an effective model has to cope with data sparsity, as well as with lexical ambiguity that can arise at the level of both arguments and relational phrases. Even in more recent Information Extraction approaches where semantics is implicitly modeled, these issues have not yet been addressed in their entirety. On the other hand, however, obtaining explicit sense-level information is a very demanding task in its own right, and one that can rarely be performed with high accuracy on a large scale. With this in mind, in this thesis we will tackle a two-fold objective: our first focus will be on studying fully automatic approaches to obtain high-quality sense-level information from textual corpora; then, we will investigate in depth where and how such sense-level information has the potential to enhance the extraction of knowledge from open text. In the first part of this work, we will explore three different disambiguation scenarios (semi-structured text, parallel text, and definitional text) and devise automatic disambiguation strategies that are not only capable of scaling to different corpus sizes and different languages, but that actually take advantage of a multilingual and/or heterogeneous setting to improve and refine their performance. As a result, we will obtain three sense-annotated resources that, when tested experimentally with a baseline system in a series of downstream semantic tasks (i.e. Word Sense Disambiguation, Entity Linking, Semantic Similarity), show very competitive performance on standard benchmarks against both manual and semi-automatic competitors. In the second part we will instead focus on Information Extraction, with an emphasis on Open Information Extraction (OIE), where issues like sparsity and lexical ambiguity are especially critical, and study how best to exploit sense-level information within the extraction process. We will start by showing that enforcing a deeper semantic analysis in a definitional setting enables a full-fledged extraction pipeline to compete with state-of-the-art approaches based on much larger (but noisier) data. We will then demonstrate how working at the sense level at the end of an extraction pipeline is also beneficial: indeed, by leveraging sense-based techniques, very heterogeneous OIE-derived data can be aligned semantically and unified with respect to a common sense inventory. Finally, we will briefly shift the focus to the more constrained setting of hypernym discovery, and study a sense-aware supervised framework for the task that is robust and effective even when trained on heterogeneous OIE-derived hypernymic knowledge.
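One hedged illustration of the semantic unification step sketched above (not the thesis's actual pipeline): heterogeneous OIE relation phrases can be mapped onto a common sense inventory by nearest-neighbor search in embedding space. The embeddings, threshold, and names below are all assumptions.

```python
import numpy as np

def align_relations(phrase_vecs, inventory_vecs, threshold=0.7):
    """Map OIE relation phrases onto a common sense inventory via
    nearest-neighbor search; phrases whose best match falls below
    the (assumed) threshold remain unaligned rather than forced."""
    aligned = {}
    for phrase, v in phrase_vecs.items():
        v = v / np.linalg.norm(v)
        sims = {sense: float(np.dot(v, s / np.linalg.norm(s)))
                for sense, s in inventory_vecs.items()}
        best = max(sims, key=sims.get)
        aligned[phrase] = best if sims[best] >= threshold else None
    return aligned

# Toy vectors: two surface forms of one relation align to the same sense.
phrases = {"was born in": np.array([0.9, 0.1]),
           "is a native of": np.array([0.8, 0.2])}
inventory = {"birthplace": np.array([1.0, 0.1]),
             "residence":  np.array([0.1, 1.0])}
print(align_relations(phrases, inventory))  # both -> 'birthplace'
```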

    Adjusting Sense Representations for Word Sense Disambiguation and Automatic Pun Interpretation

    Word sense disambiguation (WSD)—the task of determining which meaning a word carries in a particular context—is a core research problem in computational linguistics. Though it has long been recognized that supervised (machine learning–based) approaches to WSD can yield impressive results, they require an amount of manually annotated training data that is often too expensive or impractical to obtain. This is a particular problem for under-resourced languages and domains, and is also a hurdle in well-resourced languages when processing the sort of lexical-semantic anomalies employed for deliberate effect in humour and wordplay. In contrast to supervised systems, knowledge-based techniques rely only on pre-existing lexical-semantic resources (LSRs). These techniques are of more general applicability but tend to suffer from lower performance due to the informational gap between the target word's context and the sense descriptions provided by the LSR. This dissertation is concerned with extending the efficacy and applicability of knowledge-based word sense disambiguation. First, we investigate two approaches for bridging the information gap and thereby improving the performance of knowledge-based WSD. In the first approach we supplement the word's context and the LSR's sense descriptions with entries from a distributional thesaurus. The second approach enriches an LSR's sense information by aligning it to other, complementary LSRs. Our next main contribution is to adapt techniques from word sense disambiguation to a novel task: the interpretation of puns. Traditional NLP applications, including WSD, usually treat the source text as carrying a single meaning, and therefore cannot cope with the intentionally ambiguous constructions found in humour and wordplay. We describe how algorithms and evaluation methodologies from traditional word sense disambiguation can be adapted for the "disambiguation" of puns, or rather for the identification of their double meanings. Finally, we cover the design and construction of technological and linguistic resources aimed at supporting the research and application of word sense disambiguation. Development and comparison of WSD systems have long been hampered by a lack of standardized data formats, language resources, software components, and workflows. To address this issue, we designed and implemented a modular, extensible framework for WSD. It implements, encapsulates, and aggregates reusable, interoperable components using UIMA, an industry-standard information processing architecture. We have also produced two large sense-annotated data sets for under-resourced languages or domains: one of these targets German-language text, and the other English-language puns.
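A simplified Lesk-style sketch of two ideas from the abstract above, assuming a sense inventory with textual glosses and an optional distributional thesaurus for context expansion; for puns, the adaptation is to keep the two highest-scoring senses instead of a single best one. Names, glosses, and data are illustrative assumptions, not the dissertation's code.

```python
def lesk_scores(context_words, sense_glosses, expansions=None):
    """Simplified Lesk: score each sense by the overlap between the
    context and the sense's gloss; optionally widen the context with
    distributional-thesaurus neighbours to bridge the information gap."""
    ctx = set(context_words)
    if expansions:
        for w in context_words:
            ctx |= set(expansions.get(w, []))
    return {s: len(ctx & set(gloss.lower().split()))
            for s, gloss in sense_glosses.items()}

glosses = {"interest#money": "a charge for borrowed money",
           "interest#curiosity": "a state of wanting to know about something"}
scores = lesk_scores(["bank", "loan", "charge"], glosses,
                     expansions={"loan": ["borrowed", "money"]})
best = max(scores, key=scores.get)                             # ordinary WSD
pun_senses = sorted(scores, key=scores.get, reverse=True)[:2]  # puns: top two
```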

    Information fusion for automated question answering

    Until recently, research efforts in automated Question Answering (QA) have mainly focused on getting a good understanding of questions to retrieve correct answers. This includes deep parsing, lookups in ontologies, question typing and machine learning of answer patterns appropriate to question forms. In contrast, I have focused on the analysis of the relationships between answer candidates as provided in open-domain QA on multiple documents. I argue that such candidates have intrinsic properties, partly regardless of the question, and that those properties can be exploited to provide better-quality and more user-oriented answers in QA.

    Information fusion refers to the technique of merging pieces of information from different sources. In QA over free text, it is motivated by the frequency with which different answer candidates are found in different locations, leading to a multiplicity of answers. The reason for such multiplicity is, in part, the massive amount of data used for answering, and also its unstructured and heterogeneous content: besides ambiguities in user questions leading to heterogeneity in extractions, systems have to deal with redundancy, granularity and possibly contradictory information. Hence the need for answer candidate comparison. While frequency has proved to be a significant characteristic of a correct answer, I evaluate the value of other relationships characterizing answer variability and redundancy.

    Partially inspired by recent developments in multi-document summarization, I redefine the concept of "answer" within an engineering approach to QA based on the Model-View-Controller (MVC) pattern of user interface design. An "answer model" is a directed graph in which nodes correspond to entities projected from extractions and edges convey relationships between such nodes. The graph represents the fusion of information contained in the set of extractions. Different views of the answer model can be produced, capturing the fact that the same answer can be expressed and presented in various ways: picture, video, sound, written or spoken language, or a formal data structure. Within this framework, an answer is a structured object contained in the model and retrieved by a strategy to build a particular view depending on the requirements of the end user (or task).

    I describe shallow techniques to compare entities and enrich the model by discovering four broad categories of relationships between entities in the model: equivalence, inclusion, aggregation and alternative. Quantitatively, answer candidate modeling improves answer extraction accuracy. It also proves to be more robust to incorrect answer candidates than traditional techniques. Qualitatively, models provide meta-information encoded by relationships that allow shallow reasoning to help organize and generate the final output.
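A minimal sketch of the answer model as described above: a directed graph over entities with edges labelled by the four relationship categories (equivalence, inclusion, aggregation, alternative). The class and toy examples are our illustrative assumptions, not the thesis's implementation.

```python
from enum import Enum

class Rel(Enum):
    EQUIVALENCE = 1   # e.g. "JFK" vs "John F. Kennedy"
    INCLUSION   = 2   # e.g. "Dallas" is included in "Texas"
    AGGREGATION = 3   # parts combining into one answer
    ALTERNATIVE = 4   # mutually exclusive candidates

class AnswerModel:
    """Directed graph over entities projected from extractions; edges
    carry one of four relationship categories, so a view can merge
    equivalent candidates and surface genuine alternatives."""
    def __init__(self):
        self.nodes, self.edges = set(), []

    def add(self, a, b, rel):
        self.nodes |= {a, b}
        self.edges.append((a, b, rel))

    def view(self, rel):
        """One possible view: all entity pairs linked by a given relation."""
        return [(a, b) for a, b, r in self.edges if r is rel]

m = AnswerModel()
m.add("JFK", "John F. Kennedy", Rel.EQUIVALENCE)
m.add("Dallas", "Texas", Rel.INCLUSION)
print(m.view(Rel.EQUIVALENCE))  # [('JFK', 'John F. Kennedy')]
```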

    Semantic Keyword-based Search on Heterogeneous Information Systems

    In recent years, with the spread and use of the Internet, the volume of information available to users has grown exponentially. Moreover, access to that information has been boosted by the levels of connectivity we now enjoy thanks to new-generation mobile phones and wireless networks (e.g., 3G, Wi-Fi). However, with current access methods this excess of information is as harmful as the lack of it, since users have no time to process it in its entirety. Furthermore, this information sits behind information systems of a very heterogeneous nature (e.g., Web search engines, Linked Data sources, etc.), and users must know these systems in order to exploit their capabilities fully. This diversity becomes even more evident if we consider any information service as a potential information source for the user (e.g., location-based services, databases exported via Web Services, etc.). Given this level of heterogeneity, the integration of these systems must be done externally, hiding their complexity from the user and providing mechanisms to express queries in a simple way. In this regard, keyword-based interfaces have become popular thanks to their simplicity and their adoption by the most widely used Web search engines. However, the simplicity that is their greatest virtue is also their greatest flaw, as it creates ambiguity problems: queries expressed as sets of keywords are inherently ambiguous, being a projection of the true question the user wants to ask. In this thesis, we address the problem of integrating heterogeneous information systems under a search guided by the semantics of the keywords, and we present QueryGen, a prototype of our solution. In this semantic search we advocate establishing the query the user had in mind when writing the keywords, expressed in a formal query language to avoid possible ambiguities. The integration of the underlying systems is carried out through the definition of their query languages and execution models. In particular, our system:
    - Discovers the meaning of the keywords by consulting a dynamic set of ontologies, and disambiguates those keywords taking their context (the remaining keywords) into account, since each keyword influences the meaning of the rest of the input. During this process, meanings that are sufficiently similar are merged, and the system proposes the most likely ones given the user's input. The semantic information obtained in this process is integrated and used in later phases to obtain the correct interpretation of the set of keywords.
    - Finds all the possible questions that can be built with the keywords, since the same set of keywords can represent several queries even when the individual meaning of each keyword is known. This translation from keywords to questions is performed using formal query languages to avoid possible ambiguities and express the query precisely. Our system avoids generating semantically incorrect or duplicated questions with the help of a reasoner based on Description Logics. In this process, our system is able to react to insufficient input (e.g., omitted words) by adding virtual terms, which internally represent words the user had in mind but omitted when writing the query.
    - Finally, after the user validates the query, accesses the registered information systems that can answer it and retrieves the answer according to the semantics of the query. To this end, our system implements a modular architecture that allows new systems to be added on the fly, as long as their specification is provided (supported query languages, data models and formats, etc.).
    Moreover, working with heterogeneous information systems, in particular systems related to mobile computing, has allowed the contributions of this thesis to extend beyond the field of semantic search. In this respect, we have studied the semantics of location-based queries and, especially, the influence of location semantics on their processing and interpretation. In particular, two ontological models are proposed to model and capture the semantic relations of locations and to extend the expressiveness of location-based queries. During the development of this thesis, situated between the Semantic Web and mobile computing, a new line of research on the modeling of volatile knowledge has been opened, and the possibility of using Description Logics reasoners on Android-based devices has been studied. Finally, our work on semantic keyword-based search has been extended to conversational agents, making them capable of exploiting the various sources of semantic data currently available under the Linked Data principles.
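A hedged sketch of the context-sensitive keyword disambiguation step described above: choose one sense per keyword so that the overall pairwise semantic relatedness is maximal. The `relatedness` function (e.g., an ontology-based measure) and the toy domain labels are assumptions; exhaustive search is exponential in the number of keywords, which is tolerable for short queries. This is our illustration, not QueryGen's actual algorithm.

```python
from itertools import product

def disambiguate_keywords(candidates, relatedness):
    """Pick one sense per keyword so total pairwise relatedness is
    maximal: every keyword's context (the other keywords) influences
    the chosen meaning of the rest of the input."""
    best_combo, best_score = None, float("-inf")
    for combo in product(*candidates.values()):
        score = sum(relatedness(a, b)
                    for i, a in enumerate(combo)
                    for b in combo[i + 1:])
        if score > best_score:
            best_combo, best_score = combo, score
    return dict(zip(candidates.keys(), best_combo))

# Toy relatedness: senses sharing an ontology domain score higher.
domains = {"jaguar#cat": "fauna", "jaguar#car": "vehicles",
           "speed#velocity": "vehicles", "speed#drug": "chemistry"}
rel = lambda a, b: 1.0 if domains[a] == domains[b] else 0.0
print(disambiguate_keywords(
    {"jaguar": ["jaguar#cat", "jaguar#car"],
     "speed": ["speed#velocity", "speed#drug"]}, rel))
# -> {'jaguar': 'jaguar#car', 'speed': 'speed#velocity'}
```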