14 research outputs found

    A semantic metadata enrichment software ecosystem (SMESE) : its prototypes for digital libraries, metadata enrichments and assisted literature reviews

    Contribution 1: Initial design of a semantic metadata enrichment ecosystem (SMESE) for Digital Libraries. The Semantic Metadata Enrichments Software Ecosystem (SMESE V1) for Digital Libraries (DLs) proposed in this paper implements a Software Product Line Engineering (SPLE) process using a metadata-based software architecture approach. It integrates a component-based ecosystem, including metadata harvesting, text and data mining, and machine learning models. SMESE V1 is based on a generic model for standardizing meta-entity metadata and a mapping ontology to support the harvesting of various types of documents and their metadata from the web, databases and linked open data. SMESE V1 supports a dynamic metadata-based configuration model using multiple thesauri. The proposed model defines rule-based crosswalks that create pathways to different sources of data and metadata. Each pathway checks the metadata source structure and performs data and metadata harvesting. SMESE V1 proposes a metadata model with six categories of metadata instead of the four currently proposed in the literature for DLs; this makes it possible to describe content by defined entity, thus increasing usability. In addition, to tackle the issue of varying degrees of depth, the proposed metadata model describes the most elementary aspects of a harvested entity. A mapping ontology model has been prototyped in SMESE V1 to identify specific text segments based on thesauri in order to enrich content metadata with topics and emotions; this mapping ontology also allows interoperability between existing metadata models. Contribution 2: Metadata enrichments ecosystem based on topics and interests. The second contribution extends the original SMESE V1 proposed in Contribution 1, proposing a set of topic- and interest-based semantic content enrichments.
The improved prototype, SMESE V3 (see following figure), uses text analysis approaches for sentiment and emotion detection and provides machine learning models to create a semantically enriched repository, thus enabling topic- and interest-based search and discovery. SMESE V3 has been designed to find short descriptions in terms of topics, sentiments and emotions. It allows efficient processing of large collections while keeping the semantic and statistical relationships that are useful for tasks such as: 1. topic detection, 2. content classification, 3. novelty detection, 4. text summarization, 5. similarity detection. Contribution 3: Metadata-based scientific assisted literature review. The third contribution proposes an assisted literature review (ALR) prototype, STELLAR V1 (Semantic Topics Ecosystem Learning-based Literature Assisted Review), based on machine learning models and a semantic metadata ecosystem. Its purpose is to identify, rank and recommend relevant papers for a literature review (LR). This third prototype can assist researchers, in an iterative process, in finding, evaluating and annotating relevant papers harvested from different sources and input into the SMESE V3 platform, available at any time. The key elements and concepts of this prototype are: 1. text and data mining, 2. machine learning models, 3. classification models, 4. researchers' annotations, 5. semantically enriched metadata. STELLAR V1 helps the researcher build a list of relevant papers according to a selection of metadata related to the subject of the ALR. The following figure presents the model, the related machine learning models and the metadata ecosystem used to assist the researcher in the task of producing an ALR on a specific topic.
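
As a toy illustration of the thesaurus-based enrichment idea described in this abstract, the sketch below tags a text with topic and emotion metadata by matching tokens against thesaurus entries. The thesaurus terms and the mapping are invented for the example; SMESE's actual thesauri and matching rules are far more elaborate.

```python
# Minimal sketch of thesaurus-driven metadata enrichment, under the
# assumption of a flat term-to-metadata thesaurus (hypothetical entries).
thesaurus = {
    "flood": {"topic": "environment"},
    "climate": {"topic": "environment"},
    "joy": {"emotion": "positive"},
}

def enrich(text):
    """Scan a text for thesaurus terms and collect topic/emotion metadata."""
    metadata = {"topic": set(), "emotion": set()}
    for token in text.lower().split():
        entry = thesaurus.get(token.strip(".,;:"))
        if entry:
            for category, value in entry.items():
                metadata[category].add(value)
    return {k: sorted(v) for k, v in metadata.items()}

print(enrich("The flood brought little joy."))
```

A real pipeline would add multi-word term matching and disambiguation, but the principle of mapping matched segments to controlled metadata values is the same.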

    Word Representations for Emergent Communication and Natural Language Processing

    The task of listing all semantic properties of a single word might seem manageable at first, but as you unravel all the context-dependent subtle variations in meaning that a word can encompass, you soon realize that a precise mathematical definition of a word's semantics is extremely difficult. By analogy, humans have no problem identifying their favorite pet in an image, but the task of precisely defining how is still beyond our capabilities. A solution that has proved effective in the visual domain is to solve the problem by learning abstract representations using machine learning. Inspired by the success of learned representations in computer vision, the line of work presented in this thesis explores learned word representations in three different contexts. Starting in the domain of artificial languages, three computational frameworks for emergent communication between collaborating agents are developed in an attempt to study word representations that exhibit grounding of concepts. The first two are designed to emulate the natural development of discrete color words using deep reinforcement learning, and are used to simulate the emergence of color terms that partition the continuous color spectrum of visible light. The properties of the emerged color communication schema are compared to human languages to ensure its validity as a cognitive model, and subsequently the frameworks are utilized to explore central questions in cognitive science about universals in language within the semantic domain of color. Moving beyond the color domain, a third framework is developed for the less controlled environment of human faces and multi-step communication. Subsequently, as for the color domain, we carefully analyze the semantic properties of the words that emerged between the agents, in this case focusing on grounding.
Turning to empirical usefulness, different types of learned word representations are evaluated in the context of automatic document summarisation, word sense disambiguation, and word sense induction, with results that show great potential for learned word representations in natural language processing by reaching state-of-the-art performance in all applications and outperforming previous methods in two out of three applications. Finally, although learned word representations seem to improve the performance of real-world systems, they also lack interpretability when compared to classical hand-engineered representations. Acknowledging this, an effort is made towards constructing learned representations that regain some of that interpretability by designing and evaluating disentangled representations, which could be used to represent words in a more interpretable way in the future.
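
The core mechanic behind comparing learned word representations can be sketched with cosine similarity over word vectors. The 3-dimensional vectors below are invented for the example; real learned representations have hundreds of dimensions and are trained from data rather than written by hand.

```python
import math

# Toy word vectors (hypothetical values, for illustration only).
vectors = {
    "cat": [0.9, 0.1, 0.0],
    "dog": [0.8, 0.2, 0.1],
    "car": [0.0, 0.1, 0.9],
}

def cosine(u, v):
    """Cosine similarity: dot product normalized by vector lengths."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Semantically related words should end up closer in the vector space.
print(cosine(vectors["cat"], vectors["dog"]) > cosine(vectors["cat"], vectors["car"]))  # prints True
```

Tasks like word sense disambiguation and induction build on exactly this kind of geometric closeness between representations.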

    Resource Description and Selection for Similarity Search in Metric Spaces: Problems and Problem-Solving Approaches

    In times of an ever-increasing amount of data and a growing diversity of data types in different application contexts, there is a strong need for large-scale and flexible indexing and search techniques. Metric access methods (MAMs) provide this flexibility, because they only assume that the dissimilarity between two data objects is modeled by a distance metric. Furthermore, scalable solutions can be built with the help of distributed MAMs. Both IF4MI and RS4MI, which are presented in this thesis, represent metric access methods. IF4MI belongs to the group of centralized MAMs. It is based on an inverted file and thus offers a hybrid access method providing text retrieval capabilities in addition to content-based search in arbitrary metric spaces. In contrast to IF4MI, RS4MI is a distributed MAM based on resource description and selection techniques. Here, data objects are physically distributed. However, RS4MI is by no means restricted to a certain type of distributed information retrieval system. Various application fields for the resource description and selection techniques are possible, for example in the context of visual analytics. Due to the metric space assumption, possible application fields go far beyond the content-based image retrieval applications which provide the example scenario here.
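
The key property the abstract relies on, that a distance metric alone suffices for indexing, can be illustrated with pivot filtering, a classic building block of metric access methods (the concrete objects, pivot and metric below are toy choices, not the thesis's structures).

```python
# Pivot filtering: distances to a reference "pivot" object, precomputed at
# index time, give a lower bound on d(q, o) via the triangle inequality
# |d(q, p) - d(o, p)| <= d(q, o), letting a range query discard objects
# without computing their distance to the query.

def d(a, b):
    # any distance metric works; here, absolute difference on numbers
    return abs(a - b)

objects = [1, 4, 8, 15, 23]
pivot = 0
pivot_dist = {o: d(o, pivot) for o in objects}  # precomputed at index time

def range_query(q, radius):
    results = []
    dq = d(q, pivot)  # one distance computation to the pivot
    for o in objects:
        if abs(dq - pivot_dist[o]) > radius:
            continue  # pruned: the lower bound already exceeds the radius
        if d(q, o) <= radius:
            results.append(o)
    return results

print(range_query(5, 4))  # objects within distance 4 of query 5
```

Because only the metric axioms are used, the same pruning works for strings with edit distance, image features with Euclidean distance, and so on, which is exactly the flexibility MAMs offer.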

    Vermeidung von Repräsentationsheterogenitäten in realweltlichen Wissensgraphen

    Knowledge graphs are repositories providing factual knowledge about entities. They are a great source of knowledge to support modern AI applications for Web search, question answering, digital assistants, and online shopping. Advances in machine learning techniques and the Web's growth have led to colossal knowledge graphs with billions of facts about hundreds of millions of entities collected from a large variety of sources. While integrating independent knowledge sources promises rich information, it inherently leads to heterogeneities in representation due to a large variety of different conceptualizations. Thus, the overall utility of real-world knowledge graphs is threatened. Due to their sheer size, they can hardly be curated manually anymore. Automatic and semi-automatic methods are needed to cope with these vast knowledge repositories. We first address the general topic of representation heterogeneity by surveying the problem throughout various data-intensive fields: databases, ontologies, and knowledge graphs. Different techniques for automatically resolving heterogeneity issues are presented and discussed, while several open problems are identified. Next, we focus on entity heterogeneity. We show that automatic matching techniques may run into quality problems when working in a multi-knowledge-graph scenario due to incorrect transitive identity links. We present four techniques that can significantly improve the quality of arbitrary entity matching tools. Concerning relation heterogeneity, we show that synonymous relations in knowledge graphs pose several difficulties in querying. Therefore, we resolve these heterogeneities with knowledge graph embeddings and Horn rule mining. All methods detect synonymous relations in knowledge graphs with high quality. Furthermore, we present a novel technique for avoiding heterogeneity issues at query time using implicit knowledge storage.
We show that large neural language models are a valuable source of knowledge that can be queried similarly to knowledge graphs, already resolving several heterogeneity issues internally.
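
The embedding-based synonym detection mentioned above can be sketched as follows: if two relations receive near-identical embedding vectors, they are synonym candidates. The relation names, the 3-dimensional vectors, and the threshold are invented for the example; the thesis's actual embedding models and mining procedures are more involved.

```python
import math

# Hypothetical relation embeddings (in practice learned from graph facts).
relation_embeddings = {
    "bornIn": [0.82, 0.10, 0.05],
    "placeOfBirth": [0.80, 0.12, 0.06],
    "worksFor": [0.05, 0.90, 0.30],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def synonym_candidates(threshold=0.99):
    """Return relation pairs whose embeddings are nearly parallel."""
    names = list(relation_embeddings)
    pairs = []
    for i, r1 in enumerate(names):
        for r2 in names[i + 1:]:
            if cosine(relation_embeddings[r1], relation_embeddings[r2]) >= threshold:
                pairs.append((r1, r2))
    return pairs

print(synonym_candidates())
```

The intuition is that relations used with the same entity pairs are pushed to similar vectors during training, so high cosine similarity flags likely synonyms for review or merging.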

    Annotation-based storage and retrieval of models and simulation descriptions in computational biology

    This work aimed at enhancing the reuse of computational biology models by identifying and formalizing relevant meta-information. One type of meta-information investigated in this thesis is experiment-related meta-information attached to a model, which is necessary to accurately recreate simulations. The main results are: a detailed concept for model annotation, a proposed format for the standardized XML encoding of simulation experiment setups, a storage solution for standardized model representations, and the development of a retrieval concept.
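
To make the idea of an XML-encoded simulation experiment setup concrete, here is a hedged sketch; the element and attribute names below are invented for illustration and are not the format proposed in the thesis.

```python
import xml.etree.ElementTree as ET

# Build a minimal, hypothetical XML description of a simulation experiment:
# which model to run, with which simulator, over which time course.
exp = ET.Element("simulationExperiment", model="model.xml")
task = ET.SubElement(exp, "task", simulator="ode-solver")
ET.SubElement(task, "parameter", name="start", value="0")
ET.SubElement(task, "parameter", name="end", value="100")
ET.SubElement(task, "parameter", name="steps", value="1000")

print(ET.tostring(exp, encoding="unicode"))
```

Capturing such setup information alongside the model is what allows a simulation to be recreated exactly by someone who only downloads the stored artifacts.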

    Jointly integrating current context and social influence for improving recommendation

    Due to the diversity of alternative contents to choose from and the change of users' preferences, real-time prediction of users' preferences in certain circumstances becomes increasingly hard for recommender systems. However, most existing context-aware approaches use only the current time and location, separately, and ignore other contextual information on which users' preferences may undoubtedly depend (e.g. weather, occasion). Furthermore, they fail to jointly consider this contextual information with the social interactions between users. On the other hand, solving classic recommender problems (e.g. a new user with no seen items, known as the cold-start problem, and too few items co-rated with other users of similar preferences, known as the sparsity problem) is of significant importance, as several works have addressed it. In this thesis, we propose a context-based approach that jointly leverages current contextual information and social influence in order to improve item recommendation. In particular, we propose a probabilistic model that aims to predict the relevance of items with respect to the user's current context. We considered several current context elements such as time, location, occasion, weekday, and weather. In order to avoid extreme probabilities, which lead to the sparsity problem, we used the Laplace smoothing technique. On the other hand, we argue that information from social relationships has a potential influence on users' preferences. Thus, we assume that social influence depends not only on friends' ratings but also on the social similarity between users. We proposed a social-based model that estimates the relevance of an item with respect to the social influence around the user on the relevance of this item.
The user-friend social similarity information may be established based on social interactions between users and their friends (e.g. recommendations, tags, comments). Therefore, we argue that social similarity can be integrated using a similarity measure. Social influence is then jointly integrated based on the user-friend similarity measure in order to estimate users' preferences. We conducted a comprehensive effectiveness evaluation on a real dataset crawled from the Pinhole social TV platform. This dataset includes viewer-video access histories and viewers' friendship networks. In addition, we collected contextual information for each viewer-video access captured by the platform system, which records the most recent contextual information the viewer is exposed to while watching a video. In our evaluation, we adopt Time-aware Collaborative Filtering, Time-Dependent Profile and Social Network-aware Matrix Factorization as baseline models. The evaluation focused on two recommendation tasks: the video list recommendation task and the video rating prediction task. We evaluated the impact of each viewing context element on prediction performance, and tested the ability of our model to solve the data sparsity and viewer cold-start recommendation problems. The experimental results highlight the effectiveness of our model compared to the considered baselines: it outperforms the time-aware and social network-based approaches, and in the sparsity and cold-start tests it returns consistently accurate predictions at different levels of data sparsity.
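
The Laplace smoothing step the abstract alludes to can be sketched as an add-one estimate of P(item | context), so that unseen (item, context) pairs never receive probability zero. The viewing log, item vocabulary, and context values below are invented for the example.

```python
from collections import Counter

# Hypothetical viewing log of (item, context) observations.
log = [("news", "rainy"), ("news", "rainy"), ("movie", "sunny"),
       ("movie", "rainy"), ("news", "sunny")]
items = {"news", "movie", "sports"}

def p_item_given_context(item, context):
    """Add-one (Laplace) smoothed estimate of P(item | context)."""
    pair_counts = Counter(log)
    context_count = sum(1 for _, c in log if c == context)
    # +1 in the numerator and +|items| in the denominator keep every
    # item's probability strictly positive, even for unseen pairs
    return (pair_counts[(item, context)] + 1) / (context_count + len(items))

print(p_item_given_context("news", "rainy"))    # frequently seen pair
print(p_item_given_context("sports", "rainy"))  # unseen pair, still > 0
```

A full model would condition on several context elements jointly (weekday, occasion, weather) and combine the result with the social-influence term, but the smoothing mechanic is the same.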

    The semantic transparency of English compound nouns

    What is semantic transparency, why is it important, and which factors play a role in its assessment? This work approaches these questions by investigating English compound nouns. The first part of the book gives an overview of semantic transparency in the analysis of compound nouns, discussing its role in models of morphological processing and differentiating it from related notions. After a chapter on the semantic analysis of complex nominals, it closes with a chapter on previous attempts to model semantic transparency. The second part introduces new empirical work on semantic transparency, presenting two different sets of statistical models for compound transparency. In particular, two semantic factors were explored: the semantic relations holding between compound constituents, and the role of different readings of the constituents and the whole compound, operationalized in terms of meaning shifts and in terms of the distribution of specific readings across constituent families. All semantic annotations used in the book are freely available.
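
One of the operationalizations mentioned above, the distribution of readings across a constituent family, can be sketched as a simple relative-frequency computation. The compounds, readings, and the idea of using the share directly as a transparency proxy are invented for this illustration; the book's statistical models are considerably richer.

```python
from collections import Counter

# Hypothetical constituent family: compounds headed by "house",
# each annotated with the reading of the constituent "house".
family_readings = {
    "boathouse": "building",
    "farmhouse": "building",
    "treehouse": "building",
    "powerhouse": "metaphorical",
}

def reading_share(reading):
    """Share of the family that uses the given constituent reading."""
    counts = Counter(family_readings.values())
    return counts[reading] / len(family_readings)

print(reading_share("building"))  # dominant literal reading
```

A constituent whose dominant reading covers most of its family is, on this rough proxy, more predictable, and hence contributes to higher transparency of compounds using that reading.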


    Multikonferenz Wirtschaftsinformatik (MKWI) 2016: Technische Universität Ilmenau, 09. - 11. März 2016; Band I

    Overview of the sub-conferences in Volume I: • 11. Konferenz Mobilität und Digitalisierung (MMS 2016) • Automated Process und Service Management • Business Intelligence, Analytics und Big Data • Computational Mobility, Transportation and Logistics • CSCW & Social Computing • Cyber-Physische Systeme und digitale Wertschöpfungsnetzwerke • Digitalisierung und Privacy • e-Commerce und e-Business • E-Government – Informations- und Kommunikationstechnologien im öffentlichen Sektor • E-Learning und Lern-Service-Engineering – Entwicklung, Einsatz und Evaluation technikgestützter Lehr-/Lernprozess