
    Ontology mapping: the state of the art

    Ontology mapping is seen as a solution provider in today's landscape of ontology research. As the number of ontologies that are made publicly available and accessible on the Web increases steadily, so does the need for applications to use them. A single ontology is no longer enough to support the tasks envisaged by a distributed environment like the Semantic Web. Multiple ontologies need to be accessed from several applications. Mapping could provide a common layer through which several ontologies could be accessed and hence could exchange information in a semantically sound manner. Developing such mappings has been the focus of a variety of works originating from diverse communities over a number of years. In this article we comprehensively review and present these works. We also provide insights on the pragmatics of ontology mapping and elaborate on a theoretical approach for defining ontology mapping.
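    The survey does not prescribe an implementation, but the role of a mapping as a common access layer can be illustrated with a minimal sketch. All IRIs, class names and confidence scores below are hypothetical placeholders, not material from the article:

        # Hypothetical mapping between two ontologies: each entry pairs a source
        # class IRI with an equivalent target class IRI and a confidence score.
        ONTOLOGY_MAPPING = {
            "http://example.org/ontoA#Author": ("http://example.org/ontoB#Writer", 0.92),
            "http://example.org/ontoA#Paper": ("http://example.org/ontoB#Publication", 0.88),
        }

        def translate_concept(concept_iri: str, threshold: float = 0.8) -> str:
            """Return the equivalent concept in the target ontology, or the
            original IRI if no sufficiently confident mapping is known."""
            target, confidence = ONTOLOGY_MAPPING.get(concept_iri, (concept_iri, 0.0))
            return target if confidence >= threshold else concept_iri

        # An application written against ontology A can then reuse data described
        # with ontology B by routing its query concepts through the mapping layer.
        print(translate_concept("http://example.org/ontoA#Author"))  # -> ontoB#Writer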

    Ontology-based transformation of natural language queries into SPARQL queries by evolutionary algorithms

    In this thesis, an ontology-driven evolutionary learning system for natural language querying of RDF graphs is presented. The learning system does not answer the query itself, but generates a SPARQL query against the database. For this purpose, the Evolutionary Dataflow Agents framework is introduced, a general learning framework that, based on evolutionary algorithms, creates agents that learn to solve a problem. The main idea of the framework is to support problems that combine a medium-sized search space (use case: analysis of natural language queries) of strictly, formally structured solutions (use case: synthesis of database queries) with rather local, classical structural and algorithmic aspects.
    For this, the agents combine local algorithmic functionality of nodes with a flexible dataflow between the nodes into a global problem-solving process. Roughly, there are nodes that generate informational fragments by combining input data and/or earlier fragments, often using heuristics-based guessing. Other nodes combine, collect, and reduce such fragments towards possible solutions, narrowing these down to the unique final solution. For this, informational items flow through the agents. The configuration of these agents, which nodes they combine, and where exactly the data items flow, is the subject of learning. The training starts with simple agents, which, as usual in learning frameworks, solve a set of tasks and are evaluated for it. Since the produced answers usually have complex structures, the framework employs a novel fine-grained, energy-based evaluation and selection step. The selected agents then form the basis for the population of the next round. Evolution is provided, as usual, by mutations and agent fusion. As a use case, EvolNLQ has been implemented, a system for answering natural language queries against RDF databases. For this, the underlying ontology metadata is (externally) algorithmically preprocessed. For the agents, appropriate data item types and node types are defined that break down the processes of language analysis and query synthesis into more or less elementary operations. The "size" of the operations is determined by the border between computations, i.e., purely algorithmic steps (implemented in individual powerful nodes) and simple heuristic steps (also realized by simple nodes), and free dataflow allowing for arbitrary chaining and branching configurations of the agents. EvolNLQ is compared with some other approaches, showing competitive results.
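    The abstract above describes a training loop rather than a fixed implementation. The following is a rough Python sketch of such an evolutionary loop over dataflow agents (population, energy-based evaluation, selection, mutation and fusion); the Agent representation, node vocabulary, scoring function and parameters are all hypothetical assumptions, not the thesis's actual code:

        import random
        from dataclasses import dataclass, field

        @dataclass
        class Agent:
            """A dataflow agent: an ordered configuration of node names through
            which informational fragments flow (placeholder representation)."""
            nodes: list = field(default_factory=lambda: ["analyse"])

        def energy(agent, tasks):
            """Hypothetical fine-grained, energy-based score (lower is better).
            A real implementation would compare the SPARQL fragments an agent
            produces against the expected query for each task."""
            return sum(random.random() for _ in tasks) / max(len(agent.nodes), 1)

        def mutate(agent):
            """Append a randomly chosen node type (placeholder vocabulary)."""
            new_node = random.choice(["analyse", "guess", "collect", "reduce"])
            return Agent(nodes=agent.nodes + [new_node])

        def fuse(a, b):
            """Agent fusion: concatenate two node configurations."""
            return Agent(nodes=a.nodes + b.nodes)

        def evolve(tasks, generations=20, population_size=30, survivors=10):
            population = [Agent() for _ in range(population_size)]
            for _ in range(generations):
                # Evaluate every agent on the task set; keep the best ones.
                selected = sorted(population, key=lambda a: energy(a, tasks))[:survivors]
                # Refill the population by mutating and fusing the survivors.
                mutants = [mutate(random.choice(selected)) for _ in range(survivors)]
                fusions = [fuse(*random.sample(selected, 2))
                           for _ in range(population_size - 2 * survivors)]
                population = selected + mutants + fusions
            return min(population, key=lambda a: energy(a, tasks))

    In the actual system, the node vocabulary would cover the language-analysis and query-synthesis operations described above, and the energy function would reward agents whose dataflow converges on the correct SPARQL query.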

    Report 2011


    Development of an intelligent information resource model based on modern natural language processing methods

    Currently, there is an avalanche-like increase in the need for automatic text processing and, accordingly, new effective methods and tools for processing natural language texts are emerging. Although these methods, tools and resources are mostly presented on the internet, many of them remain inaccessible to developers, since they are not systematized and are scattered across various directories or separate sites of both humanitarian and technical orientation. All this greatly complicates their search and practical use in conducting research in computational linguistics and in developing applied systems for natural language text processing. This paper is aimed at meeting the need described above. Its goal is to develop a model of an intelligent information resource based on modern methods of natural language processing (IIR NLP). The main goal of IIR NLP is to provide convenient, valuable access for specialists in the field of computational linguistics. The originality of our proposed approach is that the developed ontology of the subject area "NLP" will be used to systematize all the above knowledge, data and information resources and to organize meaningful access to them, while semantic web standards and technology tools will be used as the software basis.
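    As a hedged illustration of the proposed software basis (semantic web standards and tools), the sketch below uses the rdflib Python library to load a subject-area ontology and list the NLP tools it describes. The file name, namespace and property names are assumptions for illustration only, not taken from the paper:

        from rdflib import Graph

        # Load the (hypothetical) subject-area ontology describing NLP tools and resources.
        g = Graph()
        g.parse("nlp_ontology.ttl", format="turtle")

        # SPARQL query over the ontology: list every registered tool and the task it supports.
        # The nlp: namespace and the property names are illustrative assumptions.
        query = """
        PREFIX nlp: <http://example.org/nlp#>
        SELECT ?tool ?task WHERE {
            ?tool a nlp:Tool ;
                  nlp:supportsTask ?task .
        }
        """
        for tool, task in g.query(query):
            print(tool, task)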

    Intelligent Systems

    This book is dedicated to intelligent systems of broad-spectrum application, such as personal and social biosafety or the use of intelligent sensory micro-nanosystems such as "e-nose", "e-tongue" and "e-eye". In addition, effective information acquisition, knowledge management and improved knowledge transfer in any medium, as well as modeling of information content using meta- and hyperheuristics and semantic reasoning, all benefit from the systems covered in this book. Intelligent systems can also be applied in education and in generating an intelligent distributed eLearning architecture, as well as in a large number of technical fields, such as industrial design, manufacturing and utilization, e.g., in precision agriculture, cartography, electric power distribution systems, intelligent building management systems, drilling operations, etc. Furthermore, decision making using fuzzy logic models, computational recognition of comprehension uncertainty and the joint synthesis of goals and means of intelligent behavior in biosystems, as well as diagnostic and human support in the healthcare environment, have also been made easier.

    The Knowledge Level in Cognitive Architectures: Current Limitations and Possible Developments

    In this paper we identify and characterize two problematic aspects affecting the representational level of cognitive architectures (CAs), namely the limited size and the homogeneous typology of the encoded and processed knowledge. We argue that such aspects may constitute not only a technological problem that, in our opinion, should be addressed in order to build artificial agents able to exhibit intelligent behaviours in general scenarios, but also an epistemological one, since they limit the plausibility of comparing the CAs' knowledge representation and processing mechanisms with those employed by humans in their everyday activities. In the final part of the paper, further directions of research are explored, trying to address current limitations and future challenges.