    A method for extracting information from semi-structured documents produced within the Servicio Nacional de Aprendizaje SENA, enabling its publication, reuse, and exchange through the Semantic Web

    The Servicio Nacional de Aprendizaje SENA currently holds a large number of files containing textual information in semi-structured form, which makes it difficult to run complex SQL queries over that information and prevents it from being used actively within the entity. Although the entity has an advanced document management system that manages, stores, and indexes the documents produced by its internal processes, the information that can be extracted from these documents is quite limited, often forcing users to open a document to inspect its contents in detail. Moreover, the indexing of these documents is in most cases performed entirely by hand, which exposes the entity to human error given the high volume of documents generated and the multiple sources that produce them. As a result, the historical information contained in these documents cannot be used effectively to support the entity's decision-making. To offer an alternative solution to this problem, it is necessary to build a knowledge base following the structure and guidelines of Linked Data, so that this relevant information can be published, queried, and used as vital input for decision-making within the entity. To that end, this work aims to obtain a method for extracting information from semi-structured documents produced within SENA. The method is embodied in a prototype that extracts the required information through four phases, ranging from Information Extraction to Knowledge Persistence, so that the required information can be inferred.
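
    The four phases are not spelled out in the abstract. The following is a minimal sketch, in Python with rdflib, of how such an extraction-to-persistence flow might look; the regex-based extractor, the phase boundaries, and the example namespace are illustrative assumptions, not the thesis's actual method.

```python
# Minimal sketch of a four-phase pipeline (extraction -> persistence).
# The phase names, the regex-based extractor, and the namespace are
# illustrative assumptions, not the method developed in the thesis.
import re
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

SENA = Namespace("http://example.org/sena/")  # hypothetical namespace

def extract(text: str) -> dict:
    """Phase 1: pull key/value pairs out of a semi-structured document."""
    return dict(re.findall(r"^(\w+):\s*(.+)$", text, re.MULTILINE))

def transform(fields: dict) -> list:
    """Phase 2: normalize raw fields into (subject, predicate, object) triples."""
    doc = SENA[fields.get("id", "doc-unknown")]
    return [(doc, RDF.type, SENA.Document)] + [
        (doc, SENA[key.lower()], Literal(value))
        for key, value in fields.items() if key != "id"
    ]

def load(triples: list, graph: Graph) -> Graph:
    """Phase 3: add the triples to the RDF knowledge base."""
    for triple in triples:
        graph.add(triple)
    return graph

def persist(graph: Graph, path: str) -> None:
    """Phase 4: serialize the knowledge base as Turtle for publication."""
    graph.serialize(destination=path, format="turtle")

kb = Graph()
load(transform(extract("id: ACT-001\nTipo: Acta\nCentro: Bogota")), kb)
persist(kb, "sena_kb.ttl")
```

    Running the sketch serializes three triples about a hypothetical document into sena_kb.ttl, from which a SPARQL endpoint could then answer the kind of complex queries the abstract motivates.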

    RDF graph summarization: principles, techniques and applications (tutorial)

    The explosion in the amount of RDF data on the Web has led to the need to explore, query, and understand such data sources. The task is challenging due to the complex and heterogeneous structure of RDF graphs which, unlike relational databases, do not come with a structure-dictating schema. Summarization has been applied to RDF data to facilitate these tasks. Its purpose is to extract concise and meaningful information from RDF knowledge bases, representing their content as faithfully as possible. There is no single concept of an RDF summary, and not a single but many approaches to building such summaries; the summarization goal and the main computational tools employed for summarizing graphs are the main factors behind this diversity. This tutorial presents a structured analysis and comparison of existing works in the area of RDF summarization; it is based upon a recent survey which we co-authored with colleagues [3]. We present the concepts at the core of each approach and outline their main technical aspects and implementation. We conclude by identifying the most pertinent summarization methods for different usage scenarios and discussing areas where future effort is needed.
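
    As one concrete instance of the quotient-style summaries such a tutorial covers, the sketch below groups the subjects of an RDF graph by their set of outgoing properties (their characteristic sets), so that each distinct set becomes one summary node. The toy data and the use of rdflib are assumptions of this sketch, not a specific system from the survey.

```python
# Sketch of a simple structural RDF summary: subjects are grouped by the
# set of properties they use (their "characteristic set"), and each group
# becomes one summary node. Illustrative only; not a system from the survey.
from collections import defaultdict
from rdflib import Graph

g = Graph()
g.parse(data="""
@prefix ex: <http://example.org/> .
ex:alice ex:name "Alice" ; ex:knows ex:bob .
ex:bob   ex:name "Bob"   ; ex:knows ex:alice .
ex:paper ex:title "RDF Summaries" ; ex:author ex:alice .
""", format="turtle")

# Map each subject to the set of its outgoing properties.
char_sets = defaultdict(set)
for s, p, _ in g:
    char_sets[s].add(p)

# Invert: each distinct characteristic set becomes one summary node.
summary = defaultdict(list)
for s, props in char_sets.items():
    summary[frozenset(props)].append(s)

for props, members in summary.items():
    names = sorted(prop.split("/")[-1] for prop in props)
    print(f"{len(members)} node(s) with properties {names}")
```

    On this toy graph the summary collapses alice and bob (both use name and knows) into one node, representing the graph's structure with two nodes instead of three.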

    Belief Revision in Expressive Knowledge Representation Formalisms

    We live in an era of data and information, where an immeasurable amount of discoveries, findings, events, news, and transactions are generated every second. Governments, companies, and individuals have to employ and process all that data for knowledge-based decision-making (i.e. a decision-making process that uses predetermined criteria to measure and ensure the optimal outcome for a specific topic), which prompts them to view knowledge as a valuable resource. In this knowledge-based view, the capability to create and utilize knowledge is the key source of an organization's or individual's competitive advantage. This dynamic nature of knowledge leads us to the study of belief revision (or belief change), an area which emerged from work in philosophy and then influenced further developments in computer science and artificial intelligence. In the area of belief revision, the AGM postulates by Alchourrón, Gärdenfors, and Makinson continue to represent a cornerstone of research on belief change. Katsuno and Mendelzon (K&M) adapted the AGM postulates to the changing of belief bases and characterized AGM belief base revision in propositional logic over finite signatures. In this thesis, two research directions are considered. In the first, taking the semantic point of view, we generalize K&M's approach to the setting of (multiple) base revision in arbitrary Tarskian logics, covering all logics with a classical model-theoretic semantics and hence a wide variety of logics used in knowledge representation and beyond. Our generic formulation applies to various notions of "base", such as belief sets, arbitrary or finite sets of sentences, or single sentences. The core result is a representation theorem showing a two-way correspondence between AGM base revision operators and certain "assignments": functions mapping belief bases to total, yet not necessarily transitive, "preference" relations between interpretations. Alongside, we present a companion result for the case when the AGM postulate of syntax-independence is abandoned. We also provide a characterization of all logics for which our result can be strengthened to assignments producing transitive preference relations (as in K&M's original work), giving rise to two more representation theorems for such logics, according to syntax dependence vs. independence. The second research direction in this thesis explores two approaches for revising description logic knowledge bases under fixed-domain semantics, namely a model-based approach and an individual-based approach. In this logical setting, models of the knowledge bases can be enumerated and computed to produce the revision result semantically. We show a characterization of the AGM revision operator for this logic and present a concrete model-based revision approach via distances between interpretations. In addition, by weakening the knowledge base with respect to certain domain elements, a novel individual-based revision operator is provided as an alternative approach.
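
    For reference, the six Katsuno and Mendelzon postulates the abstract builds on, characterizing AGM revision of a propositional belief base ψ by new information μ (with ∘ denoting the revision operator), can be stated as:

```latex
% The Katsuno--Mendelzon (KM) postulates for propositional revision:
% \psi is the current base, \mu the new information, \circ the operator.
\begin{align*}
&(\mathrm{R1})\ \psi \circ \mu \models \mu \\
&(\mathrm{R2})\ \text{if } \psi \wedge \mu \text{ is satisfiable, then } \psi \circ \mu \equiv \psi \wedge \mu \\
&(\mathrm{R3})\ \text{if } \mu \text{ is satisfiable, then } \psi \circ \mu \text{ is satisfiable} \\
&(\mathrm{R4})\ \text{if } \psi_1 \equiv \psi_2 \text{ and } \mu_1 \equiv \mu_2, \text{ then } \psi_1 \circ \mu_1 \equiv \psi_2 \circ \mu_2 \\
&(\mathrm{R5})\ (\psi \circ \mu) \wedge \varphi \models \psi \circ (\mu \wedge \varphi) \\
&(\mathrm{R6})\ \text{if } (\psi \circ \mu) \wedge \varphi \text{ is satisfiable, then } \psi \circ (\mu \wedge \varphi) \models (\psi \circ \mu) \wedge \varphi
\end{align*}
```

    Postulate (R4) is the syntax-independence postulate whose abandonment the thesis also examines; K&M's representation theorem ties (R1) to (R6) to assignments producing total preorders over interpretations, and it is the transitivity of these preorders that the thesis's generalization relaxes.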

    Visual exploration of semantic-web-based knowledge structures

    Humans have a curious nature and seek a better understanding of the world. Data, information, and knowledge became assets of our modern society through the information technology revolution in the form of the internet. However, with the growing size of accumulated data, new challenges emerge, such as searching and navigating in these large collections of data, information, and knowledge. Current developments in academic and industrial contexts target these challenges using Semantic Web technologies. The Semantic Web is an extension of the Web that provides machine-readable representations of knowledge for various domains. These machine-readable representations allow intelligent machine agents to understand the meaning of the data and information, and enable the inference of new knowledge. Generally, the Semantic Web is designed for information exchange and processing and does not focus on presenting such semantically enriched data to humans. Visualizations support the exploration, navigation, and understanding of data by exploiting humans' ability to comprehend complex data through visual representations. In the context of Semantic-Web-based knowledge structures, various visualization methods and tools are available, and new ones are being developed every year. However, suitable visualizations depend highly on the individual use case and the targeted user group. In this thesis, we investigate visual exploration techniques for Semantic-Web-based knowledge structures by addressing the following challenges: i) how to engage various user groups in modeling such semantic representations; ii) how to facilitate understanding using customizable visual representations; and iii) how to ease the creation of visualizations for various data sources and different use cases. The achieved results indicate that visual modeling techniques facilitate the engagement of various user groups in ontology modeling. Customizable visualizations enable users to adjust visualizations to their current needs and provide different views on the data. Additionally, customizable visualization pipelines enable rapid visualization generation for various use cases, data sources, and user groups.
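
    As a loose illustration of the customizable visualization pipeline idea (the thesis's own tooling is not reproduced here), the sketch below chains three interchangeable stages: loading RDF, converting it into a labeled graph, and rendering an image. The stage split and the use of rdflib, networkx, and matplotlib are assumptions of this sketch.

```python
# Illustrative three-stage visualization pipeline: load RDF, build a
# labeled graph, render it. The stages and libraries are assumptions
# for this sketch, not the tooling developed in the thesis.
import networkx as nx
import matplotlib.pyplot as plt
from rdflib import Graph

def load_stage(turtle_data: str) -> Graph:
    g = Graph()
    g.parse(data=turtle_data, format="turtle")
    return g

def layout_stage(g: Graph) -> nx.DiGraph:
    nxg = nx.DiGraph()
    for s, p, o in g:
        # Keep only the local name of each URI to declutter the drawing.
        nxg.add_edge(str(s).split("/")[-1], str(o).split("/")[-1],
                     label=str(p).split("/")[-1])
    return nxg

def render_stage(nxg: nx.DiGraph, path: str) -> None:
    pos = nx.spring_layout(nxg, seed=42)
    nx.draw(nxg, pos, with_labels=True, node_color="lightblue")
    nx.draw_networkx_edge_labels(
        nxg, pos, edge_labels=nx.get_edge_attributes(nxg, "label"))
    plt.savefig(path)

data = """
@prefix ex: <http://example.org/> .
ex:Ontology ex:hasClass ex:Person .
ex:Person ex:hasProperty ex:name .
"""
render_stage(layout_stage(load_stage(data)), "ontology_view.png")
```

    Because each stage only depends on the previous stage's output type, any stage can be swapped out, which is one way to read the abstract's point about adapting visualizations to different data sources and use cases.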

    Keyword-Based Querying for the Social Semantic Web

    Enabling non-experts to publish data on the web is an important achievement of the social web and one of the primary goals of the social semantic web. Making the data easily accessible has, in turn, received little attention, which is problematic from the point of view of incentives: users are likely to be less motivated to participate in the creation of content if the use of this content is mostly reserved to experts. Querying in semantic wikis, for example, is typically realized in terms of full-text search over the textual content and a web query language such as SPARQL for the annotations. This approach has two shortcomings that limit the extent to which data can be leveraged by users: combined queries over content and annotations are not possible, and users are either restricted to expressing their query intent using simple but vague keyword queries or have to learn a complex web query language. The work presented in this dissertation investigates a more suitable form of querying for semantic wikis that consolidates two seemingly conflicting characteristics of query languages, ease of use and expressiveness. This work was carried out in the context of the semantic wiki KiWi, but the underlying ideas apply more generally to the social semantic and social web. We begin by defining a simple modular conceptual model for the KiWi wiki that enables rich and expressive knowledge representation. One component of this model is structured tags, an annotation formalism that is simple yet flexible and expressive, and aims at bridging the gap between atomic tags and RDF. The viability of the approach is confirmed by a user study, which finds that structured tags are suitable for quickly annotating evolving knowledge and are perceived well by the users. The main contribution of this dissertation is the design and implementation of KWQL, a query language for semantic wikis. KWQL combines keyword search and web querying to enable querying that scales with user experience and information need: basic queries are easy to express, and as the search criteria become more complex, more expertise is needed to formulate the corresponding query. A novel aspect of KWQL is that it combines both paradigms in a bottom-up fashion: it treats neither of the two as an extension of the other, but instead integrates both in one framework. The language allows for rich combined queries over full text, metadata, document structure, and informal to formal semantic annotations. KWilt, the KWQL query engine, provides the full expressive power of first-order queries, but at the same time can evaluate basic queries at almost the speed of the underlying search engine. KWQL is accompanied by the visual query language visKWQL and an editor that displays both the textual and visual form of the current query and reflects changes to either representation in the other. A user study shows that participants quickly learn to construct KWQL and visKWQL queries, even when given only a short introduction. KWQL allows users to sift the wealth of structure and annotations in an information system for relevant data. If relevant data constitutes a substantial fraction of all data, ranking becomes important. To this end, we propose pest, a novel ranking method that propagates relevance among structurally related or similarly annotated data. Extensive experiments, including a user study on a real-life wiki, show that pest improves the quality of the ranking over a range of existing ranking approaches.
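
    The propagation behind pest can be pictured as a personalized-PageRank-style fixed point: pages matching the keyword query receive an initial relevance score, and a fraction of each score is repeatedly pushed to structurally related or similarly annotated pages. The sketch below is a heavily simplified rendering of that idea rather than pest's actual algorithm; the toy link matrix and the damping factor are assumptions.

```python
# Simplified relevance propagation in the spirit described above: pages
# that match a query get an initial score, and a fraction of each score
# is repeatedly pushed to related pages. The toy wiki graph and the
# damping factor alpha are assumptions of this sketch, not pest itself.
import numpy as np

# Adjacency of a toy wiki: links[i][j] = 1 if page i is related to page j.
links = np.array([
    [0, 1, 1, 0],
    [1, 0, 0, 1],
    [0, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

# Row-normalize so each page distributes its relevance over its neighbors.
out = links.sum(axis=1, keepdims=True)
transition = links / np.where(out == 0, 1, out)

# Initial relevance: pages 0 and 2 match the keyword query directly.
base = np.array([1.0, 0.0, 1.0, 0.0])
base /= base.sum()

alpha = 0.5  # fraction of relevance retained at the matching pages
score = base.copy()
for _ in range(100):  # power iteration to the fixed point
    score = alpha * base + (1 - alpha) * transition.T @ score

print({f"page{i}": round(float(s), 3) for i, s in enumerate(score)})
```

    The fixed point gives non-zero relevance to pages 1 and 3 even though they never match the query directly, which mirrors the abstract's point that relevance flows to structurally related data.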