
    Knowledge Organization Systems (KOS) in the Semantic Web: A Multi-Dimensional Review

    Since the Simple Knowledge Organization System (SKOS) specification and its SKOS eXtension for Labels (SKOS-XL) became formal W3C recommendations in 2009, a significant number of conventional knowledge organization systems (KOS), including thesauri, classification schemes, name authorities, and lists of codes and terms produced before the arrival of the ontology wave, have made their way into the Semantic Web mainstream. This paper uses "LOD KOS" as an umbrella term for all of the value vocabularies and lightweight ontologies within the Semantic Web framework. The paper provides an overview of what the LOD KOS movement has brought to various communities and users. These benefits are not limited to the communities of value vocabulary constructors and providers, nor to the catalogers and indexers who have a long history of applying the vocabularies to their products. LOD dataset producers and LOD service providers, information architects and interface designers, and researchers in the sciences and humanities are also direct beneficiaries of LOD KOS. The paper examines a set of collected cases (experimental or in real applications) and aims to identify the uses of LOD KOS in order to share practices and ideas among communities and users. Through the viewpoints of a number of different user groups, the functions of LOD KOS are examined from multiple dimensions. This paper focuses on LOD dataset producers, vocabulary producers, and researchers (as end-users of KOS).
    Comment: 31 pages, 12 figures; accepted paper in the International Journal on Digital Libraries
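    To make the notion of a SKOS value vocabulary concrete, the sketch below builds a two-concept fragment with the Python rdflib library. The namespace, scheme, and concepts are invented for illustration and are not drawn from the paper; this is a minimal sketch, assuming rdflib (version 6 or later) is installed.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

# Hypothetical vocabulary namespace, for illustration only.
EX = Namespace("http://example.org/vocab/")

g = Graph()
g.bind("skos", SKOS)
g.bind("ex", EX)

# A minimal concept scheme with two hierarchically related concepts.
g.add((EX.scheme, RDF.type, SKOS.ConceptScheme))
g.add((EX.mammals, RDF.type, SKOS.Concept))
g.add((EX.mammals, SKOS.prefLabel, Literal("Mammals", lang="en")))
g.add((EX.mammals, SKOS.inScheme, EX.scheme))
g.add((EX.cats, RDF.type, SKOS.Concept))
g.add((EX.cats, SKOS.prefLabel, Literal("Cats", lang="en")))
g.add((EX.cats, SKOS.altLabel, Literal("Felines", lang="en")))
g.add((EX.cats, SKOS.broader, EX.mammals))
g.add((EX.cats, SKOS.inScheme, EX.scheme))

print(g.serialize(format="turtle"))
```

    Publishing such a fragment with dereferenceable URIs is what turns a conventional thesaurus into an LOD KOS that catalogers, dataset producers, and interface designers can all link against.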

    Constrained tGAP for generalisation between scales: the case of Dutch topographic data

    This article presents the results of integrating large- and medium-scale data into a unified data structure. This structure can be used as a single non-redundant representation for the input data, which can be queried at any arbitrary scale between the source scales. The solution is based on the constrained topological Generalized Area Partition (tGAP), which stores the results of a generalization process applied to the large-scale dataset, controlled by the objects of the medium-scale dataset, which act as constraints on the large-scale objects. The result contains the accurate geometry of the large-scale objects enriched with the generalization knowledge of the medium-scale data, stored as references in the constrained tGAP structure. The advantage of this constrained approach over the original tGAP is the higher quality of the aggregated maps. The idea was implemented with real topographic datasets from The Netherlands for the large-scale (1:1,000) and medium-scale (1:10,000) data. The approach is expected to be equally valid for any categorical map and for other scales as well.
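    The core idea is that each area object is stored once, together with the scale interval over which it is valid, so the map at any intermediate scale is a selection from the structure rather than a separate dataset. The toy Python sketch below only mimics that behaviour; the classes, scale denominators, and merge step are hypothetical and do not reproduce the authors' implementation.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class AreaObject:
    """An area object valid over a half-open range of scale denominators."""
    geometry_id: str      # reference to the stored large-scale geometry
    min_denominator: int  # finest scale at which the object appears
    max_denominator: int  # coarsest scale before it merges into its parent

def slice_at_scale(objects: List[AreaObject], denominator: int) -> List[AreaObject]:
    """Return the set of objects forming the map at one target scale."""
    return [o for o in objects
            if o.min_denominator <= denominator < o.max_denominator]

# Two parcels that merge into one block somewhere between 1:1,000 and 1:10,000.
objects = [
    AreaObject("parcel_a", 1_000, 4_000),
    AreaObject("parcel_b", 1_000, 4_000),
    AreaObject("block_ab", 4_000, 10_000),
]
print([o.geometry_id for o in slice_at_scale(objects, 2_000)])  # the two parcels
print([o.geometry_id for o in slice_at_scale(objects, 5_000)])  # the merged block
```

    In the constrained variant, the medium-scale objects fix the merge targets in advance, which is what yields the higher-quality aggregated maps the abstract describes.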

    Quality Assessment of Linked Datasets using Probabilistic Approximation

    With the increasing application of Linked Open Data, assessing the quality of datasets by computing quality metrics becomes an issue of crucial importance. For large and evolving datasets, an exact, deterministic computation of the quality metrics is too time-consuming or expensive. We employ probabilistic techniques such as Reservoir Sampling, Bloom Filters and Clustering Coefficient estimation to implement a broad set of data quality metrics in an approximate but sufficiently accurate way. Our implementation is integrated in the comprehensive data quality assessment framework Luzzu. We evaluated its performance and accuracy on Linked Open Datasets of broad relevance.
    Comment: 15 pages, 2 figures; to appear in the ESWC 2015 proceedings
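    Of the probabilistic techniques mentioned, Reservoir Sampling is the easiest to illustrate: it maintains a fixed-size uniform sample of a triple stream whose length is unknown in advance, so a quality metric can be estimated from the sample instead of the full dataset. The Python sketch below is the generic textbook version (Algorithm R), not the Luzzu implementation.

```python
import random
from typing import Iterable, List, TypeVar

T = TypeVar("T")

def reservoir_sample(stream: Iterable[T], k: int, seed: int = 42) -> List[T]:
    """Uniformly sample k items from a stream of unknown length (Algorithm R)."""
    rng = random.Random(seed)
    reservoir: List[T] = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)   # fill the reservoir first
        else:
            j = rng.randint(0, i)    # item i survives with probability k/(i+1)
            if j < k:
                reservoir[j] = item
    return reservoir

# Estimate a metric (e.g. the fraction of triples with well-formed subjects)
# on a 1,000-item sample instead of the full million-triple dataset.
triples = range(1_000_000)           # stand-in for an RDF triple stream
sample = reservoir_sample(triples, k=1_000)
print(len(sample))
```

    Because each stream element ends up in the reservoir with equal probability k/n, metrics averaged over the sample are unbiased estimates of the exact values, which is what makes the approximation "sufficiently accurate" for large datasets.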

    Gap analysis of nickel bioaccessibility and bioavailability in different food matrices and its impact on the nickel exposure assessment

    The metal nickel is well known to cause nickel allergy in sensitive humans through prolonged dermal contact with materials releasing (high) amounts of nickel. Oral nickel exposure via water and food intake is of potential concern. Nickel is essential to plants and animals; it can occur naturally in food products, or contamination may occur across the agro-food chain. This gap analysis evaluates nickel as a potential food safety hazard posing a risk to human health. In a first step, the available data on the occurrence of nickel and its contamination in food and drinks were collected through a literature review, followed by a discussion of the potential risks associated with this contamination. Elevated nickel concentrations were mostly found in plant-based foods, e.g. legumes and nuts, in which nickel of natural origin is expected. However, a dedicated and systematic screening of foodstuffs for the presence of nickel is currently still lacking. In a next step, published studies on human exposure to nickel via foods and drinks were critically evaluated. Not accounting for the bioaccessibility and/or bioavailability of the metal may lead to an overestimation of the body's exposure to nickel via food and drinks. This overestimation is especially problematic when the measured nickel level in foods is high but the bioaccessibility and/or bioavailability of nickel in these products is low. Therefore, this paper analyzes the outcomes of the existing dietary intake and bioaccessibility/bioavailability studies conducted for nickel and clarifies the remaining gaps in nickel bioaccessibility and/or bioavailability studies. The reported bioaccessibility and bioavailability percentages for different foods and drinks were found to vary between <LOD and 83% and between 0 and 30%, respectively, indicating that only a fraction of the total nickel contained in foodstuffs can be absorbed by the intestinal epithelium cells. This paper provides a unique critical overview of nickel in the human diet, from the factors affecting its occurrence in food to its absorption by the body.
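    The overestimation argument reduces to simple arithmetic: systemic exposure is roughly the total intake scaled by the bioaccessible (and, where measured, bioavailable) fraction. The Python sketch below uses invented numbers purely for illustration; none of the values are taken from the studies reviewed in the paper, though the fractions fall within the ranges quoted above.

```python
# Toy illustration (invented numbers, not data from the paper): adjusting a
# dietary nickel intake estimate for bioaccessibility and bioavailability.
intake_ug_per_day = 150.0       # hypothetical total dietary nickel intake
bioaccessible_fraction = 0.40   # hypothetical fraction released from the matrix
bioavailable_fraction = 0.10    # hypothetical fraction crossing the epithelium

absorbed = intake_ug_per_day * bioaccessible_fraction * bioavailable_fraction
print(f"Unadjusted exposure estimate: {intake_ug_per_day:.0f} ug/day")
print(f"Adjusted systemic exposure:   {absorbed:.0f} ug/day")  # 6 ug/day
```

    The gap between the two numbers is exactly why intake-only assessments can overstate the hazard when the nickel in a food is largely locked in the matrix.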

    Report of the Stanford Linked Data Workshop

    The Stanford University Libraries and Academic Information Resources (SULAIR), with the Council on Library and Information Resources (CLIR), conducted a week-long workshop on the prospects for a large-scale, multi-national, multi-institutional prototype of a Linked Data environment for discovery of, and navigation among, the rapidly and chaotically expanding array of academic information resources. As preparation for the workshop, CLIR sponsored a survey by Jerry Persons, Chief Information Architect emeritus of SULAIR, that was published originally for workshop participants as background and is now publicly available. The original intention of the workshop was to devise a plan for such a prototype. However, such was the diversity of knowledge, experience, and views of the potential of Linked Data approaches that the workshop participants turned to two more fundamental goals: building common understanding and enthusiasm on the one hand, and identifying opportunities and challenges to be confronted in the preparation of the intended prototype and its operation on the other. In pursuit of those objectives, the workshop participants produced:
    1. a value statement addressing the question of why a Linked Data approach is worth prototyping;
    2. a manifesto for Linked Libraries (and Museums and Archives and …);
    3. an outline of the phases in a life cycle of Linked Data approaches;
    4. a prioritized list of known issues in generating, harvesting & using Linked Data;
    5. a workflow with notes for converting library bibliographic records and other academic metadata to URIs;
    6. examples of potential “killer apps” using Linked Data; and
    7. a list of next steps and potential projects.
    This report includes a summary of the workshop agenda, a chart showing the use of Linked Data in cultural heritage venues, and short biographies and statements from each of the participants.

    Collaborative recommendations with content-based filters for cultural activities via a scalable event distribution platform

    Nowadays, most people have limited leisure time, and the offer of (cultural) activities to fill this time is enormous. Consequently, picking the most appropriate events becomes increasingly difficult for end-users. This complexity of choice reinforces the need for filtering systems that assist users in finding and selecting relevant events. Whereas traditional filtering tools enable, e.g., keyword-based or filtered searches, innovative recommender systems draw on user ratings, preferences, and metadata describing the events. Existing collaborative recommendation techniques, developed for suggesting web-shop products or audio-visual content, have difficulties with sparse rating data and cannot cope at all with event-specific restrictions such as availability, time, and location. Moreover, aggregating, enriching, and distributing these events are additional requisites for an optimal communication channel. In this paper, we propose a highly scalable event recommendation platform that considers event-specific characteristics. Personal suggestions are generated by an advanced collaborative filtering algorithm, which is made more robust on sparse data by extending user profiles with presumable future consumptions. The events, which are described using an RDF/OWL representation of the EventsML-G2 standard, are categorized and enriched via smart indexing and open Linked Data sets. This metadata model enables additional content-based filters, which consider event-specific characteristics, on the recommendation list. The integration of these different functionalities is realized by a scalable and extendable bus architecture. Finally, focus-group conversations were organized with external experts, cultural mediators, and potential end-users to evaluate the event distribution platform and investigate the possible added value of recommendations for cultural participation.
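    A common way to make user-based collaborative filtering more robust on sparse data is to densify each profile with presumable future consumptions inferred from similar users before computing recommendations, which is the general idea the abstract describes. The Python sketch below is a toy version of that densification step with invented data; it is not the authors' algorithm, and the event-specific content-based filters (availability, time, location) would be applied to the resulting list afterwards.

```python
import numpy as np

# Toy user-event matrix (1 = attended event, 0 = unknown); invented data.
R = np.array([
    [1, 0, 1, 0],
    [1, 1, 0, 0],
    [0, 1, 1, 1],
], dtype=float)

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def extend_profiles(R: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Densify profiles by adding presumable consumptions from similar users."""
    R_ext = R.copy()
    for u in range(R.shape[0]):
        sims = np.array([cosine_sim(R[u], R[v]) for v in range(R.shape[0])])
        sims[u] = 0.0                      # ignore self-similarity
        if sims.sum() == 0:
            continue
        scores = sims @ R / sims.sum()     # similarity-weighted vote per event
        R_ext[u] = np.maximum(R[u], (scores >= threshold).astype(float))
    return R_ext

print(extend_profiles(R))  # user 0 gains event 1 as a presumable consumption
```

    The densified matrix gives the similarity computation more overlap to work with, which is what mitigates the sparsity problem before any event-specific restrictions are enforced.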

    A More Decentralized Vision for Linked Data

    Get PDF
    In this deliberately provocative position paper, we claim that ten years into Linked Data there are still (too?) many unresolved challenges on the way to a truly machine-readable and decentralized Web of data. We take a deeper look at the biomedical domain - currently one of the most promising "adopters" of Linked Data, if we believe the ever-present "LOD cloud" diagram. Herein, we try to highlight and exemplify key technical and non-technical challenges to the success of LOD, and we outline potential solution strategies. We hope that this paper will serve as a basis for discussion, as a fresh start towards more actionable, truly decentralized Linked Data, and as a call to the community to join forces.
    Series: Working Papers on Information Systems, Information Business and Operations