DBpedia's triple pattern fragments: usage patterns and insights
Queryable Linked Data is published through several interfaces, including SPARQL endpoints and Linked Data documents. In October 2014, the DBpedia Association announced an official Triple Pattern Fragments interface to its popular DBpedia dataset. This interface aims to improve the availability of live queryable data by dividing query execution between clients and servers. In this paper, we present a usage analysis covering November 2014 through July 2015. Over these nine months, the interface had an average availability of 99.99%, handling 16,776,170 requests, 43.0% of which were served from cache. These numbers provide promising evidence that low-cost Triple Pattern Fragments interfaces are a viable strategy for live applications on top of public, queryable datasets.
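To make the client/server split concrete, the sketch below shows what a single Triple Pattern Fragments request looks like from the client side. It is a minimal illustration, not the interface's reference client: the endpoint URL and the `subject`/`predicate`/`object` parameter names are assumptions based on the public DBpedia fragments interface, and a real TPF client would discover the search form from the server's hypermedia controls rather than hard-code them.

```python
# Minimal sketch of one Triple Pattern Fragments request (endpoint URL and
# parameter names are assumptions; a spec-compliant client discovers them
# from the fragment's embedded hypermedia search form).
import requests

FRAGMENT_URL = "https://fragments.dbpedia.org/2015/en"  # assumed endpoint

def fetch_fragment(subject=None, predicate=None, obj=None):
    """Request one page of triples matching a single triple pattern."""
    params = {}
    if subject:
        params["subject"] = subject
    if predicate:
        params["predicate"] = predicate
    if obj:
        params["object"] = obj
    # The server answers with RDF that embeds both the matching triples and
    # controls for paging (next-page link, estimated total count), so the
    # client, not the server, does the heavy lifting of joining patterns.
    response = requests.get(
        FRAGMENT_URL,
        params=params,
        headers={"Accept": "text/turtle"},
        timeout=30,
    )
    response.raise_for_status()
    return response.text

# One page of triples about dbr:Berlin.
print(fetch_fragment(subject="http://dbpedia.org/resource/Berlin")[:500])
```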
Type-Constrained Representation Learning in Knowledge Graphs
Large knowledge graphs increasingly add value to various applications that require machines to recognize and understand queries and their semantics, as in search or question answering systems. Latent variable models have gained increasing attention for the statistical modeling of knowledge graphs, showing promising results in tasks related to knowledge graph completion and cleaning. Besides storing facts about the world, schema-based knowledge graphs are backed by rich semantic descriptions of entities and relation types that allow machines to understand the notion of things and their semantic relationships. In this work, we study how type constraints can generally support statistical modeling with latent variable models. More precisely, we integrate prior knowledge in the form of type constraints into various state-of-the-art latent variable approaches. Our experimental results show that prior knowledge on relation types significantly improves these models, by up to 77% in link-prediction tasks. The achieved improvements are especially prominent when a low model complexity is enforced, a crucial requirement when these models are applied to very large datasets. Unfortunately, type constraints are neither always available nor always complete; for example, they can become fuzzy when entities lack proper typing. We show that in these cases it can be beneficial to apply a local closed-world assumption that approximates the semantics of relation types based on observations made in the data.
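The sketch below illustrates the core idea under stated assumptions: it is not the paper's code, the embeddings and type masks are synthetic stand-ins, and the TransE-style scoring function is just one of the latent variable models the abstract refers to. The point is that a relation's domain/range type constraint shrinks the candidate set before ranking, rather than relying on the model to score wrongly-typed entities low.

```python
# Illustrative sketch (not the paper's implementation): type constraints
# restrict link-prediction candidates for (head, relation, ?) before ranking.
import numpy as np

n_entities, dim = 1000, 50
rng = np.random.default_rng(0)
entity_emb = rng.normal(size=(n_entities, dim))  # hypothetical entity embeddings
relation_emb = rng.normal(size=dim)              # one relation, TransE-style

# Which entities satisfy the relation's range constraint (stand-in for the
# rdfs:range information a schema-based knowledge graph would provide).
range_types = rng.random(n_entities) < 0.1

def rank_tails(head_idx, use_type_constraint=True):
    """Rank all entities as tail candidates; closer (higher score) is better."""
    scores = -np.linalg.norm(
        entity_emb[head_idx] + relation_emb - entity_emb, axis=1
    )
    if use_type_constraint:
        # Entities of the wrong type are excluded from the candidate set
        # entirely instead of merely being scored low by the model.
        scores[~range_types] = -np.inf
    return np.argsort(-scores)

print(rank_tails(0)[:5])  # five best type-admissible tail candidates
```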
Linked Data - the story so far
The term “Linked Data” refers to a set of best practices for publishing and connecting structured data on the Web. These best practices have been adopted by an increasing number of data providers over the last three years, leading to the creation of a global data space containing billions of assertions: the Web of Data. In this article, the authors present the concept and technical principles of Linked Data, and situate these within the broader context of related technological developments. They describe progress to date in publishing Linked Data on the Web, review applications that have been developed to exploit the Web of Data, and map out a research agenda for the Linked Data community as it moves forward.
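A minimal sketch of the central Linked Data mechanic the article describes: an entity is named by an HTTP URI, and a client dereferences that URI with content negotiation to receive RDF instead of a web page. The DBpedia URI used here is a real, commonly cited example; the rest is an illustrative assumption of how a consumer might fetch it.

```python
# Minimal sketch: dereference a Linked Data URI and content-negotiate for RDF.
import requests

uri = "http://dbpedia.org/resource/Tim_Berners-Lee"
response = requests.get(
    uri,
    headers={"Accept": "text/turtle"},  # ask for RDF triples, not HTML
    timeout=30,
    allow_redirects=True,  # resource URIs typically redirect to data documents
)
response.raise_for_status()
print(response.url)         # the RDF document the URI resolved to
print(response.text[:300])  # Turtle triples describing the entity
```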
Information Session
My work concerns the divergent narratives created by fusing varied, often conflicting textures, colors, and fabrics into a tenuous order. I intend for these otherwise clashing materials to create drama that is simultaneously enthusiastic, epic, and ambiguous. While this media's formal properties are an important component of my work, the material's cultural and art-historical associations are also a critical ingredient. In this thesis, I will explore the use of varied collage material, hierarchical compositions, and the contemporary influence of 19th-century Romantic themes as they relate to forming a variety of distinctly contemporary narratives in my compositions. I will investigate how my artistic point of view is informed by art history, irony, and the work of contemporary painters. Finally, I will discuss how my work engages a contemporary version of the Sublime.
Lock-in effects in competitive bidding schemes for payments for ecosystem services: Revisiting the fundamental transformation
Competitive bidding is considered a cost-effective allocation mechanism for payments for ecosystem services. This article shows that competition is not a necessary condition for sustaining cost-effectiveness in the long run. In a repeated conservation auction, learning, specific investments, and the creation of social capital bias the chances of winning a follow-up contract in favour of former auction winners. Applying the concept of the fundamental transformation (Williamson 1985), we argue that this asymmetry weakens competition and leads to lock-in effects between the auctioning agency and a stable pool of sellers, with uncertain consequences for cost-effectiveness. We compare data from two laboratory experiments on auction-based conservation programmes and show under which conditions lock-in effects are likely to occur in a controlled environment. Our findings demonstrate that lock-in effects do not erode the effectiveness of an auction but change the rules of the game towards more favourable conditions for the provision of the targeted good or service. In view of the empirical evidence for the superior performance of long-term contract relationships compared to low-cost short-term contracting, we discuss directions for follow-up empirical work.
Using ChatGPT for Entity Matching
Entity matching is the task of deciding whether two entity descriptions refer to the same real-world entity. State-of-the-art entity matching methods often rely on fine-tuning Transformer models such as BERT or RoBERTa. Two major drawbacks of using these models for entity matching are that (i) the models require significant amounts of fine-tuning data to reach good performance and (ii) the fine-tuned models are not robust with respect to out-of-distribution entities. In this paper, we investigate using ChatGPT for entity matching as a more robust, training-data-efficient alternative to traditional Transformer models. We perform experiments along three dimensions: (i) general prompt design, (ii) in-context learning, and (iii) provision of higher-level matching knowledge. We show that ChatGPT is competitive with a fine-tuned RoBERTa model, reaching an average zero-shot performance of 83% F1 on a challenging matching task on which RoBERTa requires 2,000 training examples to reach similar performance. Adding in-context demonstrations to the prompts further improves F1 by up to 5%, even when using only a small set of 20 handpicked examples. Finally, we show that guiding the zero-shot model by stating higher-level matching rules leads to gains similar to those from providing in-context examples.
Innovations in the Context of Sustainability
Innovations are the core element of the survival and positioning of economies, as they contribute to satisfying market needs. Against the background of scarce resources, however, typically only those innovations are realized that appear advantageous from a business-management perspective, and thus fewer than would actually be sensible. Increasingly, though, the insight is gaining ground that this business-oriented market delimitation, and with it the traditional economic-viability assessment of innovations and innovation projects, is too narrow. Rather, today's innovation assessment requires an extension to social and ecological aspects, that is, a sustainability orientation. Based on the research project Nachhaltigkeitsorientierte Bewertung von Innovationsprojekten (NaBI; sustainability-oriented evaluation of innovation projects), this paper gives an overview of the research field and analyzes sustainability theory with respect to innovations. As a result, it proposes fixing a specific indicator set for each sustainability level (satellite system), in which essential components with defined threshold values are prescribed.