Collaboratively Patching Linked Data
Today's Web of Data is noisy. Linked Data often needs extensive preprocessing
to enable efficient use of heterogeneous resources. While consistent and valid
data is the key to efficient data processing and aggregation, we face two main
challenges: (1) identifying erroneous facts and tracking their origins in
dynamically connected datasets is difficult, and (2) efforts to curate deficient
facts in Linked Data are rarely shared. Since erroneous data is often duplicated
and (re-)distributed by mashup applications, keeping the data tidy is not only
the responsibility of a few original publishers but becomes a mission for all
distributors and consumers of Linked Data as well. We present a new approach to
expose and reuse patches on erroneous data in order to enhance and add quality
information to the Web of Data. The feasibility of our approach is demonstrated
by the example of a collaborative game that patches statements in DBpedia data
and provides notifications for relevant changes.
Comment: 2nd International Workshop on Usage Analysis and the Web of Data
(USEWOD2012) at the 21st International World Wide Web Conference (WWW2012),
Lyon, France, April 17th, 2012
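As a rough illustration of the idea, the sketch below (Python with rdflib; the patch structure, the helper apply_patch, and the DBpedia resource and property are illustrative assumptions, not the paper's actual patch vocabulary) represents a patch as triples to retract plus triples to assert, so that a correction made once can be shared and replayed by other consumers of the same data.

```python
# Minimal sketch: a patch bundles the erroneous triples to retract and the
# corrected triples to assert. The DBpedia resource/property and the value
# are hypothetical examples.
from rdflib import Graph, Literal, Namespace, XSD

DBR = Namespace("http://dbpedia.org/resource/")
DBO = Namespace("http://dbpedia.org/ontology/")

patch = {
    "delete": [(DBR["Example_City"], DBO["populationTotal"],
                Literal(50, datatype=XSD.integer))],
    "insert": [(DBR["Example_City"], DBO["populationTotal"],
                Literal(50000, datatype=XSD.integer))],
}

def apply_patch(graph: Graph, patch: dict) -> None:
    """Retract the erroneous triples and assert the corrected ones."""
    for triple in patch["delete"]:
        graph.remove(triple)
    for triple in patch["insert"]:
        graph.add(triple)

g = Graph()
for triple in patch["delete"]:   # start from the noisy source data
    g.add(triple)
apply_patch(g, patch)
print(g.serialize(format="turtle"))
```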
Evaluating Query and Storage Strategies for RDF Archives
There is an emerging demand for efficiently archiving and (temporally) querying different versions of evolving semantic Web data. As novel archiving systems are starting to address this challenge, foundations and standards for benchmarking RDF archives are needed to evaluate their storage space efficiency and the performance of different retrieval operations. To this end, we provide theoretical foundations on the design of data and queries for evaluating emerging RDF archiving systems. We then instantiate these foundations with a concrete set of queries on the basis of a real-world evolving dataset. Finally, we perform an empirical evaluation of various current archiving techniques and querying strategies on these data, which is meant to serve as a baseline for future developments in querying archives of evolving RDF data.
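As a rough illustration of what such storage and retrieval strategies involve, the following sketch (toy data and assumed helper names, not the benchmark itself) stores the initial version in full and later versions as change sets, materializes a requested version by replaying the deltas, and evaluates a single triple pattern against it.

```python
# Minimal sketch of change-based RDF archiving: version 0 is kept as a full
# copy, later versions as (added, deleted) sets of triples.
v0 = {("ex:Alice", "ex:knows", "ex:Bob")}
deltas = [
    # (added, deleted) for versions 1, 2, ...
    ({("ex:Alice", "ex:knows", "ex:Carol")}, set()),
    (set(), {("ex:Alice", "ex:knows", "ex:Bob")}),
]

def materialize(version: int) -> set:
    """Reconstruct a version by replaying change sets on the initial copy."""
    triples = set(v0)
    for added, deleted in deltas[:version]:
        triples |= added
        triples -= deleted
    return triples

def match(triples: set, s=None, p=None, o=None) -> set:
    """Evaluate a single triple pattern; None acts as a wildcard."""
    return {t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)}

# Version query: in which versions does Alice know Bob?
print([v for v in range(len(deltas) + 1)
       if match(materialize(v), s="ex:Alice", o="ex:Bob")])
```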
Attenuated Fatigue in Slow Twitch Skeletal Muscle during Isotonic Exercise in Rats with Chronic Heart Failure
During isometric contractions, slow twitch soleus muscles (SOL) from rats with chronic heart failure (chf) are more fatigable than those of sham animals. However, a muscle normally shortens during activity, and fatigue development is highly task dependent. We therefore examined the development of skeletal muscle fatigue during shortening (isotonic) contractions in chf and sham-operated rats. Six weeks after coronary artery ligation, infarcted animals were classified as failing (chf) if left ventricular end diastolic pressure was >15 mmHg. During isoflurane anaesthesia, SOL with intact blood supply was stimulated (1 s on, 1 s off) at 30 Hz for 15 min and allowed to shorten isotonically against a constant afterload. Muscle temperature was maintained at 37°C. In resting muscle, maximum isometric force (Fmax) and the concentrations of ATP and CrP did not differ between the two groups. During stimulation, Fmax and the metabolite concentrations declined in parallel in sham and chf. Fatigue, evident as reduced shortening during stimulation, was also not different between the two groups. The isometric force decline was fitted to a bi-exponential decay equation. Both time constants increased transiently and returned to initial values after approximately 200 s of the fatigue protocol. This resulted in a transient rise in baseline tension between stimulations, although this effect was less prominent in chf than in sham. Myosin light chain 2s phosphorylation declined in both groups after 100 s of isotonic contractions and remained at this level throughout the 15 min of stimulation. Despite the higher energy demand during isotonic than during isometric contractions, both shortening capacity and the rate of isometric force decline were as well or better preserved in fatigued SOL from chf rats than in sham. This observation is in striking contrast to previous reports that employed isometric contractions to induce fatigue.
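For reference, a bi-exponential decay of isometric force is commonly written in the generic form (the exact parameterization used in the study is not given here):

F(t) = A_1 e^{-t/\tau_1} + A_2 e^{-t/\tau_2} + F_{\infty}

where \tau_1 and \tau_2 are the two time constants reported to increase transiently during the fatigue protocol, and F_{\infty} is the residual force level.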
Scheduling Refresh Queries for Keeping Results from a SPARQL Endpoint Up-to-Date
Many datasets change over time. As a consequence, long-running applications that cache and repeatedly use query results obtained from a SPARQL endpoint may resubmit the queries regularly to ensure that the results are up to date. While this approach may be feasible as long as the number of such regular refresh queries is manageable, with an increasing number of applications adopting it, the SPARQL endpoint may become overloaded with refresh queries. A more scalable approach is to use a middleware component at which the applications register their queries and get notified with updated query results once the results have changed. This middleware can then schedule the repeated execution of the refresh queries without overloading the endpoint. In this paper, we study the problem of scheduling refresh queries for a large number of registered queries, assuming an overload-avoiding upper bound on the length of a regular time slot available for testing refresh queries. We investigate a variety of scheduling strategies and compare them experimentally in terms of the time slots needed before they recognize changes and the number of changes that they miss.
Funding: Funded by grants from the German Federal Ministry of Education and Research for the project D-Werft (03WKCJ4D).
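To make the setting above more concrete, the sketch below (Python; the class RegisteredQuery, the placeholder execute, and the oldest-first selection policy are illustrative assumptions, not the strategies evaluated in the paper) shows a middleware loop that refreshes at most slot_size registered queries per time slot and notifies applications whose results have changed.

```python
# Minimal sketch of a refresh-query scheduler with an overload-avoiding
# bound of slot_size query executions per time slot.
import time
from dataclasses import dataclass

@dataclass
class RegisteredQuery:
    query: str                      # SPARQL text registered by the application
    last_result: frozenset = frozenset()
    last_refreshed: float = 0.0     # seconds since epoch

def execute(query: str) -> frozenset:
    """Placeholder for running the query against the SPARQL endpoint."""
    raise NotImplementedError

def refresh_slot(registry: list[RegisteredQuery], slot_size: int, notify) -> None:
    """Run one time slot: refresh the longest-unrefreshed queries first."""
    due = sorted(registry, key=lambda q: q.last_refreshed)[:slot_size]
    for q in due:
        result = execute(q.query)
        q.last_refreshed = time.time()
        if result != q.last_result:   # result changed -> notify the application
            q.last_result = result
            notify(q)
```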