A Linked Data Approach to Sharing Workflows and Workflow Results
A bioinformatics analysis pipeline is often highly elaborate, due to the inherent complexity of biological systems and the variety and size of datasets. A digital equivalent of the "Materials and Methods" section in wet laboratory publications would be highly beneficial to bioinformatics, for evaluating evidence and examining data across related experiments, while introducing the potential to find associated resources and integrate them as data and services. We present initial steps towards preserving bioinformatics "materials and methods" by exploiting the workflow paradigm for capturing the design of a data analysis pipeline, and RDF to link the workflow, its component services, run-time provenance, and a personalized biological interpretation of the results. An example shows the reproduction of the unique graph of an analysis procedure, its results, provenance, and personal interpretation of a text mining experiment. It links data from Taverna, myExperiment.org, BioCatalogue.org, and ConceptWiki.org. The approach is relatively "light-weight" and unobtrusive to bioinformatics users.
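A minimal sketch of how such links might look in practice, using rdflib; the URIs, the ex: terms, and the choice of PROV-O are illustrative assumptions rather than the paper's actual vocabulary:

    # Sketch only: links a workflow to a component service, a run, and a
    # personal interpretation. All identifiers below are hypothetical.
    from rdflib import Graph, Literal, Namespace, URIRef

    EX = Namespace("http://example.org/vocab#")
    PROV = Namespace("http://www.w3.org/ns/prov#")

    g = Graph()
    workflow = URIRef("http://www.myexperiment.org/workflows/1234")  # hypothetical ID
    service = URIRef("http://www.biocatalogue.org/services/5678")    # hypothetical ID
    run = URIRef("http://example.org/runs/2011-05-01-a")

    g.add((workflow, EX.usesService, service))  # design links to component services
    g.add((run, PROV.used, workflow))           # run-time provenance
    g.add((run, EX.interpretation,              # personal interpretation of results
           Literal("Candidate genes appear enriched for pathway X.")))

    print(g.serialize(format="turtle"))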
A framework for feeding Linked Data to Complex Event Processing engines
A huge volume of Linked Data has been published on the Web, yet it is not processable by Complex Event Processing (CEP) or Event Stream Processing (ESP) engines. This paper presents a framework to bridge this gap, under which Linked Data is first translated into events conforming to a lightweight ontology and then fed to CEP engines. The event processing results are also published back onto the Web of Data. In this way, CEP engines are connected to the Web of Data, and ontological reasoning is integrated with event processing. Finally, the implementation method and a case study of the framework are presented.
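As a rough illustration of the translation step, the sketch below flattens RDF resources typed as events into attribute maps that a CEP engine could consume; the event ontology namespace and the input file are assumptions, not the paper's actual interfaces:

    # Sketch only: flatten RDF event descriptions into dicts for a CEP engine.
    from rdflib import RDF, Graph, Namespace

    EV = Namespace("http://example.org/event-ontology#")  # hypothetical lightweight ontology

    def linked_data_to_events(graph: Graph):
        """Yield one flat attribute map per resource typed as ev:Event."""
        for event in graph.subjects(RDF.type, EV.Event):
            attributes = {str(p): str(o) for p, o in graph.predicate_objects(event)}
            yield {"id": str(event), **attributes}

    g = Graph()
    g.parse("events.ttl", format="turtle")  # hypothetical Linked Data snapshot
    for event in linked_data_to_events(g):
        print(event)  # a real deployment would submit this to the CEP engine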
Drag it together with Groupie: making RDF data authoring easy and fun for anyone
One of the foremost challenges towards realizing a "Read-write Web of Data" [3] is making it possible for everyday computer users to easily find, manipulate, create, and publish data back to the Web so that it can be made available for others to use. However, many aspects of Linked Data make authoring and manipulation difficult for "normal" (i.e., non-coder) end-users. First, data can be high-dimensional, having arbitrarily many properties per "instance", and interlinked with arbitrarily many other instances in many different ways. Second, collections of Linked Data tend to be vastly more heterogeneous than those in typical structured databases, where instances are kept in uniform collections (e.g., database tables). Third, while highly flexible, reducing all structures to a graph comes at the cost of verbosity: even simple structures can appear complex. Finally, many of the concepts involved in Linked Data authoring (for example, the terms used to define ontologies) are highly abstract and foreign to regular citizen-users. To counter this complexity we have devised a drag-and-drop direct manipulation interface that makes authoring Linked Data easy, fun, and accessible to a wide audience. Groupie allows users to author data simply by dragging blobs representing entities onto other entities to compose relationships, establishing one relational link at a time. Since the underlying representation is RDF, Groupie facilitates the inclusion of references to entities and properties defined elsewhere on the Web through integration with popular Linked Data indexing services. Finally, to make it easy for new users to build upon others' work, Groupie provides a communal space where all data sets created by users can be shared, cloned, and modified, allowing individual users to help each other model complex domains, thereby leveraging collective intelligence.
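The core interaction reduces to composing one triple per drag gesture; a minimal sketch of that underlying operation with rdflib (the drop function and the example entities are hypothetical, not Groupie's API):

    # Sketch only: one drag-and-drop gesture establishes one relational link.
    from rdflib import Graph, Namespace, URIRef

    FOAF = Namespace("http://xmlns.com/foaf/0.1/")
    g = Graph()

    def drop(source: URIRef, prop: URIRef, target: URIRef) -> None:
        """Dragging one entity blob onto another adds a single RDF triple."""
        g.add((source, prop, target))

    alice = URIRef("http://example.org/people/alice")
    bob = URIRef("http://example.org/people/bob")
    drop(alice, FOAF.knows, bob)  # alice --foaf:knows--> bob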
Semantic Sort: A Supervised Approach to Personalized Semantic Relatedness
We propose and study a novel supervised approach to learning statistical semantic relatedness models from subjectively annotated training examples. The proposed semantic model consists of parameterized co-occurrence statistics associated with textual units of a large background knowledge corpus. We present an efficient algorithm for learning such semantic models from a training sample of relatedness preferences. Our method is corpus independent and can essentially rely on any sufficiently large (unstructured) collection of coherent texts. Moreover, the approach facilitates the fitting of semantic models for specific users or groups of users. We present the results of an extensive range of experiments from small to large scale, indicating that the proposed method is effective and competitive with the state-of-the-art. Comment: 37 pages, 8 figures. A short version of this paper was already published at ECML/PKDD 201
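To convey the flavour of the model (this is a loose illustration, not the authors' algorithm): relatedness can be scored as a weighted sum over the corpus units in which two terms co-occur, with the per-unit weights nudged whenever an annotated preference is violated.

    # Sketch only: parameterized co-occurrence relatedness with perceptron-style
    # updates from pairwise preferences. All structures are illustrative.
    from collections import defaultdict

    weights = defaultdict(lambda: 1.0)  # one parameter per textual unit (e.g., document)

    def relatedness(pair, cooccurrence):
        """Weighted count of the corpus units in which both terms co-occur."""
        return sum(weights[doc] for doc in cooccurrence.get(pair, ()))

    def update(preferred, other, cooccurrence, lr=0.1):
        """If an annotated preference is violated, shift weight towards it."""
        if relatedness(preferred, cooccurrence) <= relatedness(other, cooccurrence):
            for doc in cooccurrence.get(preferred, ()):
                weights[doc] += lr
            for doc in cooccurrence.get(other, ()):
                weights[doc] -= lr

    cooc = {("cat", "dog"): ["doc1", "doc2"], ("cat", "car"): ["doc2"]}
    update(("cat", "dog"), ("cat", "car"), cooc)  # enforce: (cat,dog) > (cat,car)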
BlogForever D2.4: Weblog spider prototype and associated methodology
The purpose of this document is to present the evaluation of different solutions for capturing blogs and the established methodology, and to describe the developed blog spider prototype.
OntoMaven: Maven-based Ontology Development and Management of Distributed Ontology Repositories
In collaborative agile ontology development projects, support for modular reuse of ontologies from large existing remote repositories, ontology project life cycle management, and transitive dependency management are important needs. The Apache Maven approach has proven its success in distributed collaborative Software Engineering through its widespread adoption. The contribution of this paper is a new design artifact called OntoMaven. OntoMaven adopts the Maven-based development methodology and adapts its concepts to knowledge engineering, for Maven-based ontology development and management of ontology artifacts in distributed ontology repositories. Comment: Pre-print submission to 9th International Workshop on Semantic Web Enabled Software Engineering (SWESE2013). Berlin, Germany, December 2-5, 201
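A minimal sketch of what Maven-style transitive dependency resolution looks like when applied to ontology artifacts; the coordinates and the flat-dict repository are illustrative assumptions, not OntoMaven's implementation:

    # Sketch only: depth-first resolution of an ontology artifact and its
    # transitive imports, in Maven-style groupId:artifactId:version coordinates.
    def resolve(artifact, repository, resolved=None):
        resolved = resolved if resolved is not None else []
        for dependency in repository.get(artifact, ()):
            if dependency not in resolved:
                resolve(dependency, repository, resolved)
        if artifact not in resolved:
            resolved.append(artifact)
        return resolved

    repository = {
        "org.example:core-ontology:1.0": [],
        "org.example:domain-ontology:2.1": ["org.example:core-ontology:1.0"],
        "org.example:app-ontology:0.3": ["org.example:domain-ontology:2.1"],
    }
    print(resolve("org.example:app-ontology:0.3", repository))
    # imports are resolved before the artifacts that depend on them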
A linked data compliant framework for dynamic and web-scale consumption of web services
While Semantic Web Services (SWS) research aims at automating Web service tasks such as discovery, orchestration, and execution, its take-up has been very limited so far. This is due to several reasons, such as the inherent complexity of existing SWS frameworks and the considerable costs involved in creating correct SWS descriptions. In addition, while semantics are in use to enable tasks such as discovery, interaction between service consumers, providers, and brokering environments is still not supported by semantic message descriptions. On the other hand, the Linked Data approach has produced a set of established principles for sharing and describing data, such as RDF as a representation language and the integral use of dereferenceable URIs. In this paper we propose to apply those principles to expose Web services and Web APIs, and introduce a framework in which service registries as well as the services themselves contribute to the automation of service discovery, so that workload is distributed more efficiently. This is achieved by developing a Linked Data compliant Web services framework in which services communicate with semi-centralised registries but compute their suitability for a given request themselves. All communication among the framework components uses RDF-based message protocols, including service input and output. This framework aims at optimizing load balance and performance by dynamically assembling services at run time in a massively distributed Web environment.
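A minimal sketch of the division of labour described above, in which a semi-centralised registry only forwards requests while each service computes its own suitability; the class names, capability model, and scoring rule are assumptions for illustration:

    # Sketch only: the registry forwards the request; services score themselves.
    class Service:
        def __init__(self, uri, capabilities):
            self.uri = uri
            self.capabilities = set(capabilities)

        def suitability(self, request):
            """Each service computes its own match score for a request."""
            required = set(request["required"])
            return len(required & self.capabilities) / len(required)

    class Registry:
        def __init__(self, services):
            self.services = services

        def discover(self, request, threshold=0.5):
            """Rank services by their self-reported suitability scores."""
            scored = ((s.suitability(request), s.uri) for s in self.services)
            return sorted((x for x in scored if x[0] >= threshold), reverse=True)

    registry = Registry([
        Service("http://example.org/svc/geocode", ["geocoding", "rdf-io"]),
        Service("http://example.org/svc/weather", ["weather", "rdf-io"]),
    ])
    print(registry.discover({"required": ["geocoding", "rdf-io"]}))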