Integrating heterogeneous web service styles with flexible semantic web services groundings
Semantic web services are touted as a means to integrate web services inside and outside the enterprise, but while current semantic web service frameworks (including OWL-S [1], SAWSDL, and WSMO [2]) assume a homogeneous ecosystem of SOAP services and XML serialisations, growing numbers of real services are implemented using XML-RPC and RESTful interfaces, and non-XML serialisations such as JSON. Semantic services platforms based on OWL-S and WSMO use XML mapping languages to translate between an XML serialisation of the ontology data and the on-the-wire messages exchanged with the web service, a process referred to as grounding. This XML mapping approach suffers from two problems: it cannot address the growing number of non-SOAP, non-XML services being deployed on the Web, and it requires the modeller creating the semantic web service descriptions to work with the serialisation of the service ontology and a syntactic mapping language, in addition to the knowledge representation language used for representing the semantic service ontologies and descriptions. Our approach draws the service's interface into the ontology: we defin
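The contrast drawn in this abstract, grounding via XML mapping languages versus handling non-XML services directly, can be illustrated with a minimal sketch: lifting a JSON response from a RESTful service straight into RDF-style triples, with no intermediate XML serialisation or syntactic mapping step. The payload, field names, and `ex:` vocabulary terms below are hypothetical, not taken from the paper.

```python
import json

# Hypothetical JSON payload returned by a RESTful service
payload = '{"station": "KLAX", "tempC": 21.5}'

# Minimal "lifting": map JSON fields directly to RDF-style triples,
# bypassing the XML serialisation + XML mapping language that
# OWL-S/WSMO-style groundings assume.
FIELD_TO_PROPERTY = {
    "station": "ex:observedAt",          # hypothetical vocabulary term
    "tempC": "ex:temperatureCelsius",    # hypothetical vocabulary term
}

def lift(json_text, subject="ex:obs1"):
    """Convert a flat JSON object into (subject, property, value) triples."""
    data = json.loads(json_text)
    return [(subject, FIELD_TO_PROPERTY[k], v)
            for k, v in data.items() if k in FIELD_TO_PROPERTY]

for triple in lift(payload):
    print(triple)
```

A real grounding would of course also need the reverse "lowering" direction and handling of nested structures; the sketch only shows why a non-XML serialisation poses no inherent obstacle.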
Blending the physical and the digital through conceptual spaces
The rise of the Internet facilitates the ever increasing growth of virtual, i.e. digital, spaces which co-exist with the physical environment, i.e. the physical space. This raises the question of how physical and digital spaces can interact synchronously. While sensors provide a means to continuously observe the physical space, several issues arise when mapping sensor data streams to digital spaces, for instance structured linked data formally represented through symbolic Semantic Web (SW) standards such as OWL or RDF. The challenge is to bridge between symbolic knowledge representations and the measured data collected by sensors. In particular, one needs to map a given set of arbitrary sensor data to a particular set of symbolic knowledge representations, e.g. ontology instances. This task is particularly challenging due to the vast variety of possible sensor measurements. Conceptual Spaces (CS) provide a means to represent knowledge in geometrical vector spaces, enabling the computation of similarities between knowledge entities by means of distance metrics. We propose an approach which allows symbolic concepts to be refined as CS and ontology instances to be grounded to so-called prototypical members, which are vectors in the CS. By computing similarities, in terms of spatial distances, between a given set of sensor measurements and a finite set of CS members, the most similar instance can be identified. In this way, we provide a means to bridge between the physical space, as observed by sensors, and the digital space made up of symbolic representations.
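The core matching step described above, finding the ontology instance whose prototypical member is closest to a sensor measurement, can be sketched in a few lines. The two-dimensional conceptual space, the prototype vectors, and the `ex:` instance names are illustrative assumptions, not from the paper.

```python
import math

# Hypothetical prototypical members in a 2-D conceptual space
# (quality dimensions: temperature in degrees C, relative humidity in %).
prototypes = {
    "ex:SunnyDay": (28.0, 30.0),
    "ex:RainyDay": (15.0, 90.0),
    "ex:FoggyMorning": (8.0, 95.0),
}

def nearest_instance(measurement):
    """Return the ontology instance whose prototype vector lies closest
    (Euclidean distance) to the given sensor measurement."""
    return min(prototypes,
               key=lambda name: math.dist(measurement, prototypes[name]))

print(nearest_instance((26.0, 35.0)))
```

In practice each quality dimension would be weighted and normalised, since raw units (degrees versus percentage points) distort plain Euclidean distance; the sketch omits this for brevity.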
Ontological Foundations for Scholarly Debate Mapping Technology
Mapping scholarly debates is an important genre of what can be called Knowledge Domain Analytics (KDA) technology, i.e. technology which combines both quantitative and qualitative methods of analysing specialist knowledge domains. However, current KDA technology research has emerged from diverse traditions and thus lacks a common conceptual foundation. This paper reports on the design of a KDA ontology that aims to provide this foundation. The paper then describes the argumentation extensions to the ontology for supporting scholarly debate mapping as a special form of KDA, and demonstrates its expressive capabilities using a case-study debate.
Modelling Scholarly Debate: Conceptual Foundations for Knowledge Domain Analysis Technology
Knowledge Domain Analysis (KDA) research investigates computational support for users who desire to understand and/or participate in the scholarly inquiry of a given academic knowledge domain. KDA technology supports this task by allowing users to identify important features of the knowledge domain such as the predominant research topics, the experts in the domain, and the most influential researchers. This thesis develops the conceptual foundations to integrate two identifiable strands of KDA research: Library and Information Science (LIS), which commits to a citation-based Bibliometrics paradigm, and Knowledge Engineering (KE), which adopts an ontology-based Conceptual Modelling paradigm. A key limitation of work to date is its inability to provide machine-readable models of the debate in academic knowledge domains. This thesis argues that KDA tools should support users in understanding the features of scholarly debate as a prerequisite for engaging with their chosen domain.
To this end, the thesis proposes a Scholarly Debate Ontology which specifies the formal vocabulary for constructing representations of debate in academic knowledge domains. The thesis also proposes an analytical approach that is used to automatically detect clusters of viewpoints as particularly important features of scholarly debate. This approach combines aspects of both the Conceptual Modelling and Bibliometrics paradigms. That is, the method combines an ontological focus on semantics and a graph-theoretical focus on structure in order to identify and reveal new insights about viewpoint-clusters in a given knowledge domain. This combined ontological and graph-theoretical approach is demonstrated and evaluated by modelling and analysing debates in two domains. The thesis reflects on the strengths and limitations of this approach, and considers the directions which this work opens up for future research into KDA technology.
Two-fold Semantic Web service matchmaking: applying ontology mapping for service discovery
Semantic Web Services (SWS) aim at the automated discovery and orchestration of Web services on the basis of comprehensive, machine-interpretable semantic descriptions. Since SWS annotations are usually created by distinct SWS providers, semantic-level mediation, i.e. mediation between concurrent semantic representations, is a key requirement for SWS discovery. Since semantic-level mediation aims at enabling interoperability across heterogeneous semantic representations, it can be perceived as a particular instantiation of the ontology mapping problem. While recent SWS matchmakers usually rely on manual alignments or subscription to a common ontology, we propose a two-fold SWS matchmaking approach, consisting of (a) a general-purpose semantic-level mediator and (b) comparison and matchmaking of SWS capabilities. Our semantic-level mediation approach enables the implicit representation of similarities across distinct SWS by grounding service descriptions in so-called Mediation Spaces (MS). Given a set of SWS and their respective groundings, an SWS matchmaker automatically computes instance similarities across distinct SWS ontologies and matches the request to the most suitable SWS. A prototypical application illustrates our approach.
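The matchmaking step described above, comparing a request against capabilities grounded in a shared Mediation Space, can be sketched as a ranking by spatial distance. The service names, the three-dimensional space, and the grounding vectors below are hypothetical illustrations, not the paper's actual data model.

```python
import math

# Hypothetical groundings: each SWS capability, possibly described in a
# different ontology, is grounded to a vector in one shared Mediation
# Space, so cross-ontology similarity reduces to spatial distance.
service_groundings = {
    "sws:HotelBooking": (0.9, 0.1, 0.0),
    "sws:FlightBooking": (0.1, 0.9, 0.1),
    "sws:CarRental": (0.2, 0.2, 0.8),
}

def match(request_vector):
    """Rank services by Euclidean distance between the grounded request
    and each grounded capability; the first entry is the best match."""
    return sorted(service_groundings,
                  key=lambda s: math.dist(request_vector,
                                          service_groundings[s]))

print(match((0.85, 0.15, 0.05))[0])
```

The point of the shared space is that the requester and the providers never need a direct ontology alignment: each side only supplies its own grounding.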
Towards two-stage service representation and reasoning: from lightweight annotations to comprehensive semantics
Semantics are used to mark up a wide variety of data-centric Web resources, but are not used in significant numbers to annotate online functionality, despite considerable research dedicated to Semantic Web Services (SWS). This has led to the emergence of a new Linked Services approach with simplified service models that are less costly to produce, which targets a wider audience and allows even non-SWS developers to annotate services. However, such models merely aim at enabling semantic search by humans or automated service clustering, rather than automation of service tasks such as discovery or orchestration. Thus, more expressive solutions are still required to achieve automated discovery and orchestration of services. In this paper, we describe our investigation into combining the strengths of two distinct approaches to modelling Semantic Web Services, 'lightweight' Linked Services and 'heavyweight' SWS automation, into a coherent SWS framework. In our vision, such integration is achieved by means of model cross-referencing and model transformation and augmentation.
Autonomous Matchmaking Web Services
Current Semantic Web Services research investigates how to dynamically discover, assemble, and invoke Web services. Despite many research efforts, Semantic Web Services are still not fully recognized in industry. One important reason is the dissevered description layers of syntax and semantics: semantics is only useful for a service broker to discover services, whereas service requesters still need to invoke services based on syntactic descriptions. In this paper, we view semantics from another angle to reform the Web service framework completely (even for input and output messages during invocation) by using only RDF and Linked Open Data. We introduce Autonomous Matchmaking Web Services, in which Web services broker themselves to notify the service registry whether they are suitable for the requesters. This framework is designed to work more efficiently for dynamically assembling services at run time in a massively distributed environment.
Two-staged approach for semantically annotating and brokering TV-related services
Nowadays, more and more distributed digital TV and TV-related resources are published on the Web, such as Electronic Personal TV Guide (EPG) data. To enable applications to access these resources easily, TV resource data is commonly provided through Web service technologies. The huge variety of data related to the TV domain, and the wide range of services that provide it, raises the need for a broker to discover, select, and orchestrate services to satisfy the runtime requirements of the applications that invoke them. The variety of data and the heterogeneous nature of service capabilities make this a challenging domain for automated web-service discovery and composition. To address these issues, we propose a two-stage service annotation approach, realised by integrating Linked Services and the IRS-III semantic web services framework, to complete the lifecycle of service annotation, publishing, deployment, discovery, orchestration, and dynamic invocation. This approach satisfies both developers' and applications' requirements to use Semantic Web Services (SWS) technologies manually and automatically.
Comprehensive service semantics and light-weight Linked Services: towards an integrated approach
Semantics are used to mark up a wide variety of data-centric Web resources but are not used in significant numbers to annotate online services, despite considerable research dedicated to Semantic Web Services (SWS). This is partially due to the complexity of comprehensive SWS models aiming at the automation of service-oriented tasks such as discovery, composition, and execution. This has led to the emergence of a new approach dubbed Linked Services, which is based on simplified service models that are easier to populate and interpret and accessible even to non-experts. However, such Minimal Service Models so far do not cover all execution-related aspects of service automation and merely aim at enabling more comprehensive service search and clustering. Thus, in this paper, we describe our approach of combining the strengths of both distinct approaches to modelling Semantic Web Services, 'lightweight' Linked Services and 'heavyweight' SWS automation, into a coherent SWS framework. In addition, an implementation of our approach based on existing SWS tools, together with a proof-of-concept prototype used within the EU project NoTube, is presented.
Would raising the total cholesterol diagnostic cut-off from 7.5 mmol/L to 9.3 mmol/L improve the detection rate of patients with monogenic familial hypercholesterolaemia?
A previous report suggested that 88% of individuals in the general population with total cholesterol (TC) >9.3 mmol/L have familial hypercholesterolaemia (FH). We tested this hypothesis in a cohort of 4896 UK civil servants, mean (SD) age 44 (±6) years, using next-generation sequencing to achieve a comprehensive genetic diagnosis. 25 (0.5%) participants (mean age 49.2 years) had baseline TC >9.3 mmol/L, and overall we found an FH-causing mutation in the LDLR gene in seven (28%) subjects. The detection rate increased to 39% after excluding eight participants with triglyceride levels over 2.3 mmol/L, and reached 75% in those with TC >10.4 mmol/L. By extrapolation, the detection rate would be ~25% if all participants with TC >8.6 mmol/L (2.5 standard deviations from the mean) were included. Based on the 1/500 FH frequency, 30% of all FH cases in this cohort would be missed using the 9.3 mmol/L cut-off. Given that an overall detection rate of 25% is considered economically acceptable, these data suggest that a diagnostic TC cut-off of 8.6 mmol/L, rather than 9.3 mmol/L, would be clinically useful for detecting FH in the general population.