
    Transitioning Applications to Semantic Web Services: An Automated Formal Approach

    Semantic Web Services have been recognized as a promising technology with huge commercial potential, and they attract significant attention from both industry and the research community. Despite high expectations, industrial take-up of Semantic Web Service technologies has been slower than expected. One of the main reasons is that many systems have been developed without considering the potential of the web for integrating services and sharing resources. Without a systematic methodology and proper tool support, the migration from legacy systems to Semantic Web Service-based systems can be a tedious and expensive process that carries a definite risk of failure. There is an urgent need for strategies that allow the migration of legacy systems to Semantic Web Services platforms, and for tools to support such strategies. In this paper we propose a methodology for transitioning these applications to Semantic Web Services by taking advantage of rigorous mathematical methods. Our methodology allows users to migrate their applications to a Semantic Web Services platform automatically or semi-automatically.

    XQOWL: An Extension of XQuery for OWL Querying and Reasoning

    One of the main aims of the so-called Web of Data is to be able to handle heterogeneous resources where data can be expressed in either XML or RDF. The design of programming languages able to handle both XML and RDF data is a key target in this context. In this paper we present a framework called XQOWL that makes it possible to handle XML and RDF/OWL data with XQuery. XQOWL can be considered an extension of the XQuery language that connects XQuery with SPARQL and OWL reasoners. XQOWL embeds SPARQL queries (via the Jena SPARQL engine) in XQuery and enables calls to OWL reasoners (HermiT, Pellet and FaCT++) from XQuery. It permits combining queries against XML and RDF/OWL resources as well as reasoning with RDF/OWL data. Therefore, input data can be either XML or RDF/OWL, and output data can be formatted in XML (also using the RDF/OWL XML serialization). (In Proceedings PROLE 2014, arXiv:1501.0169)
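
    The combination XQOWL provides (SPARQL over RDF joined with queries over XML, then serialized back to XML) can be illustrated outside XQuery as well. The following is a minimal sketch in Python using rdflib and the standard library; the sample data, namespace and element names are invented for illustration, and this is not the XQOWL API itself.

```python
# Minimal sketch of combining XML and RDF querying in one program,
# analogous to what XQOWL expresses natively inside XQuery
# (assumed sample data and property names; not the XQOWL API).
import xml.etree.ElementTree as ET
from rdflib import Graph

XML_DATA = """<staff>
  <person id="p1"><name>Alice</name></person>
  <person id="p2"><name>Bob</name></person>
</staff>"""

RDF_DATA = """@prefix ex: <http://example.org/> .
ex:p1 ex:role ex:Professor .
ex:p2 ex:role ex:Student ."""

# Query the XML side: collect names keyed by id.
names = {p.get("id"): p.findtext("name")
         for p in ET.fromstring(XML_DATA).findall("person")}

# Query the RDF side with SPARQL: find identifiers of professors.
g = Graph()
g.parse(data=RDF_DATA, format="turtle")
rows = g.query("""
    PREFIX ex: <http://example.org/>
    SELECT ?p WHERE { ?p ex:role ex:Professor }
""")

# Join the two result sets and serialize back to XML.
out = ET.Element("professors")
for (p,) in rows:
    pid = str(p).rsplit("/", 1)[-1]  # http://example.org/p1 -> "p1"
    if pid in names:
        ET.SubElement(out, "professor", id=pid).text = names[pid]

print(ET.tostring(out, encoding="unicode"))
# <professors><professor id="p1">Alice</professor></professors>
```

    In XQOWL the same join is written directly in XQuery, with the SPARQL query embedded in the expression and OWL reasoners invoked as function calls.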

    Validation Framework for RDF-based Constraint Languages

    In this thesis, a validation framework is introduced that makes it possible to consistently execute RDF-based constraint languages on RDF data and to formulate constraints of any type. The framework reduces the representation of constraints to the absolute minimum, is based on formal logics, and consists of a small, lightweight vocabulary; it ensures consistent validation results and enables constraint transformations for each constraint type across RDF-based constraint languages.
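
    As a rough illustration of what "executing a constraint on RDF data" means, the sketch below checks one invented constraint type (a minimum cardinality) by rewriting it as a SPARQL query that returns the violating resources. It uses Python and rdflib and is not the vocabulary or formalism introduced in the thesis.

```python
# Illustrative sketch only: one constraint type (minimum cardinality)
# checked over RDF data by rewriting it as a SPARQL violation query.
# Property names and data are invented.
from rdflib import Graph

DATA = """@prefix ex: <http://example.org/> .
ex:book1 a ex:Book ; ex:author ex:alice .
ex:book2 a ex:Book ."""   # book2 violates "every Book has >= 1 author"

# The constraint expressed as a query that returns violating resources.
VIOLATIONS = """
PREFIX ex: <http://example.org/>
SELECT ?book WHERE {
  ?book a ex:Book .
  FILTER NOT EXISTS { ?book ex:author ?a }
}"""

g = Graph()
g.parse(data=DATA, format="turtle")
for (book,) in g.query(VIOLATIONS):
    print(f"constraint violated: {book} has no ex:author")
```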

    Creation and extension of ontologies for describing communications in the context of organizations

    Thesis submitted to the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa, in partial fulfillment of the requirements for the degree of Master in Computer Science. The use of ontologies is nowadays a sufficiently mature and solid field of work to be considered an efficient alternative for knowledge representation. With the continuing growth of the Semantic Web, this alternative can be expected to become even more prominent in the near future. In the context of a collaboration established between FCT-UNL and the R&D department of a national software company, a new solution entitled ECC – Enterprise Communications Center was developed. This application provides a solution to manage the communications that enter, leave or are made within an organization, and includes intelligent classification of communications and conceptual search techniques over a communications repository. As specificity may be the key to obtaining acceptable results with these processes, the use of ontologies becomes crucial to represent the existing knowledge about the specific domain of an organization. This work produced a core set of ontologies capable of expressing the general context of the communications made in an organization, together with a methodology based on a series of concrete steps that provides an effective capability for extending the ontologies to any business domain. By applying these steps, the conceptualization and setup effort in new organizations and business domains is minimized. The adequacy of the chosen core set of ontologies and of the specified methodology is demonstrated in this thesis through its application to a real case study, which allowed us to work with the different types of sources considered in the methodology and the activities that support its construction and evolution.
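
    As a toy illustration of the extension step described above (hooking a business-specific concept under a stable core hierarchy), the following Python/rdflib sketch declares a small core communications hierarchy and one domain-specific subclass. All class names and namespaces are invented and are not the ontologies developed in the thesis.

```python
# Hypothetical sketch: a "core" communications ontology extended with a
# domain-specific concept, without modifying the core itself.
from rdflib import Graph, Namespace, RDF, RDFS, OWL

CORE = Namespace("http://example.org/core#")
DOM = Namespace("http://example.org/insurance#")  # one concrete business domain

g = Graph()
g.bind("core", CORE)
g.bind("dom", DOM)

# Core classes: the general context of organizational communications.
for cls in (CORE.Communication, CORE.Email, CORE.PhoneCall, CORE.Sender):
    g.add((cls, RDF.type, OWL.Class))
g.add((CORE.Email, RDFS.subClassOf, CORE.Communication))
g.add((CORE.PhoneCall, RDFS.subClassOf, CORE.Communication))

# Domain extension step: the new business-specific class is attached
# under the core hierarchy instead of changing the core ontology.
g.add((DOM.ClaimNotification, RDF.type, OWL.Class))
g.add((DOM.ClaimNotification, RDFS.subClassOf, CORE.Email))

print(g.serialize(format="turtle"))
```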

    Using the ResearchEHR platform to facilitate the practical application of the EHR standards

    Possibly the most important requirement to support cooperative work among health professionals and institutions is the ability to share EHRs in a meaningful way, and it is widely acknowledged that standardization of data and concepts is a prerequisite for achieving semantic interoperability in any domain. Different international organizations are working on the definition of EHR architectures, but the lack of tools that implement them hinders their broad adoption. In this paper we present ResearchEHR, a software platform whose objective is to facilitate the practical application of EHR standards as a way of reaching the desired semantic interoperability. The platform is suitable not only for developing new systems but also for increasing the standardization of existing ones. The work reported here describes how the platform allows for the editing, validation, and search of archetypes, converts legacy data into normalized archetype-based extracts, generates applications from archetypes and, finally, transforms archetypes and data extracts into other EHR standards. We also describe how ResearchEHR has made possible the application of the CEN/ISO 13606 standard in a real environment and the lessons learnt from this experience. © 2011 Elsevier Inc. This work has been partially supported by the Spanish Ministry of Science and Innovation under grants TIN2010-21388-C02-01 and TIN2010-21388-C02-02, and by the Health Institute Carlos III through the RETICS Combiomed, RD07/0067/2001. Our most sincere thanks to the Hospital of Fuenlabrada in Madrid, including its Medical Director Pablo Serrano together with Marta Terron and Luis Lechuga, for their support and work during the development of the medications reconciliation project. (Maldonado Segura, JA.; Martínez Costa, C.; Moner Cano, D.; Menárguez-Tortosa, M.; Boscá Tomás, D.; Miñarro Giménez, JA.; Fernández-Breis, JT. et al. (2012). Using the ResearchEHR platform to facilitate the practical application of the EHR standards. Journal of Biomedical Informatics, 45(4):746-762. doi:10.1016/j.jbi.2011.11.004)
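
    One of the steps mentioned above, converting legacy data into normalized archetype-based extracts, can be pictured with a small sketch: a flat legacy record is rewritten into an XML extract whose slots are dictated by an archetype-like template. The element names and the template below are invented; they are not the CEN/ISO 13606 reference model or the ResearchEHR tooling.

```python
# Rough sketch of the "legacy data -> normalized extract" step.
# All names here are hypothetical, not the real 13606 reference model.
import xml.etree.ElementTree as ET

legacy_row = {"patient_id": "12345", "drug": "Metformin", "dose_mg": "850"}

# The "archetype" fixes which slots the extract must contain.
MEDICATION_ARCHETYPE = ("subject", "medication", "dose")

extract = ET.Element("extract", archetype="medication-order.v1")
values = {"subject": legacy_row["patient_id"],
          "medication": legacy_row["drug"],
          "dose": legacy_row["dose_mg"] + " mg"}
for slot in MEDICATION_ARCHETYPE:
    ET.SubElement(extract, slot).text = values[slot]

print(ET.tostring(extract, encoding="unicode"))
```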

    Extensible metadata repository for information systems

    Thesis submitted to the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa, in partial fulfillment of the requirements for the degree of Master in Computer Science. Information Systems are, usually, systems that have a strong integration component, and some of those systems rely on integration solutions that are based on metadata (data that describes data). In that situation, there is a need to deal with metadata as if it were "normal" information. For that reason, a metadata repository that ensures integrity, storage and validity, and that eases the processes of information integration in the information system, is a wise choice. There are several metadata repositories available on the market, but none of them is prepared to deal with the needs of information systems, or is generic enough to deal with the multitude of situations/domains of information and with the necessary integration features. In the SESS project (a European Space Agency project), a generic metadata repository was developed, based on XML technologies. This repository provided tools for information integration, validity, storage, sharing and import, as well as system and data integration, but it required the use of fixed syntactic rules that were stored in the content of the XML files. This causes severe problems when trying to import documents from external data sources (sources unaware of these syntactic rules). In this thesis, a metadata repository was developed that provides the same mechanisms of storage, integrity and validity, but is especially focused on the easy integration of metadata from any type of external source (in XML format) and provides an environment that simplifies the reuse of already existing types of metadata to build new types of metadata, all without having to modify the documents it stores. The repository stores XML documents (known as Instances), which are instances of a Concept; a Concept defines an XML structure that validates its Instances. To deal with reuse, a special unit named Fragment allows defining an XML structure (which can be created by composing other Fragments) that can be reused by Concepts when defining their own structure. Elements of the repository (Instances, Concepts and Fragments) have an identifier based on (and compatible with) URIs, named Metadata Repository Identifier (MRI). These identifiers, as well as management information (including relations), are managed by the repository, without the need to use fixed syntactic rules, easing integration. A set of tests using documents from the SESS project and from the software house ITDS was used to successfully validate the repository against the thesis objectives of easy integration and promotion of reuse.
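
    The Concept/Instance relationship described above (a Concept defines an XML structure that validates its Instances) can be sketched with standard XML Schema validation. The example below uses Python with lxml; the schema and document are toy examples, and Fragments and MRIs are omitted.

```python
# Minimal sketch of the Concept/Instance idea: a Concept supplies an XML
# Schema and an Instance is validated against it. Toy schema and data;
# not the repository's actual model.
from lxml import etree

CONCEPT_SCHEMA = b"""<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="measurement">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="sensor" type="xs:string"/>
        <xs:element name="value" type="xs:decimal"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>"""

INSTANCE = b"""<measurement>
  <sensor>thermocouple-1</sensor>
  <value>21.5</value>
</measurement>"""

concept = etree.XMLSchema(etree.fromstring(CONCEPT_SCHEMA))
instance = etree.fromstring(INSTANCE)
print("instance conforms to concept:", concept.validate(instance))  # True
```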

    Geospatial data harmonization from regional level to European level: a use case in forest fire data

    Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies. Geospatial data harmonization is becoming more and more important for increasing the interoperability of heterogeneous data derived from various sources in spatial data infrastructures. To address this harmonization issue, we present the current status of data availability among different communities, languages, and administrative scales, from the regional to the national and European levels. With a use case on forest data models in Europe, the interoperability of burned area data from Europe and from the Valencia Community in Spain was tested and analyzed at the syntactic, schematic and semantic levels. We suggest approaches for achieving a higher chance of data interoperability to guide forest domain experts in forest fire analysis. For testing syntactic interoperability, a common platform in the context of formats and web services was examined. We found that establishing OGC-standard web services in combination with GIS software applications that support various formats and web services can increase the chance of achieving syntactic interoperability between geospatial datasets derived from different sources. For testing schematic and semantic interoperability, an ontology-based schema mapping approach was taken to transform a regional data model into a European data model at the conceptual level. The Feature Manipulation Engine enabled various types of data transformation from source to target attributes to achieve schematic interoperability. Ontological modelling in Protégé helped identify common concepts between the source and target data models, especially in cases where matching attributes were not found at the schematic level. Establishment of a domain ontology was explored to reach common ground between application ontologies and achieve a higher level of semantic interoperability.
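
    The ontology-based mapping idea (anchoring a regional attribute and a European attribute to one shared domain concept, and deriving the schema mapping from that common anchor) can be sketched in a few lines of Python with rdflib. All attribute and concept names below are invented; the thesis itself used Protégé and FME rather than code like this.

```python
# Toy sketch of ontology-based schema mapping: both schema attributes
# point at one shared domain concept, and the mapping is derived from it.
# Names are hypothetical.
from rdflib import Graph, Namespace, RDFS

REG = Namespace("http://example.org/valencia#")
EU = Namespace("http://example.org/european-model#")
DOM = Namespace("http://example.org/forest-fire#")

g = Graph()
# Both attributes are declared as representations of the same concept.
g.add((REG.superficie_quemada, RDFS.seeAlso, DOM.BurnedArea))
g.add((EU.burnt_area_ha, RDFS.seeAlso, DOM.BurnedArea))

def map_attribute(source_attr):
    """Find target attributes that share a domain concept with source_attr."""
    concepts = set(g.objects(source_attr, RDFS.seeAlso))
    return [s for c in concepts for s in g.subjects(RDFS.seeAlso, c)
            if s != source_attr]

print(map_attribute(REG.superficie_quemada))  # -> the EU burnt_area_ha attribute
```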