
    A Proposal for Deploying Hybrid Knowledge Bases: the ADOxx-to-GraphDB Interoperability Case

    Graph Database Management Systems brought data model abstractions closer to how humans are used to handling knowledge - i.e., driven by inferences across complex relationship networks rather than by encapsulating tuples under rigid schemata. Another discipline that commonly employs graph-like structures is diagrammatic Conceptual Modeling, where intuitive, graphical means of explicating knowledge are systematically studied and formalized. Considering this common ground, the paper proposes an integration of OWL ontologies with diagrammatic representations as enabled by the ADOxx metamodeling platform. The proposal is based on the RDF-semantics variant of OWL and leads to a particular type of hybrid knowledge base hosted, for proof-of-concept purposes, by the GraphDB system due to its inferencing capabilities. The approach aims for complementarity and integration, providing agile diagrammatic means of creating semantic networks that are amenable to ontology-based reasoning.
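
    A minimal, self-contained sketch (not the paper's implementation) of the kind of hybrid knowledge base the abstract describes: diagram elements exported as RDF triples, enriched with OWL axioms, then expanded under OWL-RDF semantics. GraphDB would perform this inference server-side; here the owlrl library stands in, and the namespace and class names are hypothetical.

```python
# Sketch only: rdflib + owlrl approximate locally what GraphDB's reasoner does server-side.
from rdflib import Graph, Namespace, RDF, RDFS, OWL
import owlrl  # pip install rdflib owlrl

EX = Namespace("http://example.org/model#")  # hypothetical namespace
g = Graph()
g.bind("ex", EX)

# Diagrammatic model content translated to triples (assumed export format).
g.add((EX.BusinessProcess, RDF.type, OWL.Class))
g.add((EX.Activity, RDFS.subClassOf, EX.BusinessProcess))
g.add((EX.ApproveInvoice, RDF.type, EX.Activity))

# OWL-RL closure: ApproveInvoice is now also inferred to be a BusinessProcess.
owlrl.DeductiveClosure(owlrl.OWLRL_Semantics).expand(g)
print((EX.ApproveInvoice, RDF.type, EX.BusinessProcess) in g)  # True
```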

    Building an Expert System for Evaluation of Commercial Cloud Services

    Commercial Cloud services have been increasingly supplied to customers in industry. To facilitate customers' decision making, such as cost-benefit analysis or Cloud provider selection, evaluation of those Cloud services is becoming more and more crucial. However, compared with evaluation of traditional computing systems, more challenges will inevitably appear when evaluating rapidly-changing and user-uncontrollable commercial Cloud services. This paper proposes an expert system for Cloud evaluation that addresses emerging evaluation challenges in the context of Cloud Computing. Based on the knowledge and data accumulated by exploring the existing evaluation work, this expert system has been conceptually validated to be able to give suggestions and guidelines for implementing new evaluation experiments. As such, users can conveniently obtain evaluation experience by using this expert system, which essentially makes existing efforts in Cloud services evaluation reusable and sustainable. Comment: 8 pages, Proceedings of the 2012 International Conference on Cloud and Service Computing (CSC 2012), pp. 168-175, Shanghai, China, November 22-24, 2012
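
    A minimal rule-based sketch, purely illustrative and not the paper's system, of how an expert system can map a Cloud evaluation request to suggested experiment guidelines. The rule conditions and the advice strings are hypothetical examples.

```python
# Illustrative rule-based suggestion engine: match request attributes, return guidelines.
from dataclasses import dataclass

@dataclass
class EvaluationRequest:
    service_type: str          # e.g. "storage", "compute" (assumed categories)
    property_of_interest: str  # e.g. "cost", "performance", "variability"

RULES = [
    (lambda r: r.property_of_interest == "variability",
     "Repeat each benchmark at different times of day; report variance, not just means."),
    (lambda r: r.service_type == "storage" and r.property_of_interest == "performance",
     "Measure throughput and latency separately, across several object sizes."),
    (lambda r: r.property_of_interest == "cost",
     "Combine provider pricing models with measured utilisation for a cost-benefit analysis."),
]

def suggest(request: EvaluationRequest) -> list[str]:
    """Return the guideline of every rule whose condition matches the request."""
    return [advice for condition, advice in RULES if condition(request)]

print(suggest(EvaluationRequest("storage", "performance")))
```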

    Proceedings of the Academic Track at State of the Map 2019 - Heidelberg (Germany), September 21-23, 2019

    State of the Map 2019 featured a full day of academic talks. Building upon the SotM 2019 motto, "Bridging the Map", the Academic Track session aimed to provide a bridge joining together the experience, understanding, ideas, concepts and skills of different groups of researchers, academics and scientists from around the world. In particular, the Academic Track session was meant to connect members of the OpenStreetMap community and the academic community by providing an open passage for the exchange of ideas, communication and opportunities for increased collaboration. These proceedings include 14 abstracts accepted as oral presentations and 6 abstracts presented as posters. Contributions were received from different academic fields, for example geography, remote sensing, computer and information sciences, geomatics, GIScience, the humanities and social sciences, and even from industry actors. We are particularly delighted to have included abstracts from both experienced researchers and students. Overall, it is our hope that these proceedings accurately showcase the ongoing innovation and maturity of scientific investigations and research into OpenStreetMap, showing how, as a research object, it brings multiple research areas together. Our aim is to show how the sum of investigations into issues such as Volunteered Geographic Information, geo-information, and geo-digital processes and representation sheds light on the relations between crowds, real-world applications, technological developments, and scientific research.

    Exploring the effectiveness of BIM for energy performance management of non-domestic buildings

    Following several years of research and development around the subject of BIM, its impact on the design and handover of buildings is now becoming visible across the construction industry. Changes in design procedures and information management methods indicate the potential for greater utilisation of a Common Data Environment in areas other than design. Identifying how these changes are influencing the engineering design process, and adapting this process to the needs and requirements of building performance management, requires consideration of multiple factors, relating mainly to the stakeholders and processes employed in these procedures. This thesis is the culmination of a four-year Engineering Doctorate exploring how BIM could be used to support non-domestic building energy performance management. It begins with an introduction to the research aim and objectives, then presents a thorough review of the subject area and the methodologies employed for the research. The research is split between eight sequential tasks using literature review, interviews, data analysis and case-study application, from which findings, conclusions and key recommendations are made. Findings demonstrate disparity between different information environments and provide insight into the steps necessary to enable connection between BIM and monitored building energy performance information. They highlight the following factors as essential to providing an information environment suitable for BIM-applied performance management: skills in handling information and the interface between various environments; technology capable of producing structured and accurate information, supporting efficient access for interconnection with other environments; and processes that define the standards to which information is classified, stored and modified, with responsibility for its creation and modification made clear throughout the building life-cycle. A prototype method for linking BIM and monitored building energy performance data is demonstrated for a case-study building, encountering many of the technical barriers that prevent replication on other projects. Methodological challenges are identified through review of existing building design and operation procedures. In conclusion, the research found that BIM is still in its infancy, and while efforts are being made to apply it in novel ways to support efficient operation, several challenges remain. Opportunities for building energy performance improvement may be visualised using the modelling environment BIM provides, and the ability to interface with descriptive performance data suggests the future potential for BIM utilisation post-handover.
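
    An illustrative sketch, not the thesis prototype, of the linking step the abstract describes: joining a BIM-derived asset register to monitored energy data through a shared identifier such as an IFC GlobalId. The column names, identifiers and CSV-style values are assumptions.

```python
# Toy example: link BIM model elements to metered readings and normalise by floor area.
import pandas as pd

# Space/equipment records exported from the BIM model (identifier assumed to be an IFC GlobalId).
bim_assets = pd.DataFrame({
    "ifc_guid": ["2O2Fr$t4X7Zf8NOew3FNr2", "1hqIFTRjfV6u9dRmXAPOdq"],
    "zone": ["Lecture Theatre A", "Office 2.14"],
    "floor_area_m2": [180.0, 24.5],
})

# Half-hourly meter readings tagged with the same identifier by the monitoring system (assumed).
readings = pd.DataFrame({
    "ifc_guid": ["2O2Fr$t4X7Zf8NOew3FNr2"] * 2 + ["1hqIFTRjfV6u9dRmXAPOdq"] * 2,
    "timestamp": pd.to_datetime(["2024-01-08 09:00", "2024-01-08 09:30"] * 2),
    "kwh": [4.2, 4.6, 0.8, 0.7],
})

# Join the two information environments and derive a comparable intensity metric.
linked = readings.merge(bim_assets, on="ifc_guid")
linked["kwh_per_m2"] = linked["kwh"] / linked["floor_area_m2"]
print(linked.groupby("zone")["kwh_per_m2"].sum())
```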

    SEEING THE UNSEEN: DELIVERING INTEGRATED UNDERGROUND UTILITY DATA IN THE UK

    In earlier work we proposed a framework to integrate heterogeneous geospatial utility data in the UK. This paper provides an update on the techniques used to resolve semantic and schematic heterogeneities in the UK utility domain. Approaches for data delivery are discussed, including descriptions of three pilot projects, and domain-specific visualization issues are considered. A number of practical considerations are discussed that will impact how any implementation architecture is derived from the integration framework. Considerations of stability, security, currency, operational impact and response time can reveal a number of conflicting constraints. The impacts of these constraints are discussed with respect to either a virtual or a materialised delivery system.
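
    A toy sketch of resolving schematic heterogeneity of the kind the abstract mentions: two provider-specific utility records mapped onto a common target schema. The field names, units and the target schema are hypothetical, chosen only to illustrate the shape of such a mapping.

```python
# Map heterogeneous source records onto one integrated utility schema (illustrative only).
def from_water_provider(record: dict) -> dict:
    return {
        "asset_type": "water_pipe",
        "material": record["PIPE_MAT"],
        "depth_m": record["DEPTH_MM"] / 1000.0,  # this provider stores depth in millimetres
        "geometry": record["GEOM_WKT"],
    }

def from_electricity_provider(record: dict) -> dict:
    return {
        "asset_type": "electricity_cable",
        "material": record.get("conductor", "unknown"),
        "depth_m": record["laid_depth"],          # already in metres
        "geometry": record["shape"],
    }

raw_water = {"PIPE_MAT": "PVC", "DEPTH_MM": 900, "GEOM_WKT": "LINESTRING(0 0, 1 1)"}
print(from_water_provider(raw_water))
```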

    A Data Quality Framework for Process Mining of Electronic Health Record Data

    Reliable research demands data of known quality. This can be very challenging for electronic health record (EHR) based research, where data quality issues can be complex and often unknown. Emerging technologies such as process mining can reveal insights into how to improve care pathways, but only if technological advances are matched by strategies and methods to improve data quality. The aim of this work was to develop a care pathway data quality framework (CP-DQF) to identify, manage and mitigate EHR data quality issues in the context of process mining, using dental EHRs as an example. Objectives: 1) Design a framework implementable within our e-health record research environments; 2) Scale it to further dimensions and sources; 3) Run code to mark the data; 4) Mitigate issues and provide an audit trail. Methods: We reviewed the existing literature covering data quality frameworks for process mining and for data mining of EHRs, and constructed a unified data quality framework that met the requirements of both. We applied the framework to a practical case study mining primary care dental pathways from an EHR covering 41 dental clinics and 231,760 patients in the Republic of Ireland. Results: Applying the framework helped identify many potential data quality issues and mark up every data point affected. This enabled systematic assessment of the data quality issues relevant to mining care pathways. Conclusion: The complexity of data quality in an EHR-data research environment was addressed through a re-usable and comprehensible framework that met the needs of our case study. This structured approach saved time and brought rigor to the management and mitigation of data quality issues. The resulting metadata is being used within cohort selection, experiment and process mining software, so that our research with this data is based on data of known quality. Our framework is a useful starting point for process mining researchers to address EHR data quality concerns.
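
    A small illustrative sketch, not the CP-DQF implementation, of the "run code to mark the data" step: each event-log row is checked against simple data quality rules and every affected data point is flagged rather than silently dropped, leaving an audit trail for later mitigation. The rule set, column names and cut-off date are assumptions.

```python
# Flag data quality issues on an event log so downstream cohort selection stays traceable.
import pandas as pd

events = pd.DataFrame({
    "patient_id": [1, 1, 2],
    "activity": ["exam", "filling", None],
    "timestamp": pd.to_datetime(["2021-03-01", "2019-01-01", "2021-05-02"]),
})

DQ_RULES = {
    "missing_activity": lambda df: df["activity"].isna(),
    "event_before_study_start": lambda df: df["timestamp"] < pd.Timestamp("2020-01-01"),
}

# Mark up every affected data point instead of deleting it.
for issue, rule in DQ_RULES.items():
    events[f"dq_{issue}"] = rule(events)

# Cohort selection can then exclude or correct flagged rows with full traceability.
clean = events[~events.filter(like="dq_").any(axis=1)]
print(events)
print(clean)
```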

    ArrayBridge: Interweaving declarative array processing with high-performance computing

    Scientists are increasingly turning to datacenter-scale computers to produce and analyze massive arrays. Despite decades of database research that extols the virtues of declarative query processing, scientists still write, debug and parallelize imperative HPC kernels even for the most mundane queries. This impedance mismatch has been partly attributed to the cumbersome data loading process; in response, the database community has proposed in situ mechanisms to access data in scientific file formats. Scientists, however, desire more than a passive access method that reads arrays from files. This paper describes ArrayBridge, a bi-directional array view mechanism for scientific file formats that aims to make declarative array manipulations interoperable with imperative file-centric analyses. Our prototype implementation of ArrayBridge uses HDF5 as the underlying array storage library and seamlessly integrates into the SciDB open-source array database system. In addition to fast querying over external array objects, ArrayBridge produces arrays in the HDF5 file format just as easily as it can read from it. ArrayBridge also supports time travel queries from imperative kernels through the unmodified HDF5 API, and automatically deduplicates between array versions for space efficiency. Our extensive performance evaluation in NERSC, a large-scale scientific computing facility, shows that ArrayBridge exhibits performance and I/O scalability statistically indistinguishable from the native SciDB storage engine. Comment: 12 pages, 13 figures
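
    A minimal sketch of the imperative, file-centric side that ArrayBridge interoperates with: writing and reading an HDF5 array through the ordinary HDF5 API (here via h5py). ArrayBridge itself is a SciDB-side mechanism not shown here; the dataset name, shape and chunking are arbitrary choices for illustration.

```python
# Write a chunked HDF5 array, then read a tile the way an imperative kernel would.
import numpy as np
import h5py

with h5py.File("simulation.h5", "w") as f:
    f.create_dataset("temperature", data=np.random.rand(1024, 1024), chunks=(256, 256))

# An HPC kernel reads a slice like this; a declarative array query over the same file
# is what a bi-directional, in-situ view mechanism such as ArrayBridge makes possible.
with h5py.File("simulation.h5", "r") as f:
    tile = f["temperature"][0:256, 0:256]
    print(tile.mean())
```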

    RESTful Web Services Development with a Model-Driven Engineering Approach

    A RESTful web service implementation requires following the constraints inherent to the Representational State Transfer (REST) architectural style, which, being a non-trivial task, often leads to solutions that do not fulfill those requirements properly. Model-driven techniques have been proposed to improve the development of complex applications. In model-driven software development, software is not implemented manually based on informal descriptions, but is partially or completely generated from formal models derived from metamodels. A model-driven approach, materialized in a domain-specific language that integrates the OpenAPI specification, an emerging standard for describing REST services, allows developers to use a design-first approach in the web service development process, focusing on the definition of resources and their relationships and leaving the repetitive code production to the automation provided by model-driven engineering techniques. This also allows the creative coding effort to shift towards resolving complex business rules instead of the tiresome and error-prone create, read, update, and delete operations. The code generation process covers the web service flow, from the establishment and exposure of the endpoints to the definition of database tables.
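
    A toy sketch, not the dissertation's DSL or generator, of the underlying idea: repetitive CRUD endpoint code is produced from a declarative resource description instead of being written by hand. The resource definitions and the Flask-style route stubs emitted are hypothetical.

```python
# Generate CRUD route stubs from a declarative resource model (illustrative only).
RESOURCES = {
    "author": {"fields": {"id": "integer", "name": "string"}},
    "book": {"fields": {"id": "integer", "title": "string", "author_id": "integer"}},
}

TEMPLATE = '''
@app.route("/{name}s", methods=["GET", "POST"])
def {name}_collection():
    ...  # list / create logic generated from the resource model

@app.route("/{name}s/<int:item_id>", methods=["GET", "PUT", "DELETE"])
def {name}_item(item_id):
    ...  # read / update / delete logic generated from the resource model
'''

def generate(resources: dict) -> str:
    """Emit endpoint stubs for every declared resource."""
    return "\n".join(TEMPLATE.format(name=name) for name in resources)

print(generate(RESOURCES))
```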