    Investigating Interoperability Problems in Pharmacy Automation: A Case Study in Saudi Arabia

    The aim of this case study is to investigate the nature of interoperability problems in hospital systems automation. One of the advanced healthcare providers in Saudi Arabia is the host of the study. The interaction between the pharmacy system and automated medication dispensing cabinets is the focus of the case study. The research method is a detailed case study in which multiple data collection methods are used. The processes of the inpatient pharmacy system are modelled using Business Process Model and Notation (BPMN). The collected data are analysed to study the different interoperability problems. This paper presents a framework that classifies health informatics interoperability implementation problems into technical, semantic, and organisational levels. The detailed study of the interoperability problems in this case illustrates the challenges to the adoption of health information system automation, which could help other healthcare organisations in their system automation projects.
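
    To make the three-level classification concrete, here is a minimal Python sketch of how observed issues might be tagged by level; the example issues and all names are hypothetical illustrations, not findings from the study.

        from dataclasses import dataclass
        from enum import Enum

        # The three levels named in the framework; the example issues below
        # are hypothetical, not taken from the case study.
        class InteropLevel(Enum):
            TECHNICAL = "technical"            # e.g. interface and message-transport faults
            SEMANTIC = "semantic"              # e.g. mismatched medication codes between systems
            ORGANISATIONAL = "organisational"  # e.g. unclear workflow ownership or policy gaps

        @dataclass
        class InteropIssue:
            description: str
            level: InteropLevel

        issues = [
            InteropIssue("Cabinet rejects malformed pharmacy messages", InteropLevel.TECHNICAL),
            InteropIssue("Pharmacy and cabinet use different drug identifiers", InteropLevel.SEMANTIC),
            InteropIssue("No agreed process for reconciling stock discrepancies", InteropLevel.ORGANISATIONAL),
        ]

        for issue in issues:
            print(f"[{issue.level.value}] {issue.description}")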

    Linked education: interlinking educational resources and the web of data

    Research on the interoperability of technology-enhanced learning (TEL) repositories throughout the last decade has led to a fragmented landscape of competing approaches, such as metadata schemas and interface mechanisms. However, Web-scale integration of resources has so far not been achieved, mainly due to the lack of take-up of shared principles, datasets, and schemas. On the other hand, the Linked Data approach has emerged as the de facto standard for sharing data on the Web and offers great potential for solving interoperability issues in the field of TEL. In this paper, we describe a general approach to exploiting the wealth of TEL data that already exists on the Web by exposing it as Linked Data and by applying automated enrichment and interlinking techniques to provide rich and well-interlinked data for the educational domain. This approach has been implemented in the context of the mEducator project, where data from a number of open TEL data repositories has been integrated, exposed, and enriched following Linked Data principles.
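
    As a minimal sketch of the exposure, enrichment, and interlinking steps described above, the following Python snippet uses rdflib to publish one learning-resource record as RDF; the namespace, URIs, and resource data are hypothetical, not taken from mEducator.

        from rdflib import Graph, Literal, Namespace, URIRef
        from rdflib.namespace import DCTERMS, OWL, RDF

        # Hypothetical namespace and identifiers, for illustration only.
        EX = Namespace("http://example.org/tel/")

        g = Graph()
        g.bind("dcterms", DCTERMS)
        resource = EX["resource/42"]

        # Exposure: describe a learning resource with Dublin Core terms.
        g.add((resource, RDF.type, DCTERMS.BibliographicResource))
        g.add((resource, DCTERMS.title, Literal("Introduction to Pharmacology")))

        # Enrichment: point the subject at an external Linked Data source.
        g.add((resource, DCTERMS.subject, URIRef("http://dbpedia.org/resource/Pharmacology")))

        # Interlinking: assert that a record in another repository
        # describes the same resource.
        g.add((resource, OWL.sameAs, EX["mirror/42"]))

        print(g.serialize(format="turtle"))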

    Linked Data - the story so far

    The term “Linked Data” refers to a set of best practices for publishing and connecting structured data on the Web. These best practices have been adopted by an increasing number of data providers over the last three years, leading to the creation of a global data space containing billions of assertions: the Web of Data. In this article, the authors present the concept and technical principles of Linked Data, and situate these within the broader context of related technological developments. They describe progress to date in publishing Linked Data on the Web, review applications that have been developed to exploit the Web of Data, and map out a research agenda for the Linked Data community as it moves forward.
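
    One of the best practices the article covers is that URIs should dereference to structured data via HTTP content negotiation. The following sketch, using only the Python standard library and DBpedia as a well-known public example, shows the idea (it requires network access).

        import urllib.request

        # Ask a Linked Data server for an RDF (Turtle) representation of a
        # resource via the HTTP Accept header. DBpedia answers with a
        # redirect to the data document describing the resource.
        uri = "http://dbpedia.org/resource/Berlin"
        req = urllib.request.Request(uri, headers={"Accept": "text/turtle"})

        with urllib.request.urlopen(req) as resp:
            print(resp.headers.get("Content-Type"))
            print(resp.read(400).decode("utf-8", errors="replace"))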

    Dynamic Model-based Management of Service-Oriented Infrastructure

    Models are an effective tool for systems and software design. They allow software architects to abstract away non-relevant details. Those qualities are also useful for the technical management of networks, systems, and software, such as those that compose service-oriented architectures. Models can provide a set of well-defined abstractions over the distributed, heterogeneous service infrastructure that enable its automated management. We propose to use the managed system as a source of dynamically generated runtime models, and to decompose management processes into compositions of model transformations. We have created an autonomic service deployment and configuration architecture that obtains, analyzes, and transforms system models to apply the required actions, while remaining oblivious to low-level details. An instrumentation layer automatically builds these models and translates the planned management actions into operations on the system. We illustrate these concepts with a distributed service update operation.
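
    The core idea, management processes decomposed into model transformations, can be sketched in a few lines of Python; every name below is hypothetical and the transformation is deliberately simplistic.

        from dataclasses import dataclass

        @dataclass
        class ServiceModel:
            """Runtime model generated from the managed system (hypothetical)."""
            name: str
            version: str
            node: str

        @dataclass
        class DeployAction:
            """Low-level action derived from the model."""
            command: str

        def plan_update(current: ServiceModel, target_version: str) -> list[DeployAction]:
            """Transform a system model into a sequence of deployment actions."""
            if current.version == target_version:
                return []
            return [
                DeployAction(f"stop {current.name} on {current.node}"),
                DeployAction(f"install {current.name}:{target_version} on {current.node}"),
                DeployAction(f"start {current.name} on {current.node}"),
            ]

        model = ServiceModel("billing-service", "1.2", "node-07")
        for action in plan_update(model, "1.3"):
            print(action.command)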

    Ontology of core data mining entities

    In this article, we present OntoDM-core, an ontology of core data mining entities. OntoDM-core defines the most essential data mining entities in a three-layered ontological structure comprising a specification, an implementation, and an application layer. It provides a representational framework for describing the mining of structured data, and in addition provides taxonomies of datasets, data mining tasks, generalizations, data mining algorithms, and constraints, based on the type of data. OntoDM-core is designed to support a wide range of applications and use cases, such as the semantic annotation of data mining algorithms, datasets, and results; the annotation of QSAR studies in the context of drug discovery investigations; and the disambiguation of terms in text mining. The ontology has been thoroughly assessed following best practices in ontology engineering, is fully interoperable with many domain resources, and is easy to extend.
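
    The three-layered structure can be illustrated with a small rdflib sketch in Python; the class and property names below are hypothetical stand-ins in the spirit of the abstract, not OntoDM-core's actual identifiers.

        from rdflib import Graph, Namespace
        from rdflib.namespace import OWL, RDF, RDFS

        # Hypothetical namespace; real OntoDM-core URIs differ.
        ONTO = Namespace("http://example.org/ontodm-sketch#")

        g = Graph()
        g.bind("onto", ONTO)

        # Specification layer: what a data mining algorithm is, abstractly.
        g.add((ONTO.DataMiningAlgorithm, RDF.type, OWL.Class))

        # Implementation layer: a concrete executable realisation of it.
        g.add((ONTO.AlgorithmImplementation, RDF.type, OWL.Class))
        g.add((ONTO.implements, RDF.type, OWL.ObjectProperty))
        g.add((ONTO.implements, RDFS.domain, ONTO.AlgorithmImplementation))
        g.add((ONTO.implements, RDFS.range, ONTO.DataMiningAlgorithm))

        # Application layer: an execution of an implementation on a dataset.
        g.add((ONTO.AlgorithmExecution, RDF.type, OWL.Class))
        g.add((ONTO.executes, RDF.type, OWL.ObjectProperty))
        g.add((ONTO.executes, RDFS.domain, ONTO.AlgorithmExecution))
        g.add((ONTO.executes, RDFS.range, ONTO.AlgorithmImplementation))

        print(g.serialize(format="turtle"))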

    An Analysis of Using Expert Systems and Intelligent Agents for the Virtual Library Project at the Naval Surface Warfare Center-Carderock Division

    The Virtual Library Project at the Naval Surface Warfare Center/Carderock Division (NSWC/CD) is being developed to facilitate the incorporation and use of library documents via the Internet. These documents typically relate to the design and manufacture of ships for the U.S. Navy Fleet. As such, the libraries will store documents that contain not only text but also images, graphs, and design configurations. Because of the dynamic nature of digital documents, particularly those related to design, rapid and effective cataloging of these documents becomes challenging. We conducted a research study to analyze the use of expert systems and intelligent agents to support the function of cataloging digital documents. This chapter provides an overview of past research in the use of expert systems and intelligent agents for cataloging digital documents and discusses our recommendations based on NSWC/CD's requirements.
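
    A rule-based (expert-system style) cataloger of the kind the chapter analyses can be sketched very simply in Python; the keyword rules and categories below are hypothetical, not the project's actual catalog scheme.

        # Each rule maps trigger keywords to a catalog category (hypothetical).
        RULES = [
            (("hull", "propulsion", "ship"), "Naval Architecture"),
            (("weld", "alloy", "fabrication"), "Manufacturing"),
            (("sonar", "acoustic"), "Sensors"),
        ]

        def catalog(text: str) -> list[str]:
            """Assign catalog categories by matching keyword rules against a document."""
            text = text.lower()
            return [category for keywords, category in RULES
                    if any(word in text for word in keywords)]

        doc = "Design review of hull welds and propulsion shaft alloys."
        print(catalog(doc))   # ['Naval Architecture', 'Manufacturing']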

    Language technologies and the evolution of the Semantic Web

    The availability of huge amounts of semantic markup on the Web promises to enable a quantum leap in the level of support available to Web users for locating, aggregating, sharing, interpreting, and customizing information. While we cannot claim that a large-scale Semantic Web already exists, a number of applications have been produced which generate and exploit semantic markup to provide advanced search and querying functionality, and to allow the visualization and management of heterogeneous, distributed data. While these tools provide evidence of the feasibility and tremendous potential value of the enterprise, they all suffer from major limitations, having to do primarily with the limited scale and heterogeneity of the semantic data they use. Nevertheless, we argue that we are at a key point in the brief history of the Semantic Web and that the very latest demonstrators already give us a glimpse of what future applications will look like. In this paper, we describe the already visible effects of these changes by analyzing the evolution of Semantic Web tools from smart databases towards applications that harness collective intelligence. We also point out that language technology plays an important role in making this evolution sustainable, and we highlight the need for improved support, especially in the area of large-scale linguistic resources.