
    Exploring Semantic Interoperability in e-Government Interoperability Frameworks for intra-African collaboration: A Systematic Literature Review

    While many African countries have called for ICT-based intra-African collaboration, services, and trade, it is not known whether this call is technically feasible. Such intra-African collaboration would require semantic interoperability between the national e-government systems. This paper reviewed the e-government interoperability frameworks (e-GIFs) of English- and Arabic-speaking African countries to identify the evidence of, and conflicting approaches to, semantic interoperability. The results suggest that only seven African countries have e-GIFs, which have mainly been adopted from the UK's e-Government Metadata Standard (eGMS) and the Dublin Core metadata standard (DC). However, most of the e-GIFs, with the exception of Nigeria's, have not been contextualized to local needs. The paper therefore concluded that more effort needs to be put into developing e-GIFs in Africa, with particular emphasis on semantic interoperability, if the dream of intra-African collaboration is to be achieved.
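    To make the shared-metadata idea concrete, the sketch below (Python, standard library only) shows two hypothetical national portals describing the same service with the Dublin Core element set; because both sides use the same dc: elements, a cross-border catalogue can align the records without bespoke mappings. The portals, field values, and grouping logic are illustrative assumptions, not taken from the reviewed e-GIFs.

```python
# Minimal sketch: two hypothetical national e-government portals describe the same
# service with the shared Dublin Core element set, so either side can parse the other.
import xml.etree.ElementTree as ET

DC_NS = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("dc", DC_NS)

def dc_record(title, creator, subject, language):
    """Build a Dublin Core record as an XML element (illustrative fields only)."""
    record = ET.Element("record")
    for tag, value in [("title", title), ("creator", creator),
                       ("subject", subject), ("language", language)]:
        el = ET.SubElement(record, f"{{{DC_NS}}}{tag}")
        el.text = value
    return record

# Hypothetical records from two different national systems.
rec_ng = dc_record("Business registration service", "Ministry of Trade (NG)",
                   "company registration", "en")
rec_eg = dc_record("Business registration service", "Ministry of Trade (EG)",
                   "company registration", "ar")

# Because both use the same dc:subject element, a cross-border catalogue can
# group them without system-specific mappings.
for rec in (rec_ng, rec_eg):
    subject = rec.find(f"{{{DC_NS}}}subject").text
    print(subject, "->", ET.tostring(rec, encoding="unicode")[:60], "...")
```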

    Semantic Inference on Clinical Documents: Combining Machine Learning Algorithms With an Inference Engine for Effective Clinical Diagnosis and Treatment

    Clinical practice calls for reliable diagnosis and optimized treatment. However, human error in health care remains a severe issue even in industrialized countries. The application of clinical decision support systems (CDSS) sheds light on this problem. However, despite the great improvement in CDSS over the past several years, challenges to their wide-scale application remain, including: 1) decision making in CDSS is complicated by the complexity of data regarding human physiology and pathology, and loading large volumes of patient-related data can make the whole process more time-consuming; and 2) information incompatibility among different health information systems (HIS) makes the CDSS an information island, i.e., additional manual input of patient information might be required, which would further increase the burden on clinicians. One popular strategy is the integration of the CDSS into the HIS so that it can directly read electronic health records (EHRs) for analysis. However, gathering data from EHRs poses another problem, because EHR document standards are not unified. In addition, different HIS may use different default clinical terminologies to define input data, which can cause additional misinterpretation. Several proposals have been published thus far to allow CDSS access to EHRs by redefining data terminologies according to the standards used by the recipients of the data flow, but they mostly target specific versions of CDSS guidelines. This paper views these problems in a different way. Compared with conventional approaches, we suggest more fundamental changes: specifically, a uniform and updatable clinical terminology and document syntax should be used by EHRs, HIS, and their integrated CDSS. Facilitated data exchange increases the overall data-loading efficiency, enabling the CDSS to read more information for analysis at a given time. Furthermore, the proposed CDSS should be based on self-learning, dynamically updating its knowledge model as new data arrive in the data stream. The experimental results show that our system increases the accuracy of diagnosis and treatment strategy design.
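    As a rough illustration of the pattern the abstract describes, the sketch below combines a stand-in machine-learning score with a small rule-based inference step over standardized terminology codes. The features, thresholds, rules, and ICD-10 codes are illustrative assumptions, not the paper's actual model.

```python
# Minimal sketch: a machine-learning score feeds a rule-based inference engine
# that works over standardized terminology codes. All values are placeholders.
import math

def ml_risk_score(features):
    """Stand-in for a trained classifier: returns a pseudo-probability."""
    # Logistic-style combination of two hypothetical, normalized features.
    z = 0.8 * features["glucose_norm"] + 1.2 * features["bmi_norm"] - 1.0
    return 1 / (1 + math.exp(-z))

RULES = [
    # (condition over patient facts and score, suggested action)
    (lambda p, s: s > 0.7 and "E11" in p["icd10"], "Review type-2 diabetes therapy"),
    (lambda p, s: s > 0.7 and "E11" not in p["icd10"], "Order HbA1c test"),
    (lambda p, s: s <= 0.7, "No action; routine follow-up"),
]

def infer(patient):
    """Run the ML score, then apply the first matching clinical rule."""
    score = ml_risk_score(patient["features"])
    for condition, action in RULES:
        if condition(patient, score):
            return score, action

patient = {"icd10": {"I10"},  # hypothetical coded problem list
           "features": {"glucose_norm": 1.4, "bmi_norm": 0.9}}
print(infer(patient))
```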

    Automated Development of Semantic Data Models Using Scientific Publications

    The traditional methods for analyzing information in digital documents have evolved with the ever-increasing volume of data. Challenges in analyzing scientific publications include the lack of a unified vocabulary and a defined context, differing standards and formats for presenting information, diverse types of data, and diverse areas of knowledge. These challenges hinder the rapid detection, understanding, comparison, sharing, and querying of information. I design a dynamic conceptual data model with elements common to publications from any domain, such as context, metadata, and tables. To enrich the models, I use related definitions contained in ontologies and on the Internet. This dissertation therefore generates semantically enriched data models from digital publications based on Semantic Web principles, which allow people and computers to work cooperatively. Finally, this work uses a vocabulary and ontologies to generate a structured characterization and to organize the data models. This organization allows information from publications to be integrated, shared, managed, and compared and contrasted.
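    A minimal sketch of the kind of output such a pipeline could produce, assuming rdflib is available: extracted publication metadata is expressed as RDF triples using Dublin Core plus a hypothetical example.org vocabulary, in the spirit of the semantically enriched models described above.

```python
# Minimal sketch, assuming rdflib is installed: turn extracted publication
# metadata into RDF triples using a shared vocabulary. The example.org
# namespace and the sample metadata are illustrative assumptions.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import DC, RDF

EX = Namespace("http://example.org/pub/")   # hypothetical vocabulary for illustration

def publication_to_graph(meta):
    """meta: dict of extracted fields -> RDF graph describing the publication."""
    g = Graph()
    g.bind("dc", DC)
    g.bind("ex", EX)
    pub = EX[meta["id"]]
    g.add((pub, RDF.type, EX.Publication))
    g.add((pub, DC.title, Literal(meta["title"])))
    for author in meta["authors"]:
        g.add((pub, DC.creator, Literal(author)))
    for term in meta["keywords"]:
        g.add((pub, DC.subject, Literal(term)))
    return g

g = publication_to_graph({"id": "paper42", "title": "A sample study",
                          "authors": ["A. Author"], "keywords": ["ontology"]})
print(g.serialize(format="turtle"))
```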

    A Multidisciplinary Approach to the Reuse of Open Learning Resources

    Educational standards are having a significant impact on e-Learning. They allow for better exchange of information among different organizations and institutions. They simplify reusing and repurposing learning materials. They give teachers the possibility of personalizing them according to the student’s background and learning speed. Thanks to these standards, off-the-shelf content can be adapted to a particular student cohort’s context and learning needs, and the same course content can be presented in different languages. Overall, all the parties involved in the learning-teaching process (students, teachers and institutions) can benefit from these standards, and so online education can be improved. To materialize the benefits of standards, learning resources should be structured according to them. Unfortunately, a large number of existing e-Learning materials lack the intrinsic logical structure required, and, even when they have that structure, they are not encoded as required. These problems make it virtually impossible to share these materials. This thesis addresses the following research question: how can existing open learning resources available on the Internet best be used by taking advantage of educational standards and specifications, thus improving content reusability? In order to answer this question, I combine different technologies, techniques and standards that make the sharing of publicly available learning resources possible in innovative ways. I developed and implemented a three-stage tool to tackle the above problem. By applying information extraction techniques and open e-Learning standards to legacy learning resources, the tool has proven to improve content reusability. In so doing, it contributes to the understanding of how these technologies can be used in real scenarios and shows how online education can benefit from them. In particular, three main components were created which enable the conversion of unstructured educational content into a standard-compliant form in a systematic and automatic way. An increasing number of repositories with educational resources are available, including Wikiversity and the Massachusetts Institute of Technology OpenCourseWare. Wikiversity is an open repository containing over 6,000 learning resources in several disciplines and for all age groups [1]. I used the OpenCourseWare repository to evaluate the effectiveness of my software components and ideas. The results show that it is possible to create standard-compliant learning objects from publicly available web pages, improving their searchability, interoperability and reusability.
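    As one hedged illustration of the final conversion stage, the sketch below wraps an extracted HTML resource in an IMS Content Packaging style manifest using only the Python standard library; the namespace, identifiers, and resource names are illustrative assumptions rather than the thesis's actual implementation.

```python
# Minimal sketch: package an extracted HTML page as a standards-style learning
# object by generating a small IMS Content Packaging manifest. The namespace,
# identifiers, and resource names below are assumptions for illustration.
import xml.etree.ElementTree as ET

IMSCP = "http://www.imsglobal.org/xsd/imscp_v1p1"   # IMS CP namespace (assumed here)
ET.register_namespace("", IMSCP)

def build_manifest(identifier, title, resource_href):
    """Return a minimal manifest XML string for one web-content resource."""
    manifest = ET.Element(f"{{{IMSCP}}}manifest", {"identifier": identifier})
    orgs = ET.SubElement(manifest, f"{{{IMSCP}}}organizations")
    org = ET.SubElement(orgs, f"{{{IMSCP}}}organization", {"identifier": "ORG-1"})
    ET.SubElement(org, f"{{{IMSCP}}}title").text = title
    resources = ET.SubElement(manifest, f"{{{IMSCP}}}resources")
    ET.SubElement(resources, f"{{{IMSCP}}}resource",
                  {"identifier": "RES-1", "type": "webcontent",
                   "href": resource_href})
    return ET.tostring(manifest, encoding="unicode")

print(build_manifest("MANIFEST-1", "Introduction to Calculus", "lecture01.html"))
```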

    Grids and the Virtual Observatory

    We consider several projects from astronomy that benefit from the Grid paradigm and associated technology, many of which involve either massive datasets or the federation of multiple datasets. We cover image computation (mosaicking, multi-wavelength images, and synoptic surveys); database computation (representation through XML, data mining, and visualization); and semantic interoperability (publishing, ontologies, directories, and service descriptions).

    Information Systems and Healthcare XVII: A HL7v3-based Mediating Schema Approach to Data Transfer between Heterogeneous Health Care Systems

    One of the main challenges in exchanging patient care records between heterogeneous systems is the difficulty of overcoming the semantic differences between them. This is further exacerbated by the lack of standardization in messaging protocols. As a solution to this problem, multiple ideas and standards have been proposed for exchanging clinical and administrative data in the healthcare area. However, most of these methods place some restrictions on the platform, standard, or format of the data. This paper proposes a context-specific, mediating-schema-based architecture that enhances the transfer of electronic patient care records between healthcare information systems by using a reusable and portable model. The main contribution of this approach is its adaptability to a variety of schemas for the source and target systems.
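    The sketch below illustrates the mediating-schema idea in miniature: each system maps only to and from one shared representation instead of to every other system. The field names and systems are hypothetical and far simpler than HL7v3 messages.

```python
# Minimal sketch of a mediating schema: every system translates to and from one
# shared field set, so adding a new system needs only one new mapping.
# The systems and field names below are hypothetical, not taken from the paper.

# Per-system mappings: local field name -> mediating-schema field name.
TO_MEDIATING = {
    "clinic_a": {"pt_name": "patient_name", "dob": "birth_date", "dx": "diagnosis"},
    "hospital_b": {"name": "patient_name", "birthDate": "birth_date",
                   "diagnosisCode": "diagnosis"},
}

def to_mediating(system, record):
    """Translate a source record into the mediating schema."""
    mapping = TO_MEDIATING[system]
    return {mapping[k]: v for k, v in record.items() if k in mapping}

def from_mediating(system, record):
    """Translate a mediating-schema record into a target system's local fields."""
    reverse = {v: k for k, v in TO_MEDIATING[system].items()}
    return {reverse[k]: v for k, v in record.items() if k in reverse}

source = {"pt_name": "J. Doe", "dob": "1980-02-01", "dx": "I10"}
shared = to_mediating("clinic_a", source)
print(from_mediating("hospital_b", shared))
# {'name': 'J. Doe', 'birthDate': '1980-02-01', 'diagnosisCode': 'I10'}
```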

    Semantic Content Mediation and Acquisition: The Challenge for Semantic e-Business Solutions

    A Top Quadrant report situates the Semantic Web within the current innovation wave of “Distributed Intelligence”. This is one of the main innovation waves of the last centuries, which include textile, railway, auto, computer, distributed intelligence (1997-2061) and nanotechnology (2007-2081). The Distributed Intelligence wave started in the late nineties and is expected to peak between 2010 and 2020. The report estimates first returns on investment in 2006-7, growing to a market of $40-60 billion in 2010. Funds are coming primarily from governments, venture capitalists and industry commercialization. Over the next few years, this is expected to change in favour of industry commercialization.

    Methodology for enterprise interoperability assessment

    Dissertation presented to obtain the degree of Master in Electrical and Computer Engineering. With the evolution of modern enterprises and increasing market competitiveness, ecosystems are emerging in which large amounts of data and knowledge generally need to be exchanged electronically. However, this inter- and intra-enterprise connectivity suffers from interoperability issues. Although invisible when interoperability is effective, its absence poses a series of challenging problems to the industrial community, which can reduce the envisaged efficiency and increase costs. These problems are mostly caused by misinterpretations of data at the systems level, but problems at the organizational and human levels may pose equivalent difficulties. Existing research and technology provide several frameworks to assist the development of collaborative environments and enterprise networks, with well-defined methods to facilitate interoperability. Nonetheless, the interoperability process is not guaranteed and is not easily sustained, as it changes with frequent market and requirement variations. For these reasons, there is a need for a testing methodology to assess the capability of enterprises to cooperate at a given point in time. This dissertation proposes a methodology to assess that capability, together with a corresponding framework to evaluate the interoperability process, applying eliminatory tests to assess the structure of the organizations, the conceptual models and their implementation. This work contributes to increasing the chances enterprises have of interoperating effectively, and enables the adoption of extraordinary measures to improve their current interoperability situation.
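    A minimal sketch of an eliminatory test sequence of the kind the dissertation proposes: each layer (organizational structure, conceptual models, implementation) is checked in order, and the assessment stops at the first failing layer. The individual checks and the enterprise profile are illustrative assumptions.

```python
# Minimal sketch: eliminatory interoperability tests applied layer by layer.
# The checks and the profile fields are hypothetical placeholders.
from typing import Callable, Dict, List, Tuple

def check_structure(profile: Dict) -> bool:
    # Organizational layer: is there an agreement that allows collaboration?
    return profile.get("has_collaboration_agreement", False)

def check_conceptual(profile: Dict) -> bool:
    # Conceptual layer: do both parties share the same data model?
    return profile.get("shared_data_model") == profile.get("partner_data_model")

def check_implementation(profile: Dict) -> bool:
    # Implementation layer: is a working exchange endpoint in place?
    return "exchange_endpoint" in profile

TESTS: List[Tuple[str, Callable[[Dict], bool]]] = [
    ("organizational structure", check_structure),
    ("conceptual models", check_conceptual),
    ("implementation", check_implementation),
]

def assess(profile: Dict) -> str:
    for layer, test in TESTS:
        if not test(profile):                     # eliminatory: stop at first failure
            return f"not interoperable: failed at {layer}"
    return "interoperable at the assessed layers"

print(assess({"has_collaboration_agreement": True,
              "shared_data_model": "UBL", "partner_data_model": "EDIFACT"}))
```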