1,144 research outputs found

    Opportunities for Business Intelligence and Big Data Analytics in Evidence Based Medicine

    Evidence-based medicine (EBM) is the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients. Each year, a significant number of research studies (potentially serving as evidence) are reported in the literature at an ever-increasing rate that outpaces the translation of research findings into practice. Coupled with the proliferation of electronic health records and consumer health information, this deluge challenges researchers and practitioners to leverage the full potential of EBM. In this paper we present a research agenda for leveraging business intelligence and big data analytics in evidence-based medicine, and we illustrate how analytics can be used to support EBM.

    A Learning Health System for Radiation Oncology

    The proposed research addresses the challenges that clinical data science researchers in radiation oncology face in accessing, integrating, and analyzing heterogeneous data from various sources. It presents a scalable intelligent infrastructure, the Health Information Gateway and Exchange (HINGE), which captures and structures data from multiple sources into a knowledge base with semantically interlinked entities, enabling researchers to mine novel associations and gather relevant knowledge for personalized clinical outcomes. The dissertation discusses the design framework and implementation of HINGE, which abstracts structured data from treatment planning systems, treatment management systems, and electronic health records, and which uses disease-specific smart templates to capture clinical information in a discrete manner. HINGE performs data extraction, aggregation, and quality and outcome assessment automatically, connecting seamlessly with local IT/medical infrastructure.

    Furthermore, the research presents a knowledge graph-based approach to mapping radiotherapy data to an ontology-based data repository following the FAIR (Findable, Accessible, Interoperable, Reusable) principles, ensuring that the data is easily discoverable and accessible to clinical decision support systems. The dissertation explores the ETL (Extract, Transform, Load) process, data model frameworks, and ontologies, and provides a real-world clinical use case for this data mapping.

    To improve the efficiency of retrieving information from large clinical datasets, a search engine combining ontology-based keyword search with synonym-based term matching was developed; the hierarchical nature of ontologies is leveraged to retrieve patient records through parent and child classes. Additionally, patient similarity analysis is conducted using vector embedding models (Word2Vec, Doc2Vec, GloVe, and FastText) to identify similar patients, and results from these models are compared across text corpus creation methods.

    The implementation of a learning health system for predicting radiation pneumonitis following stereotactic body radiotherapy is also discussed. Three-dimensional convolutional neural networks (3D CNNs), specifically DenseNet-121 and ResNet-50, are applied to radiographic and dosimetric datasets to predict the likelihood of radiation pneumonitis, with integrated gradient techniques used to identify salient regions within the input 3D image dataset. The predictive performance of the 3D CNN models is evaluated against clinical outcomes.

    Overall, the proposed learning health system (LHS) provides a comprehensive solution for capturing, integrating, and analyzing heterogeneous data in a knowledge base, giving researchers the ability to extract valuable insights and associations from diverse sources and ultimately improving clinical outcomes. This work can serve as a model for implementing an LHS in other medical specialties, advancing personalized and data-driven medicine.
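
    Below is a minimal illustrative sketch of the patient-similarity step described above, using gensim's Doc2Vec (one of the four embedding models named) and cosine similarity. The patient notes, identifiers, and hyperparameters are hypothetical; the dissertation's actual corpus-creation methods and model settings are not reproduced here.

        # Hedged sketch: Doc2Vec-based patient similarity. All data is invented.
        from gensim.models.doc2vec import Doc2Vec, TaggedDocument
        import numpy as np

        notes = {  # hypothetical clinical snippets
            "pt001": "stage I NSCLC treated with SBRT 50 Gy in 5 fractions",
            "pt002": "stage I lung adenocarcinoma, SBRT 48 Gy in 4 fractions",
            "pt003": "prostate adenocarcinoma, IMRT 78 Gy conventional fractionation",
        }

        docs = [TaggedDocument(words=text.lower().split(), tags=[pid])
                for pid, text in notes.items()]
        model = Doc2Vec(docs, vector_size=32, min_count=1, epochs=50)

        def cosine(a, b):
            return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

        # Rank stored patients against a new free-text query.
        query = model.infer_vector("early stage lung cancer treated with sbrt".split())
        ranked = sorted(((cosine(query, model.dv[pid]), pid) for pid in notes),
                        reverse=True)
        print(ranked)  # most similar hypothetical patients first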

    Knowledge graph-based entity importance learning for multi-stream regression on Australian fuel price forecasting

    © 2019 IEEE. A knowledge graph (KG) represents a collection of interlinked descriptions of entities and has become a key focus for organising and utilising this type of data in applications. Many graph embedding techniques have been proposed to simplify manipulation while preserving the inherent structure of the KG. However, scant attention has been given to the importance of the entities (the nodes of the KG). In this paper, we propose a novel entity-importance learning framework that investigates how to weight entities and use those weights as prior knowledge for solving multi-stream regression problems. The framework consists of KG feature extraction, multi-stream correlation analysis, and entity-importance learning. To evaluate the proposed method, we implemented the framework on Wikidata and applied it to Australian retail fuel price forecasting. The experimental results indicate that the proposed method reduces prediction error, which supports weighted knowledge-graph information as a means of improving machine learning model accuracy.
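
    The sketch below illustrates one plausible reading of the entity-importance idea, assuming correlation-based weights are used to rescale per-entity feature streams before a least-squares fit. The paper's actual Wikidata feature extraction and learning procedure are not reproduced, and all data is synthetic.

        # Hedged sketch: weight entity feature streams by |Pearson correlation|
        # with the target, then fit a plain least-squares regressor.
        import numpy as np

        rng = np.random.default_rng(0)
        T, n_entities = 200, 4
        X = rng.normal(size=(T, n_entities))               # per-entity streams
        y = X @ np.array([0.8, 0.1, 0.5, 0.0]) + rng.normal(scale=0.1, size=T)

        corr = np.array([abs(np.corrcoef(X[:, j], y)[0, 1])
                         for j in range(n_entities)])
        w = corr / corr.sum()                              # entity-importance prior

        beta, *_ = np.linalg.lstsq(X * w, y, rcond=None)   # weighted regression
        print("entity weights:", np.round(w, 3))
        print("coefficients:", np.round(beta, 3))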

    Study of dynamics of structured knowledge: Qualitative analysis of different mapping approaches

    The authors compared three methods of mapping, treating the maps as a visual interface for exploring scientific articles related to computer science. Data were classified according to the original Computing Classification System (CCS), and co-categories were used to calculate similarity metrics. The authors' approach, based on MDS, was enriched by an algorithm mapping to a spherical topology; the three other methods were based on the VOS, VxOrd, and SOM mapping techniques. The classified collection was visualized for three different decades. By tracking changes in the visualization patterns, the authors sought the method that would best reveal the real evolution of the CCS scheme, which is still being updated by its editorial board. The comparative analysis is based on qualitative methods, and changes in map properties over two decades were evaluated in favor of the authors' mapping method. The qualitative analysis shows clustering of proper categories and overlapping of other ones in the authors' approach, which corresponds to the current changes in the classification scheme and the computer science literature.
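
    As a toy illustration of the co-category similarity underlying such maps, the sketch below builds a category co-occurrence matrix from invented article classifications, converts cosine similarity to distance, and embeds the categories in 2D with MDS. The paper's spherical-topology enrichment and the VOS/VxOrd/SOM variants are not reproduced.

        # Hedged sketch: co-category similarity + MDS layout. Tags are invented.
        import numpy as np
        from sklearn.manifold import MDS

        articles = [{"AI", "IR"}, {"AI", "HCI"}, {"IR", "DB"},
                    {"AI", "IR"}, {"DB", "HCI"}]
        cats = sorted(set().union(*articles))
        idx = {c: i for i, c in enumerate(cats)}

        co = np.zeros((len(cats), len(cats)))              # co-occurrence counts
        for tags in articles:
            for a in tags:
                for b in tags:
                    co[idx[a], idx[b]] += 1

        norm = np.sqrt(np.diag(co @ co.T))
        dist = 1.0 - (co @ co.T) / np.outer(norm, norm)    # cosine distance
        np.fill_diagonal(dist, 0.0)

        coords = MDS(n_components=2, dissimilarity="precomputed",
                     random_state=0).fit_transform(dist)
        for c, (x, y) in zip(cats, coords):
            print(f"{c}: ({x:.2f}, {y:.2f})")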

    The United States Marine Corps Data Collaboration Requirements: Retrieving and Integrating Data From Multiple Databases

    The goal of this research is to develop an information sharing and database integration model and to suggest a framework that fully satisfies the United States Marine Corps' collaboration requirements as well as its information sharing and database integration needs. This research is exploratory and focuses on a single initiative: IT-21, set out in "Technology for the United States Navy and Marine Corps, 2000-2035: Becoming a 21st Century Force." The IT-21 initiative states that the Navy and Marine Corps information infrastructure will be based largely on commercial systems and services, and that the Department of the Navy must ensure that these systems are seamlessly integrated and that information transported over the infrastructure is protected and secure. The Delphi technique, a qualitative method, was used to develop a Holistic Model and to suggest a framework for information sharing and database integration. Data were collected primarily from mid-level to senior information officers, with a focus on Chief Information Officers. In addition, an extensive literature review was conducted to gain insight into known similarities and differences in Strategic Information Management, information sharing strategies, and database integration strategies. It is hoped that the Armed Forces and the Department of Defense will benefit from future development of the information sharing and database integration Holistic Model.

    DPCat: Specification for an interoperable and machine-readable data processing catalogue based on GDPR

    The GDPR requires Data Controllers and Data Protection Officers (DPOs) to maintain a Register of Processing Activities (ROPA) as part of overseeing the organisation's compliance processes. The ROPA must include information from heterogeneous sources such as (internal) departments with varying IT systems and (external) data processors. Current practices use spreadsheets or proprietary systems that lack machine-readability and interoperability, presenting barriers to automation. We propose the Data Processing Catalogue (DPCat) for the representation, collection, and transfer of ROPA information as catalogues in a machine-readable and interoperable manner. DPCat is based on the Data Catalog Vocabulary (DCAT), its extension DCAT Application Profile for data portals in Europe (DCAT-AP), and the Data Privacy Vocabulary (DPV). It represents a comprehensive semantic model developed from GDPR's Article 30 and an analysis of the 17 ROPA templates from EU Data Protection Authorities (DPAs). To demonstrate the practicality and feasibility of DPCat, we represent the European Data Protection Supervisor's (EDPS) ROPA documents using DPCat, verify them with SHACL to ensure the correctness of information based on legal and contextual requirements, and produce reports and ROPA documents based on DPA templates using SPARQL. DPCat supports a data governance process for data processing compliance that harmonises inputs from heterogeneous sources to produce dynamic documentation, accommodating differences in regulatory approaches across DPAs and easing investigative burdens toward efficient enforcement.
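
    A hedged sketch of the representation idea follows: one ROPA entry expressed as RDF with rdflib, using the DCAT and DPV namespaces. The classes and properties shown are illustrative choices rather than the normative DPCat profile, and the organisation base IRI is hypothetical.

        # Hedged sketch: a single (invented) processing activity as a DCAT dataset
        # annotated with DPV terms. Not the normative DPCat model.
        from rdflib import Graph, Namespace, Literal
        from rdflib.namespace import RDF, DCTERMS

        DCAT = Namespace("http://www.w3.org/ns/dcat#")
        DPV = Namespace("https://w3id.org/dpv#")
        EX = Namespace("https://example.org/ropa/")   # hypothetical base IRI

        g = Graph()
        g.bind("dcat", DCAT); g.bind("dpv", DPV)

        entry = EX["activity/newsletter"]
        g.add((entry, RDF.type, DCAT.Dataset))
        g.add((entry, DCTERMS.title, Literal("Newsletter mailing list")))
        g.add((entry, DPV.hasPurpose, DPV.Marketing))
        g.add((entry, DPV.hasLegalBasis, DPV.Consent))

        catalog = EX["catalog"]
        g.add((catalog, RDF.type, DCAT.Catalog))
        g.add((catalog, DCAT.dataset, entry))

        print(g.serialize(format="turtle"))
        # A SHACL shapes graph could then be checked with
        # pyshacl.validate(g, shacl_graph=...) to enforce Article 30
        # completeness, mirroring the verification step described above.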

    Re-classification: some warnings and a proposal


    Islamic Economy thru Online Community Implementing Information Retrieval Capabilities

    Knowledge management has become a growing concern for communities around the world, which are increasingly aware of the value of sharing and transferring knowledge. The rapid development of Web technology has made the World Wide Web an important and popular platform for disseminating and searching for information as well as for conducting business. As a huge information source, the World Wide Web has allowed unprecedented sharing of ideas and information on a scale never seen before. The use of the Web and its exponential growth are now well known, and they are causing a revolution in the way people use computers and perform daily tasks. "Islamic Economy thru Online Community - Implementing Information Retrieval Capabilities" discusses a more advanced way of using technology to motivate and encourage the community to share and transfer knowledge. The purpose of the website is to provide a platform through which the community can improve economic growth, apply its knowledge more effectively, facilitate the circulation of community knowledge, and collect community members' opinions. The target users of the website are consumers and business people. The development methodology comprises system conceptualization, system analysis, system design, system development, and finally system testing. The tools used are Macromedia Dreamweaver MX 2004, the Joomla open-source CMS, the Apache web server, and MySQL. The website focuses on economic growth for consumers and business people, so that the community can make the right decision in a given situation. It allows users to search for needed information in specific areas and to store their own information, so that the community can share its knowledge and experience with others.