
    An Interoperable Clinical Cardiology Electronic Health Record System - a standards-based approach for Clinical Practice and Research with Data Reuse

    Currently in hospitals, several information systems manage the patient's personal, clinical, and diagnostic data, very often autonomously. The result is a clinical information management system made up of a myriad of independent subsystems which, although efficient for their specific purposes, make integration of the whole very difficult and limit the use of clinical data, especially their reuse for research. Mainly for these reasons, the management of the Genoese ASL3 commissioned the University of Genoa to set up a medical record system that could be easily integrated with the rest of the existing information system, while offering solid interoperability features and supporting the research activities of hospital health workers. My PhD work aimed to develop an electronic health record system for a cardiology ward, producing a prototype that is functional and usable in a hospital ward. Cardiology was chosen because of the strong willingness of the cardiology department's staff to support the development and test phases. The resulting medical record system was designed ab initio to be fully integrated into the hospital information system and to exchange data with the regional health information infrastructure. To achieve interoperability, the system is based on the Health Level Seven (HL7) standards for exchanging information between medical information systems; these standards are widely deployed and cover several functional domains. Specific decision support sections for particular aspects of clinical work were also included. The data collected by this system served as the basis for two examples of secondary use, both models based on machine learning algorithms.
    The first model predicts mortality within 6 months of admission in patients with heart failure, and the second discriminates between heart failure and chronic ischemic heart disease in the elderly, the widest population section served by the cardiology ward.
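    The mortality model is described only at a high level. As a hedged illustration (the thesis's actual features, coefficients, and algorithm are not specified here), a 6-month risk estimate of this kind reduces to a logistic score over admission variables:

```python
import math

# Hypothetical coefficients and reference means, for illustration only;
# the thesis's actual model and feature set are not given in the abstract.
COEFFS = {"age": 0.05, "ejection_fraction": -0.08, "creatinine": 1.5}
MEANS = {"age": 75.0, "ejection_fraction": 35.0, "creatinine": 1.3}
INTERCEPT = -1.0  # assumed baseline log-odds

def six_month_mortality_risk(patient: dict) -> float:
    """Return a probability in (0, 1) from a logistic model."""
    z = INTERCEPT + sum(c * (patient[k] - MEANS[k]) for k, c in COEFFS.items())
    return 1.0 / (1.0 + math.exp(-z))

# Two hypothetical admissions: a lower-risk and a higher-risk profile
low = six_month_mortality_risk({"age": 60, "ejection_fraction": 55, "creatinine": 0.9})
high = six_month_mortality_risk({"age": 85, "ejection_fraction": 20, "creatinine": 2.5})
```

    A real model would be fitted to the ward's own data and validated (for example by ROC analysis) before any clinical use.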

    Continuous Process Auditing (CPA): an Audit Rule Ontology Approach to Compliance and Operational Audits

    Continuous Auditing (CA) has been investigated over time and is, to some extent, already practiced in financial and transactional auditing as part of continuous assurance and monitoring. Enterprise Information Systems (EIS) that run their activities as processes require continuous auditing of any process that invokes the action(s) specified in policies and rules, on a continuous and sometimes real-time basis. This leads to the question: how closely can continuous auditing mimic the actual auditing procedures performed by auditing professionals? We investigate some of these questions through Continuous Process Auditing (CPA), relying on the heterogeneous activities of processes in the EIS, and detecting exceptions and evidence in current and historic databases to provide audit assurance.
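    The core CPA idea of checking process events against policy rules and surfacing exceptions can be sketched as follows. This is a minimal illustration, not the paper's ontology; the rule names and event fields are hypothetical:

```python
# Audit rules expressed as named predicates over process events; every event
# in the stream is checked continuously, and violations become audit exceptions.
AUDIT_RULES = {
    "payment_requires_approval":
        lambda e: e["type"] != "payment" or e.get("approved", False),
    "amount_within_limit":
        lambda e: e.get("amount", 0) <= 10_000,
}

def audit(events):
    """Yield (rule_name, event) for every rule violation found in the stream."""
    for event in events:
        for name, rule in AUDIT_RULES.items():
            if not rule(event):
                yield name, event

events = [
    {"id": 1, "type": "payment", "amount": 500, "approved": True},
    {"id": 2, "type": "payment", "amount": 25_000, "approved": False},
]
exceptions = list(audit(events))  # event 2 violates both rules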

    Performance Evaluation of Smart Decision Support Systems on Healthcare

    Medical activity requires responsibility not only in clinical knowledge and skill but also in the management of an enormous amount of information related to patient care. It is through proper treatment of information that experts can consistently build a healthy wellness policy. The primary objective in developing decision support systems (DSSs) is to provide information to specialists when and where it is needed. These systems provide information, models, and data manipulation tools to help experts make better decisions in a variety of situations. Most of the challenges that smart DSSs face come from the great difficulty of dealing with large volumes of information, continuously generated by the most diverse types of devices and equipment and requiring substantial computational resources. This makes such systems liable to fail to retrieve information quickly enough for decision making. As a result, information quality and the provision of an infrastructure capable of promoting integration and articulation among different health information systems (HIS) have become promising research topics in the field of electronic health (e-health), and for this reason they are addressed in this research. The work described in this thesis is motivated by the need to propose novel approaches to the acquisition, cleaning, integration, and aggregation of data obtained from different sources in e-health environments, as well as their analysis. To ensure the success of data integration and analysis in e-health environments, it is essential that machine-learning (ML) algorithms ensure system reliability. However, in this type of environment a fully reliable scenario cannot be guaranteed, which leaves smart DSSs susceptible to predictive failures that severely compromise overall system performance.
    On the other hand, systems can also have their performance compromised by the information overload they must support. To address some of these problems, this thesis presents several proposals and studies on the impact of ML algorithms in the monitoring and management of hypertensive disorders related to high-risk pregnancy. The primary goal of the proposals presented in this thesis is to improve the overall performance of health information systems. In particular, ML-based methods are exploited to improve prediction accuracy and to optimize the use of monitoring-device resources. It was demonstrated that this type of strategy and methodology contributes to a significant increase in the performance of smart DSSs, not only in precision but also in reducing the computational cost of the classification process. The observed results contribute to advancing the state of the art in AI-based methods and strategies that aim to overcome challenges arising from the integration and performance of smart DSSs. With AI-based algorithms, it is possible to analyze a larger volume of complex data quickly and automatically and to focus on more accurate results, providing high-value predictions for better decision making in real time and without human intervention.

    Semantic Inference on Clinical Documents: Combining Machine Learning Algorithms With an Inference Engine for Effective Clinical Diagnosis and Treatment

    Clinical practice calls for reliable diagnosis and optimized treatment. However, human error in health care remains a severe issue even in industrialized countries. The application of clinical decision support systems (CDSS) casts light on this problem. Despite great improvement in CDSS over the past several years, challenges to their wide-scale application remain, including: 1) decision making in CDSS is complicated by the complexity of data on human physiology and pathology, and loading big data related to patients can make the whole process more time-consuming; and 2) information incompatibility among different health information systems (HIS) makes a CDSS an information island, i.e., additional manual input of patient information might be required, further increasing the burden on clinicians. One popular strategy is to integrate the CDSS into the HIS so that it reads electronic health records (EHRs) directly for analysis. However, gathering data from EHRs poses another problem, because EHR document standards are not unified. In addition, HIS may use different default clinical terminologies to define input data, which can cause further misinterpretation. Several proposals have been published to give CDSS access to EHRs by redefining data terminologies according to the standards used by the recipients of the data flow, but they mostly target specific versions of CDSS guidelines. This paper views these problems differently. Compared with conventional approaches, we suggest more fundamental changes: a uniform and updatable clinical terminology and document syntax should be used by EHRs, HIS, and their integrated CDSS. Facilitated data exchange will increase overall data-loading efficacy, enabling the CDSS to read more information for analysis in a given time.
    Furthermore, the proposed CDSS is based on self-learning: it dynamically updates its knowledge model according to the upcoming, stream-based data set. Experimental results show that our system increases the accuracy of diagnosis and of treatment strategy design.
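    The self-learning idea of updating a knowledge model per incoming case, rather than retraining in batch, can be sketched with a streaming Naive Bayes counter. This is an assumed, simplified stand-in for the paper's knowledge model; the diagnostic labels and features are hypothetical:

```python
import math
from collections import defaultdict

class StreamingNB:
    """Naive Bayes whose counts are updated one labelled case at a time,
    so the model always reflects the latest data without full retraining."""

    def __init__(self):
        self.class_counts = defaultdict(int)
        self.feature_counts = defaultdict(lambda: defaultdict(int))
        self.vocab = set()

    def update(self, features, label):
        """Incorporate one new case from the data stream."""
        self.class_counts[label] += 1
        for f in features:
            self.feature_counts[label][f] += 1
            self.vocab.add(f)

    def predict(self, features):
        """Pick the label with the highest (Laplace-smoothed) log posterior."""
        total = sum(self.class_counts.values())
        best, best_score = None, float("-inf")
        for label, count in self.class_counts.items():
            score = math.log(count / total)
            denom = sum(self.feature_counts[label].values()) + len(self.vocab)
            for f in features:
                score += math.log((self.feature_counts[label][f] + 1) / denom)
            if score > best_score:
                best, best_score = label, score
        return best

model = StreamingNB()
model.update({"chest_pain", "dyspnea"}, "cardiac")
model.update({"fever", "cough"}, "respiratory")
model.update({"dyspnea", "edema"}, "cardiac")
prediction = model.predict({"dyspnea"})
```

    Each `update` call plays the role of the data-stream-based model refresh the abstract describes; a production CDSS would add drift detection and evaluation around this loop.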

    Health systems data interoperability and implementation

    Objective: The objective of this study was to use machine learning and health standards to address the problem of clinical data interoperability across healthcare institutions. Addressing this problem has the potential to make clinical data comparable, searchable, and exchangeable between healthcare providers. Data sources: Structured and unstructured data were used to conduct the experiments in this study. The data were collected from two disparate sources, MIMIC-III and NHANES. The MIMIC-III database stores data from two electronic health record systems, CareVue and MetaVision. The data in these systems were not recorded with the same standards and were therefore not comparable: some values conflicted, one system stored an abbreviation of a clinical concept while the other stored the full concept name, and some attributes were missing information. These issues make this data a good candidate for this study. From the identified sources, laboratory, physical examination, vital signs, and behavioural data were used. Methods: This research employed the CRISP-DM framework as a guideline for all stages of data mining. Two sets of classification experiments were conducted, one for structured data and the other for unstructured data. In the first experiment, edit distance, TF-IDF, and Jaro-Winkler were used to calculate similarity weights between two datasets, one coded with the LOINC terminology standard and one not coded. Similar sets of data were classified as matches, while dissimilar sets were classified as non-matching. The Soundex indexing method was then used to reduce the number of potential comparisons. Thereafter, three classification algorithms were trained and tested, and the performance of each was evaluated through the ROC curve.
    The second experiment was aimed at extracting patients' smoking status from a clinical corpus. A sequence-oriented classification algorithm, a conditional random field (CRF), was used for learning related concepts from the given clinical corpus, with word embedding, random indexing, and word-shape features used to capture meaning in the corpus. Results: After all model parameters were optimized through v-fold cross-validation on a sampled training set of structured data, only 8 of 24 features were selected for the classification task. RapidMiner was used to train and test all the classification algorithms. In the final run of the classification process, the last contenders were SVM and the decision tree classifier; SVM yielded an accuracy of 92.5%. These results were obtained after more relevant features were identified, the classifiers having been observed to be biased on the initial data. Unstructured data, in turn, were annotated via the UIMA Ruta scripting language and trained through CRFSuite, which comes with the CLAMP toolkit. The CRF classifier obtained an F-measure of 94.8% for the "nonsmoker" class, 83.0% for "currentsmoker", and 65.7% for "pastsmoker". It was observed that as more relevant data were added, the performance of the classifier improved. The results show a need for FHIR resources for exchanging clinical data between healthcare institutions: FHIR is free, and it uses profiles to extend coding standards, a RESTful API to exchange messages, and JSON, XML, and Turtle to represent messages. Data could be stored as JSON in a NoSQL database such as CouchDB, making it available for further post-extraction exploration.
    Conclusion: This study has provided a method for a computer algorithm to learn a clinical coding standard and then apply that learned standard to unstandardized data, so that the data become easily exchangeable, comparable, and searchable, ultimately achieving data interoperability. Although this study was applied on a limited scale, future work would explore the standardization of patients' long-lived data from multiple sources using the SHARPn open-source tools and data-scaling platforms.
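    The blocking-and-matching step of the first experiment can be sketched as follows. Soundex is implemented per the classic algorithm; as a simplification, `difflib`'s similarity ratio stands in for the Jaro-Winkler weight used in the study, and the threshold value is assumed:

```python
import difflib

def soundex(name: str) -> str:
    """Classic 4-character Soundex code, used as a blocking key so that only
    records with matching codes undergo the full similarity comparison."""
    codes = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
             **dict.fromkeys("dt", "3"), "l": "4",
             **dict.fromkeys("mn", "5"), "r": "6"}
    name = name.lower()
    encoded = [codes.get(name[0], "")]
    for ch in name[1:]:
        d = codes.get(ch, "")
        if d and d != encoded[-1]:
            encoded.append(d)    # new digit (runs of the same code collapse)
        elif not d and ch not in "hw":
            encoded.append("")   # vowels reset adjacency; h/w do not
    digits = "".join(d for d in encoded[1:] if d)
    return (name[0].upper() + digits + "000")[:4]

def is_match(a: str, b: str, threshold: float = 0.8) -> bool:
    """Blocking step first; detailed comparison only when codes agree."""
    if soundex(a) != soundex(b):
        return False
    # difflib ratio as a simple stand-in for the Jaro-Winkler weight
    return difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold
```

    Blocking by Soundex is what reduces the number of potential comparisons: candidate pairs from different blocks are rejected without computing any string similarity at all.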

    DEPLOYR: A technical framework for deploying custom real-time machine learning models into the electronic medical record

    Machine learning (ML) applications in healthcare are extensively researched, but successful translations to the bedside are scant. Healthcare institutions are establishing frameworks to govern and promote the implementation of accurate, actionable, and reliable models that integrate with clinical workflow. Such governance frameworks require an accompanying technical framework to deploy models in a resource-efficient manner. Here we present DEPLOYR, a technical framework for enabling real-time deployment and monitoring of researcher-created clinical ML models in a widely used electronic medical record (EMR) system. We discuss core functionality and design decisions, including mechanisms to trigger inference based on actions within EMR software, modules that collect real-time data to make inferences, mechanisms that close the loop by displaying inferences back to end users within their workflow, monitoring modules that track the performance of deployed models over time, silent deployment capabilities, and mechanisms to prospectively evaluate a deployed model's impact. We demonstrate the use of DEPLOYR by silently deploying and prospectively evaluating twelve ML models triggered by clinician button-clicks in Stanford Health Care's production instance of Epic. Our study highlights the need for, and feasibility of, such silent deployment, because prospectively measured performance varies from retrospective estimates. By describing DEPLOYR, we aim to inform ML deployment best practices and help bridge the model implementation gap.
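    The trigger-inference-log-display pattern the abstract describes can be sketched in a few lines. These names are assumptions for illustration, not DEPLOYR's actual API:

```python
from datetime import datetime, timezone

MODEL_REGISTRY = {}   # trigger name -> (model callable, silent flag)
INFERENCE_LOG = []    # per-inference records, kept for prospective monitoring

def register(trigger, model, silent=False):
    """Bind a model to an EMR trigger; silent models are logged, never shown."""
    MODEL_REGISTRY[trigger] = (model, silent)

def on_emr_event(trigger, features):
    """Called when a clinician action (e.g. a button click) fires a trigger."""
    model, silent = MODEL_REGISTRY[trigger]
    score = model(features)
    INFERENCE_LOG.append({
        "trigger": trigger,
        "score": score,
        "silent": silent,
        "time": datetime.now(timezone.utc).isoformat(),
    })
    # Silent deployment: record the prediction but surface nothing to the user
    return None if silent else score

# Hypothetical model: flag elevated lactate at order entry, deployed silently
register("order_entry_click",
         lambda f: 0.9 if f["lactate"] > 2.0 else 0.1,
         silent=True)
shown = on_emr_event("order_entry_click", {"lactate": 3.1})
```

    Comparing the accumulated log against later-observed outcomes is what enables the prospective evaluation the paper argues for before a model is ever shown to clinicians.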

    The path to a better biomarker: Application of a risk management framework for the implementation of PD-L1 and TILs as immuno-oncology biomarkers in breast cancer clinical trials and daily practice

    Immune checkpoint inhibitor therapies targeting PD-1/PD-L1 are now the standard of care in oncology across several hematologic and solid tumor types, including triple-negative breast cancer (TNBC). Patients with metastatic or locally advanced TNBC with PD-L1 expression on immune cells occupying ≥1% of tumor area demonstrated a survival benefit with the addition of atezolizumab to nab-paclitaxel. However, concerns regarding variability in immunohistochemical PD-L1 assay performance and inter-reader reproducibility have been raised. High levels of tumor-infiltrating lymphocytes (TILs) have also been associated with response to PD-1/PD-L1 inhibitors in patients with breast cancer (BC). TILs can be easily assessed on hematoxylin and eosin–stained slides and have shown reliable inter-reader reproducibility. As an established prognostic factor in early-stage TNBC, TILs are soon anticipated to be reported in daily practice in many pathology laboratories worldwide. Because TILs and PD-L1 are parts of an immunological spectrum in BC, we propose the systematic implementation of combined PD-L1 and TIL analyses as a more comprehensive immuno-oncological biomarker for selecting patients for PD-1/PD-L1 inhibition-based therapy in BC. Although practical and regulatory considerations differ by jurisdiction, the pathology community has a responsibility to patients to implement assays that lead to optimal patient selection. We propose herewith a risk-management framework that may help mitigate the risks of suboptimal patient selection for immunotherapeutic approaches in clinical trials and daily practice, based on combined TILs/PD-L1 assessment in BC. © 2020 Pathological Society of Great Britain and Ireland. Published by John Wiley & Sons, Ltd.

    Ideological Misalignment in the Discourse(s) of Higher Education: Comparing University Mission Statements with Texts from Commercial Learning Analytics Providers

    This study analyzes, interprets, and compares texts from different educational discourses. Using the Critical Discourse Analysis method, I reveal how texts from university mission statements and from commercial learning analytics providers communicate and construct different ideologies. To support this analysis, I explore literature strands related to public higher education in America and the emerging field of study and practice called learning analytics. Learning analytics is the administrative, research, and instructional use of large sets of digital data that are associated with and generated by students. The data in question may be generated by incidental online activity, and it may be correlated with a host of other data related to student demographics or academic performance. The intention behind educational data systems is to find ways to use data to “optimize” instructional materials and practices by tailoring them to perceived student needs and behaviors, and to trigger “interventions” ranging from warning messages to prescribed courses of study. The use of data in this way raises questions about how such practices relate to the goals and ideals of higher education, especially as these data systems employ similar theories and techniques as those used by corporate juggernauts such as Facebook and Google. Questions not only related to privacy and ownership but also related to how learning, education, and the purpose of higher education are characterized, discussed, and defined in various discourses are explored in this study

    An interoperable electronic medical record-based platform for personalized predictive analytics

    Indiana University-Purdue University Indianapolis (IUPUI)
    Precision medicine refers to delivering customized treatment to patients based on their individual characteristics, and aims to reduce adverse events, improve diagnostic methods, and enhance the efficacy of therapies. Among efforts toward the goals of precision medicine, researchers have used observational data to develop predictive models that best predict health outcomes from patients' variables. Although numerous predictive models have been reported in the literature, not all present high predictive power, and as a result, not all reach clinical settings to help healthcare professionals make decisions at the point of care. The lack of generalizability stems from the fact that no comprehensive medical data repository exists containing the information of all patients in the target population. Even when patients' records are available from other sources, the datasets may need further processing prior to analysis because of differences in database structure and in the coding systems used to record concepts. This project fills that gap by introducing an interoperable solution that receives patient electronic health records from other data sources via the Health Level Seven (HL7) messaging standard, transforms the records to the Observational Medical Outcomes Partnership (OMOP) common data model (CDM) for population health research, and applies predictive models to patient data to make predictions about health outcomes. The project comprises three studies. The first study introduces the CCD-TOOMOP parser and evaluates the OMOP CDM's ability to accommodate patient data transferred in HL7 consolidated continuity of care documents (CCDs). The second study explores how to adopt the Predictive Model Markup Language (PMML) for standardized dissemination of OMOP-based predictive models.
    Finally, the third study introduces the Personalized Health Risk Scoring Tool (PHRST), a pilot, interoperable OMOP-based model scoring tool that processes the embedded models and generates risk scores in real time. The final product addresses the objectives of precision medicine and has the potential not only to be employed at the point of care to deliver individualized treatment to patients, but also to contribute to health outcomes research by easing the collection of clinical outcomes across diverse medical centers, independent of system specifications.
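    The transformation step in the middle of this pipeline can be sketched as mapping one simplified, already-parsed CCD observation to an OMOP CDM-style MEASUREMENT row. The field names follow OMOP conventions, but this is a hedged illustration: the concept mapping shown is hypothetical, not the project's actual mapping tables:

```python
def ccd_obs_to_measurement(person_id, obs, concept_map):
    """Map a {code, value, unit, date} observation to an OMOP-like row."""
    return {
        "person_id": person_id,
        "measurement_concept_id": concept_map[obs["code"]],  # source code -> OMOP id
        "value_as_number": float(obs["value"]),
        "unit_source_value": obs["unit"],
        "measurement_date": obs["date"],
    }

# Hypothetical mapping entry: LOINC 8480-6 (systolic BP) -> an assumed OMOP id
concept_map = {"8480-6": 3004249}
row = ccd_obs_to_measurement(
    1001,
    {"code": "8480-6", "value": "132", "unit": "mmHg", "date": "2020-01-15"},
    concept_map,
)
```

    With records normalized into one CDM like this, the same predictive model (e.g. a PMML-described scorer) can be applied uniformly to data arriving from heterogeneous source systems.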