
    ENT COBRA ontology: the covariates classification system proposed by the Head & Neck and Skin GEC-ESTRO Working Group for interdisciplinary standardized data collection in head and neck patient cohorts treated with interventional radiotherapy (brachytherapy)

    Purpose: Collecting clinical data is expensive in terms of time and human resources, and data can be collected in different ways; performing multicentric research based on previously stored data is therefore often difficult. The primary objective of the ENT COBRA (COnsortium for BRachytherapy data Analysis) ontology is to define a specific terminological system to standardize data collection for head and neck (H&N) cancer patients treated with interventional radiotherapy. Material and methods: ENT-COBRA is a consortium for standardized data collection for H&N patients treated with interventional radiotherapy. It is linked to the H&N and Skin GEC-ESTRO Working Group and includes 11 centers from 6 countries. Its ontology was first defined by a multicentric working group, then evaluated by the consortium, followed by a multi-professional technical commission involving a mathematician, an engineer, a physician with experience in data storage, a programmer, and a software expert. Results: Two hundred and forty variables were defined on 13 input forms. There are 3 levels, each offering a specific type of analysis: 1. Registry level (epidemiology analysis); 2. Procedures level (standard oncology analysis); 3. Research level (radiomics analysis). The ontology was approved by the consortium and the technical commission; an ad-hoc software architecture ("broker") remaps the data present in the existing storage systems of the various centers according to the shared terminology system. The first data sharing was successfully performed using the COBRA software and the ENT COBRA ontology, automatically collecting data directly from 3 different hospital databases (LĂĽbeck, Navarra, and Rome) in November 2017. Conclusions: The COBRA ontology is a good response to the multi-dimensional criticalities of data collection, retrieval, and usability. It allows the creation of software for large multicentric databases, with implementation of specific remapping functions wherever necessary. This approach is well received by all involved parties, primarily because it does not change a single center's storage technologies, procedures, and habits.
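    The "broker" remapping idea lends itself to a compact illustration. Below is a minimal sketch, assuming hypothetical field names and mapping tables rather than the actual COBRA broker or ontology:

```python
# Minimal sketch of an ontology "broker": remaps records from a center's
# local schema onto a shared terminology. Field names and mappings are
# illustrative, not the actual ENT COBRA ontology.

# Per-center mapping: local field name -> shared ontology variable
FIELD_MAPS = {
    "center_a": {"pt_age": "age_at_diagnosis", "tum_site": "primary_site"},
    "center_b": {"age": "age_at_diagnosis", "localization": "primary_site"},
}

# Per-center value remapping for coded variables
VALUE_MAPS = {
    "center_b": {"primary_site": {"OC": "oral_cavity", "OP": "oropharynx"}},
}

def remap_record(center: str, record: dict) -> dict:
    """Translate one local record into the shared ontology vocabulary."""
    field_map = FIELD_MAPS[center]
    value_map = VALUE_MAPS.get(center, {})
    shared = {}
    for local_field, value in record.items():
        target = field_map.get(local_field)
        if target is None:
            continue  # field not covered by the shared ontology
        shared[target] = value_map.get(target, {}).get(value, value)
    return shared

print(remap_record("center_b", {"age": 61, "localization": "OC"}))
# {'age_at_diagnosis': 61, 'primary_site': 'oral_cavity'}
```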

    GENERATOR Breast DataMart, the novel breast cancer data discovery system for research and monitoring: preliminary results and future perspectives

    Background: Artificial Intelligence (AI) is increasingly used for process management in daily life. In the medical field, AI is becoming part of computerized systems to manage information and encourage the generation of evidence. Here we present the development and application of AI to the IT systems present in the hospital, for the creation of a DataMart for the management of clinical and research processes in the field of breast cancer. Materials and methods: A multidisciplinary team of radiation oncologists, epidemiologists, medical oncologists, breast surgeons, data scientists, and data management experts worked together to identify relevant data and sources located inside the hospital system. Combinations of open-source data science packages and industry solutions were used to design the target framework. To validate the DataMart directly on real-life cases, the working team defined the tumoral pathology and clinical purposes of proofs of concept (PoCs). Results: Data were classified into "not organized, not 'ontologized' data", "organized, not 'ontologized' data", and "organized and 'ontologized' data". The identified archives of real-world data (RWD) were an ontology-based platform, the hospital data warehouse, PDF documents, and electronic reports. Data extraction was performed by direct connection for structured data or by text-mining technology. Two PoCs were performed, testing the waiting-time interval for radiotherapy and the performance index of the breast unit; both proved obtainable from the system. Conclusions: The GENERATOR Breast DataMart was created to support breast cancer pathways of care. An AI-based process automatically extracts data from different sources and uses them to generate trend studies and clinical evidence. Further studies and more proofs of concept are needed to exploit the full potential of this system.
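    As a rough illustration of the two extraction routes mentioned above (direct connection for structured data, text mining for free text), here is a minimal sketch; the field names, regular expression, and helper functions are assumptions, not the actual GENERATOR Breast DataMart implementation:

```python
import re

# Two extraction routes into one patient record: structured fields taken
# as-is, and a coded value mined out of free-text report prose.

def extract_structured(row: dict) -> dict:
    """'Organized' data: take fields directly from a warehouse row."""
    return {"patient_id": row["id"], "diagnosis_date": row["dx_date"]}

ER_PATTERN = re.compile(r"estrogen receptors?\s*[:=]?\s*(positive|negative)", re.I)

def extract_from_report(patient_id: str, report_text: str) -> dict:
    """'Not organized' data: mine a coded value out of free text."""
    match = ER_PATTERN.search(report_text)
    return {
        "patient_id": patient_id,
        "er_status": match.group(1).lower() if match else None,
    }

record = extract_structured({"id": "P001", "dx_date": "2020-03-12"})
record.update(extract_from_report("P001", "Estrogen receptors: POSITIVE (90%)"))
print(record)
```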

    A machine-learning parsimonious multivariable predictive model of mortality risk in patients with COVID-19

    The COVID-19 pandemic is placing enormous strain on the healthcare system. Several prognostic models have been validated, but few of them are implemented in daily practice. The objective of the study was to validate a machine-learning risk prediction model using easy-to-obtain parameters to help identify patients with COVID-19 who are at higher risk of death. The training cohort included all patients admitted to Fondazione Policlinico Gemelli with COVID-19 from March 5, 2020, to November 5, 2020. Afterward, the model was tested on all patients admitted to the same hospital with COVID-19 from November 6, 2020, to February 5, 2021. The primary outcome was in-hospital case-fatality risk. The out-of-sample performance of the model was estimated from the training set in terms of the Area Under the Receiver Operating Characteristic curve (AUROC) and classification matrix statistics, by averaging the results of fivefold cross-validation repeated three times and comparing the results with those obtained on the test set. An explanation analysis of the model, based on SHapley Additive exPlanations (SHAP), is also presented. To assess the subsequent time evolution, the change in paO2/FiO2 (P/F) at 48 h after the baseline measurement was plotted against its baseline value. Among the 921 patients included in the training cohort, 120 died (13%). Variables selected for the model were age, platelet count, SpO2, blood urea nitrogen (BUN), hemoglobin, C-reactive protein, neutrophil count, and sodium. Fivefold cross-validation repeated three times gave an AUROC of 0.87, with classification matrix statistics at the Youden index as follows: sensitivity 0.840, specificity 0.774, negative predictive value 0.971. The model was then tested on a new population (n = 1463) in which the case-fatality rate was 22.6%. On the test set the model showed an AUROC of 0.818, sensitivity 0.813, specificity 0.650, and negative predictive value 0.922. In the first quartile of the predicted risk score (low-risk score group) the case-fatality rate was 1.6%, versus 17.8% in the second and third quartiles (high-risk score group) and 53.5% in the fourth quartile (very high-risk score group). The three risk score groups showed good discrimination for the P/F value at admission, and in the low-risk class a positive correlation was found between baseline P/F and P/F at 48 h after admission (adjusted R-squared = 0.48). We developed a predictive model of death for people with SARS-CoV-2 infection that includes only easy-to-obtain variables (abnormal blood count, BUN, C-reactive protein, sodium, and lower SpO2). It demonstrated good accuracy and high discriminative power. The simplicity of the model makes the risk prediction applicable to patients in the Emergency Department or during hospitalization. Although it is reasonable to assume that the model is also applicable to non-hospitalized persons, only appropriate studies can assess its accuracy for persons at home.
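    The validation scheme (fivefold cross-validation repeated three times, scored by AUROC) is straightforward to reproduce. Below is a minimal sketch on synthetic data, with an arbitrary classifier standing in for the study's model:

```python
# Fivefold cross-validation repeated three times, scored by AUROC.
# Synthetic data stands in for the hospital cohort; the study's actual
# model and variables are not reproduced here.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

# Eight predictors, mirroring the variable count selected for the model;
# class weights approximate the 13% training-cohort fatality rate.
X, y = make_classification(n_samples=921, n_features=8, weights=[0.87],
                           random_state=0)

cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=3, random_state=0)
scores = cross_val_score(RandomForestClassifier(random_state=0), X, y,
                         scoring="roc_auc", cv=cv)
print(f"AUROC: {scores.mean():.3f} +/- {scores.std():.3f}")
```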

    The image biomarker standardization initiative: Standardized convolutional filters for reproducible radiomics and enhanced clinical insights

    Standardizing convolutional filters that enhance specific structures and patterns in medical imaging enables reproducible radiomics analyses, improving consistency and reliability for enhanced clinical insights. Filters are commonly used to enhance specific structures and patterns in images, such as vessels or peritumoral regions, to enable clinical insights beyond the visible image using radiomics. However, their lack of standardization restricts reproducibility and clinical translation of radiomics decision support tools. In this special report, teams of researchers who developed radiomics software participated in a three-phase study (September 2020 to December 2022) to establish a standardized set of filters. The first two phases focused on finding reference filtered images and reference feature values for commonly used convolutional filters: mean, Laplacian of Gaussian, Laws and Gabor kernels, separable and nonseparable wavelets (including decomposed forms), and Riesz transformations. In the first phase, 15 teams used digital phantoms to establish 33 reference filtered images of 36 filter configurations. In phase 2, 11 teams used a chest CT image to derive reference values for 323 of 396 features computed from filtered images using 22 filter and image processing configurations. Reference filtered images and feature values for Riesz transformations were not established. Reproducibility of standardized convolutional filters was validated on a public data set of multimodal imaging (CT, fluorodeoxyglucose PET, and T1-weighted MRI) in 51 patients with soft-tissue sarcoma. At validation, reproducibility of 486 features computed from filtered images using nine configurations × three imaging modalities was assessed using the lower bounds of 95% CIs of intraclass correlation coefficients. Out of 486 features, 458 were found to be reproducible across nine teams with lower bounds of 95% CIs of intraclass correlation coefficients greater than 0.75. In conclusion, eight filter types were standardized with reference filtered images and reference feature values for verifying and calibrating radiomics software packages. A web-based tool is available for compliance checking.
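    As a small illustration of one filter family from the list above, the following sketch applies a Laplacian of Gaussian filter to a synthetic volume and computes a first-order feature from the response map; the phantom, scale parameter, and feature are illustrative, not IBSI reference configurations:

```python
# Laplacian of Gaussian (LoG) response map on a synthetic volume, from
# which a radiomics feature is then computed. Illustrative only.
import numpy as np
from scipy.ndimage import gaussian_laplace

rng = np.random.default_rng(0)
image = rng.normal(size=(64, 64, 64))   # stand-in for a CT volume

# sigma controls the scale of enhanced structures; standardization means
# fixing such parameters so results match across software packages.
filtered = gaussian_laplace(image, sigma=1.5)

# A first-order feature computed on the filtered image
print("mean LoG response:", filtered.mean())
```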

    A framework for event log generation and knowledge representation for process mining in healthcare

    Process mining is of growing importance in the healthcare domain, where the quality of delivered services depends on the suitable and efficient execution of processes encoding the vast amount of clinical knowledge gained via the evidence-based medicine paradigm. In particular, to assess and measure the quality of delivered treatments, there is strong interest in tools able to perform conformance checking. In process mining for the healthcare domain, major challenges are posed by: (i) the complexity of the involved data, which refer to patients' aspects such as disease, behaviour, clinical history, psychology, etc.; (ii) the availability of data, which come from a heterogeneous, fragmented, and loosely connected healthcare system; and (iii) the wide range of available standards for communication (DICOM, IHE, etc.) or data representation (ICD9, SNOMED, etc.). To effectively perform process mining in the healthcare domain, it is crucial to build event logs capturing all the steps of running processes, which have to be derived from the knowledge stored in Electronic Health Records; it is therefore crucial to cope with the aforementioned data-related challenges. In this paper, we aim to support the exploitation of process mining in the healthcare domain, particularly with regard to conformance checking. We therefore introduce a set of specifically designed techniques, provided as a suite of software packages written in R. In particular, the suite provides a flexible and agile way to automatically and reliably build event logs from clinical data sources, and to effectively perform conformance checking.
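    The core artifact such a suite produces, an event log, has a simple tabular shape: one row per event, keyed by case identifier, activity, and timestamp. A minimal sketch follows (in Python rather than the paper's R, with column names and clinical events assumed for illustration):

```python
# Building an event log from EHR-style records: one row per event,
# keyed by case ID, activity, and timestamp, ordered within each case.
import pandas as pd

raw_ehr = [
    {"patient": "P1", "event": "CT simulation", "when": "2021-01-04 09:00"},
    {"patient": "P1", "event": "Treatment plan", "when": "2021-01-06 14:30"},
    {"patient": "P1", "event": "First fraction", "when": "2021-01-08 10:15"},
]

log = (pd.DataFrame(raw_ehr)
         .rename(columns={"patient": "case_id", "event": "activity",
                          "when": "timestamp"})
         .assign(timestamp=lambda d: pd.to_datetime(d["timestamp"]))
         .sort_values(["case_id", "timestamp"]))  # order events per case
print(log)
```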

    An empirical analysis of predictors for workload estimation in healthcare

    The limited availability of resources makes the resource allocation strategy a pivotal aspect of every clinical department. Allocation is usually done on the basis of a workload estimation performed by human experts. Experts have to dedicate a significant amount of time to workload estimation, and the usefulness of the estimates depends on the expert's ability to understand very different conditions and situations. Machine learning-based predictors can help reduce the burden on human experts, and can provide some guarantees, at least in terms of repeatability of the delivered performance. However, it is unclear how good their estimates would be compared to those of experts. In this paper we address this question by exploiting six algorithms for estimating the workload of future activities of a real-world department. Results suggest that this is a promising avenue for future investigations aimed at optimising the use of resources in clinical departments.
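    A minimal sketch of this kind of comparison is shown below, pitting a few off-the-shelf regressors against each other via cross-validated mean absolute error on synthetic data; the paper's six algorithms and its workload data are not reproduced here:

```python
# Comparing several regressors for workload estimation, scored by
# cross-validated MAE on synthetic data. Model choice is illustrative.
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=300, n_features=10, noise=5.0, random_state=0)

models = {
    "linear": LinearRegression(),
    "random_forest": RandomForestRegressor(random_state=0),
    "gradient_boosting": GradientBoostingRegressor(random_state=0),
}
for name, model in models.items():
    mae = -cross_val_score(model, X, y, cv=5,
                           scoring="neg_mean_absolute_error").mean()
    print(f"{name}: MAE = {mae:.1f}")
```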

    On the feasibility of distributed process mining in healthcare

    Process mining is gaining significant importance in the healthcare domain, where the quality of services depends on the suitable and efficient execution of processes. A pivotal challenge for the application of process mining in healthcare comes from the growing importance of multi-centric studies, where privacy-preserving techniques are strongly needed. In this paper, building on top of the well-known Alpha algorithm, we introduce a distributed process mining approach that overcomes problems related to privacy and to data being spread across sites. The introduced technique allows process mining to be performed without sharing any patient-related information, thus ensuring privacy and maximizing the possibility of cooperation among hospitals.
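    The key observation is that the Alpha algorithm works from the directly-follows relation, which can be aggregated from per-site counts without exchanging individual traces. Here is a minimal sketch with illustrative traces, making no claim to match the paper's exact protocol:

```python
# Each hospital computes directly-follows pair counts locally; only
# these aggregates leave the site, never patient-level traces.
from collections import Counter
from itertools import pairwise  # Python 3.10+

def local_directly_follows(traces):
    """Runs inside one hospital; returns aggregate pair counts."""
    counts = Counter()
    for trace in traces:
        counts.update(pairwise(trace))
    return counts

site_a = local_directly_follows([["register", "triage", "treat"]])
site_b = local_directly_follows([["register", "treat", "discharge"]])

# A central aggregator merges the counts; the Alpha algorithm can then
# derive a process model from this global directly-follows relation.
global_df = site_a + site_b
print(global_df)
```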

    Process mining to optimize palliative patient flow in a high-volume radiotherapy department

    Introduction: In radiotherapy, palliative patients are often suboptimally managed and experience long waiting times. Event logs (recorded local files) of palliative patients could support a continuous decision-making system, by means of shared guidelines, to improve patient flow. Based on an event-log analysis, we aimed to understand accurately how to subsequently optimize patient flow in palliative care. Methods: A process mining methodology was applied to palliative patient flow in a high-volume radiotherapy department. Five hundred palliative radiation treatment plans of patients with bone and brain metastases were included in the study, corresponding to 290 patients treated in our department in 2018. Event logs and their attributes were extracted and organized. A process discovery algorithm was applied to describe the real process model that produced the event log. Finally, conformance checking was performed to analyze how well the acquired event-log database fits a predefined theoretical process model. Results: Based on the process discovery algorithm, 53 (10.6%) plans had a dose prescription of 8 Gy, 249 (49.8%) plans had a dose prescription of 20 Gy, and 159 (31.8%) plans had a dose prescription of 30 Gy. The remaining 39 (7.8%) plans had different dose prescriptions. Considering median values, conformance checking demonstrated that the event logs fit the theoretical model. Conclusions: The obtained results partially validate and support the palliative patient care guideline implemented in our department. Process mining can be used to provide new insights that facilitate the improvement of existing palliative patient care flows.
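    The two analysis steps (process discovery, then conformance checking) can be illustrated with the pm4py Python library; the study itself used its own methodology and data, so the toy event log and library choice below are assumptions:

```python
# Process discovery followed by conformance checking with pm4py,
# on a tiny synthetic palliative-pathway event log.
import pandas as pd
import pm4py

df = pd.DataFrame({
    "case:concept:name": ["P1", "P1", "P1", "P2", "P2"],
    "concept:name": ["referral", "planning", "treatment",
                     "referral", "treatment"],
    "time:timestamp": pd.to_datetime([
        "2018-02-01", "2018-02-03", "2018-02-05",
        "2018-03-01", "2018-03-02"]),
})
log = pm4py.convert_to_event_log(df)

# Process discovery: derive a model that explains the observed event log
net, im, fm = pm4py.discover_petri_net_inductive(log)

# Conformance checking: how well does the log fit the discovered model?
fitness = pm4py.fitness_token_based_replay(log, net, im, fm)
print(fitness["log_fitness"])
```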

    Stability of dosomics features extraction on grid resolution and algorithm for radiotherapy dose calculation

    Purpose: Dosomics is a novel texture analysis method to parameterize regions of interest and to produce dose features that encode the spatial and statistical distribution of the radiotherapy dose at higher resolution than organ-level dose-volume histograms. This study investigates the stability of dosomics feature extraction, i.e., the variation of the features due to changes in grid resolution and dose calculation algorithm. Material and Methods: The dataset was generated by considering all possible combinations of four grid resolutions and two dose calculation algorithms for 18 clinically delivered dose distributions, leading to a dataset of 144 3D dose distributions. Dosomics feature extraction was performed with in-house developed software. A total of 214 dosomic features were extracted from four different regions of interest: the PTV, the two closest OARs, and a RING structure. The reproducibility and stability of each extracted dosomic feature (Rfe, Sfe) were analyzed in terms of the intraclass correlation coefficient (ICC) and the coefficient of variation. Results: Dosomics feature extraction was found to be reproducible (ICC > 0.99). Across the combinations of grid resolutions and dose calculation algorithms, dosomic features are more stable in the RING for all the considered feature families. Sfe is higher in the OARs, in particular for the GLSZM feature family. The highest Sfe was found in the PTV, in particular in the GLCM feature family. Conclusion: The stability and reproducibility of dosomic features were evaluated for a representative clinical dose distribution case mix. These results suggest that, for stability, dosomic studies should always report the grid resolution and the dose calculation algorithm used.
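    The stability analysis reduces to computing, for each feature, its variation across the eight grid/algorithm configurations. A minimal sketch on synthetic feature values, with an arbitrary coefficient-of-variation threshold:

```python
# Coefficient of variation (CV) of each dose feature across the
# grid-resolution / algorithm configurations (lower CV = more stable).
# Feature values here are synthetic, for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n_configs, n_features = 8, 214      # 4 grid resolutions x 2 algorithms
features = rng.normal(loc=100, scale=5, size=(n_configs, n_features))

cv = features.std(axis=0, ddof=1) / np.abs(features.mean(axis=0))
stable = cv < 0.05                  # illustrative stability threshold
print(f"{stable.sum()} of {n_features} features below CV threshold")
```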

    EP-1937 Distributed AUC algorithm: a privacy-preserving approach to measure the performance of Cox models

    Recent years have brought both a notable rise in the ability to efficiently harvest vast amounts of information, and a concurrent effort in preserving and actually enforcing the privacy of patients and their related data, as evidenced by the European GDPR. In these conditions, the distributed learning ecosystem has shown great potential in allowing researchers to pool the huge amounts of sensitive data needed to develop and validate prediction models in a privacy-preserving way, and with an eye towards personalized medicine. The aim of this abstract is to propose a privacy-preserving strategy for measuring the performance of the Cox Proportional Hazards (PH) model.
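    One possible realization of such a strategy, not necessarily the algorithm proposed in the abstract, is to let each site bin its Cox risk scores and share only aggregate per-bin event counts, from which the center computes a Mann-Whitney AUC estimate:

```python
# Each site bins its Cox linear predictors locally and shares only
# per-bin counts of events and non-events; the center computes a
# Mann-Whitney AUC estimate from the pooled histogram. Synthetic data.
import numpy as np

def site_summary(scores, events, bin_edges):
    """Runs locally at a site; only aggregate bin counts are shared."""
    pos, _ = np.histogram(scores[events == 1], bins=bin_edges)
    neg, _ = np.histogram(scores[events == 0], bins=bin_edges)
    return pos, neg

def pooled_auc(pos, neg):
    """Mann-Whitney AUC from pooled histograms, with tie correction."""
    # Pairs where the event's score bin exceeds the non-event's bin,
    # plus half credit for pairs falling in the same bin.
    wins = sum(p * neg[:i].sum() for i, p in enumerate(pos))
    ties = (pos * neg).sum()
    return (wins + 0.5 * ties) / (pos.sum() * neg.sum())

rng = np.random.default_rng(0)
edges = np.linspace(-6, 6, 49)
events = rng.integers(0, 2, 500)
scores = rng.normal(loc=events.astype(float), scale=1.0)  # synthetic risks

pos_a, neg_a = site_summary(scores[:250], events[:250], edges)
pos_b, neg_b = site_summary(scores[250:], events[250:], edges)
print("distributed AUC:", pooled_auc(pos_a + pos_b, neg_a + neg_b))
```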