51 research outputs found

    Development of minimal basic data set to report COVID-19

    Background: Effective surveillance of COVID-19 highlights the importance of rapid, valid, and standardized information for crisis monitoring and prompt clinical intervention. A minimal basic data set (MBDS) is a set of metrics to be collected in a standardized way, allowing aggregated use of data for clinical purposes and research. Data standardization enables accurate comparison of collected data and, accordingly, better generalization of findings. The aim of this study was to establish a core set of data to characterize COVID-19 and consolidate clinical practice. Methods: A 3-step sequential approach was used: (1) a preliminary list of data elements was compiled from existing information systems and data sets; (2) a systematic literature review was conducted to extract evidence supporting the development of the MBDS; and (3) a 2-round Delphi survey was conducted to reach consensus on the data elements to include in the COVID-19 MBDS and to validate it. Results: In total, 643 studies were identified, of which 38 met the inclusion criteria, and a total of 149 items were identified in the data sources. The data elements were classified by 3 experts and validated via a 2-round Delphi procedure. Finally, 125 data elements were confirmed as the MBDS. Conclusion: The development of a COVID-19 MBDS could provide a basis for meaningful evaluation, reporting, and benchmarking of COVID-19 across regions and countries. It could also support scientific collaboration among care providers in the field, which may lead to improved quality of documentation, clinical care, and research outcomes.
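    The 2-round Delphi procedure above filters a candidate list of data elements down to a consensus set. A minimal sketch of one such consensus filter, assuming a 5-point Likert scale and a 75% agreement threshold (the abstract does not state the study's actual consensus criteria, and the items and ratings below are illustrative):

```python
# Sketch of a Delphi-round consensus filter for candidate data elements.
# The 75% agreement threshold and the 5-point Likert scale are assumptions;
# the item names and ratings are illustrative, not from the study.

def delphi_consensus(ratings_by_item, agree_at=4, threshold=0.75):
    """Keep items where >= threshold of experts rated >= agree_at."""
    retained = []
    for item, ratings in ratings_by_item.items():
        agreement = sum(r >= agree_at for r in ratings) / len(ratings)
        if agreement >= threshold:
            retained.append(item)
    return retained

ratings = {
    "fever":          [5, 4, 4, 5, 3],  # 4/5 experts agree -> retained
    "travel_history": [3, 2, 4, 3, 3],  # 1/5 agree -> dropped
}
print(delphi_consensus(ratings))  # ['fever']
```

    In a second round, items near the threshold would typically be re-rated with the round-one statistics fed back to the panel.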

    Performance evaluation of selected decision tree algorithms for COVID-19 diagnosis using routine clinical data

    Background: The novel 2019 coronavirus disease (COVID-19) poses a great threat to global public health and the economy. Early detection of COVID-19 is the key to its treatment and to mitigating transmission of the virus. Given that machine learning (ML) could be potentially useful in COVID-19 identification, we compared 7 decision tree (DT) algorithms to select the best clinical diagnostic model. Methods: A hospital-based retrospective dataset was used to train the selected DT algorithms. The performance of the DT models was measured using criteria such as accuracy, sensitivity, specificity, receiver operating characteristic (ROC), and precision-recall curves (PRC). Finally, the best decision model was selected by comparing these performance criteria. Results: Based on the Gini index (GI) scoring model, 13 diagnostic criteria, including lung lesion existence (GI = 0.217), fever (GI = 0.205), history of contact with suspected people (GI = 0.188), O2 saturation rate in the blood (GI = 0.181), rhinorrhea (GI = 0.177), dyspnea (GI = 0.177), cough (GI = 0.159), history of taking immunosuppressive drugs (GI = 0.145), history of respiratory failure (ARDS) (GI = 0.141), lung lesion situation (GI = 0.133) and appearance (GI = 0.126), diarrhea (GI = 0.112), and nausea and vomiting (GI = 0.092), were obtained as the most important criteria for diagnosing COVID-19. The results indicated that J-48, with accuracy = 0.85, F-score = 0.85, ROC = 0.926, and PRC = 0.93, had the best performance for diagnosing COVID-19. Conclusion: According to the empirical results, it is promising to implement J-48 in health care settings to increase the accuracy and speed of COVID-19 diagnosis. © 2021. Iran University of Medical Sciences
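    The Gini index (GI) scores above come from impurity-based splitting in CART-style decision trees. A minimal sketch of Gini-impurity scoring for a single binary feature, with toy data that are illustrative rather than taken from the study:

```python
# Sketch of Gini-impurity scoring used by CART-style decision trees to rank
# diagnostic criteria. The toy feature/label vectors are illustrative.

def gini(labels):
    """Gini impurity of a label list: 1 - sum(p_k^2)."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def gini_gain(feature, labels):
    """Impurity reduction achieved by splitting on a binary feature."""
    n = len(labels)
    left = [y for x, y in zip(feature, labels) if x]
    right = [y for x, y in zip(feature, labels) if not x]
    weighted = (len(left) / n) * gini(left) + (len(right) / n) * gini(right)
    return gini(labels) - weighted

has_fever = [1, 1, 1, 0, 0, 0]   # feature: fever present?
covid     = [1, 1, 0, 0, 0, 1]   # label: COVID-19 confirmed?
print(round(gini_gain(has_fever, covid), 3))  # 0.056
```

    Features with larger impurity reduction sit higher in the tree, which is why the GI ranking above doubles as a feature-importance ordering.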

    Developing a Minimum Dataset for a Mobile-based Contact Tracing System for the COVID-19 Pandemic

    Context: Contact tracing is a cornerstone community-based measure for augmenting public health response preparedness for epidemic diseases such as the current coronavirus disease 2019 (COVID-19). However, there is no agreed-upon data collection tool for unified reporting of COVID-19 contact tracing efforts at the national level. Objectives: The purpose of this research was to determine the COVID-19 Contact Tracing Minimal Dataset (COV-CT-MDS) as a prerequisite to developing a mobile-based contact tracing system for the COVID-19 outbreak. Methods: This study was carried out in 2020 using a literature review coupled with a two-round Delphi survey. First, probable data elements were identified through an extensive literature review in scientific databases, including PubMed, Scopus, ProQuest, Science Direct, and Web of Science (WOS). Then, the core data elements were validated using a two-round Delphi survey. Results: Out of 388 articles, 24 were eligible for inclusion in the study. Based on full-text review of the included articles and the Delphi survey, the COV-CT-MDS was organized into two data sections (clinical and administrative), nine data classes, and 81 data fields. Conclusions: COV-CT-MDS is an efficient and valid tool that could provide a basis for collecting comprehensive and standardized data on COVID-19 contact tracing. It could also support scientific teamwork among health care authorities, which may lead to enhanced quality of documentation, research, and surveillance outcomes. © 2021, Author(s)

    Internet of Things (IoT) Adoption Model for Early Identification and Monitoring of COVID-19 Cases: A Systematic Review

    Background: The 2019 coronavirus disease (COVID-19) is a mysterious and highly infectious disease that was declared a pandemic by the World Health Organization. The virus poses a great threat to global health and the economy. Currently, in the absence of effective treatment or a vaccine, leveraging advanced digital technologies is of great importance. In this respect, the Internet of Things (IoT) is useful for smart monitoring and tracing of COVID-19. Therefore, in this study, we reviewed the available literature on IoT-enabled solutions for tackling the current COVID-19 outbreak. Methods: This systematic literature review was conducted through an electronic search of articles in the PubMed, Google Scholar, ProQuest, Scopus, Science Direct, and Web of Science databases to formulate a complete view of IoT-enabled solutions for monitoring and tracing of COVID-19 according to the FITT (Fit between Individual, Task, and Technology) model. Results: In the literature review, 28 articles were identified as eligible for analysis. This review provides an overview of the technological adoption of IoT for COVID-19, identifying significant users (primary or secondary); required technologies, including technical platform, exchange, processing, storage, and added-value technologies; and system tasks or applications at the "on-body," "in-clinic/hospital," and even "in-community" levels. Conclusions: The use of IoT, along with advanced intelligence and computing technologies, for ubiquitous monitoring and tracking of patients in quarantine has made it a critical asset in fighting the spread of the current COVID-19 pandemic and even future pandemics.

    Determining of optimal telemedicine communication technologies with regards to network interactive modes: A Delphi survey

    Introduction: Different communication services with varying bandwidth are used to send information in telemedicine. Bandwidth management, as defined in telemedicine technology, refers to using the appropriate communication services according to the type of transaction and the size of the information to be transferred. Communication services must be selected so as to minimize latency in the process of sending information while maintaining maximum cost-effectiveness. Material and Methods: This applied research was conducted in 2019 through a questionnaire survey of 60 participants specializing in health information technology and medical informatics, working in hospitals and educational institutions in Tehran. A Likert rating scale was used to quantify the research questions. Finally, by analyzing the weighted averages, this study identified the desirable communication services corresponding to the transactions required for deployment of telemedicine. Results: Transfer of multimedia information using synchronized teleconferencing via primary low-bandwidth technologies had the lowest average score (0.96), and transmission of hybrid data (a combination of picture, text, and multimedia templates in synchronous or asynchronous modes) via Asymmetric Digital Subscriber Line (ADSL) technology had the highest (4.96). Conclusion: Selection of communication services, with regard to their convergence with the information size and the type of application, plays a significant role in controlling network traffic and preventing latency in the process of sending information in telemedicine. High-bandwidth communication services should be used for telemedicine systems that serve many users, as well as those in which real-time transmission of information is essential. With regard to the cost-effectiveness of sending information, it is necessary to use low-cost, low-bandwidth services for the transfer of lightweight information, as well as for asynchronous applications in which latency in the transfer process is not detrimental. © 2021, Iranian Medical Informatics Association (IrMIA). All rights reserved
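    The service ranking above rests on Likert weighted averages. A minimal sketch of that scoring, with illustrative response counts rather than the survey's actual tallies:

```python
# Sketch of Likert weighted-average scoring used to rank communication
# services. The response counts below are illustrative, not survey data.

def weighted_average(counts):
    """counts maps a Likert score (1-5) to the number of respondents choosing it."""
    total = sum(counts.values())
    return sum(score * n for score, n in counts.items()) / total

# Illustrative tallies for one service/transaction pairing
adsl_hybrid = {5: 48, 4: 2}  # near-unanimous top ratings
print(round(weighted_average(adsl_hybrid), 2))  # 4.96
```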

    Designing a standardized framework for data integration between zoonotic diseases systems: Towards one health surveillance

    Background: Zoonotic diseases, or zoonoses, account for a considerable proportion of infectious disease outbreaks; their effective surveillance demands coordinated action by human and animal health organizations. However, zoonosis surveillance data are collected individually in standalone information systems for either humans or animals, with varied structures, processes, and applications. In moving towards one health (OH) surveillance, integrating zoonosis data may help prevent and control these diseases. Therefore, this research aimed to determine the essential data elements and a consistent reporting template as a step towards interoperability. Material and methods: In this study, first, the zoonotic diseases minimum dataset (ZD-MDS) was identified through a comprehensive literature review coupled with expert agreement. Then, the ZD-MDS was mapped to structured clinical vocabularies. The health level seven-clinical document architecture (HL7-CDA) standard was used to define an interoperable, human- and machine-readable reporting template. Results: The ZD-MDS was divided into administrative and clinical sections with five and seven data classes and a total of 38 and 57 data elements, respectively. The corresponding data values and systematized nomenclature of medicine-clinical terms (SNOMED-CT) codes were then defined for each data element. The reporting template was structured according to the three sections of the CDA template, using an extensible markup language (XML) hierarchy and tags. Conclusion: Our study suggests that zoonosis surveillance could be improved by integrating and exchanging data across databases in human and animal health organizations. The developed template provides a comprehensive and interoperable dataset, making data more comparable and reportable across multiple studies and settings. © 202
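    A minimal sketch of emitting a CDA-style XML fragment for one coded data element, using only Python's standard library. The tag names, attributes, and the SNOMED-CT code below are illustrative placeholders, not the template or value sets defined in the study:

```python
# Sketch of building an HL7-CDA-flavored XML fragment with one coded
# observation. Element names and the SNOMED-CT code are illustrative.
import xml.etree.ElementTree as ET

doc = ET.Element("ClinicalDocument", xmlns="urn:hl7-org:v3")
component = ET.SubElement(doc, "component")
obs = ET.SubElement(component, "observation")
ET.SubElement(obs, "code",
              code="40733004",                 # placeholder SNOMED-CT code
              codeSystemName="SNOMED-CT",
              displayName="Infectious disease")
ET.SubElement(obs, "value", value="positive")

xml_str = ET.tostring(doc, encoding="unicode")
print(xml_str)
```

    Binding every data element to a standard code system in this way is what makes the resulting reports machine-comparable across human and animal health databases.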

    Developing a clinical decision support system based on the fuzzy logic and decision tree to predict colorectal cancer

    Background: Colorectal cancer (CRC) is the most prevalent digestive system-related cancer and has become one of the deadliest diseases worldwide. Given the poor prognosis of CRC, it is of great importance to predict this disease more accurately. Early CRC detection using computational technologies can significantly improve patients' overall survival. Hence, this study aimed to develop a fuzzy logic-based clinical decision support system (FL-based CDSS) for the detection of CRC patients. Methods: This study was conducted in 2020 using data on CRC and non-CRC patients, comprising 1162 cases from the Masoud internal clinic, Tehran, Iran. The chi-square method was used to determine the most important risk factors for predicting CRC. Furthermore, the C4.5 decision tree was used to extract the rules. Finally, the FL-based CDSS was designed in a MATLAB environment, and its performance was evaluated using a confusion matrix. Results: Eleven features were selected as the most important factors. After fuzzification of the qualitative variables and evaluation of the decision support system (DSS) using the confusion matrix, the accuracy, specificity, and sensitivity of the system were 0.96, 0.97, and 0.96, respectively. Conclusion: We conclude that developing a CDSS in this field can provide earlier diagnosis of CRC, leading to timely treatment, which could decrease the CRC mortality rate in the community.
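    The confusion-matrix evaluation reported above reduces to a few ratios over the TP/FP/TN/FN counts. A minimal sketch, with illustrative counts rather than the study's actual matrix:

```python
# Sketch of confusion-matrix evaluation for a binary classifier:
# accuracy, sensitivity, and specificity from TP/FP/TN/FN counts.
# The counts below are illustrative, not the study's results.

def evaluate(tp, fp, tn, fn):
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # true-positive rate (recall)
    specificity = tn / (tn + fp)   # true-negative rate
    return accuracy, sensitivity, specificity

acc, sens, spec = evaluate(tp=96, fp=4, tn=96, fn=4)
print(round(acc, 2), round(sens, 2), round(spec, 2))  # 0.96 0.96 0.96
```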

    Design and development of a web-based registry for Coronavirus (COVID-19) disease

    Background: The 2019 coronavirus disease (COVID-19) is a highly contagious disease associated with high morbidity and mortality worldwide. The accumulation of data through a prospective clinical registry enables public health authorities to make informed decisions based on real evidence obtained from surveillance of COVID-19. Such a registry is also fundamental to providing a robust infrastructure for future research. The purpose of this study was to design a registry and its minimum data set (MDS) as a valid and reliable data source for reporting and benchmarking COVID-19. Methods: This cross-sectional, descriptive study provides a template for the MDS to be included in a COVID-19 registry. This was done through an extensive literature review and a 2-round Delphi survey to validate the content, resulting in a web-based registry implemented in Visual Studio 2019 with a database designed in Structured Query Language (SQL). Results: The MDS of the COVID-19 registry was categorized into an administrative part with 3 sections, including 30 data elements, and a clinical part with 4 sections, including 26 data elements. Furthermore, a web-based registry with a modular, layered architecture was designed based on the final data classes and elements. Conclusion: To the best of our knowledge, this COVID-19 registry is the first instrument designed from an information management perspective in Iran and can become a homogeneous and reliable infrastructure for collecting data on COVID-19. We hope this approach will facilitate epidemiological surveys and support policymakers in better planning for the monitoring of patients with COVID-19. © Iran University of Medical Sciences

    Predicting the Need for Intubation among COVID-19 Patients Using Machine Learning Algorithms: A Single-Center Study

    Background: Owing to the shortage of ventilators, there is a crucial demand for an objective and accurate prognosis for critically ill 2019 coronavirus disease (COVID-19) patients, who may require a mechanical ventilator (MV). This study aimed to construct a predictive model using machine learning (ML) algorithms to help frontline clinicians better triage endangered patients and prioritize those who would need MV. Methods: In this retrospective single-center study, the data of 482 COVID-19 patients from February 9, 2020, to December 20, 2020, were analyzed by several ML algorithms, including multilayer perceptron (MLP), logistic regression (LR), J-48 decision tree, and Naïve Bayes (NB). First, the most important clinical variables were identified using the chi-square test at P < 0.01. Then, by comparing the ML algorithms' performance using evaluation criteria including TP-rate, FP-rate, precision, recall, F-score, MCC, and Kappa, the best-performing algorithm was identified. Results: Predictive models were trained using 15 validated features, including cough, contusion, oxygen therapy, dyspnea, loss of taste, rhinorrhea, blood pressure, absolute lymphocyte count, pleural fluid, activated partial thromboplastin time, blood glucose, white cell count, cardiac diseases, length of hospitalization, and other underlying diseases. The results indicated that J-48, with F-score = 0.868 and AUC = 0.892, yielded the best performance for predicting the intubation requirement. Conclusion: ML algorithms have the potential to improve on traditional clinical criteria for forecasting the necessity of intubation in hospitalized COVID-19 patients. Such ML-based prediction models may help physicians optimize the timing of intubation, better allocate MV resources and personnel, and improve patients' clinical status. © 2022 Iran University of Medical Sciences. All Rights Reserved
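    The chi-square screening step above can be sketched for a single binary feature against the intubation outcome. The 2x2 counts below are illustrative, not the study's data; at 1 degree of freedom, statistics above roughly 6.63 correspond to P < 0.01:

```python
# Sketch of a 2x2 chi-square test for feature screening.
# Counts are illustrative, not taken from the study's dataset.

def chi_square_2x2(a, b, c, d):
    """a, b = intubated/not intubated with the feature;
    c, d = intubated/not intubated without it."""
    n = a + b + c + d
    observed = [a, b, c, d]
    expected = [
        (a + b) * (a + c) / n, (a + b) * (b + d) / n,
        (c + d) * (a + c) / n, (c + d) * (b + d) / n,
    ]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# e.g. a hypothetical dyspnea-vs-intubation table
stat = chi_square_2x2(a=40, b=10, c=15, d=35)
print(round(stat, 2))  # 25.25 -> well above 6.63, so retained at P < 0.01
```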

    Developing an artificial neural network for detecting COVID-19 disease

    BACKGROUND: Since December 2019, cases of an atypical pneumonia, termed COVID-19, have been increasing exponentially across the world, posing a great threat and challenge to world health and the economy. Medical specialists face uncertainty in making judgment-based decisions for COVID-19. Thus, this study aimed to establish an intelligent model based on artificial neural networks (ANNs) for diagnosing COVID-19. MATERIALS AND METHODS: Using a single-center registry, we studied the records of 250 confirmed COVID-19 cases and 150 negative cases from February 9, 2020, to October 20, 2020. The correlation coefficient technique was used to determine the most significant variables for the ANN model; variables at P < 0.05 were used for model construction. We applied the back-propagation technique to train a neural network on the dataset. After comparing different neural network configurations, the best ANN configuration was selected and its performance evaluated. RESULTS: After the feature selection process, a total of 18 variables were determined as the most relevant predictors for developing the ANN models. The results indicated that an architecture of 9-10-15-2 (10 and 15 neurons in hidden layers 1 and 2, respectively), with an area under the curve of 0.982, a sensitivity of 96.4, a specificity of 90.6, and an accuracy of 94, was the best configuration for COVID-19 diagnosis. CONCLUSION: The proposed ANN-based clinical decision support system could be considered a suitable computational technique for frontline practitioners for early detection, effective intervention, and possibly a reduction of mortality in patients with COVID-19. © 2022 Journal of Education and Health Promotion
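    The reported 9-10-15-2 configuration can be illustrated with a plain forward pass. This is a hedged sketch with random weights and a sigmoid activation; the study's trained weights and exact activation function are not given in this abstract:

```python
# Sketch of a forward pass through a 9-10-15-2 fully connected network.
# Weights are random and input values are placeholders; this illustrates
# the reported topology only, not the trained model.
import math
import random

random.seed(0)

def layer(inputs, n_out):
    """One dense layer with random weights and sigmoid activation."""
    outputs = []
    for _ in range(n_out):
        weights = [random.uniform(-1, 1) for _ in inputs]
        bias = random.uniform(-1, 1)
        z = sum(w * x for w, x in zip(weights, inputs)) + bias
        outputs.append(1 / (1 + math.exp(-z)))  # sigmoid
    return outputs

x = [0.5] * 9          # 9 input features (placeholder values)
h1 = layer(x, 10)      # hidden layer 1: 10 neurons
h2 = layer(h1, 15)     # hidden layer 2: 15 neurons
out = layer(h2, 2)     # 2 outputs: COVID-19 positive / negative
print(len(out))        # 2
```

    Back-propagation would then adjust each layer's weights by gradient descent on the diagnostic error, which is the training technique the study reports.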