5 research outputs found

    Emergency response network design for hazardous materials transportation with uncertain demand

    The transportation of hazardous materials plays an essential role in protecting the environment. Every day, substantial amounts of hazardous materials (hazmats), such as flammable liquids and poisonous gases, must be transported prior to consumption or disposal. Such transportation can result in harmful incidents for people and the environment. Emergency response networks are designed for this reason, with specialist response teams resolving any incident as quickly as possible. This study proposes a new multi-objective model to locate emergency response centres for hazardous materials transportation. Since many real-world applications face uncertainty in input parameters, the proposed model also assumes that referral and demand to such centres are subject to uncertainty, where demand is fuzzy random. The resulting problem is formulated as a nonlinear, non-convex mixed-integer program, which we solve with the NSGA-II method. The performance of the proposed model is examined on several examples using various probability distributions and compared with that of an existing method.
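A multi-objective solver such as NSGA-II ranks candidate solutions by Pareto dominance rather than by a single score. As a minimal sketch (not from the paper; the objective values below are invented for illustration), the non-dominated sorting idea at the core of NSGA-II can be shown for a two-objective minimisation:

```python
def dominates(a, b):
    """True if objective vector a dominates b (minimisation):
    no worse in every objective, strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical (response time, facility cost) pairs for candidate centre placements
candidates = [(3.0, 10.0), (2.0, 12.0), (4.0, 9.0), (2.5, 11.0), (5.0, 15.0)]
front = pareto_front(candidates)  # (5.0, 15.0) is dominated and dropped
```

NSGA-II repeats this ranking over successive populations, using crowding distance to keep the front diverse; the sketch above covers only the dominance test.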

    Automatic summarisation of Instagram social network posts: Combining semantic and statistical approaches

    The proliferation of data and text documents such as articles, web pages, books, and social network posts on the Internet has created a fundamental challenge in text processing known as "automatic text summarisation". Manual processing and summarisation of large volumes of textual data is difficult, expensive, and time-consuming, and is effectively impossible for human users. Text summarisation systems are divided into extractive and abstractive categories. In the extractive method, the final summary is assembled from the important sentences of the document without modification; as a result, similar sentences may be repeated and unresolved pronouns can harm readability. In the abstractive method, the final summary is generated from the meaning of the sentences and words of the document, or of other documents. Many existing works use extractive or abstractive methods to summarise collections of web documents, each with advantages and disadvantages in the similarity and size of the resulting summaries. In this work, a crawler was developed to extract popular text posts from the Instagram social network, with appropriate preprocessing, and a set of extractive and abstractive algorithms were combined to show how each can be used. Observations on 820 popular text posts from Instagram show that the proposed system achieves an accuracy of 80%.
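The extractive approach described above can be illustrated with a minimal frequency-based sketch (an assumption for illustration, not the paper's combined semantic-statistical method): score each sentence by the summed corpus frequency of its words and keep the top-scoring sentences in their original order.

```python
import re
from collections import Counter

def extractive_summary(text, n_sentences=2):
    """Toy extractive summariser: score sentences by summed word
    frequency and return the top-n sentences in original order."""
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    freq = Counter(re.findall(r'\w+', text.lower()))
    scored = [(sum(freq[w] for w in re.findall(r'\w+', s.lower())), i, s)
              for i, s in enumerate(sentences)]
    top = sorted(scored, reverse=True)[:n_sentences]
    # Re-sort the selected sentences by their position in the source text
    return ' '.join(s for _, i, s in sorted(top, key=lambda t: t[1]))
```

A real system would add stop-word removal, semantic similarity, and redundancy control; this sketch shows only the sentence-scoring skeleton that extractive methods share.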

    Global, regional, and national burden of colorectal cancer and its risk factors, 1990–2019: a systematic analysis for the Global Burden of Disease Study 2019

    Funding: F Carvalho and E Fernandes acknowledge support from Fundação para a Ciência e a Tecnologia, I.P. (FCT), within the scope of the projects UIDP/04378/2020 and UIDB/04378/2020 of the Research Unit on Applied Molecular Biosciences (UCIBIO), the project LA/P/0140/2020 of the Associate Laboratory Institute for Health and Bioeconomy (i4HB), and, through FCT/MCTES, the project UIDB/50006/2020. J Conde acknowledges the European Research Council Starting Grant (ERC-StG-2019-848325). V M Costa acknowledges the grant SFRH/BHD/110001/2015, funded by Portuguese national funds through Fundação para a Ciência e a Tecnologia (FCT), I.P., under the Norma Transitória DL57/2016/CP1334/CT0006.

    Differentiation of COVID‐19 pneumonia from other lung diseases using CT radiomic features and machine learning: A large multicentric cohort study

    To derive and validate an effective machine learning and radiomics‐based model to differentiate COVID‐19 pneumonia from other lung diseases using a large multi‐centric dataset. In this retrospective study, we collected 19 private and five public datasets of chest CT images, totalling 26 307 images (15 148 COVID‐19; 9657 other lung diseases including non‐COVID‐19 pneumonia, lung cancer, and pulmonary embolism; 1502 normal cases). We tested 96 machine learning‐based models by cross‐combining four feature selectors (FSs) and eight dimensionality reduction techniques with eight classifiers. We trained and evaluated our models using three different strategies: #1, the whole dataset (15 148 COVID‐19 and 11 159 other); #2, a new dataset after excluding healthy individuals and COVID‐19 patients who did not have RT‐PCR results (12 419 COVID‐19 and 8278 other); and #3, only non‐COVID‐19 pneumonia patients and a random sample of COVID‐19 patients (3000 COVID‐19 and 2582 other) to provide balanced classes. The best models were chosen by the one‐standard‐deviation rule in 10‐fold cross‐validation and evaluated on the held‐out test sets for reporting. In strategy #1, the Relief FS combined with a random forest (RF) classifier resulted in the highest performance (accuracy = 0.96, AUC = 0.99, sensitivity = 0.98, specificity = 0.94, PPV = 0.96, and NPV = 0.96). In strategy #2, the Recursive Feature Elimination (RFE) FS and RF classifier combination resulted in the highest performance (accuracy = 0.97, AUC = 0.99, sensitivity = 0.98, specificity = 0.95, PPV = 0.96, NPV = 0.98). Finally, in strategy #3, the ANOVA FS and RF classifier combination resulted in the highest performance (accuracy = 0.94, AUC = 0.98, sensitivity = 0.96, specificity = 0.93, PPV = 0.93, NPV = 0.96). Lung radiomic features combined with machine learning algorithms can enable the effective diagnosis of COVID‐19 pneumonia in CT images without the use of additional tests.
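The feature-selector/classifier cross-combination described above can be sketched with scikit-learn. This is a minimal illustration, not the paper's pipeline: the feature matrix below is synthetic (generated with `make_classification` as a stand-in for radiomic features), and only one of the 96 combinations, ANOVA feature selection plus a random forest, is shown.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

# Synthetic stand-in for a radiomics matrix: rows = scans, columns = features
X, y = make_classification(n_samples=300, n_features=50,
                           n_informative=8, random_state=0)

# One FS/classifier combination: ANOVA F-test selection + random forest
pipe = Pipeline([
    ("fs", SelectKBest(f_classif, k=10)),
    ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
])

# 10-fold cross-validated AUC, mirroring the evaluation protocol
scores = cross_val_score(pipe, X, y, cv=10, scoring="roc_auc")
```

Swapping the `"fs"` step for `RFE` or another selector, and the `"rf"` step for a different classifier, reproduces the cross-combination scheme; wrapping the selector in a `Pipeline` also ensures it is refit inside each fold, avoiding selection leakage.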

    COVID-19 prognostic modeling using CT radiomic features and machine learning algorithms: Analysis of a multi-institutional dataset of 14,339 patients

    Background: We aimed to analyze the prognostic power of CT-based radiomics models using data of 14,339 COVID-19 patients. Methods: Whole lung segmentations were performed automatically using a deep learning-based model to extract 107 intensity and texture radiomics features. We used four feature selection algorithms and seven classifiers. We evaluated the models using ten different splitting and cross-validation strategies, including non-harmonized and ComBat-harmonized datasets. The sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) were reported. Results: In the test dataset (4,301) consisting of CT and/or RT-PCR positive cases, AUC, sensitivity, and specificity of 0.83 ± 0.01 (CI95%: 0.81-0.85), 0.81, and 0.72, respectively, were obtained by the ANOVA feature selector + Random Forest (RF) classifier. Similar results were achieved in RT-PCR-only positive test sets (3,644). In the ComBat-harmonized dataset, the Relief feature selector + RF classifier resulted in the highest AUC of 0.83 ± 0.01 (CI95%: 0.81-0.85), with a sensitivity and specificity of 0.77 and 0.74, respectively. ComBat harmonization did not yield a statistically significant improvement over the non-harmonized dataset. In leave-one-center-out validation, the combination of the ANOVA feature selector and RF classifier resulted in the highest performance. Conclusion: Lung CT radiomics features can be used for robust prognostic modeling of COVID-19. The predictive power of the proposed CT radiomics model is more reliable when using a large multicentric heterogeneous dataset, and it may be used prospectively in clinical settings to manage COVID-19 patients.
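Leave-one-center-out validation, as used above, trains on all institutions but one and tests on the held-out institution, probing how well a model transfers across centres. A minimal sketch with scikit-learn's `LeaveOneGroupOut` follows; the data and the four-centre grouping are synthetic assumptions, not the study's cohort.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneGroupOut

# Synthetic multi-centre data; `centers` labels each sample's institution
X, y = make_classification(n_samples=400, n_features=30,
                           n_informative=6, random_state=1)
centers = np.repeat(np.arange(4), 100)  # four hypothetical centres

# One fold per centre: train on three institutions, test on the fourth
logo = LeaveOneGroupOut()
aucs = []
for train_idx, test_idx in logo.split(X, y, groups=centers):
    clf = RandomForestClassifier(n_estimators=100, random_state=1)
    clf.fit(X[train_idx], y[train_idx])
    proba = clf.predict_proba(X[test_idx])[:, 1]
    aucs.append(roc_auc_score(y[test_idx], proba))
```

Per-centre AUCs that vary widely would signal centre-specific effects, which is exactly the scenario harmonization methods such as ComBat are meant to address.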