8 research outputs found

    Automatic Generation of Interpretable Lung Cancer Scoring Models from Chest X-Ray Images

    Full text link
    Lung cancer is the leading cause of cancer death worldwide, and early detection is the key to a positive patient prognosis. Although a multitude of studies have demonstrated that machine learning, and particularly deep learning, techniques are effective at automatically diagnosing lung cancer, these techniques have yet to be clinically approved and adopted by the medical community. Most research in this field focuses on the narrow task of nodule detection to provide an artificial radiological second reading. We instead focus on extracting, from chest X-ray images, a wider range of pathologies associated with lung cancer using a computer vision model trained on a large dataset. We then find the set of best-fit decision trees against an independent, smaller dataset for which lung cancer malignancy metadata is provided. For this small inferencing dataset, our best model achieves sensitivity and specificity of 85% and 75%, respectively, with a positive predictive value of 85%, which is comparable to the performance of human radiologists. Furthermore, the decision trees created by this method may serve as a starting point for refinement by medical experts into clinically usable multivariate lung cancer scoring and diagnostic models.
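    The three reported figures follow directly from confusion-matrix counts. A minimal sketch of the definitions (the counts below are illustrative, chosen only so the result matches the stated 85%/75%/85%; they are not the study's data):

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Compute sensitivity, specificity and PPV from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # true-positive rate: malignant cases caught
    specificity = tn / (tn + fp)   # true-negative rate: benign cases cleared
    ppv = tp / (tp + fp)           # positive predictive value of a flagged case
    return sensitivity, specificity, ppv

# Hypothetical counts picked to reproduce the reported 85% / 75% / 85%.
sens, spec, ppv = diagnostic_metrics(tp=85, fn=15, tn=45, fp=15)
# → (0.85, 0.75, 0.85)
```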

    Features of ICU Admission in X-Ray Images of COVID-19 Patients

    Full text link
    X-ray images may present non-trivial features carrying predictive information about which patients will develop severe symptoms of COVID-19. If true, this hypothesis may have practical value in allocating resources to particular patients while using a relatively inexpensive imaging technique. The difficulty of testing such a hypothesis comes from the need for large sets of labelled data, which must be well annotated and should include the post-imaging severity outcome. This paper presents an original methodology for extracting, through interpretable models, semantic features that correlate with severity from a data set with patient ICU admission labels. The methodology employs a neural network trained to recognise lung pathologies to extract the semantic features, which are then analysed with low-complexity models to limit overfitting while increasing interpretability. This analysis points out that only a few features explain most of the variance between patients who did and did not develop severe symptoms. When applied to an unrelated, larger data set with pathology-related clinical notes, the method was shown to be capable of selecting images for the learned features, which could convey some information about their common locations in the lung. Besides demonstrating separability of patients who eventually develop severe symptoms, the proposed methods represent a statistical approach highlighting the importance of features related to ICU admission that may previously have been only qualitatively reported. To handle the limited data sets, notable methodological choices are adopted, such as a state-of-the-art lung segmentation network and low-complexity models that avoid overfitting. The code for the methodology and experiments is also available.

    Editorial for the special issue “Advances in object and activity detection in remote sensing imagery”

    No full text
    Advances in data collection and accessibility, such as unmanned aerial vehicle (UAV) technology, the availability of satellite imagery, and the increasing performance of deep learning models, have had significant impacts on solving various remote sensing problems and have enabled new applications ranging from vegetation and wildlife monitoring to crowd monitoring. This Special Issue contains seven high-quality papers [1,2,3,4,5,6,7] addressing problems related to object detection, semantic segmentation, and multi-modal data alignment. In terms of the methods utilized, it is not surprising that six of the seven papers in this issue involve the application of deep learning. The papers also attest to a powerful aspect of the field: researchers can collaborate and validate their work on open-source models and datasets.

    Vegetation high-impedance faults' high-frequency signatures via sparse coding

    No full text
    High-impedance fault (HIF) behavior in power distribution systems depends on multiple factors, making HIFs a challenging disturbance to model. Factors such as network characteristics and the impedance surface can change the phenomena so intensely that insights about fault behavior may not translate well between faults with different parameters. Signal processing techniques can help reveal patterns from specific types of fault, given the availability of sampled data from real faults. The methodology described in this article applies the shift-invariant sparse coding technique to a data set of staged vegetation HIFs to investigate this hypothesis. The technique facilitates the uncoupling of shifted and convoluted patterns present in the recorded fault signals, and a methodology to correlate them with fault occurrences is proposed. The investigation of under-discussed high-frequency fault signals from a specific type of fault (small-current vegetation HIFs) distinguishes this article from related works. The methodology for validating the found patterns as fault signatures, and their analysis using a particular high-frequency sampling method, are key novel aspects presented. Above all, the evidence of consistent behavior in real vegetation HIFs at higher frequencies that could assist their detection is the main contribution of this article. These results can enhance awareness of the phenomena and support future methodologies dealing with such disturbances.
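    Shift-invariant sparse coding represents a signal as a small number of learned atoms placed at arbitrary offsets. A minimal sketch of just the shift-locating step, using plain cross-correlation on a synthetic signal (illustrative only; the article's technique also learns the atoms themselves, which this sketch does not):

```python
def best_shift(signal, atom):
    """Return the offset where the atom correlates most strongly with the signal."""
    scores = [
        sum(signal[k + i] * atom[i] for i in range(len(atom)))
        for k in range(len(signal) - len(atom) + 1)
    ]
    return max(range(len(scores)), key=scores.__getitem__)

# Synthetic example: a known atom embedded at offset 3 in an otherwise zero signal.
atom = [1.0, -2.0, 1.0]
signal = [0.0] * 3 + atom + [0.0] * 4
shift = best_shift(signal, atom)  # → 3
```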

    MAVIDH Score: A COVID-19 Severity Scoring using Chest X-Ray Pathology Features

    Full text link
    The application of computer vision to COVID-19 diagnosis is complex and challenging, given the risks associated with patient misclassifications. Arguably, the primary value of medical imaging for COVID-19 lies rather in patient prognosis. Radiological images can guide physicians in assessing the severity of the disease, and a series of images from the same patient at different stages can help to gauge disease progression. Based on these premises, a simple method for scoring disease severity from chest X-rays using lung-pathology features is proposed here. As the primary contribution, this method is shown to correlate well with patient severity at different stages of disease progression when contrasted with other existing methods. An original approach to data selection is also proposed, allowing the simple model to learn the severity-related features. It is hypothesized that the resulting competitive performance is related to the method being feature-based rather than reliant on lung involvement or compromise, as others in the literature are. The fact that it is simpler and more interpretable than other end-to-end, more complex models also sets this work apart. As the data set is small, bias-inducing artifacts that could lead to overfitting are minimized through an image normalization and lung segmentation step in the learning phase. A second contribution comes from the validation of the results, conceptualized as the scoring of patient groups at different stages of the disease. Besides performing such validation on an independent data set, the results were also compared with other scoring methods proposed in the literature. The expressive results show that although imaging alone is not sufficient for assessing severity as a whole, there is a strong correlation between the scoring system, termed the MAVIDH score, and patient outcome.
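    As a purely hypothetical illustration of what a feature-based severity score can look like, such a score might combine per-pathology probabilities produced by the vision model. The function, feature names, and weights below are all assumptions for the sketch, not the published MAVIDH formulation:

```python
def severity_score(pathology_probs, weights):
    """Weighted sum of per-pathology probabilities as a scalar severity score.
    Illustrative only; the actual MAVIDH formulation is defined in the paper."""
    return sum(weights[p] * prob for p, prob in pathology_probs.items())

# Hypothetical model outputs and weights for one chest X-ray.
probs = {"consolidation": 0.8, "effusion": 0.3, "infiltration": 0.5}
weights = {"consolidation": 2.0, "effusion": 1.0, "infiltration": 1.0}
score = severity_score(probs, weights)  # 2.0*0.8 + 0.3 + 0.5 = 2.4
```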

    Features of ICU admission in x-ray images of Covid-19 patients

    No full text
    This paper presents an original methodology for extracting, through interpretable models, semantic features from X-ray images that correlate with severity from a data set with patient ICU admission labels. The validation is partially performed by a proposed method that correlates the extracted features with a separate, larger data set that does not contain the ICU-outcome labels. The analysis points out that a few features explain most of the variance between patients who were admitted to the ICU and those who were not. The methods herein can be viewed as a statistical approach highlighting the importance of features related to ICU admission that may previously have been only qualitatively reported. Among the features shown to be over-represented in the external data set were 'Consolidation' (1.67), 'Alveolar' (1.33), and 'Effusion' (1.3). A brief analysis of locations also showed a higher frequency of labels such as 'Bilateral' (1.58) and 'Peripheral' (1.28) in patients labelled as more likely to be admitted to the ICU. To properly handle the limited data sets, a state-of-the-art lung segmentation network was also trained and presented, together with low-complexity, interpretable models to avoid overfitting.
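    One plausible reading of ratios such as 'Consolidation' (1.67) is a label's frequency among likely-ICU patients divided by its frequency in the full set. The sketch below assumes that definition; the report data are made up for illustration and are not the study's:

```python
def over_representation(label, severe_reports, all_reports):
    """Ratio of a label's frequency in the severe group vs. the whole set.
    Values above 1 mean the label is over-represented among severe cases."""
    f_severe = sum(label in r for r in severe_reports) / len(severe_reports)
    f_all = sum(label in r for r in all_reports) / len(all_reports)
    return f_severe / f_all

# Hypothetical per-image label sets (not the study's actual clinical notes).
all_reports = [{"Consolidation"}, {"Effusion"}, {"Consolidation", "Bilateral"},
               {"Alveolar"}, {"Effusion", "Peripheral"}, {"Consolidation"}]
severe = [{"Consolidation", "Bilateral"}, {"Consolidation"}]
ratio = over_representation("Consolidation", severe, all_reports)  # 1.0 / 0.5 = 2.0
```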

    Current perspectives on corneal collagen crosslinking (CXL)

    No full text