140 research outputs found
Tuberculosis Disease Detection through CXR Images based on Deep Neural Network Approach
Tuberculosis (TB) is a disease that, if left untreated for an extended period, can be fatal. Early TB detection can be aided by a deep learning ensemble. In previous work, ensemble classifiers were trained only on images that shared similar characteristics. For an ensemble to be useful, its members must produce a diverse set of errors, which can be achieved by combining a number of different classifiers and/or features. In light of this, this study constructs a new framework for segmenting and identifying TB in human chest X-rays. Chest X-rays were first collected from conventional web databases. The collected images are then passed to Swin ResUnet3 for segmentation. The segmented chest X-rays are fed to the Multi-scale Attention-based DenseNet with Extreme Learning Machine (MAD-ELM) model in the detection stage to diagnose tuberculosis efficiently from human chest X-rays. Because it increased the diversity of errors made by the base classifiers, the proposed variant of the approach detected tuberculosis more effectively. The proposed ensemble method achieved an accuracy of 94.2 percent, comparable to results obtained in previous work.
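The error-diversity idea behind this kind of ensemble can be sketched in a few lines. This is a minimal illustration only: heterogeneous base classifiers with different inductive biases are combined by majority vote so their errors are less likely to coincide. The synthetic data and scikit-learn classifiers are stand-ins; the paper's MAD-ELM detector and Swin ResUnet3 segmenter are not modeled here.

```python
# Sketch: ensembles benefit when base classifiers make *diverse* errors.
# Three classifiers with different decision mechanisms are combined by
# majority ("hard") voting over predicted labels.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Synthetic binary-classification data standing in for CXR features.
X, y = make_classification(n_samples=400, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),          # linear boundary
        ("rf", RandomForestClassifier(n_estimators=50, random_state=0)),  # axis-aligned splits
        ("knn", KNeighborsClassifier(n_neighbors=5)),        # local distance-based
    ],
    voting="hard",  # majority vote over predicted labels
)
ensemble.fit(X_tr, y_tr)
print(f"ensemble accuracy: {ensemble.score(X_te, y_te):.3f}")
```

Hard voting is the simplest way to exploit error diversity; soft voting over predicted probabilities is a common alternative when the base classifiers are well calibrated.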
Recognition of corona virus disease (COVID-19) using deep learning network
Coronavirus disease (COVID-19) has had an enormous impact in recent months, causing thousands of deaths around the world. This has prompted a rapid research effort to deal with the new virus, and within computer science many technical studies have tackled it using image processing algorithms. In this work, we introduce a method based on deep learning networks to classify COVID-19 from X-ray images. Our results are encouraging for distinguishing infected patients from normal ones. We conduct our experiments on a recent Kaggle dataset of COVID-19 X-ray images, using the ResNet50 deep learning network with 5- and 10-fold cross-validation. The experimental results show that 5 folds give more effective results than 10 folds, with an accuracy rate of 97.28%.
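The evaluation protocol described above (comparing 5-fold against 10-fold cross-validation) can be sketched independently of the network. In this hedged illustration a LogisticRegression stands in for the ResNet50 model and synthetic data for the Kaggle CXR images; only the stratified k-fold machinery is the point.

```python
# Sketch of k-fold cross-validation comparison (5 vs 10 folds).
# The classifier and data are placeholders, not the paper's setup.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

X, y = make_classification(n_samples=300, n_features=15, random_state=0)
clf = LogisticRegression(max_iter=1000)

for k in (5, 10):
    # Stratified folds preserve the class balance in every split.
    cv = StratifiedKFold(n_splits=k, shuffle=True, random_state=0)
    scores = cross_val_score(clf, X, y, cv=cv)
    print(f"{k:2d}-fold CV accuracy: {scores.mean():.4f} +/- {scores.std():.4f}")
```

With fewer folds each validation split is larger, which can make the per-fold estimate more stable; that is one plausible reason a 5-fold score may come out ahead of a 10-fold score on a modest dataset.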
A Survey on Deep Learning in Medical Image Analysis
Deep learning algorithms, in particular convolutional networks, have rapidly
become a methodology of choice for analyzing medical images. This paper reviews
the major deep learning concepts pertinent to medical image analysis and
summarizes over 300 contributions to the field, most of which appeared in the
last year. We survey the use of deep learning for image classification, object
detection, segmentation, registration, and other tasks and provide concise
overviews of studies per application area. Open challenges and directions for
future research are discussed.
A deep learning based dual encoder–decoder framework for anatomical structure segmentation in chest X-ray images
Automated multi-organ segmentation plays an essential part in the computer-aided diagnosis (CAD) of chest X-ray fluoroscopy. However, developing a CAD system for anatomical structure segmentation remains challenging due to several indistinct structures, variations in anatomical shape among individuals, the presence of medical tools such as pacemakers and catheters, and various artifacts in chest radiographic images. In this paper, we propose a robust deep learning segmentation framework for anatomical structures in chest radiographs that utilizes a dual encoder–decoder convolutional neural network (CNN). The first network in the dual encoder–decoder structure uses a pre-trained VGG19 as the encoder for the segmentation task. The pre-trained encoder output is fed into a squeeze-and-excitation (SE) block to boost the network's representation power, enabling it to perform dynamic channel-wise feature calibration. The calibrated features are passed into the first decoder to generate the mask. We integrate the generated mask with the input image and pass it through a second encoder–decoder network with recurrent residual blocks and an attention gate module to capture additional contextual features and improve the segmentation of smaller regions. Three public chest X-ray datasets are used to evaluate the proposed method for multi-organ segmentation (heart, lungs, and clavicles) and single-organ segmentation (lungs only). The experimental results show that our technique outperforms existing multi-class and single-class segmentation methods.
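The channel-wise recalibration step mentioned above can be sketched in NumPy. This is a minimal squeeze-and-excitation (SE) block in the style of the original SE paper, not the authors' exact implementation; the shapes, weights, and reduction ratio are illustrative.

```python
# Minimal NumPy sketch of a squeeze-and-excitation (SE) recalibration block:
# squeeze (global average pool) -> excitation (FC + ReLU, FC + sigmoid)
# -> per-channel gating of the feature map.
import numpy as np

def se_block(feat, w1, w2):
    """feat: (C, H, W) feature map; w1: (C//r, C); w2: (C, C//r)."""
    z = feat.mean(axis=(1, 2))              # squeeze: per-channel global average -> (C,)
    s = np.maximum(w1 @ z, 0.0)             # excitation: FC + ReLU -> (C//r,)
    s = 1.0 / (1.0 + np.exp(-(w2 @ s)))     # FC + sigmoid -> channel gates in (0, 1)
    return feat * s[:, None, None]          # channel-wise recalibration

rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 4, 2                      # illustrative sizes; r is the reduction ratio
feat = rng.standard_normal((C, H, W))
out = se_block(feat,
               rng.standard_normal((C // r, C)),
               rng.standard_normal((C, C // r)))
print(out.shape)  # (8, 4, 4)
```

Because every gate is a sigmoid output in (0, 1), the block can only attenuate channels, letting the network emphasize informative channels relative to less useful ones.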
Automatic Chest X-rays Analysis using Statistical Machine Learning Strategies
Tuberculosis (TB) is a disease responsible for the deaths of more than one million people worldwide every year. Even though it is preventable and curable, it remains a major threat to humanity. In developed countries it is often diagnosed using approaches such as sputum smear microscopy and culture methods. However, since these approaches are rather expensive, they are not commonly used in poorer regions of the globe such as India, Africa, and Bangladesh. Instead, the well-known and affordable chest X-ray (CXR) interpretation by radiologists is the technique employed in those places. This method has nevertheless become obsolete elsewhere because of its many flaws: i) it is a tedious task that requires experienced medical personnel, who are scarce given the high demand; ii) it is manual and difficult to execute at scale for a large population; and iii) it is prone to human error, depending on the proficiency and aptitude of the interpreter. Researchers have thus been trying to overcome these challenges over the years by proposing software solutions that mainly involve computer vision, artificial intelligence, and machine learning. The problem with these existing solutions is that they are either complex or not reliable enough. The need for better solutions in this specific domain, as well as my desire to contribute to something meaningful, is what led me to investigate in this direction.
In this manuscript, I propose a simple, fully automatic software solution that uses only machine learning and image processing to analyze and detect TB-related anomalies in CXR scans. My system starts by extracting the region of interest from the incoming images, then performs a computationally inexpensive yet efficient feature extraction that involves edge detection using the Laplacian of Gaussian and retention of positional information. The extracted features are then fed to a standard random forest classifier for discrimination. I tested the system on two benchmark data collections (Montgomery and Shenzhen) and obtained state-of-the-art results reaching up to 97% classification accuracy.
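The described pipeline of Laplacian-of-Gaussian features followed by a random forest can be sketched as below. This is an illustrative reconstruction under stated assumptions: the tiny synthetic images stand in for the Montgomery/Shenzhen CXRs, the region-of-interest extraction step is skipped, and the sigma value is arbitrary.

```python
# Sketch: Laplacian-of-Gaussian (LoG) edge response as a feature map,
# flattened so that positional information is retained, then classified
# with a random forest. Data is synthetic, not real CXR scans.
import numpy as np
from scipy.ndimage import gaussian_laplace
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def log_features(img, sigma=2.0):
    # Per-pixel LoG edge response; flattening preserves pixel positions.
    return gaussian_laplace(img, sigma=sigma).ravel()

def make_image(abnormal):
    # Tiny synthetic "normal" vs "abnormal" image (abnormal: bright blob).
    img = rng.normal(size=(32, 32))
    if abnormal:
        img[10:18, 10:18] += 3.0
    return img

X = np.stack([log_features(make_image(i % 2 == 1)) for i in range(80)])
y = np.array([i % 2 for i in range(80)])

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X[:60], y[:60])
print(f"held-out accuracy: {clf.score(X[60:], y[60:]):.2f}")
```

The appeal of this combination is its low cost: a single linear filter pass per image plus a forest of shallow decision trees, with no GPU or deep network required.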
From Fully-Supervised, Single-Task to Scarcely-Supervised, Multi-Task Deep Learning for Medical Image Analysis
Image analysis based on machine learning has gained prominence with the advent of deep learning, particularly in medical imaging. To be effective on challenging image analysis tasks, however, conventional deep neural networks require large corpora of annotated training data, which are unfortunately scarce in the medical domain, often rendering fully-supervised learning strategies ineffective. This thesis devises, for use in a variety of medical image analysis applications, a series of novel deep learning methods, ranging from fully-supervised, single-task learning to scarcely-supervised, multi-task learning that makes efficient use of annotated training data. Specifically, its main contributions include (1) fully-supervised, single-task learning for the segmentation of pulmonary lobes from chest CT scans and the analysis of scoliosis from spine X-ray images; (2) supervised, single-task, domain-generalized pulmonary segmentation in chest X-ray images and retinal vasculature segmentation in fundoscopic images; (3) largely-unsupervised, multi-task learning via deep generative modeling for the joint synthesis and classification of medical image data; and (4) partly-supervised, multi-task learning for the combined segmentation and classification of chest and spine X-ray images.
Is attention all you need in medical image analysis? A review
Medical imaging is a key component in clinical diagnosis, treatment planning
and clinical trial design, accounting for almost 90% of all healthcare data.
CNNs have achieved performance gains in medical image analysis (MIA) in recent
years. CNNs can efficiently model local pixel interactions and can be trained on
small-scale MI data. The main disadvantage of typical CNN models is that they
ignore global pixel relationships within images, which limits their
generalisation ability to understand out-of-distribution data with different
'global' information. The recent progress of Artificial Intelligence gave rise
to Transformers, which can learn global relationships from data. However, full
Transformer models need to be trained on large-scale data and involve
tremendous computational complexity. Attention and Transformer compartments
(Transf/Attention) which can well maintain properties for modelling global
relationships, have been proposed as lighter alternatives to full Transformers.
Recently, there has been an increasing trend to cross-pollinate complementary
local-global properties from CNN and Transf/Attention architectures, which has led
to a new era of hybrid models. The past years have witnessed substantial growth
in hybrid CNN-Transf/Attention models across diverse MIA problems. In this
systematic review, we survey existing hybrid CNN-Transf/Attention models,
review and unravel key architectural designs, analyse breakthroughs, and
evaluate current and future opportunities as well as challenges. We also
introduce a comprehensive analysis framework on generalisation opportunities
of scientific and clinical impact, from which new data-driven domain
generalisation and adaptation methods can be stimulated.
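The attention primitive that these Transf/Attention compartments build on can be sketched in NumPy. This is the generic scaled dot-product attention operation, not any specific hybrid model from the review; the token count and dimensions are illustrative (e.g. a small flattened feature map).

```python
# Sketch of scaled dot-product attention: every position attends to every
# other position, capturing the global relationships that purely local
# convolutions miss.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # subtract max for stability
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    """q, k, v: (n_tokens, d). Returns (n_tokens, d)."""
    scores = q @ k.T / np.sqrt(q.shape[-1])  # all-pairs similarity -> global context
    weights = softmax(scores, axis=-1)       # each row is a distribution over tokens
    return weights @ v                       # weighted mix of value vectors

rng = np.random.default_rng(0)
n, d = 16, 8  # e.g. a 4x4 feature map flattened into 16 tokens of width 8
q, k, v = (rng.standard_normal((n, d)) for _ in range(3))
out = attention(q, k, v)
print(out.shape)  # (16, 8)
```

The quadratic cost of the all-pairs score matrix (n x n) is exactly why full Transformers need large-scale data and heavy compute, and why the lighter attention compartments and CNN hybrids surveyed above are attractive for medical imaging.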