Medical imaging analysis with artificial neural networks
Given that neural networks have been widely reported in the research community of medical imaging, we provide a focused literature survey on recent neural network developments in computer-aided diagnosis, medical image segmentation and edge detection towards visual content analysis, and medical image registration for its pre-processing and post-processing, with the aims of increasing awareness of how neural networks can be applied to these areas and of providing a foundation for further research and practical development. Representative techniques and algorithms are explained in detail to provide inspiring examples illustrating: (i) how a known neural network with fixed structure and training procedure could be applied to resolve a medical imaging problem; (ii) how medical images could be analysed, processed, and characterised by neural networks; and (iii) how neural networks could be expanded further to resolve problems relevant to medical imaging. In the concluding section, a highlight of comparisons among many neural network applications is included to provide a global view on computational intelligence with neural networks in medical imaging.
Artificial Intelligence in Image-Based Screening, Diagnostics, and Clinical Care of Cardiopulmonary Diseases
Cardiothoracic and pulmonary diseases are a significant cause of mortality and morbidity worldwide. The COVID-19 pandemic has highlighted the lack of access to clinical care, the overburdened medical system, and the potential of artificial intelligence (AI) in improving medicine. A variety of diseases affect the cardiopulmonary system, including lung cancers, heart disease, and tuberculosis (TB), in addition to COVID-19-related diseases. Screening, diagnosis, and management of cardiopulmonary diseases have become difficult owing to the limited availability of diagnostic tools and experts, particularly in resource-limited regions. Early screening, accurate diagnosis, and staging of these diseases could play a crucial role in treatment and care, and potentially aid in reducing mortality. Radiographic imaging methods such as computed tomography (CT), chest X-rays (CXRs), and echocardiographic ultrasound (US) are widely used in screening and diagnosis. Research on image-based AI and machine learning (ML) methods can help in rapid assessment, serve as a surrogate for expert assessment, and reduce variability in human performance. In this Special Issue, “Artificial Intelligence in Image-Based Screening, Diagnostics, and Clinical Care of Cardiopulmonary Diseases”, we have highlighted exemplary primary research studies and literature reviews focusing on novel AI/ML methods and their application in image-based screening, diagnosis, and clinical management of cardiopulmonary diseases. We hope that these articles will help establish the advancements in AI.
Eigenimage Processing of Frontal Chest Radiographs
The goal of this research was to improve the speed and accuracy of reporting by clinical radiologists. By applying a technique known as eigenimage processing to chest radiographs, abnormal findings were enhanced and a classification scheme was developed. The results confirm that the method is feasible for clinical use. Eigenimage processing is a popular face recognition routine that has only recently been applied to medical images, and it had not previously been applied to full-size radiographs. Chest radiographs were chosen for this research because they are clinically important and are challenging to process owing to their large data content. It is hoped that the success with these images will enable future work on other medical images, such as those from CT and MRI. Eigenimage processing is based on a multivariate statistical method that identifies patterns of variance within a training set of images; specifically, it involves the application of principal components analysis to the training set. For this research, the training set was a collection of 77 normal radiographs. The processing produced a set of basis images, known as eigenimages, that best describe the variance within the training set of normal images; for chest radiographs these basis images may also be referred to as 'eigenchests'. Images to be tested were described in terms of eigenimages, which identified the patterns of variance likely to be normal. A new image, referred to as the remainder image, was derived by removing the patterns of normal variance, thus making abnormal patterns of variance more conspicuous. The remainder image could either be presented to clinicians or used as part of a computer-aided diagnosis system. For the image sets used, the discriminatory power of a classification scheme approached 90%. While the processing of the training set required significant computation time, each test image to be classified or enhanced required only a few seconds to process. Thus the system could be integrated into a clinical radiology department.
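The eigenimage pipeline described above (principal components analysis on a training set of normals, projection of a test image, subtraction of the reconstruction) can be sketched in a few lines. This is a minimal illustration, not the study's actual implementation: the image size, component count, and function names are assumptions, and real radiographs would need intensity normalisation and registration first.

```python
# Hypothetical sketch of eigenimage processing for chest radiographs.
import numpy as np

def fit_eigenimages(normals, n_components=20):
    """PCA on a training set of normal images -> mean image + eigenimages."""
    mean = normals.mean(axis=0)
    centered = normals - mean
    # SVD of the centered data; rows of vt are the principal components
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]          # each row of vt is an eigenimage

def remainder_image(test, mean, eigenimages):
    """Subtract the normal-variance reconstruction from a test image."""
    coeffs = eigenimages @ (test - mean)     # project onto the eigenimages
    reconstruction = mean + coeffs @ eigenimages
    return test - reconstruction             # abnormal variance remains

# toy demonstration on random data (77 "normal" images, as in the text)
rng = np.random.default_rng(0)
train = rng.normal(size=(77, 64 * 64))       # flattened 64x64 images
mean, eig = fit_eigenimages(train, n_components=20)
rem = remainder_image(train[0], mean, eig)
print(rem.shape)
```

In a deployed version the remainder image would be reshaped back to 2-D for display, or its norm fed to a classifier as an abnormality score.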
Computer Aided Diagnosis of Macular Edema Using Color Fundus Images: A Review
Diabetic retinopathy (DR) is the leading cause of blindness in the Western working-age population, and microaneurysms are among the first pathologies associated with it. DR is caused by damage to the blood vessels of the retina, which affects vision; when it becomes severe, it can result in macular edema. The macula is the region near the centre of the retina that provides central vision. Fluid leaking from the blood vessels onto the macula causes swelling that blurs vision and can eventually lead to complete vision loss. Diabetic macular edema (DME) is thus an advanced symptom of diabetic retinopathy and can lead to irreversible vision loss. This paper addresses the detection of edema-affected images and their discrimination from normal images. A feature extraction technique is introduced to capture the global characteristics of the fundus images and discriminate normal from DME images; for affected images, the severity of the disease is also graded using a rotational asymmetry metric that examines the symmetry of the macular region.
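The rotational asymmetry idea can be illustrated with a toy metric. This is a hypothetical sketch, not the paper's actual measure: it simply compares an intensity patch around the macula with rotated copies of itself, so a radially symmetric patch scores zero while a localised lesion raises the score.

```python
# Illustrative rotational asymmetry score for a square macular patch.
import numpy as np

def rotational_asymmetry(patch):
    """Mean absolute difference between the patch and its 90/180/270-degree
    rotations, normalised by the intensity range; 0 = perfectly symmetric."""
    diffs = [np.abs(patch - np.rot90(patch, k)).mean() for k in (1, 2, 3)]
    scale = patch.max() - patch.min() or 1.0
    return float(np.mean(diffs) / scale)

# a radially symmetric "healthy" patch scores 0 ...
symmetric = np.fromfunction(lambda i, j: (i - 15) ** 2 + (j - 15) ** 2, (31, 31))
# ... while a simulated local bright exudate breaks the symmetry
asymmetric = symmetric.copy()
asymmetric[:10, :10] += 500.0
print(rotational_asymmetry(symmetric))
print(rotational_asymmetry(asymmetric))
```

A real system would first localise the macula in the fundus image and work in polar coordinates about its centre, but the scoring principle is the same.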
Computational methods for the analysis of functional 4D-CT chest images.
Medical imaging is an important emerging technology that has been used intensively in the last few decades for disease diagnosis and monitoring as well as for the assessment of treatment effectiveness. Medical images provide a very large amount of valuable information, too large to be fully exploited by radiologists and physicians. Therefore, the design of computer-aided diagnosis (CAD) systems, which can serve as assistive tools for the medical community, is of great importance. This dissertation deals with the development of a complete CAD system for patients with lung cancer, which remains the leading cause of cancer-related death in the USA; in 2014, there were approximately 224,210 new cases of lung cancer and 159,260 related deaths. The process begins with the detection of lung cancer through the diagnosis of lung nodules (a manifestation of lung cancer). These nodules are approximately spherical regions of primarily high-density tissue that are visible in computed tomography (CT) images of the lung. The treatment of lung cancer is complex: nearly 70% of lung cancer patients require radiation therapy as part of their treatment, and radiation-induced lung injury is a limiting toxicity that may decrease cure rates and increase treatment-related morbidity and mortality. Finding ways to accurately detect lung injury at an early stage, and hence prevent it, would have significant positive consequences for lung cancer patients. The ultimate goal of this dissertation is to develop a clinically usable CAD system that can improve the sensitivity and specificity of early detection of radiation-induced lung injury, based on the hypotheses that irradiated lung tissues may be affected and suffer a decrease in functionality as a side effect of radiation therapy.
These hypotheses have been validated by demonstrating that automatic segmentation of the lung regions and registration of consecutive respiratory phases can estimate elasticity, ventilation, and texture features that provide discriminatory descriptors for the early detection of radiation-induced lung injury. The proposed methodologies lead to novel indexes for distinguishing normal/healthy from injured lung tissues in clinical decision-making. To achieve this goal, a CAD system for accurate detection of radiation-induced lung injury has been developed with three basic components: lung field segmentation, lung registration, and feature extraction with tissue classification. The dissertation starts with an exploration of the available medical imaging modalities to present the importance of medical imaging in today's clinical applications. Secondly, the methodologies, challenges, and limitations of recent CAD systems for lung cancer detection are covered. This is followed by an accurate segmentation methodology for the lung parenchyma, with a focus on pathological lungs, to extract the volume of interest (VOI) to be analysed for the potential existence of lung injuries stemming from radiation therapy. After segmentation of the VOI, a lung registration framework is introduced to perform the crucial step of co-aligning the intra-patient scans. This step eliminates the effects of orientation differences, motion, breathing, heart beats, and differences in scanning parameters, so that the functionality features of the lung fields can be extracted accurately. The developed registration framework also helps in the evaluation and gated control of the radiotherapy through motion estimation analysis before and after the therapy dose.
Finally, the radiation-induced lung injury detection framework is introduced, which combines the previous two medical image processing and analysis steps with the feature estimation and classification step. This framework estimates and combines both texture and functional features. The texture features are modelled using a novel 7th-order Markov-Gibbs random field (MGRF) model that accurately captures the texture of healthy and injured lung tissues by simultaneously accounting for both vertical and horizontal relative dependencies between voxel-wise signals. The functionality features are computed from the deformation fields, obtained from the 4D-CT lung registration, that map lung voxels between successive CT scans in the respiratory cycle. These features describe the ventilation (air flow rate) of the lung tissues using the Jacobian of the deformation field, and the tissues' elasticity using the strain components calculated from the gradient of the deformation field. Finally, these features are combined in a classification model to detect the injured parts of the lung at an early stage and enable earlier intervention.
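The functionality features described above can be sketched in a few lines, assuming a dense displacement field is already available from the 4D-CT registration. The synthetic field, array layout, and unit voxel spacing below are illustrative assumptions, not the dissertation's implementation.

```python
# Sketch: ventilation from the Jacobian determinant and elasticity from the
# small-strain tensor, both derived from a displacement field u of shape
# (3, Z, Y, X) mapping one respiratory phase to the next.
import numpy as np

def jacobian_determinant(disp):
    """Voxel-wise det(I + grad(u)): local volume change (>1 = expansion)."""
    grads = np.array([np.gradient(disp[c]) for c in range(3)])  # du_i/dx_j
    jac = grads + np.eye(3)[..., None, None, None]              # I + grad(u)
    return np.linalg.det(np.moveaxis(jac, (0, 1), (-2, -1)))

def strain_tensor(disp):
    """Small-strain tensor e_ij = (du_i/dx_j + du_j/dx_i) / 2."""
    grads = np.array([np.gradient(disp[c]) for c in range(3)])
    return 0.5 * (grads + grads.transpose(1, 0, 2, 3, 4))

# synthetic field: a uniform 1% expansion along each axis,
# so the Jacobian should be 1.01^3 everywhere
z, y, x = np.meshgrid(np.arange(8), np.arange(8), np.arange(8), indexing="ij")
disp = 0.01 * np.stack([z, y, x]).astype(float)
jd = jacobian_determinant(disp)
print(jd.mean())   # ~1.0303
```

Per-voxel Jacobian and strain maps like these are what the classification stage would consume alongside the MGRF texture descriptors.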
A computer aided diagnosis system for lung nodules detection in postero anterior chest radiographs
This thesis describes a computer-aided system for lung nodule detection. The fully automated method developed to search for nodules is composed of four steps: segmentation of the lung field, enhancement of the image, extraction of the candidate regions, and selection among them of the regions with the highest chance of being True Positives. The segmentation, enhancement, and candidate extraction steps are based on multi-scale analysis. The common assumption underlying their development is that the signal representing the details to be detected by each of them (lung borders or nodule regions) is composed of a mixture of simpler signals belonging to different scales and levels of detail. The last step, candidate region classification, is the most complicated; its task is to discern, among a high number of candidate regions, the few True Positives. To this aim, several features and different classifiers have been investigated.
In Chapter 1 the segmentation algorithm is described. The algorithm has been tested on the images of two different databases, the JSRT and the Niguarda database, both described in the next section, for a total of 409 images. We compared our results with those of another method, described by Ginneken in [85] as the best-performing at the state of the art, which had been tested on the same images of the JSRT database. Our method produced no errors, while the previously mentioned one produced an overall number of errors equal to 50. The results obtained on the images of the Niguarda database also confirmed the efficacy of the system, allowing us to claim that this is the best method presented so far in the literature. This claim is also supported by the fact that ours is the only system tested on such a number of images, belonging to two different databases.
Chapter 2 describes the multi-scale enhancement and candidate extraction methods. The enhancement produces an image in which the "conspicuity" of nodules is increased, so that nodules of different sizes, located in parts of the lungs characterised by completely different anatomic noise, are more visible. Based on the same assumption, the candidate extraction procedure, described in the same chapter, employs a multi-scale method to detect nodules of all sizes. This step has also been compared with two methods ([8] and [1]) described in the literature and tested on the same images. Our implementation of the first of them ([8]) produced very poor results; the second obtained a sensitivity ratio (see Appendix C for its definition) equal to 86%. The considerably better performance of our method is shown by the fact that the sensitivity ratio we obtained is much higher (97%) and the number of False Positives detected is much lower.
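One common way to realise this kind of multi-scale enhancement is a bank of difference-of-Gaussians filters, which responds to blob-like details at each chosen scale while suppressing slowly varying anatomic background. The sketch below is an illustrative stand-in for the thesis's method, with invented scales, and shows a small bright "nodule" being amplified relative to its surroundings.

```python
# Multi-scale difference-of-Gaussians enhancement sketch (assumed scales).
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur implemented with 1-D convolutions."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, rows)

def multiscale_enhance(img, sigmas=(1.0, 2.0, 4.0)):
    """Sum of band-pass (DoG) responses: blob-like details at each scale are
    kept, slowly varying background is suppressed."""
    return sum(gaussian_blur(img, s) - gaussian_blur(img, 2 * s) for s in sigmas)

rng = np.random.default_rng(1)
img = rng.normal(size=(64, 64))            # noisy background
img[30:34, 30:34] += 10.0                  # small bright "nodule"
resp = multiscale_enhance(img)
```

Thresholding `resp` at each scale would then yield the candidate regions passed on to the classification stage.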
The experiments aimed at the classification of the candidates are described in Chapter 3; both a rule-based technique and two learning systems, the Multi-Layer Perceptron (MLP) and the Support Vector Machine (SVM), have been investigated. Their input is a set of 16 features. The rule-based system obtained the best performance: the cardinality of the set of remaining candidates is greatly reduced without lowering the sensitivity of the system, since no True Positive region is lost. This performance is much better than that of the system used by Ginneken and Schilam in [1], whose sensitivity is lower (equal to 77%) while the number of False Positives left is comparable. The drawback of a rule-based system is the need to set the thresholds used by the rules; since they are set experimentally, the system depends on the images used to develop it, and its performance on other databases may therefore not be as good. The results of the MLPs and of the SVMs are described in detail, and ROC analysis is also reported for the experiments performed with the SVMs.
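A rule-based candidate filter of the kind described can be sketched as interval tests on per-candidate features. The feature names and thresholds below are invented for illustration only; the thesis uses 16 experimentally tuned features, which is exactly where the noted dependence on the development images comes from.

```python
# Illustrative rule-based filtering of nodule candidates (hypothetical
# features and thresholds; real rules would cover all 16 features).
RULES = {
    "area":        (50.0, 4000.0),        # plausible nodule size in pixels
    "circularity": (0.4, 1.0),            # nodules are roughly round
    "contrast":    (0.1, float("inf")),   # must stand out from background
}

def passes_rules(candidate):
    """A candidate survives only if every feature lies inside its interval."""
    return all(lo <= candidate[f] <= hi for f, (lo, hi) in RULES.items())

candidates = [
    {"area": 300.0, "circularity": 0.8, "contrast": 0.3},   # nodule-like
    {"area": 12.0,  "circularity": 0.9, "contrast": 0.5},   # too small
    {"area": 900.0, "circularity": 0.2, "contrast": 0.4},   # too elongated
]
kept = [c for c in candidates if passes_rules(c)]
print(len(kept))   # 1
```

A learning-based classifier such as an MLP or SVM replaces the hand-set intervals with a decision boundary fitted to labelled candidates, which is the comparison Chapter 3 carries out.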
Furthermore, the attempt to improve the classification performance led to further experiments employing SVMs trained with more complicated feature sets. Since the results obtained were no better than the previous ones, they showed the need for a proper selection of the features. Future work will therefore focus on testing other sets of features, and on combinations of them obtained by means of proper feature selection techniques.
- …