Informative sample generation using class-aware generative adversarial networks for classification of chest X-rays
Training robust deep learning (DL) systems for disease detection from medical
images is challenging due to the limited number of images covering different
disease types and severities. The problem is especially acute when there is
severe class imbalance. We propose an active learning (AL) framework that uses
a Bayesian neural network to select the most informative samples for training
our model. The informative samples are then used within a novel class-aware
generative adversarial network (CAGAN) to generate realistic chest X-ray
images for data augmentation by transferring characteristics from one class
label to another. Experiments show that our proposed AL framework achieves
state-of-the-art performance using only a fraction of the full dataset, thus
saving significant time and effort over conventional methods.
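The abstract does not specify how sample informativeness is scored; a minimal sketch (hypothetical, not the authors' code) of one common Bayesian AL criterion, predictive entropy over Monte Carlo dropout forward passes, looks like this:

```python
import numpy as np

def predictive_entropy(mc_probs):
    """mc_probs: (T, N, C) softmax outputs from T stochastic
    (dropout-enabled) forward passes over N samples."""
    mean_probs = mc_probs.mean(axis=0)              # (N, C)
    return -(mean_probs * np.log(mean_probs + 1e-12)).sum(axis=1)

def select_informative(mc_probs, k):
    """Return indices of the k most uncertain (most informative) samples."""
    ent = predictive_entropy(mc_probs)
    return np.argsort(ent)[::-1][:k]

# toy example: 3 samples, 2 classes, 4 MC passes
confident = np.tile([0.99, 0.01], (4, 1, 1))        # low entropy
uncertain = np.tile([0.50, 0.50], (4, 1, 1))        # high entropy
middling  = np.tile([0.80, 0.20], (4, 1, 1))
mc = np.concatenate([confident, uncertain, middling], axis=1)  # (4, 3, 2)
print(select_informative(mc, 2))  # most uncertain sample ranked first
```

In a real AL loop, the selected indices would be sent for annotation and added to the training set before retraining.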
Semi-Supervised Semantic Segmentation Methods for UW-OCTA Diabetic Retinopathy Grade Assessment
People with diabetes are more likely to develop diabetic retinopathy (DR)
than healthy people, and DR is a leading cause of blindness. At present, the
diagnosis of diabetic retinopathy mainly relies on experienced clinicians
recognizing fine features in color fundus images, which is a time-consuming
task. Therefore, in this paper, to promote the development of automatic
UW-OCTA DR detection, we propose a novel semi-supervised semantic
segmentation method for UW-OCTA DR image grade assessment. The method first
uses the MAE algorithm to perform semi-supervised pre-training on the UW-OCTA
DR grade assessment dataset, mining the supervisory information in the
UW-OCTA images and thereby alleviating the need for labeled data. Second, to
mine the lesion features of each region in the UW-OCTA image more fully, we
construct a cross-algorithm ensemble for DR tissue segmentation by deploying
three algorithms with different visual feature processing strategies: a
pre-trained MAE, ConvNeXt, and SegFormer. After the initials of these three
sub-algorithms, we name the ensemble MCS-DRNet. Finally, we use MCS-DRNet as
an inspector to check and revise the preliminary results of the DR grade
evaluation algorithm. Experimental results show that the mean Dice similarity
coefficients of MCS-DRNet v1 and v2 are 0.5161 and 0.5544, respectively, and
the quadratic weighted kappa of the DR grading evaluation is 0.7559. Our code
will be released soon.
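The two reported metrics are standard; a self-contained sketch of the Dice similarity coefficient and quadratic weighted kappa as they are typically computed (not the authors' evaluation code):

```python
import numpy as np

def dice(pred, target):
    """Dice similarity coefficient between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 2.0 * inter / denom if denom else 1.0

def quadratic_weighted_kappa(y_true, y_pred, n_classes):
    """Cohen's kappa with quadratic weights, as used for DR grading."""
    O = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        O[t, p] += 1                                  # observed agreement
    w = np.array([[(i - j) ** 2 for j in range(n_classes)]
                  for i in range(n_classes)]) / (n_classes - 1) ** 2
    E = np.outer(O.sum(axis=1), O.sum(axis=0)) / O.sum()  # chance agreement
    return 1.0 - (w * O).sum() / (w * E).sum()

a = np.array([[1, 1], [0, 0]])
b = np.array([[1, 0], [0, 0]])
print(round(dice(a, b), 3))                           # 0.667
print(quadratic_weighted_kappa([0, 1, 2, 2], [0, 1, 2, 2], 3))  # 1.0
```

Quadratic weighting penalizes grade predictions in proportion to the squared distance from the true grade, which is why it is preferred for ordinal DR severity scales.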
Histopathological image analysis: a review
Over the past decade, dramatic increases in computational power and improvements in image analysis algorithms have allowed the development of powerful computer-assisted analytical approaches to radiological data. With the recent advent of whole slide digital scanners, tissue histopathology slides can now be digitized and stored in digital image form. Consequently, digitized tissue histopathology has become amenable to the application of computerized image analysis and machine learning techniques. Analogous to the role of computer-assisted diagnosis (CAD) algorithms in medical imaging to complement the opinion of a radiologist, CAD algorithms have begun to be developed for disease detection, diagnosis, and prognosis prediction to complement the opinion of the pathologist. In this paper, we review the recent state-of-the-art CAD technology for digitized histopathology. This paper also briefly describes the development and application of novel image analysis technology for a few specific histopathology-related problems being pursued in the United States and Europe.
QuantiMus: A Machine Learning-Based Approach for High Precision Analysis of Skeletal Muscle Morphology.
Skeletal muscle injury provokes a regenerative response, characterized by the de novo generation of myofibers that are distinguished by central nucleation and re-expression of developmentally restricted genes. In addition to these characteristics, myofiber cross-sectional area (CSA) is widely used to evaluate muscle hypertrophic and regenerative responses. Here, we introduce QuantiMus, a free software program that uses machine learning algorithms to quantify muscle morphology and molecular features with high precision and quick processing time. The ability of QuantiMus to define and measure myofibers was compared to manual measurement or other automated software programs. QuantiMus rapidly and accurately defined total myofibers and measured CSA with comparable performance but quantified the CSA of centrally-nucleated fibers (CNFs) with greater precision than other software. It additionally quantified the fluorescence intensity of individual myofibers of human and mouse muscle, which was used to assess the distribution of myofiber type based on the myosin heavy chain isoform that was expressed. Furthermore, analysis of entire quadriceps cross-sections of healthy and mdx mice showed that dystrophic muscle had an increased frequency of Evans blue dye+ injured myofibers. QuantiMus also revealed that the proportion of centrally nucleated, regenerating myofibers that express embryonic myosin heavy chain (eMyHC) or neural cell adhesion molecule (NCAM) was increased in dystrophic mice. Our findings reveal that QuantiMus has several advantages over existing software. The unique self-learning capacity of the machine learning algorithms provides superior accuracy and the ability to rapidly interrogate the complete muscle section. These qualities increase rigor and reproducibility by avoiding methods that rely on the sampling of representative areas of a section.
This is of particular importance for the analysis of dystrophic muscle given the "patchy" distribution of muscle pathology. QuantiMus is an open-source tool, allowing customization to meet investigator-specific needs, and provides novel analytical approaches for quantifying muscle morphology.
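QuantiMus's internals are not shown here; as a minimal sketch (with a hypothetical helper name), CSA measurement from an instance-labeled fiber mask reduces to counting pixels per label and scaling by the pixel area:

```python
import numpy as np

def fiber_csa(label_mask, pixel_area_um2=1.0):
    """Cross-sectional area (CSA) per myofiber from a labeled mask.
    label_mask: 2D int array, 0 = background, 1..N = individual fibers.
    Returns {fiber label: area in square micrometers}."""
    labels, counts = np.unique(label_mask[label_mask > 0], return_counts=True)
    return {int(l): float(c) * pixel_area_um2 for l, c in zip(labels, counts)}

# toy mask: fiber 1 covers 3 pixels, fiber 2 covers 2 pixels
mask = np.array([[1, 1, 0],
                 [1, 2, 2]])
print(fiber_csa(mask, pixel_area_um2=0.5))  # {1: 1.5, 2: 1.0}
```

Per-fiber areas computed this way can then be aggregated into the CSA distributions used to compare regenerating and healthy muscle.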
On Classification with Bags, Groups and Sets
Many classification problems can be difficult to formulate directly in terms
of the traditional supervised setting, where both training and test samples are
individual feature vectors. There are cases in which samples are better
described by sets of feature vectors, in which labels are only available for
sets rather than for individual samples, or in which individual labels are
available but not independent. To better deal with such problems, several
extensions of supervised learning have been proposed, where either training
and/or test objects are sets of feature vectors. However, having been proposed
rather independently of each other, their mutual similarities and differences
have hitherto not been mapped out. In this work, we provide an overview of such
learning scenarios, propose a taxonomy to illustrate the relationships between
them, and discuss directions for further research in these areas.
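As a concrete instance of learning with bags, the standard multiple-instance assumption labels a bag positive if any of its instances is positive; a minimal sketch (toy classifier, not from the paper):

```python
def bag_label(instances, instance_classifier):
    """Standard multiple-instance assumption: a bag is positive iff at
    least one of its instances is classified positive."""
    return int(any(instance_classifier(x) for x in instances))

# toy instance classifier: positive if the feature sum exceeds 1.0
clf = lambda x: sum(x) > 1.0

print(bag_label([[0.2, 0.1], [0.9, 0.8]], clf))  # 1: second instance positive
print(bag_label([[0.2, 0.1], [0.3, 0.4]], clf))  # 0: no positive instance
```

Other scenarios in the taxonomy relax this assumption, e.g. by requiring a fraction of positive instances or by modeling dependencies between instance labels.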
Computerized Approaches for Retinal Microaneurysm Detection
The number of diabetic patients worldwide is increasing at a very high rate. Patients with long-term diabetes are at high risk of developing a retinal disorder called diabetic retinopathy (DR). The disease is a complication of diabetes and may result in irreversible blindness. Early diagnosis and routine checkups by an expert ophthalmologist can possibly prevent vision loss, but the number of people to be screened exceeds the number of experts, especially in rural areas. Computerized screening systems are therefore needed that can accurately screen large populations and distinguish healthy from diseased people, significantly reducing the workload on experts. Microaneurysms (MAs) are the first recognizable signs of DR, so early detection of DR requires accurate detection of microaneurysms, and computerized diagnosis ensures reliable and accurate MA detection. This paper surveys approaches for the computerized detection of retinal microaneurysms.
Smart Farm-Care using a Deep Learning Model on Mobile Phones
Deep learning models have provided exciting solutions in various image processing applications such as image segmentation, classification, and labeling, which paved the way for applying these models in agriculture to identify diseases in agricultural plants. The most visible symptoms of disease initially appear on the leaves. To identify diseases from leaf images, an accurate classification system with small size and low complexity is developed for smartphones. A labeled dataset consisting of 3171 apple leaf images belonging to 4 different classes, including healthy leaves, is used for classification. In this work, four variants of MobileNet models, pre-trained on the ImageNet database, are retrained to diagnose diseases. The variants differ in their depth and resolution multipliers. The results show that the proposed model with a 0.5 depth multiplier and 224 resolution performs well, achieving an accuracy of 99.6%. Later, the K-means algorithm is used to extract additional features, which improves the accuracy to 99.7% and also measures the number of pixels forming diseased spots, which helps in severity prediction. Doi: 10.28991/ESJ-2023-07-02-013
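The paper's exact scaling code is not given; a sketch of the MobileNet-style channel rounding commonly applied with a depth (width) multiplier alpha, following the reference TensorFlow implementation, shows how alpha = 0.5 shrinks each layer:

```python
def scaled_channels(base_channels, alpha, divisor=8):
    """MobileNet-style channel scaling: multiply by alpha, then round to
    the nearest multiple of `divisor`, never dropping below 90% of the
    scaled value (so layers stay hardware-friendly)."""
    v = base_channels * alpha
    new_v = max(divisor, int(v + divisor / 2) // divisor * divisor)
    if new_v < 0.9 * v:
        new_v += divisor
    return new_v

# layer widths at alpha = 0.5 vs. the full model (alpha = 1.0)
for c in (32, 64, 128, 1024):
    print(c, "->", scaled_channels(c, 0.5))
```

The resolution multiplier works analogously on the input image size (e.g. 224 vs. 128 pixels), trading accuracy for compute on mobile hardware.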
Current and future roles of artificial intelligence in retinopathy of prematurity
Retinopathy of prematurity (ROP) is a severe condition affecting premature
infants, leading to abnormal retinal blood vessel growth, retinal detachment,
and potential blindness. While semi-automated systems have been used in the
past to diagnose ROP-related plus disease by quantifying retinal vessel
features, traditional machine learning (ML) models face challenges like
accuracy and overfitting. Recent advancements in deep learning (DL), especially
convolutional neural networks (CNNs), have significantly improved ROP detection
and classification. The i-ROP deep learning (i-ROP-DL) system also shows
promise in detecting plus disease, offering reliable ROP diagnosis potential.
This research comprehensively examines the contemporary progress and challenges
associated with using retinal imaging and artificial intelligence (AI) to
detect ROP, offering valuable insights that can guide further investigation in
this domain. Based on 89 original studies in this field (out of 1487 studies
that were comprehensively reviewed), we concluded that traditional methods for
ROP diagnosis suffer from subjectivity and manual analysis, leading to
inconsistent clinical decisions. AI holds great promise for improving ROP
management. This review explores AI's potential in ROP detection,
classification, diagnosis, and prognosis.
Comment: 28 pages, 8 figures, 2 tables, 235 references, 1 supplementary table
Automatic detection of pathological regions in medical images
Medical images are an essential tool in the daily clinical routine for the detection, diagnosis, and monitoring of diseases. Different imaging modalities such as magnetic resonance (MR) or X-ray imaging are used to visualize the manifestations of various diseases, providing physicians with valuable information. However, analyzing every single image by human experts is a tedious and laborious task. Deep learning methods have shown great potential to support this process, but many images are needed to train reliable neural networks. Besides the accuracy of the final method, the interpretability of the results is crucial for a deep learning method to be established. A fundamental problem in the medical field is the availability of sufficiently large datasets due to the variability of different imaging techniques and their configurations.
The aim of this thesis is the development of deep learning methods for the automatic identification of anomalous regions in medical images. Each method is tailored to the amount and type of available data. In the first step, we present a fully supervised segmentation method based on denoising diffusion models. This requires a large dataset with pixel-wise manual annotations of the pathological regions. Due to the implicit ensemble characteristic, our method provides uncertainty maps to allow interpretability of the model’s decisions. Manual pixel-wise annotations face the problems that they are prone to human bias, hard to obtain, and often even unavailable. Weakly supervised methods avoid these issues by only relying on image-level annotations. We present two different approaches based on generative models to generate pixel-wise anomaly maps using only image-level annotations, i.e., a generative adversarial network and a denoising diffusion model. Both perform image-to-image translation between a set of healthy and a set of diseased subjects. Pixel-wise anomaly maps can be obtained by computing the difference between the original image of the diseased subject and the synthetic image of its healthy representation. In an extension of the diffusion-based anomaly detection method, we present a flexible framework to solve various image-to-image translation tasks. With this method, we managed to change the size of tumors in MR images, and we were able to add realistic pathologies to images of healthy subjects.
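The difference-based anomaly maps described above can be sketched as follows (assuming the synthetic healthy translation has already been produced by the generative model; the function name is illustrative):

```python
import numpy as np

def anomaly_map(diseased_img, healthy_translation):
    """Pixel-wise anomaly map: absolute difference between the original
    diseased image and its synthetic healthy counterpart, rescaled to
    [0, 1] so the strongest change maps to 1."""
    diff = np.abs(diseased_img.astype(float) - healthy_translation.astype(float))
    return diff / diff.max() if diff.max() > 0 else diff

# toy 2x2 "images": only the lesion pixel differs between the two
diseased = np.array([[0.1, 0.9],
                     [0.1, 0.1]])
healthy  = np.array([[0.1, 0.1],
                     [0.1, 0.1]])
amap = anomaly_map(diseased, healthy)
print(amap)  # highest response at the lesion pixel (0, 1)
```

The quality of such maps hinges on the translation changing only pathological regions, which is why both the GAN and the diffusion variants constrain the healthy synthesis to stay close to the input.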
Finally, we focus on a problem that frequently occurs when working with MR images: if not enough data from one MR scanner are available, data from other scanners need to be considered. This multi-scanner setting introduces a bias between the datasets of different scanners, limiting the performance of deep learning models. We present a regularization strategy on the model's latent space to overcome the problems raised by this multi-site setting.