Active Learning on Medical Image
The development of medical science greatly depends on the increased
utilization of machine learning algorithms. By incorporating machine learning,
the medical imaging field can significantly improve in terms of the speed and
accuracy of the diagnostic process. Computed tomography (CT), magnetic
resonance imaging (MRI), X-ray imaging, ultrasound imaging, and positron
emission tomography (PET) are the most commonly used types of imaging data in
the diagnosis process, and machine learning can aid in detecting diseases at an
early stage. However, training machine learning models with limited annotated
medical image data poses a challenge. The majority of medical image datasets
have limited data, which can impede the pattern-learning process of
machine-learning algorithms. Additionally, the lack of labeled data is another
critical issue for machine learning. In this context, active learning
techniques can be employed to address the challenge of limited annotated
medical image data. Active learning involves iteratively selecting the most
informative samples from a large pool of unlabeled data for annotation by
experts. By actively selecting the most relevant and informative samples,
active learning reduces the reliance on large amounts of labeled data and
maximizes the model's learning capacity with minimal human labeling effort. By
incorporating active learning into the training process, medical imaging
machine learning models can make more efficient use of the available labeled
data, improving their accuracy and performance. This approach allows medical
professionals to focus their efforts on annotating the most critical cases,
while the machine learning model actively learns from these annotated samples
to improve its diagnostic capabilities.
Comment: 12 pages, 8 figures; chapter accepted for the Springer book
"Data-driven approaches to medical imaging"
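The selection loop described above — iteratively querying the most informative unlabeled samples — can be sketched in a few lines. This is a minimal illustration of one common acquisition rule, least-confidence sampling, using NumPy; the function name and the toy probability pool are hypothetical and not from the chapter:

```python
import numpy as np

def least_confident_indices(probs, k):
    """Select the k pool samples whose top-class probability is lowest.

    probs: (n_samples, n_classes) predicted probabilities on the
    unlabeled pool; the returned indices are sent to experts for labeling.
    """
    confidence = probs.max(axis=1)          # model's confidence per sample
    return np.argsort(confidence)[:k]       # least confident first

# Toy pool: predicted class probabilities for 5 unlabeled images.
pool_probs = np.array([
    [0.95, 0.05],   # confident
    [0.55, 0.45],   # uncertain
    [0.80, 0.20],
    [0.51, 0.49],   # most uncertain
    [0.99, 0.01],
])
query = least_confident_indices(pool_probs, k=2)
print(query.tolist())  # → [3, 1]: the two most ambiguous samples
```

In a full active-learning loop, the model would be retrained after each batch of expert annotations and the pool probabilities recomputed before the next query.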
The State of Applying Artificial Intelligence to Tissue Imaging for Cancer Research and Early Detection
Artificial intelligence represents a new frontier in human medicine that
could save more lives and reduce costs, thereby increasing accessibility.
As a consequence, the rate of advancement of AI in cancer medical imaging and
more particularly tissue pathology has exploded, opening it to ethical and
technical questions that could impede its adoption into existing systems. In
order to chart the path of AI in its application to cancer tissue imaging, we
review current work and identify how it can improve cancer pathology
diagnostics and research. In this review, we identify 5 core tasks that models
are developed for, including regression, classification, segmentation,
generation, and compression tasks. We address the benefits and challenges that
such methods face, and how they can be adapted for use in cancer prevention and
treatment. The studies examined in this paper represent the beginning of this
field, and future experiments will build on the foundations that we highlight.
AI in Medical Imaging Informatics: Current Challenges and Future Directions
This paper reviews state-of-the-art research solutions across the spectrum of medical imaging informatics, discusses clinical translation, and provides future directions for advancing clinical practice. More specifically, it summarizes advances in medical imaging acquisition technologies for different modalities, highlighting the necessity for efficient medical data management strategies in the context of AI in big healthcare data analytics. It then provides a synopsis of contemporary and emerging algorithmic methods for disease classification and organ/tissue segmentation, focusing on AI and deep learning architectures that have already become the de facto approach. The clinical benefits of in-silico modelling advances linked with evolving 3D reconstruction and visualization applications are further documented. In conclusion, integrative analytics approaches driven by the associated research branches highlighted in this study promise to revolutionize imaging informatics as known today across the healthcare continuum for both radiology and digital pathology applications. The latter is projected to enable informed, more accurate diagnosis, timely prognosis, and effective treatment planning, underpinning precision medicine.
Review of photoacoustic imaging plus X
Photoacoustic imaging (PAI) is a novel modality in biomedical imaging
technology that combines the rich optical contrast with the deep penetration of
ultrasound. To date, PAI technology has found applications in various
biomedical fields. In this review, we present an overview of the emerging
research frontiers of PAI combined with other advanced technologies, termed PAI plus
X, which includes but is not limited to PAI plus treatment, PAI plus new circuit
designs, PAI plus accurate positioning systems, PAI plus fast scanning systems,
PAI plus novel ultrasound sensors, PAI plus advanced laser sources, PAI plus
deep learning, and PAI plus other imaging modalities. We will discuss each
technology's current state, technical advantages, and application prospects,
as reported mostly in the past three years. Lastly, we discuss and
summarize the challenges and potential future work in the PAI plus X area.
Eight pruning deep learning models for low storage and high-speed COVID-19 computed tomography lung segmentation and heatmap-based lesion localization: A multicenter study using COVLIAS 2.0.
COVLIAS 1.0, an automated lung segmentation system, was designed for COVID-19 diagnosis, but it has issues related to storage space and speed. This study shows that COVLIAS 2.0 uses pruned AI (PAI) networks to improve both storage and speed while maintaining high performance on lung segmentation and lesion localization. Methodology: The proposed study uses multicenter ∼9,000 CT slices from two different nations, namely, CroMed from Croatia (80 patients, experimental data) and NovMed from Italy (72 patients, validation data). We hypothesize that by using pruning and evolutionary optimization algorithms, the size of the AI models can be reduced significantly while ensuring optimal performance. Eight different pruned models were designed by combining four optimization techniques, (i) differential evolution (DE), (ii) genetic algorithm (GA), (iii) particle swarm optimization (PSO), and (iv) whale optimization (WO), with two deep learning frameworks, (i) fully connected network (FCN) and (ii) SegNet. COVLIAS 2.0 was validated using the unseen NovMed data and benchmarked against MedSeg. Statistical tests for stability and reliability were also conducted. Pruning algorithms (i) FCN-DE, (ii) FCN-GA, (iii) FCN-PSO, and (iv) FCN-WO showed improvement in storage by 92.4%, 95.3%, 98.7%, and 99.8%, respectively, when compared against solo FCN, and (v) SegNet-DE, (vi) SegNet-GA, (vii) SegNet-PSO, and (viii) SegNet-WO showed improvement by 97.1%, 97.9%, 98.8%, and 99.2%, respectively, when compared against solo SegNet. AUC was > 0.94 and > 0.86 (p < 0.0001) on the NovMed data set for all eight EA models, and PAI inference took < 0.25 s per image. DenseNet-121-based Grad-CAM heatmaps were validated on ground-glass opacity lesions. The eight successfully validated PAI networks are five times faster, storage efficient, and could be used in clinical settings.
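The storage savings reported above come from removing weights from the networks. The abstract does not detail the DE/GA/PSO/WO pruning procedures, so as a generic illustration of the underlying idea, here is a sketch of a much simpler technique, magnitude pruning, in NumPy; the function `magnitude_prune` and the 95% sparsity target are hypothetical, not from the study:

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights.

    Zeroed weights can be stored in a sparse format, which is
    where the storage savings of pruned networks come from.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)           # number of weights to drop
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]   # k-th smallest magnitude
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

# Toy weight matrix standing in for one layer of a segmentation network.
rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))
pruned = magnitude_prune(w, sparsity=0.95)
kept = np.count_nonzero(pruned) / w.size
print(f"{kept:.2%} of weights kept")
```

Evolutionary approaches such as DE or GA instead search over pruning masks (or per-layer sparsity levels), scoring each candidate by its segmentation performance, rather than using a fixed magnitude threshold.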