57 research outputs found
Transfer Learning for Domain Adaptation in MRI: Application in Brain Lesion Segmentation
Magnetic Resonance Imaging (MRI) is widely used in routine clinical diagnosis
and treatment. However, variations in MRI acquisition protocols result in
different appearances of normal and diseased tissue in the images.
Convolutional neural networks (CNNs), which have been shown to be successful in many
medical image analysis tasks, are typically sensitive to variations in
imaging protocols. Therefore, in many cases, networks trained on data acquired
with one MRI protocol do not perform satisfactorily on data acquired with
different protocols. This limits the use of models trained on large annotated
legacy datasets for a new dataset from a different domain, a situation that
recurs often in clinical settings. In this study, we aim to answer the
following central questions regarding domain adaptation in medical image
analysis: Given a fitted legacy model, 1) how much data from the new domain is
required to adequately adapt the original network? and 2) what portion
of the pre-trained model's parameters should be retrained for a given number
of new-domain training samples? To address these questions, we conducted
extensive experiments on a white matter hyperintensity segmentation task. We
trained a CNN on legacy MR images of the brain and evaluated the performance of the
domain-adapted network on the same task with images from a different domain. We
then compared the performance of the model to surrogate scenarios in which
either the same trained network is used unchanged or a new network is trained from
scratch on the new dataset. The domain-adapted network, tuned with only two
training examples, achieved a Dice score of 0.63, substantially outperforming a
similar network trained from scratch on the same set of examples.
Comment: 8 pages, 3 figures
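The partial fine-tuning strategy the study investigates can be sketched as follows. This is our own illustrative code, not the authors' implementation: the toy `SmallSegNet` model, layer split, and hyperparameters are all assumptions; only the idea of freezing shallow layers and retraining deeper ones on a few new-domain samples comes from the abstract.

```python
# Hypothetical sketch (not the paper's code): adapt a pre-trained CNN to a
# new MRI domain by freezing shallow layers and retraining only deeper ones.
import torch
import torch.nn as nn

class SmallSegNet(nn.Module):
    """Toy stand-in for the legacy segmentation CNN."""
    def __init__(self):
        super().__init__()
        self.shallow = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
        )
        self.deep = nn.Sequential(
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),  # per-pixel lesion logit
        )
    def forward(self, x):
        return self.deep(self.shallow(x))

model = SmallSegNet()  # pretend this was trained on the legacy domain

# Freeze the shallow feature extractor; adapt only the deep layers.
for p in model.shallow.parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)

# One adaptation step on a (dummy) new-domain sample.
x = torch.randn(2, 1, 32, 32)                    # two new-domain patches
y = torch.randint(0, 2, (2, 1, 32, 32)).float()  # dummy lesion masks
loss = nn.functional.binary_cross_entropy_with_logits(model(x), y)
loss.backward()   # gradients reach only the unfrozen (deep) layers
optimizer.step()
```

Varying where the freeze boundary sits, and how many new-domain samples feed the optimizer, is exactly the trade-off the paper's two questions probe.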
3D Deep Learning on Medical Images: A Review
The rapid advancements in machine learning, graphics processing technologies,
and the availability of medical imaging data have led to a rapid increase in the use of
deep learning models in the medical domain. This growth was accelerated by the rapid
advancements in convolutional neural network (CNN) based architectures, which
were adopted by the medical imaging community to assist clinicians in disease
diagnosis. Since the grand success of AlexNet in 2012, CNNs have been
increasingly used in medical image analysis to improve the efficiency of human
clinicians. In recent years, three-dimensional (3D) CNNs have been employed for
analysis of medical images. In this paper, we trace the history of how the 3D
CNN developed from its machine learning roots, give a brief mathematical
description of the 3D CNN, and describe the preprocessing steps required for medical images
before they are fed to 3D CNNs. We review the significant research in the field
of 3D medical image analysis using 3D CNNs (and their variants) on different
tasks such as classification, segmentation, detection, and
localization. We conclude by discussing the challenges associated with the use
of 3D CNNs in the medical imaging domain (and with deep learning models
in general) and possible future trends in the field.
Comment: 13 pages, 4 figures, 2 tables
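The basic object this review surveys, a 3D CNN operating on volumetric data, differs from a 2D CNN only in that convolution and pooling slide over depth as well as height and width. A minimal sketch (our own, with arbitrary layer sizes) classifying a volumetric patch:

```python
# Illustrative only: a minimal 3D CNN of the kind the review surveys,
# classifying a volumetric patch (e.g. a CT/MRI sub-volume) into 2 classes.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool3d(2),                       # 32^3 -> 16^3
    nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool3d(1),               # global pooling over D, H, W
    nn.Flatten(),
    nn.Linear(16, 2),                      # two-class output
)

volume = torch.randn(1, 1, 32, 32, 32)     # batch, channel, depth, H, W
logits = model(volume)
print(logits.shape)                        # torch.Size([1, 2])
```

The extra spatial dimension is also why the preprocessing the paper discusses (resampling to a fixed voxel grid, intensity normalization) matters: 3D kernels assume consistent geometry across the whole volume.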
G2C: A Generator-to-Classifier Framework Integrating Multi-Stained Visual Cues for Pathological Glomerulus Classification
Pathological glomerulus classification plays a key role in the diagnosis of
nephropathy. As the differences between subcategories are subtle,
doctors often refer to slides from different staining methods to make
decisions. However, creating correspondences across various stains is
labor-intensive, which makes it difficult to collect data and to train a
vision-based algorithm to assist nephropathy diagnosis. This paper provides an
alternative solution for integrating multi-stained visual cues for glomerulus
classification. Our approach, named generator-to-classifier (G2C), is a
two-stage framework. Given an input image from a specified stain, several
generators are first applied to estimate its appearances in other staining
methods, and a classifier follows to combine visual cues from different stains
for prediction (whether it is pathological, or which type of pathology it has).
We optimize these two stages in a joint manner. To provide a reasonable
initialization, we pre-train the generators in an unlabeled reference set under
an unpaired image-to-image translation task, and then fine-tune them together
with the classifier. We conduct experiments on a glomerulus type classification
dataset that we collected ourselves (there are no publicly available datasets for
this purpose). Although joint optimization slightly harms the authenticity of
the generated patches, it boosts classification performance, suggesting that more
effective visual cues are extracted automatically. We also transfer our
model to a public dataset for breast cancer classification and significantly
outperform the state of the art.
Comment: Accepted by AAAI 201
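The two-stage G2C pipeline can be sketched in a few lines. This is a hedged toy version, not the paper's architecture: the one-layer "generators", stain count, channel sizes, and class count are our own assumptions; what it demonstrates is the structure the abstract describes, in which generators synthesize the other stains and one loss backpropagates jointly through classifier and generators.

```python
# Toy sketch of the generator-to-classifier (G2C) idea; all sizes invented.
import torch
import torch.nn as nn

def make_generator():
    # Stand-in for an unpaired image-to-image translation generator
    # (pre-trained on an unlabeled reference set in the paper).
    return nn.Sequential(nn.Conv2d(3, 3, 3, padding=1), nn.Tanh())

gen_to_stain_b = make_generator()
gen_to_stain_c = make_generator()

classifier = nn.Sequential(
    nn.Conv2d(9, 8, 3, padding=1), nn.ReLU(),  # 3 stains x 3 channels fused
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 4),                           # e.g. 4 glomerulus types
)

x = torch.randn(1, 3, 64, 64)                  # input patch in one stain
# Stage 1: estimate appearance in the other stains; stage 2: fuse and classify.
fused = torch.cat([x, gen_to_stain_b(x), gen_to_stain_c(x)], dim=1)
logits = classifier(fused)

# Joint optimization: one loss updates classifier AND generators together.
loss = nn.functional.cross_entropy(logits, torch.tensor([0]))
loss.backward()
```

Because the classification loss flows back into the generators, fine-tuning can trade a little image authenticity for more discriminative synthesized cues, which is the effect the abstract reports.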
Smart early warning system for acute epidural hematomas
Epidural hematoma (EAH) is an accumulation of blood in the space between the outer membrane of the brain (dura mater) and the bone. Acute subdural and epidural hematomas appear on CT scans as hyper-dense collections, often located at the brain convexity. Such bleeding can become fatal by increasing intracranial pressure and creating a mass effect. It is therefore very important to recognize these bleeds promptly in an emergency trauma setting, and early diagnosis is essential to reduce mortality and morbidity rates in these cases. There has been growing interest in artificial intelligence (AI) and machine learning (ML) algorithms for diagnostics in medical fields. In this study, a supervised learning method was used in which a decision tree ML algorithm is trained with the patients' statuses (EAH or Normal). This study proposes an early warning system (EWS) that scans all cranial CTs obtained at the trauma center. The EWS, trained with CT scans from about 100 patients, can predict EAH with 100% accuracy using image recognition and supervised learning algorithms. Each MR section obtained for each patient is individually analyzed by image processing, and EAH detection is performed. For this, the decision tree method, a supervised learning algorithm, was trained and used to detect EAH in MR sections. The algorithm was developed so that it immediately alerts the emergency physician and the consultant neurosurgeon by e-mail when it detects EAH in more than 10 sections of any patient.
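The per-slice decision tree plus slice-count alert rule can be sketched as below. The features, their values, and the helper name are our own illustrative assumptions; only the decision-tree classifier and the more-than-10-slices alert threshold come from the abstract (the e-mail dispatch itself is omitted).

```python
# Illustrative sketch only: classify each slice with a decision tree and
# fire an alert when more than 10 slices of a scan are predicted as EAH.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Toy per-slice features (e.g. hyper-dense area fraction, mean intensity);
# real features would come from image processing of the CT slices.
X_train = np.vstack([rng.normal(0.1, 0.05, (50, 2)),    # normal slices
                     rng.normal(0.6, 0.05, (50, 2))])   # EAH-like slices
y_train = np.array([0] * 50 + [1] * 50)

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

def should_alert(slice_features, threshold=10):
    """True if more than `threshold` slices are predicted as EAH."""
    positives = int(clf.predict(slice_features).sum())
    return positives > threshold

# A scan with 12 EAH-like slices among 30 should trigger the alert.
scan = np.vstack([rng.normal(0.1, 0.05, (18, 2)),
                  rng.normal(0.6, 0.05, (12, 2))])
print(should_alert(scan))  # True with these well-separated toy features
```

Thresholding on the number of positive slices rather than any single slice is what makes this usable as an early warning system: isolated per-slice false positives do not page the neurosurgeon.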