Toward a Taxonomy and Computational Models of Abnormalities in Images
The human visual system can spot an abnormal image, and reason about what
makes it strange. This task has not received enough attention in computer
vision. In this paper we study various types of atypicalities in images in a
more comprehensive way than has been done before. We propose a new dataset of
abnormal images showing a wide range of atypicalities. We design human subject
experiments to discover a coarse taxonomy of the reasons for abnormality. Our
experiments reveal three major categories of abnormality: object-centric,
scene-centric, and contextual. Based on this taxonomy, we propose a
comprehensive computational model that can predict all different types of
abnormality in images and outperforms prior art in abnormality recognition.
Comment: To appear in the Thirtieth AAAI Conference on Artificial Intelligence
(AAAI 2016).
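The three-way taxonomy above suggests a late-fusion view of abnormality prediction. The following is a hypothetical sketch of such a fusion step; the paper's actual model details are not reproduced here, so the function name and weighting scheme are illustrative assumptions.

```python
# Hypothetical late-fusion sketch built on the three-category taxonomy
# (object-centric, scene-centric, contextual). Names and weights are
# illustrative assumptions, not the paper's actual model.
def fuse_abnormality_scores(object_score, scene_score, context_score,
                            weights=(1.0, 1.0, 1.0)):
    """Combine per-category abnormality scores (each in [0, 1]) into one
    overall score by taking the strongest weighted cue."""
    scores = (object_score, scene_score, context_score)
    return max(w * s for w, s in zip(weights, scores))
```

Taking the maximum reflects the intuition that an image is abnormal if any single category of evidence is strong, even when the others are unremarkable.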
PadChest: A large chest x-ray image dataset with multi-label annotated reports
We present a large-scale, high-resolution labeled chest x-ray dataset for the
automated exploration of medical images along with their associated reports.
This dataset includes more than 160,000 images obtained from 67,000 patients
that were interpreted and reported by radiologists at San Juan Hospital
(Spain) from 2009 to 2017, covering six different position views and
additional information on image acquisition and patient demography. The reports
were labeled with 174 different radiographic findings, 19 differential
diagnoses and 104 anatomic locations organized as a hierarchical taxonomy and
mapped onto standard Unified Medical Language System (UMLS) terminology. Of
these reports, 27% were manually annotated by trained physicians and the
remaining set was labeled using a supervised method based on a recurrent neural
network with attention mechanisms. The labels generated were then validated in
an independent test set, achieving a 0.93 Micro-F1 score. To the best of our
knowledge, this is one of the largest public chest x-ray databases suitable for
training supervised models on radiographs, and the first to contain
radiographic reports in Spanish. The PadChest dataset can be downloaded from
http://bimcv.cipf.es/bimcv-projects/padchest/
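The 0.93 Micro-F1 score reported above is the micro-averaged F1 over multi-label annotations. A minimal sketch of the metric's textbook definition (this is not the PadChest authors' evaluation code):

```python
# Standard micro-averaged F1 over multi-label annotations: pool true
# positives, false positives, and false negatives across all reports
# before computing precision and recall.
def micro_f1(y_true, y_pred):
    """y_true / y_pred: sequences of label sets, one set per report."""
    tp = fp = fn = 0
    for true_labels, pred_labels in zip(y_true, y_pred):
        t, p = set(true_labels), set(pred_labels)
        tp += len(t & p)   # labels correctly predicted
        fp += len(p - t)   # labels predicted but not present
        fn += len(t - p)   # labels present but missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0
```

Micro-averaging weights every label instance equally, which is the usual choice when label frequencies are highly imbalanced, as with 174 radiographic findings.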
Deep Learning for Brain Age Estimation: A Systematic Review
Over the years, Machine Learning models have been successfully employed on
neuroimaging data for accurately predicting brain age. Deviations from the
healthy brain aging pattern are associated with accelerated brain aging and
brain abnormalities. Hence, efficient and accurate techniques are required
for eliciting reliable brain age estimates. Several contributions
have been reported in the past for this purpose, resorting to different
data-driven modeling methods. Recently, deep neural networks (also referred to
as deep learning) have become prevalent in manifold neuroimaging studies,
including brain age estimation. In this review, we offer a comprehensive
analysis of the literature related to the adoption of deep learning for brain
age estimation with neuroimaging data. We detail and analyze the different deep
learning architectures used for this application, surveying the research works
published to date that quantitatively explore their use. We also examine
different brain age estimation frameworks, comparing their respective
advantages and weaknesses. Finally, the review concludes with an outlook
towards future directions that should be followed by prospective studies. The
ultimate goal of this paper is to establish a common and informed reference for
newcomers and experienced researchers willing to approach brain age estimation
using deep learning models.
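The deviation-from-healthy-aging idea discussed above is commonly quantified as the brain-age gap (predicted minus chronological age). A minimal sketch, with function names of my own choosing; MAE is the error metric most brain-age studies report:

```python
# Brain-age gap and mean absolute error, the two quantities most commonly
# reported in brain age estimation studies. Function names are illustrative.
def brain_age_gap(predicted_ages, chronological_ages):
    """Per-subject delta; positive values suggest accelerated brain aging."""
    return [p - c for p, c in zip(predicted_ages, chronological_ages)]

def mean_absolute_error(predicted_ages, chronological_ages):
    """Average magnitude of the prediction error across subjects."""
    gaps = brain_age_gap(predicted_ages, chronological_ages)
    return sum(abs(g) for g in gaps) / len(gaps)
```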
A Decision-Making Tool for Early Detection of Breast Cancer on Mammographic Images
Breast cancer is one of the most dangerous types of cancer among females worldwide. In medical practice, the early detection of a breast abnormality in a mammogram can significantly decrease the death rate caused by breast cancer. Therefore, researchers have directed their focus and efforts toward finding better solutions. Whereas earlier researchers used semi-automatic machine learning algorithms, attention has recently shifted toward deep learning algorithms that extract features automatically. Therefore, in this study, two pre-trained Convolutional Neural Network models, VGG16 and ResNet50, were applied to mammogram images to classify their abnormalities into four categories: (1) Benign Calcification, (2) Malignant Calcification, (3) Benign Mass, and (4) Malignant Mass. The mammographic images of the CBIS-DDSM dataset are used. In the training phase, various experiments were performed on ROI images to decide on the best model configuration and fine-tuning depth. The experimental results showed that the VGG16 model clearly outperformed the ResNet50 model: the former reached 80.0% accuracy, whereas the latter classified at only 60.0% accuracy, close to chance. Apart from accuracy, the other performance metrics used in this study are precision, recall, F1-score, and AUC. Our evaluation based on these metrics shows that both networks achieve accurate detection, with VGG16 being the more accurate. Finally, a decision support tool is developed which classifies full mammogram images, based on the fine-tuned VGG16 architecture, into Benign Calcification, Malignant Calcification, Benign Mass, and Malignant Mass.
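The abstract evaluates its four-class classifier with accuracy, precision, recall, and F1-score. The following sketch shows the standard one-vs-rest computation of those metrics for the four abnormality classes; the class names mirror the abstract, but the evaluation code itself is an illustrative assumption, not the authors' implementation.

```python
# Per-class precision/recall/F1 and overall accuracy for the four mammogram
# abnormality classes, computed one-vs-rest (standard definition; not the
# authors' own evaluation code).
CLASSES = ["benign_calcification", "malignant_calcification",
           "benign_mass", "malignant_mass"]

def classification_metrics(y_true, y_pred, classes=CLASSES):
    per_class = {}
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        per_class[c] = {"precision": prec, "recall": rec, "f1": f1}
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    return accuracy, per_class
```

Reporting per-class metrics alongside accuracy matters here because a classifier near chance on four classes, as ResNet50 was, can still post a deceptively reasonable accuracy if the test set is imbalanced.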
A Survey of Multimodal Information Fusion for Smart Healthcare: Mapping the Journey from Data to Wisdom
Multimodal medical data fusion has emerged as a transformative approach in
smart healthcare, enabling a comprehensive understanding of patient health and
personalized treatment plans. In this paper, a journey from data to information
to knowledge to wisdom (DIKW) is explored through multimodal fusion for smart
healthcare. We present a comprehensive review of multimodal medical data fusion
focused on the integration of various data modalities. The review explores
different approaches such as feature selection, rule-based systems, machine
learning, deep learning, and natural language processing, for fusing and
analyzing multimodal data. This paper also highlights the challenges associated
with multimodal fusion in healthcare. By synthesizing the reviewed frameworks
and theories, it proposes a generic framework for multimodal medical data
fusion that aligns with the DIKW model. Moreover, it discusses future
directions related to the four pillars of healthcare: Predictive, Preventive,
Personalized, and Participatory approaches. The components of the comprehensive
survey presented in this paper form the foundation for more successful
implementation of multimodal fusion in smart healthcare. Our findings can guide
researchers and practitioners in leveraging the power of multimodal fusion with
the state-of-the-art approaches to revolutionize healthcare and improve patient
outcomes.
Comment: This work has been submitted to Elsevier for possible publication.
Copyright may be transferred without notice, after which this version may no
longer be accessible.
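One concrete instance of the fusion approaches the survey reviews is feature-level (early) fusion. The sketch below is a hypothetical illustration: modality names, dimensions, and the zero-fill policy for missing modalities are my own assumptions, not a scheme prescribed by the survey.

```python
# Hypothetical feature-level (early) fusion: concatenate per-modality
# feature vectors into one representation, zero-filling any modality that
# is missing for a given patient. Names and dimensions are illustrative.
def early_fusion(modalities, dims):
    """modalities: dict name -> feature list (absent if missing);
    dims: dict name -> expected vector length, fixing the output layout."""
    fused = []
    for name, dim in dims.items():
        vec = modalities.get(name, [0.0] * dim)  # zero-fill a missing modality
        if len(vec) != dim:
            raise ValueError(f"modality {name!r} has wrong dimensionality")
        fused.extend(vec)
    return fused
```

Fixing the layout through `dims` keeps the fused vector the same length for every patient, which is what a downstream model trained on the concatenation requires.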
Developing advanced mathematical models for detecting abnormalities in 2D/3D medical structures.
Detecting abnormalities in two-dimensional (2D) and three-dimensional (3D) medical structures is among the most interesting and challenging research areas in the medical imaging field. Obtaining accurate, automated quantification of abnormalities in medical structures is still very challenging. This is due to a large and constantly growing number of different objects of interest and associated abnormalities, large variations in their appearances and shapes in images, different medical imaging modalities, and the associated changes of signal homogeneity and noise for each object. The main objective of this dissertation is to address these problems and to provide proper mathematical models and techniques that are capable of analyzing low- and high-resolution medical data and providing an accurate, automated analysis of the abnormalities in medical structures in terms of their area/volume, shape, and associated abnormal functionality. This dissertation presents different preliminary mathematical models and techniques that are applied in three case studies: (i) detecting abnormal tissue in the left ventricle (LV) wall of the heart from delayed contrast-enhanced cardiac magnetic resonance imaging (MRI), (ii) detecting local cardiac diseases based on estimating the functional strain metric from cardiac cine MRI, and (iii) identifying abnormalities in the corpus callosum (CC) brain structure (the largest fiber bundle that connects the two hemispheres of the brain) for subjects that suffer from developmental brain disorders. For detecting the abnormal tissue in the heart, a graph-cut mathematical optimization model with a cost function that accounts for the object's visual appearance and shape is used to segment the inner cavity. The model is further integrated with a geometric model (i.e., a fast marching level set model) to segment the outer border of the myocardial wall (the LV).
Then the abnormal tissue in the myocardium wall (also called dead tissue, pathological tissue, or infarct area) is identified based on a joint Markov-Gibbs random field (MGRF) model of the image and its region (segmentation) map that accounts for the pixel intensities and the spatial interactions between the pixels. Experiments with real in-vivo data and comparative results against ground truth (identified by a radiologist) and other approaches showed that the proposed framework can accurately detect the pathological tissue and can provide useful metrics for radiologists and clinicians. To estimate the strain from cardiac cine MRI, a novel method based on tracking the LV wall geometry is proposed. To achieve this goal, a partial differential equation (PDE) method is applied to track the LV wall points by solving the Laplace equation between the LV contours of each two successive image frames over the cardiac cycle. The main advantage of the proposed tracking method over traditional texture-based methods is its ability to track the movement and rotation of the LV wall based on the geometric features of the inner, mid-, and outer walls of the LV. This overcomes noise sources that come from scanner and heart motion. To identify the abnormalities in the CC from brain MRI, the CCs are aligned using a rigid registration model and are segmented using a shape-appearance model. Then, they are mapped to a simple unified space for analysis. This work introduces a novel cylindrical mapping model, which is conformal (i.e., a one-to-one, bijective transformation), that enables accurate 3D shape analysis of the CC in the cylindrical domain. The framework can detect abnormalities in all divisions of the CC (i.e., splenium, rostrum, genu, and body). In addition, it offers a full 3D analysis of the CC abnormalities instead of the area-based analysis performed by previous groups.
The initial classification results based on the centerline length and CC thickness suggest that the proposed CC shape analysis is a promising supplement to the current techniques for diagnosing dyslexia. The proposed techniques in this dissertation have been successfully tested on complex synthetic and MR images and can be used to advantage in many of today's clinical applications of computer-assisted medical diagnostics and intervention.
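The Laplace-based tracking described in this abstract solves Laplace's equation between two successive LV contours and follows streamlines of the resulting potential to match wall points. A toy sketch of the core relaxation step on a rectangular grid (the boundary setup is illustrative, not the dissertation's implementation):

```python
# Toy Jacobi relaxation for Laplace's equation. Cells marked in `fixed`
# are Dirichlet boundaries playing the role of two contours (potential 0
# on one, 1 on the other); free cells converge to the harmonic function
# between them, whose streamlines give point correspondences for tracking.
def solve_laplace(grid, fixed, iters=2000):
    """grid: list of rows of floats; fixed[i][j]: True for cells held constant."""
    h, w = len(grid), len(grid[0])
    for _ in range(iters):
        new = [row[:] for row in grid]
        for i in range(1, h - 1):
            for j in range(1, w - 1):
                if not fixed[i][j]:
                    # Each free cell relaxes toward the mean of its neighbors.
                    new[i][j] = 0.25 * (grid[i - 1][j] + grid[i + 1][j]
                                        + grid[i][j - 1] + grid[i][j + 1])
        grid = new
    return grid
```

On a strip with potential 0 on the left boundary and 1 on the right, the solution converges to a linear ramp, so the recovered correspondences are evenly spaced, which is the behavior a contour-to-contour tracker needs as a baseline. Jacobi iteration is the simplest solver; production code would use a faster scheme such as Gauss-Seidel or multigrid.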