8,073 research outputs found
Multi-Modal Multi-Scale Deep Learning for Large-Scale Image Annotation
Image annotation aims to annotate a given image with a variable number of
class labels corresponding to diverse visual concepts. In this paper, we
address two main issues in large-scale image annotation: 1) how to learn a rich
feature representation suitable for predicting a diverse set of visual concepts,
ranging from objects and scenes to abstract concepts; and 2) how to annotate an image
with the optimal number of class labels. To address the first issue, we propose
a novel multi-scale deep model for extracting rich and discriminative features
capable of representing a wide range of visual concepts. Specifically, a novel
two-branch deep neural network architecture is proposed which comprises a very
deep main network branch and a companion feature fusion network branch designed
for fusing the multi-scale features computed from the main branch. The deep
model is also made multi-modal by taking noisy user-provided tags as model
input to complement the image input. For tackling the second issue, we
introduce a label quantity prediction auxiliary task to the main label
prediction task to explicitly estimate the optimal label number for a given
image. Extensive experiments are carried out on two large-scale image
annotation benchmark datasets and the results show that our method
significantly outperforms the state of the art. Comment: Submitted to IEEE TI
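The second issue above, choosing how many labels to emit per image, can be illustrated with a minimal sketch. This is an assumed toy example, not the authors' implementation: it takes hypothetical per-label scores together with a label count (as a label-quantity prediction head would produce) and returns the top-k labels.

```python
import numpy as np

def annotate(label_scores, predicted_count):
    """Select the top-k class labels, where k comes from a
    label-quantity prediction head (hedged sketch, not the
    paper's model)."""
    k = int(round(predicted_count))
    order = np.argsort(label_scores)[::-1]  # indices, highest score first
    return sorted(order[:k].tolist())

scores = np.array([0.9, 0.1, 0.75, 0.4, 0.8])
print(annotate(scores, 3))
```

The point of the auxiliary task is that k is predicted per image rather than fixed globally, so images with few concepts are not over-annotated.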
Patch-based Convolutional Neural Network for Whole Slide Tissue Image Classification
Convolutional Neural Networks (CNN) are state-of-the-art models for many
image classification tasks. However, to recognize cancer subtypes
automatically, training a CNN on gigapixel resolution Whole Slide Tissue Images
(WSI) is currently computationally impossible. The differentiation of cancer
subtypes is based on cellular-level visual features observed on image patch
scale. Therefore, we argue that in this situation a patch-level classifier
trained on image patches will perform better than, or comparably to, an
image-level classifier. The challenge becomes how to intelligently combine
patch-level classification results and model the fact that not all patches will
be discriminative. We propose to train a decision fusion model to aggregate
patch-level predictions given by patch-level CNNs, which to the best of our
knowledge has not been shown before. Furthermore, we formulate a novel
Expectation-Maximization (EM) based method that automatically locates
discriminative patches robustly by utilizing the spatial relationships of
patches. We apply our method to the classification of glioma and non-small-cell
lung carcinoma cases into subtypes. The classification accuracy of our method
is similar to the inter-observer agreement between pathologists. Although it is
impossible to train CNNs on WSIs, we experimentally demonstrate using a
comparable non-cancer dataset of smaller images that a patch-based CNN can
outperform an image-based CNN.
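The EM-flavoured alternation between estimating a slide label and re-selecting discriminative patches can be sketched as follows. This is an illustrative toy, not the paper's method: it assumes each patch already has class probabilities from a patch-level CNN, and alternates a slide-level estimate (M-step) with keeping the patches most consistent with it (E-step).

```python
import numpy as np

def fuse_patches(patch_probs, n_iters=3, keep_frac=0.5):
    """Toy EM-style decision fusion over patch-level predictions
    (hedged sketch; patch_probs is an (n_patches, n_classes) array)."""
    n = len(patch_probs)
    mask = np.ones(n, dtype=bool)
    for _ in range(n_iters):
        slide_probs = patch_probs[mask].mean(axis=0)   # M-step: slide estimate
        label = int(slide_probs.argmax())
        consistency = patch_probs[:, label]            # E-step: patch agreement
        k = max(1, int(keep_frac * n))
        mask = np.zeros(n, dtype=bool)
        mask[np.argsort(consistency)[::-1][:k]] = True # keep discriminative patches
    return int(patch_probs[mask].mean(axis=0).argmax())
```

The real method additionally uses spatial relationships between patches; this sketch keeps only the alternating select-and-aggregate structure.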
Machine Learning and Integrative Analysis of Biomedical Big Data.
Recent developments in high-throughput technologies have accelerated the accumulation of massive amounts of omics data from multiple sources: genome, epigenome, transcriptome, proteome, metabolome, etc. Traditionally, data from each source (e.g., genome) is analyzed in isolation using statistical and machine learning (ML) methods. Integrative analysis of multi-omics and clinical data is key to new biomedical discoveries and advancements in precision medicine. However, data integration poses new computational challenges as well as exacerbates the ones associated with single-omics studies. Specialized computational approaches are required to effectively and efficiently perform integrative analysis of biomedical data acquired from diverse modalities. In this review, we discuss state-of-the-art ML-based approaches for tackling five specific computational challenges associated with integrative analysis: curse of dimensionality, data heterogeneity, missing data, class imbalance, and scalability issues.
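Two of the listed challenges, missing data and multi-modal integration, can be illustrated with a deliberately naive sketch. This is an assumed example, not a method from the review: it imputes missing values per modality with the feature mean and then concatenates features across modalities (so-called early integration).

```python
import numpy as np

def integrate(modalities):
    """Naive early-integration sketch (illustrative only): per-modality
    mean imputation of missing values, then feature concatenation."""
    imputed = []
    for X in modalities:
        X = X.astype(float).copy()
        col_mean = np.nanmean(X, axis=0)      # per-feature mean, ignoring NaNs
        rows, cols = np.where(np.isnan(X))
        X[rows, cols] = col_mean[cols]        # fill missing entries
        imputed.append(X)
    return np.hstack(imputed)                 # samples x (sum of features)

genome = np.array([[1.0, np.nan], [3.0, 4.0]])
proteome = np.array([[5.0], [np.nan]])
print(integrate([genome, proteome]))
```

Real integrative pipelines use far more sophisticated imputation and fusion; the sketch only shows why per-modality preprocessing must happen before features are combined.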
One-Class Classification: Taxonomy of Study and Review of Techniques
One-class classification (OCC) algorithms aim to build classification models
when the negative class is either absent, poorly sampled or not well defined.
This unique situation constrains the learning of efficient classifiers, since
the class boundary must be defined using knowledge of the positive class alone. The OCC
problem has been considered and applied under many research themes, such as
outlier/novelty detection and concept learning. In this paper we present a
unified view of the general OCC problem through a taxonomy based on the
availability of training data, the algorithms used, and the application
domains. We further delve into each
of the categories of the proposed taxonomy and present a comprehensive
literature review of the OCC algorithms, techniques and methodologies with a
focus on their significance, limitations and applications. We conclude our
paper by discussing some open research problems in the field of OCC and present
our vision for future research. Comment: 24 pages + 11 pages of references, 8 figures
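The core OCC setting, learning a boundary from positive examples only, can be shown with a minimal sketch. This is an illustrative baseline assumed for exposition, not a technique from the survey: it models the positive class by its centroid and flags points whose distance exceeds a training-set quantile as outliers.

```python
import numpy as np

def fit_occ(positives, quantile=0.95):
    """Minimal one-class classifier sketch (hedged example): fit a
    centroid-plus-radius boundary using only positive samples."""
    center = positives.mean(axis=0)
    dists = np.linalg.norm(positives - center, axis=1)
    radius = np.quantile(dists, quantile)     # boundary from positives alone
    def predict(x):
        return np.linalg.norm(x - center) <= radius  # True = inlier
    return predict
```

Methods surveyed in the paper (e.g., one-class SVMs, density estimators) replace this crude centroid boundary with learned ones, but share the same positive-only training constraint.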