Coarse-to-Fine Annotation Enrichment for Semantic Segmentation Learning
Rich high-quality annotated data is critical for semantic segmentation
learning, yet acquiring dense and pixel-wise ground-truth is both labor- and
time-consuming. Coarse annotations (e.g., scribbles, coarse polygons) offer an
economical alternative, but training on them alone rarely yields
satisfactory performance. In order to generate high-quality annotated data
at a low time cost for accurate segmentation, in this paper,
we propose a novel annotation enrichment strategy, which expands existing
coarse annotations of training data to a finer scale. Extensive experiments on
the Cityscapes and PASCAL VOC 2012 benchmarks have shown that the neural
networks trained with the enriched annotations from our framework yield a
significant improvement over those trained with the original coarse labels,
and are highly competitive with the performance obtained using human-annotated
dense annotations. The proposed method also outperforms other
state-of-the-art weakly-supervised segmentation methods. Comment: CIKM 2018 International Conference on Information and Knowledge
Management
Weakly supervised segmentation from extreme points
Annotation of medical images has been a major bottleneck for the development
of accurate and robust machine learning models. Annotation is costly and
time-consuming and typically requires expert knowledge, especially in the
medical domain. Here, we propose to use minimal user interaction in the form of
extreme point clicks in order to train a segmentation model that can, in turn,
be used to speed up the annotation of medical images. We use extreme points in
each dimension of a 3D medical image to constrain an initial segmentation based
on the random walker algorithm. This segmentation is then used as a weak
supervisory signal to train a fully convolutional network that can segment the
organ of interest based on the provided user clicks. We show that the network's
predictions can be refined through several iterations of training and
prediction using the same weakly annotated data. Ultimately, our method has the
potential to speed up the generation process of new training datasets for the
development of new machine learning and deep learning-based models for, but not
exclusively, medical image analysis. Comment: Accepted at the MICCAI Workshop for Large-scale Annotation of
Biomedical Data and Expert Label Synthesis, Shenzhen, China, 2019
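The pipeline this abstract describes — extreme-point clicks constraining an initial segmentation, which then serves as a weak label for iterative self-training — can be sketched roughly as follows. This is a minimal 2D illustration, not the paper's implementation: it substitutes a padded bounding box of the clicks for the random walker step, and all names (`extreme_point_seeds`, `self_training_loop`, `train_fn`, `predict_fn`) are hypothetical.

```python
import numpy as np

def extreme_point_seeds(shape, points, margin=3):
    """Build a seed map from user-clicked extreme points.

    points: (row, col) extreme points of the organ (top/bottom/left/right).
    Returns an int map: 1 = foreground seed, 2 = background seed,
    0 = unlabeled (to be resolved by a propagation step such as the
    random walker in the paper).
    """
    seeds = np.zeros(shape, dtype=np.int64)
    pts = np.asarray(points)
    # Each clicked extreme point is a certain foreground seed.
    for r, c in pts:
        seeds[r, c] = 1
    # Pixels outside the (slightly padded) bounding box of the extreme
    # points cannot belong to the object: mark them as background seeds.
    r0 = max(pts[:, 0].min() - margin, 0)
    r1 = min(pts[:, 0].max() + margin, shape[0] - 1)
    c0 = max(pts[:, 1].min() - margin, 0)
    c1 = min(pts[:, 1].max() + margin, shape[1] - 1)
    outside = np.ones(shape, dtype=bool)
    outside[r0:r1 + 1, c0:c1 + 1] = False
    seeds[outside] = 2
    return seeds

def self_training_loop(image, seeds, train_fn, predict_fn, rounds=3):
    """Iterative refinement: treat the current segmentation as weak
    ground truth, retrain, and re-predict on the same images."""
    pseudo_label = (seeds == 1)           # initial, very sparse labels
    for _ in range(rounds):
        model = train_fn(image, pseudo_label)
        pseudo_label = predict_fn(model, image)
        pseudo_label[seeds == 2] = False  # never leak outside the box
    return pseudo_label
```

A real use would plug a fully convolutional network into `train_fn`/`predict_fn`; the point of the sketch is only the seed construction and the retrain-and-repredict loop.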
Semantic multimedia modelling & interpretation for annotation
The emergence of multimedia-enabled devices, particularly the incorporation of cameras in mobile phones, together with rapid advances in low-cost storage, has drastically increased the rate of multimedia data production. Witnessing such ubiquity of digital images and videos, the research community has turned its attention to their effective utilization and management. Stored in monumental multimedia corpora, digital data need to be retrieved and organized intelligently, drawing on the rich semantics involved. Exploiting these image and video collections demands proficient annotation and retrieval techniques. Recently, the multimedia research community has progressively shifted its emphasis to the personalization of these media. The main impediment in image and video analysis is the semantic gap: the discrepancy between a user's high-level interpretation of an image or video and its low-level computational interpretation. Content-based image and video annotation systems are particularly susceptible to the semantic gap because they rely on low-level visual features to delineate semantically rich content. Visual similarity, however, is not semantic similarity, so an alternative way through this dilemma is needed. The semantic gap can be narrowed by incorporating high-level and user-generated information into the annotation. High-level descriptions of images and videos are better at capturing the semantic meaning of multimedia content, but it is not always possible to collect this information. It is commonly agreed that the problem of high-level semantic annotation of multimedia is still far from solved. This dissertation puts forward approaches for intelligent semantic extraction from multimedia for high-level annotation, and aims to bridge the gap between visual features and semantics.
It proposes a framework for annotation enhancement and refinement for object/concept-annotated image and video datasets. The overall theme is to first purify the datasets of noisy keywords and then expand the concepts lexically and commonsensically to fill the vocabulary and lexical gap, achieving high-level semantics for the corpus. The dissertation also explores a novel approach for high-level semantic (HLS) propagation through image corpora. HLS propagation takes advantage of semantic intensity (SI), the concept-dominancy factor in an image, together with annotation-based semantic similarity between images. An image is a combination of various concepts, some of which are more dominant than others, while the semantic similarity of two images is based on their SI values and the semantic similarity of their concepts. Moreover, HLS propagation exploits clustering techniques to group similar images, so that a single effort by a human expert to assign high-level semantics to a randomly selected image can be propagated to other images through the cluster. The investigation has been carried out on the LabelMe image and LabelMe video datasets. Experiments show that the proposed approaches yield a noticeable improvement towards bridging the semantic gap and that the proposed system outperforms traditional systems.
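The SI-weighted similarity and cluster-based propagation described above can be illustrated with a toy sketch. This assumes SI is a normalized concept frequency per image and similarity is the overlap of SI-weighted concepts — the dissertation's actual measures may differ, and all function names here are hypothetical.

```python
from collections import Counter

def semantic_intensity(annotations):
    """Concept-dominancy weights for one image: each concept's share of
    all annotated concepts in that image (illustrative stand-in for SI)."""
    counts = Counter(annotations)
    total = sum(counts.values())
    return {c: n / total for c, n in counts.items()}

def si_similarity(ann_a, ann_b):
    """Annotation-based similarity of two images, weighting shared
    concepts by their semantic intensity in each image."""
    sa, sb = semantic_intensity(ann_a), semantic_intensity(ann_b)
    return sum(min(sa[c], sb[c]) for c in set(sa) & set(sb))

def propagate_hls(clusters, expert_labels):
    """A single expert-assigned high-level semantic per cluster is
    propagated to every member image of that cluster."""
    labels = {}
    for cluster_id, members in clusters.items():
        hls = expert_labels[cluster_id]   # one human effort per cluster
        for image_id in members:
            labels[image_id] = hls
    return labels
```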
Socializing the Semantic Gap: A Comparative Survey on Image Tag Assignment, Refinement and Retrieval
Where previous reviews on content-based image retrieval emphasize what can
be seen in an image to bridge the semantic gap, this survey considers what
people tag about an image. A comprehensive treatise of three closely linked
problems, i.e., image tag assignment, refinement, and tag-based image retrieval
is presented. While existing works vary in terms of their targeted tasks and
methodology, they rely on the key functionality of tag relevance, i.e.,
estimating the relevance of a specific tag with respect to the visual content
of a given image and its social context. By analyzing what information a
specific method exploits to construct its tag relevance function and how such
information is exploited, this paper introduces a taxonomy to structure the
growing literature, understand the ingredients of the main works, clarify their
connections and differences, and recognize their merits and limitations. For a
head-to-head comparison between state-of-the-art methods, a new experimental
protocol is presented, with training sets containing 10k, 100k and 1m images
and an evaluation on three test sets, contributed by various research groups.
Eleven representative works are implemented and evaluated. Putting all this
together, the survey aims to provide an overview of the past and foster
progress for the near future. Comment: to appear in ACM Computing Surveys
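One widely used instantiation of the tag relevance function this survey centers on is neighbor voting: a tag is deemed relevant to an image if the image's visual neighbors use it more often than chance. The sketch below is a simplified illustration of that idea, not any specific surveyed method; the function name and signature are assumptions.

```python
def tag_relevance(tag, neighbor_tag_sets, tag_prior):
    """Neighbor-voting estimate of tag relevance.

    neighbor_tag_sets: tag sets of the k visually most similar images.
    tag_prior: fraction of all images in the collection carrying the tag.
    Positive scores mean the tag occurs among visual neighbors more
    often than expected by chance, suggesting it fits the image content.
    """
    k = len(neighbor_tag_sets)
    votes = sum(1 for tags in neighbor_tag_sets if tag in tags)
    return votes / k - tag_prior
```

Ranking an image's candidate tags by this score supports tag refinement, and ranking images by a query tag's score supports tag-based retrieval — the three tasks the survey links.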
A Survey on Label-efficient Deep Image Segmentation: Bridging the Gap between Weak Supervision and Dense Prediction
The rapid development of deep learning has brought great progress to image
segmentation, one of the fundamental tasks of computer vision. However,
current segmentation algorithms mostly rely on the availability of pixel-level
annotations, which are often expensive, tedious, and laborious to obtain. To
alleviate this burden, the past years have witnessed increasing attention to
building label-efficient, deep-learning-based image segmentation algorithms.
This paper
offers a comprehensive review on label-efficient image segmentation methods. To
this end, we first develop a taxonomy to organize these methods according to
the supervision provided by different types of weak labels (including no
supervision, inexact supervision, incomplete supervision and inaccurate
supervision) and supplemented by the types of segmentation problems (including
semantic segmentation, instance segmentation and panoptic segmentation). Next,
we summarize the existing label-efficient image segmentation methods from a
unified perspective that discusses an important question: how to bridge the gap
between weak supervision and dense prediction -- the current methods are mostly
based on heuristic priors, such as cross-pixel similarity, cross-label
constraint, cross-view consistency, and cross-image relation. Finally, we share
our opinions about the future research directions for label-efficient deep
image segmentation. Comment: Accepted to IEEE TPAMI
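Of the heuristic priors this survey names, cross-pixel similarity is perhaps the simplest to state: pixels with similar appearance should receive similar predictions. A minimal sketch of such a pairwise penalty, assuming grayscale intensities and comparing only horizontal neighbors for brevity (the function name and Gaussian affinity are illustrative choices, not a specific surveyed loss):

```python
import numpy as np

def cross_pixel_similarity_loss(probs, image, sigma=0.1):
    """Penalize prediction disagreement between look-alike neighbors.

    probs: (H, W) predicted foreground probabilities.
    image: (H, W) pixel intensities.
    Appearance affinity is high when neighboring intensities are close,
    so disagreement there is penalized; across strong edges the
    affinity (and hence the penalty) decays toward zero.
    """
    affinity = np.exp(-((image[:, :-1] - image[:, 1:]) ** 2)
                      / (2 * sigma ** 2))
    disagreement = (probs[:, :-1] - probs[:, 1:]) ** 2
    return float((affinity * disagreement).mean())
```

In practice such a term is added to a weakly supervised training objective so that sparse labels (scribbles, points, image tags) are propagated along low-level appearance structure.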