Human-machine collaboration for foreground segmentation in images and videos
Foreground segmentation is defined as the problem of generating pixel-level foreground masks for all the objects in a given image or video. Accurate foreground segmentations in images and videos have several potential applications, such as improving search, training richer object detectors, image synthesis and re-targeting, scene and activity understanding, video summarization, and post-production video editing.
One effective way to solve this problem is human-machine collaboration. The main idea is to let humans guide the segmentation process through some partial supervision. Humans are extremely good at perception and can easily identify foreground regions. Computers, on the other hand, lack this capability but are extremely good at continuously processing large volumes of data at the lowest level of detail with great efficiency. Bringing these complementary strengths together can lead to systems that are both accurate and cost-effective. However, in any such human-machine collaboration system, cost-effectiveness and accuracy are competing goals. More involvement from humans can certainly lead to higher accuracy, but it also increases cost, both in time and money. Relying more on machines is cost-effective, but algorithms are still nowhere near human-level performance. Balancing this cost-accuracy trade-off is the key to the success of such a hybrid system.
In this thesis, I develop foreground segmentation algorithms which effectively and efficiently make use of human guidance for accurately segmenting foreground objects in images and videos. The algorithms developed in this thesis actively reason about the best modalities or interactions through which a user can provide guidance to the system for generating accurate segmentations. At the same time, these algorithms are also capable of prioritizing human guidance on the instances where it is most needed. Finally, when structural similarity exists within the data (e.g., adjacent frames in a video or similar images in a collection), the algorithms developed in this thesis are capable of propagating information from instances which have received human guidance to those which have not. Together, these characteristics result in substantial savings in human annotation cost while generating high-quality foreground segmentations in images and videos.
In this thesis, I consider three categories of segmentation problems, all of which can greatly benefit from human-machine collaboration. First, I consider the problem of interactive image segmentation. In traditional interactive methods, a human annotator provides a coarse spatial annotation (e.g., a bounding box or freehand outline) around the object of interest to obtain a segmentation. The mode of manual annotation used affects both its accuracy and ease of use. Whereas existing methods assume a fixed form of input no matter the image, in this thesis I propose a data-driven algorithm which learns whether an interactive segmentation method will succeed if initialized with a given annotation mode. This allows us to predict the modality that will be sufficiently strong to yield a high-quality segmentation for a given image, and it results in large savings in annotation cost. I also propose a novel interactive segmentation algorithm called Click Carving, which can accurately segment objects in images and videos using a very simple form of human interaction: point clicks. It outperforms several state-of-the-art methods while requiring only a fraction of the human effort.
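Click Carving's internals are not reproduced here; as a rough, hypothetical illustration of click-seeded interactive segmentation, the sketch below initializes OpenCV's GrabCut from foreground and background clicks (the function name and seed radius are illustrative choices, not the thesis's method):

```python
# Hypothetical sketch: click-seeded segmentation via OpenCV GrabCut.
# Illustrates the *kind* of interaction Click Carving uses (point clicks),
# not the thesis's actual algorithm.
import cv2
import numpy as np

def segment_from_clicks(image, fg_clicks, bg_clicks, radius=10, iters=5):
    """image: HxWx3 BGR array; fg_clicks/bg_clicks: lists of (x, y) points."""
    mask = np.full(image.shape[:2], cv2.GC_PR_BGD, dtype=np.uint8)
    for (x, y) in fg_clicks:   # hard foreground seeds around each click
        cv2.circle(mask, (x, y), radius, int(cv2.GC_FGD), thickness=-1)
    for (x, y) in bg_clicks:   # hard background seeds
        cv2.circle(mask, (x, y), radius, int(cv2.GC_BGD), thickness=-1)
    bgd_model = np.zeros((1, 65), np.float64)
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(image, mask, None, bgd_model, fgd_model, iters,
                cv2.GC_INIT_WITH_MASK)
    # pixels labeled definite or probable foreground form the final mask
    return np.isin(mask, [cv2.GC_FGD, cv2.GC_PR_FGD]).astype(np.uint8)
```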
Second, I consider the problem of segmenting images in a weakly supervised image collection. Here, we are given a collection of images all belonging to the same object category, and the goal is to jointly segment the common object from all the images. For this, I develop a stagewise active approach to segmentation propagation: in each stage, the images that appear most valuable for human annotation are actively determined and labeled by human annotators, and the foreground estimates in all unlabeled images are then revised accordingly. In order to identify images that, once annotated, will propagate well to other examples, I introduce an active selection procedure that operates on the joint segmentation graph over all images. It prioritizes human intervention for images that are uncertain and influential in the graph while also being mutually diverse. Building on this, I also introduce the problem of measuring compatibility between image pairs for joint segmentation. I show that restricting the joint segmentation to only compatible image pairs improves joint segmentation performance.
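As a hypothetical sketch of the active selection idea (not the thesis's exact formulation), the following greedily picks images that score high on uncertainty and graph influence while penalizing similarity to images already selected; the scores and weights are illustrative:

```python
# Hypothetical sketch of uncertainty + influence + diversity selection on a
# joint segmentation graph. All scoring choices here are illustrative.
import numpy as np

def select_for_annotation(uncertainty, adjacency, features, k, lam=0.5):
    """uncertainty: (n,) per-image uncertainty; adjacency: (n, n) graph
    weights; features: (n, d) image descriptors; returns k image indices."""
    influence = adjacency.sum(axis=1)      # how strongly a node can propagate
    base = uncertainty * influence         # uncertain AND influential
    selected = []
    for _ in range(k):
        score = base.astype(float).copy()
        for j in selected:
            # penalize cosine similarity to already-selected images (diversity)
            sim = features @ features[j] / (
                np.linalg.norm(features, axis=1)
                * np.linalg.norm(features[j]) + 1e-8)
            score -= lam * sim * base
        score[selected] = -np.inf          # never re-pick an image
        selected.append(int(np.argmax(score)))
    return selected
```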
Finally, I propose a semi-supervised approach for segmentation propagation in video. Given human supervision in some frames of a video, this information can be propagated through time. The main challenge is that the foreground object may move quickly through the scene while its appearance and shape evolve over time. To address this, I propose a higher-order supervoxel label consistency potential which leverages bottom-up supervoxels to enforce long-range temporal consistency during propagation. I also introduce the notion of generic pixel-level objectness in images and videos by training a deep neural network which uses appearance and motion to automatically assign each pixel a score capturing its likelihood of being "object" or "background". I show that the human guidance in the semi-supervised propagation algorithm can be further augmented with these generic pixel-objectness scores to obtain even more accurate foreground segmentations in videos.
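A minimal sketch of how propagated foreground estimates might be fused with pixel-objectness scores, assuming both are per-pixel probabilities in [0, 1]; the log-linear fusion below is an illustrative choice, not necessarily the thesis's exact combination:

```python
# Illustrative fusion of a propagated foreground estimate with a generic
# pixel-objectness map. `objectness` would come from a trained
# appearance+motion network; both inputs are assumed to be probabilities.
import numpy as np

def fuse_propagation_with_objectness(propagated_prob, objectness, alpha=0.5):
    """Simple log-linear fusion of two per-pixel probability maps."""
    eps = 1e-6
    log_odds = (
        alpha * np.log((propagated_prob + eps) / (1 - propagated_prob + eps))
        + (1 - alpha) * np.log((objectness + eps) / (1 - objectness + eps)))
    return (log_odds > 0).astype(np.uint8)   # final binary foreground mask
```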
Throughout, I provide extensive evaluation on challenging datasets and compare with many state-of-the-art methods and other baselines, validating the strengths of the proposed algorithms. The outcomes across several different experiments show that the proposed human-machine collaboration algorithms achieve accurate segmentation of foreground objects in images and videos while saving a large amount of human annotation effort.
Optimization for Image Segmentation
Image segmentation, i.e., assigning each pixel a discrete label, is an essential task in computer vision with many applications. Major techniques for segmentation include Markov Random Fields (MRF), Kernel Clustering (KC), and the now-popular Convolutional Neural Networks (CNN). In this work, we focus on optimization for image segmentation. MRF, KC, and CNN methods optimize MRF energies, KC criteria, and CNN losses respectively, and the corresponding optimization problems are very different. We are interested in the synergy and the complementary benefits of MRF, KC, and CNN for interactive segmentation and semantic segmentation. Our first contribution is pseudo-bound optimization for binary MRF energies that are high-order or non-submodular. Second, we propose Kernel Cut, a novel formulation for segmentation which combines MRF regularization with Kernel Clustering; we show why to combine KC with MRF and how to optimize the joint objective. In the third part, we discuss how deep CNN segmentation can benefit from non-deep (i.e., shallow) methods like MRF and KC. In particular, we propose regularized losses for weakly supervised CNN segmentation, in which MRF energies or KC criteria can be integrated as part of the loss. Minimization of regularized losses is, in general, a principled approach to semi-supervised learning. Our regularized-loss method is very simple and accommodates different kinds of regularization losses for CNN segmentation. We also study the optimization of regularized losses beyond gradient descent. Our regularized-losses approach achieves state-of-the-art accuracy in semantic segmentation with near full-supervision quality.
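A minimal PyTorch sketch of the regularized-loss structure described above: a partial cross-entropy term on the few labeled (e.g., scribbled) pixels plus a relaxed, contrast-sensitive Potts-style term on the network's softmax output. The actual regularizers used in this work (e.g., kernel-cut or dense-CRF losses) differ in detail; this shows only the general form.

```python
# Sketch: partial cross-entropy + relaxed Potts regularizer for weakly
# supervised segmentation. Illustrative structure, not the exact losses.
import torch
import torch.nn.functional as F

def regularized_loss(logits, scribbles, image, lam=0.1, sigma=0.1):
    """logits: (B, C, H, W); scribbles: (B, H, W) long, -1 = unlabeled;
    image: (B, 3, H, W) normalized to [0, 1]."""
    # partial cross-entropy: only scribbled pixels contribute
    ce = F.cross_entropy(logits, scribbles.clamp(min=0), reduction='none')
    labeled = (scribbles >= 0).float()
    pce = (ce * labeled).sum() / labeled.sum().clamp(min=1)

    # relaxed Potts term: penalize label disagreement between neighbors,
    # down-weighted across strong image edges (contrast-sensitive weights)
    p = F.softmax(logits, dim=1)
    dx_p = (p[..., :, 1:] - p[..., :, :-1]).abs().sum(1)   # (B, H, W-1)
    dy_p = (p[..., 1:, :] - p[..., :-1, :]).abs().sum(1)   # (B, H-1, W)
    wx = torch.exp(-((image[..., :, 1:] - image[..., :, :-1]) ** 2).sum(1)
                   / (2 * sigma ** 2))
    wy = torch.exp(-((image[..., 1:, :] - image[..., :-1, :]) ** 2).sum(1)
                   / (2 * sigma ** 2))
    potts = (wx * dx_p).mean() + (wy * dy_p).mean()
    return pce + lam * potts
```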
Salient Objects in Clutter
This paper identifies and addresses a serious design bias of existing salient object detection (SOD) datasets, which unrealistically assume that each image should contain at least one clear and uncluttered salient object. This design bias has led to a saturation in performance for state-of-the-art SOD models when evaluated on existing datasets. However, these models are still far from satisfactory when applied to real-world scenes. Based on our analyses, we propose a new high-quality dataset and update the previous saliency benchmark. Specifically, our dataset, called Salient Objects in Clutter (SOC), includes images with both salient and non-salient objects from several common object categories. In addition to object category annotations, each salient image is accompanied by attributes that reflect common challenges in common scenes, which can help provide deeper insight into the SOD problem. Further, with a given saliency encoder, e.g., the backbone network, existing saliency models are designed to learn a mapping from the training image set to the training ground-truth set. We therefore argue that improving the dataset can yield higher performance gains than focusing only on the decoder design. With this in mind, we investigate several dataset-enhancement strategies, including label smoothing to implicitly emphasize salient boundaries, random image augmentation to adapt saliency models to various scenarios, and self-supervised learning as a regularization strategy to learn from small datasets. Our extensive results demonstrate the effectiveness of these tricks. We also provide a comprehensive benchmark for SOD, which can be found in our repository: https://github.com/DengPingFan/SODBenchmark.
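As a hypothetical illustration of the label-smoothing trick mentioned above, one could soften a binary saliency mask so that supervision is least confident near object boundaries; the recipe below (Gaussian blur plus clipping) is an assumption, not necessarily the paper's exact procedure:

```python
# Hypothetical label smoothing for saliency ground truth: blur the binary
# mask so targets near boundaries move away from hard {0, 1} labels.
import cv2
import numpy as np

def smooth_saliency_labels(mask, ksize=5, eps=0.1):
    """mask: (H, W) binary {0, 1} ground truth -> soft targets in [eps, 1-eps]."""
    soft = cv2.GaussianBlur(mask.astype(np.float32), (ksize, ksize), 0)
    return np.clip(soft, eps, 1.0 - eps)   # never fully confident labels
```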
Deep Semantic Segmentation of Natural and Medical Images: A Review
The semantic image segmentation task consists of classifying each pixel of an image into an instance, where each instance corresponds to a class. This task is part of the concept of scene understanding, or better explaining the global context of an image. In the medical image analysis domain, image segmentation can be used for image-guided interventions, radiotherapy, or improved radiological diagnostics. In this review, we categorize the leading deep learning-based medical and non-medical image segmentation solutions into six main groups: deep architectural, data synthesis-based, loss function-based, sequenced-model, weakly supervised, and multi-task methods, and provide a comprehensive review of the contributions in each of these groups. Further, for each group, we analyze its variants, discuss the limitations of current approaches, and present potential future research directions for semantic image segmentation.
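As a concrete instance of the loss function-based group reviewed above, a standard soft Dice loss, ubiquitous in medical image segmentation, looks like the following sketch (formulations vary in their smoothing terms and reduction):

```python
# Soft Dice loss for binary segmentation: 1 - Dice overlap between predicted
# foreground probabilities and the ground-truth mask.
import torch

def soft_dice_loss(probs, target, smooth=1.0):
    """probs: (B, H, W) foreground probabilities; target: (B, H, W) in {0, 1}."""
    probs, target = probs.flatten(1), target.flatten(1).float()
    inter = (probs * target).sum(dim=1)
    denom = probs.sum(dim=1) + target.sum(dim=1)
    dice = (2 * inter + smooth) / (denom + smooth)   # per-image Dice score
    return 1.0 - dice.mean()
```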
Domain Transfer Learning for Object and Action Recognition
Visual recognition has always been a fundamental problem in computer vision. Its task is to learn visual categories using labeled training data and then identify new, unlabeled instances of those categories. However, due to the large variations in visual data, visual recognition is still a challenging problem. Handling the variations in captured images is important for real-world applications, where unconstrained data acquisition scenarios are widely prevalent.
In this dissertation, we first address the variations between training and testing data. In particular, for cross-domain object recognition, we propose a Grassmann manifold-based domain adaptation approach that models the domain shift using the geodesic connecting the source and target domains. We further measure the distance between two data points from different domains by integrating the distances of their projections through all the intermediate subspaces along the geodesic. Because it exploits all the intermediate subspaces along the geodesic, our approach produces a more accurate metric. For cross-view action recognition, we present two effective approaches to learn transferable dictionaries and view-invariant sparse representations. In the first approach, we learn a set of transferable dictionaries where each dictionary corresponds to one camera view. The set of dictionaries is learned simultaneously from sets of correspondence videos taken at different views, with the aim of encouraging each video in the set to have the same sparse representation. In the second approach, we relax this constraint by encouraging correspondence videos to have similar sparse representations. In addition, we learn a common dictionary that is incoherent to the view-specific dictionaries for cross-view action recognition. The view-specific dictionaries are learned for specific views while the common dictionary is shared across different views. In this way, we can align view-specific features in the sparse feature spaces spanned by the view-specific dictionary set and transfer the view-shared features in the sparse feature space spanned by the common dictionary.
In order to handle more general variations in captured images, we also exploit semantic information to learn discriminative feature representations for visual recognition. Class labels are often organized in a hierarchical taxonomy based on their semantic meanings. We propose a novel multi-layer hierarchical dictionary learning framework for region tagging. Specifically, we learn a node-specific dictionary for each semantic label in the taxonomy and preserve the hierarchical semantic structure in the relationships among these node-dictionaries. Our approach can also transfer knowledge from semantic labels at higher levels to help learn classifiers for semantic labels at lower levels. Moreover, we exploit semantic attributes to boost the performance of visual recognition. We encode objects or actions based on attributes that describe them as high-level concepts. We consider two types of attributes: one type is generated by humans, while the second type is data-driven attributes extracted from data using dictionary learning methods. Attribute-based representations may exhibit variations due to noisy and redundant attributes. We propose a discriminative and compact attribute-based representation by selecting a subset of discriminative attributes from a large attribute set. Three attribute-selection criteria are proposed and formulated as a submodular optimization problem. A greedy optimization algorithm is presented, and its solution is guaranteed to be at least a (1-1/e)-approximation to the optimum.
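The greedy algorithm with the (1-1/e) guarantee follows the classic scheme for monotone submodular maximization under a cardinality constraint; a minimal sketch, with the three selection criteria abstracted into an objective F:

```python
# Greedy maximization of a monotone submodular set function F under a
# cardinality budget; the greedy value is >= (1 - 1/e) of the optimum.
# The dissertation's three attribute-selection criteria are not reproduced.
def greedy_submodular_select(candidates, F, budget):
    """candidates: iterable of attribute ids; F: list -> float, monotone
    submodular; returns up to `budget` selected attributes."""
    selected, remaining = [], set(candidates)
    for _ in range(budget):
        # pick the attribute with the largest marginal gain
        best = max(remaining, key=lambda a: F(selected + [a]) - F(selected))
        selected.append(best)
        remaining.remove(best)
    return selected
```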
Efficient human annotation schemes for training object class detectors
A central task in computer vision is detecting object classes such as cars and horses in complex scenes. Training an object class detector typically requires a large set of images labeled with tight bounding boxes around every object instance. Obtaining such data requires human annotation, which is very expensive and time-consuming. Alternatively, researchers have tried to train models in a weakly supervised setting (i.e., given only image-level labels), which is much cheaper but leads to weaker detectors. In this thesis, we propose new and efficient human annotation schemes for training object class detectors that bypass the need for drawing bounding boxes and reduce the annotation cost while still obtaining high-quality object detectors.
First, we propose to train object class detectors from eye tracking data. Instead of drawing tight bounding boxes, the annotators only need to look at the image and find the target object. We track the eye movements of annotators while they perform this visual search task, and we propose a technique for deriving object bounding boxes from these eye fixations. To validate our idea, we augment an existing object detection dataset with eye tracking data.
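A deliberately simplified, hypothetical sketch of deriving a box from fixations: keep the densest cluster of gaze points and take its bounding rectangle (the thesis's actual technique is learned and more robust than this):

```python
# Hypothetical: bounding box from eye fixations by trimming stray gaze
# points and boxing the remaining core cluster.
import numpy as np

def box_from_fixations(fixations, keep=0.8):
    """fixations: (N, 2) array of (x, y) gaze points on the image."""
    center = np.median(fixations, axis=0)
    dist = np.linalg.norm(fixations - center, axis=1)
    core = fixations[dist <= np.quantile(dist, keep)]   # drop outlier fixations
    (x0, y0), (x1, y1) = core.min(axis=0), core.max(axis=0)
    return x0, y0, x1, y1
```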
Second, we propose a scheme for training object class detectors which only requires annotators to verify bounding boxes produced automatically by the learning algorithm. Our scheme introduces human verification as a new step in a standard weakly supervised framework, which typically iterates between re-training object detectors and re-localizing objects in the training images. We use the verification signal to improve both re-training and re-localization.
Third, we propose another scheme where annotators are asked to click on the center of an imaginary bounding box which tightly encloses the object. We then incorporate these clicks into a weakly supervised object localization technique to jointly localize object bounding boxes over all training images. Both our center-clicking and human verification schemes deliver detectors performing almost as well as those trained in a fully supervised setting.
Finally, we propose extreme clicking. We ask the annotator to click on four physical points on the object: the top-, bottom-, left-, and right-most points. This task is more natural than the traditional way of drawing boxes, and these points are easy to find. Our experiments show that annotating objects with extreme clicking is 5x faster than the traditional way of drawing boxes, and it leads to boxes of the same quality as the original ground truth drawn the traditional way. Moreover, we use the resulting extreme points to obtain more accurate segmentations than those derived from bounding boxes.
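The box implied by the four extreme clicks is immediate to compute; a minimal sketch:

```python
# Four extreme clicks -> tight bounding box. The clicks themselves lie on
# the object boundary, which is what makes them useful beyond the box.
def box_from_extreme_clicks(top, bottom, left, right):
    """Each argument is an (x, y) click on the object's extreme point."""
    x_min, x_max = left[0], right[0]
    y_min, y_max = top[1], bottom[1]
    return x_min, y_min, x_max, y_max
```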