Curriculum semi-supervised segmentation
This study investigates a curriculum-style strategy for semi-supervised CNN
segmentation, which devises a regression network to learn image-level
information such as the size of a target region. These regressions are used to
effectively regularize the segmentation network, constraining softmax
predictions of the unlabeled images to match the inferred label distributions.
Our framework is based on inequality constraints that tolerate uncertainties
in the inferred knowledge, e.g., the regressed region size, and can be employed for a
large variety of region attributes. We evaluated our proposed strategy for left
ventricle segmentation in magnetic resonance images (MRI), and compared it to
standard proposal-based semi-supervision strategies. Our strategy leverages
unlabeled data more efficiently and achieves very competitive results,
approaching the performance of full supervision.
Comment: Accepted as a paper at MICCAI 201
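As a rough illustration of how an inferred region size can regularize unlabeled predictions, the sketch below penalizes softmax outputs whose implied target-region size falls outside a tolerance interval around the regressed size. The PyTorch code, the quadratic penalty, and the tolerance parameter are illustrative assumptions, not the paper's exact constrained formulation.

import torch

def size_constraint_loss(probs, regressed_size, tol=0.1):
    # probs: (B, H, W) softmax probabilities of the target class on unlabeled images
    # regressed_size: (B,) region sizes inferred by the regression network (assumed given)
    soft_size = probs.flatten(1).sum(dim=1)        # region size implied by the predictions
    lower = (1.0 - tol) * regressed_size           # inequality bounds that tolerate
    upper = (1.0 + tol) * regressed_size           # uncertainty in the regressed size
    below = torch.clamp(lower - soft_size, min=0.0) ** 2
    above = torch.clamp(soft_size - upper, min=0.0) ** 2
    return (below + above).mean()                  # zero whenever both bounds are satisfied
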
Semi-Supervised Self-Taught Deep Learning for Finger Bones Segmentation
Segmentation stands at the forefront of many high-level vision tasks. In this
study, we focus on segmenting finger bones within a newly introduced
semi-supervised self-taught deep learning framework which consists of a student
network and a stand-alone teacher module. The whole system is boosted in a
life-long learning manner in which, at each step, the teacher module provides a
refinement for the student network to learn from newly unlabeled data.
Experimental results demonstrate the superiority of the proposed method over
conventional supervised deep learning methods.
Comment: IEEE BHI 2019 accepted
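A minimal sketch of the student/teacher loop described above, assuming a PyTorch student segmentation network and a hypothetical teacher.refine method that turns the student's raw predictions on newly unlabeled data into refined pseudo-labels; the paper's teacher module is not specified here.

import torch

def self_taught_round(student, teacher, optimizer, unlabeled_loader, loss_fn):
    # One life-long learning step: the teacher refines the student's predictions
    # on newly unlabeled data, and the student is trained against those refinements.
    student.train()
    for images in unlabeled_loader:
        with torch.no_grad():
            raw = student(images)            # student's current predictions
            pseudo = teacher.refine(raw)     # hypothetical refinement by the teacher module
        optimizer.zero_grad()
        loss = loss_fn(student(images), pseudo)
        loss.backward()
        optimizer.step()
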
Multiple Instance Curriculum Learning for Weakly Supervised Object Detection
When supervising an object detector with weakly labeled data, most existing
approaches are prone to getting trapped in discriminative object parts, e.g.,
finding the face of a cat instead of the full body, due to the lack of
supervision on the extent of full objects. To address this challenge, we
incorporate object segmentation into the detector training, which guides the
model to correctly localize the full objects. We propose the multiple instance
curriculum learning (MICL) method, which injects curriculum learning (CL) into
the multiple instance learning (MIL) framework. The MICL method starts by
automatically picking the easy training examples, where the extent of the
segmentation masks agrees with the detection bounding boxes. The training set is
gradually expanded to include harder examples to train strong detectors that
handle complex images. The proposed MICL method with segmentation in the loop
outperforms the state-of-the-art weakly supervised object detectors by a
substantial margin on the PASCAL VOC datasets.
Comment: Published in BMVC 201
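As a sketch of the curriculum selection step (not the paper's exact criterion), the snippet below admits training examples whose segmentation-derived box agrees with the detector's box under an IoU threshold; lowering the threshold over rounds gradually lets harder examples into the training set.

def select_easy_examples(mask_boxes, det_boxes, iou_threshold):
    # mask_boxes: boxes derived from segmentation masks; det_boxes: detector boxes;
    # both given as (x1, y1, x2, y2). Examples where the two agree are treated as "easy".
    def iou(a, b):
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / (area_a + area_b - inter + 1e-8)
    return [i for i, (m, d) in enumerate(zip(mask_boxes, det_boxes))
            if iou(m, d) >= iou_threshold]
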
Constrained Deep Networks: Lagrangian Optimization via Log-Barrier Extensions
This study investigates the optimization aspects of imposing hard inequality
constraints on the outputs of CNNs. In the context of deep networks,
constraints are commonly handled with penalties because of their simplicity,
despite their well-known limitations. Lagrangian-dual optimization has been
largely avoided, except for a few recent works, mainly due to the computational
complexity and stability/convergence issues caused by alternating explicit dual
updates/projections and stochastic optimization. Several studies showed that,
surprisingly for deep CNNs, the theoretical and practical advantages of
Lagrangian optimization over penalties do not materialize in practice. We
propose log-barrier extensions, which approximate Lagrangian optimization of
constrained-CNN problems with a sequence of unconstrained losses. Unlike
standard interior-point and log-barrier methods, our formulation does not need
an initial feasible solution. Furthermore, we provide a new technical result,
which shows that the proposed extensions yield an upper bound on the duality
gap. This generalizes the duality-gap result of standard log-barriers, yielding
sub-optimality certificates for feasible solutions. While sub-optimality is not
guaranteed for non-convex problems, our result shows that log-barrier
extensions are a principled way to approximate Lagrangian optimization for
constrained CNNs via implicit dual variables. We report comprehensive weakly
supervised segmentation experiments, with various constraints, showing that our
formulation substantially outperforms existing constrained-CNN methods in terms
of accuracy, constraint satisfaction, and training stability, especially when
dealing with a large number of constraints.
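A minimal sketch of one such log-barrier extension for a single inequality constraint z <= 0: the standard barrier -(1/t) log(-z) is kept where it is defined and replaced beyond z = -1/t^2 by a linear function matched in value and slope, so the loss stays finite at infeasible points and no initial feasible solution is required. The barrier parameter t, its schedule, and how z is formed from network outputs are assumptions of this sketch.

import math
import torch

def log_barrier_extension(z, t):
    # z: constraint values (feasible when z <= 0); t: barrier parameter (> 0).
    threshold = -1.0 / (t ** 2)
    # Standard log-barrier where it is defined (z <= -1/t^2); clamping keeps log() valid.
    barrier = -(1.0 / t) * torch.log(-torch.clamp(z, max=threshold))
    # Linear extension beyond the threshold, matched to the barrier's value and slope,
    # so infeasible points receive a finite, increasing penalty.
    linear = t * z - (1.0 / t) * math.log(1.0 / t ** 2) + 1.0 / t
    return torch.where(z <= threshold, barrier, linear)
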
Dynamic Adaptation on Non-Stationary Visual Domains
Domain adaptation aims to learn models on a supervised source domain that
perform well on an unsupervised target. Prior work has examined domain
adaptation in the context of stationary domain shifts, i.e. static data sets.
However, with large-scale or dynamic data sources, data from a defined domain
is not usually available all at once. For instance, in a streaming data
scenario, dataset statistics effectively become a function of time. We
introduce a framework for adaptation over non-stationary distribution shifts
applicable to large-scale and streaming data scenarios. The model is adapted
sequentially over incoming unsupervised streaming data batches. This enables
improvements over several batches without the need for any additional
annotated data. To demonstrate the effectiveness of our proposed framework, we
modify associative domain adaptation to work well on source and target data
batches with unequal class distributions. We apply our method to several
adaptation benchmark datasets for classification and show improved classifier
accuracy not only for the currently adapted batch, but also when applied on
future stream batches. Furthermore, we show the applicability of our
associative learning modifications to semantic segmentation, where we achieve
competitive results.
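A minimal sketch of the sequential adaptation protocol, assuming a hypothetical adapt_step callback that performs one (associative-style) domain-adaptation update from a labeled source batch and an unlabeled target batch; the adapted model is carried forward to each new stream batch.

def adapt_over_stream(model, source_loader, target_stream, adapt_step):
    # target_stream yields unlabeled target batches in arrival order; the model adapted
    # on one batch is carried forward to the next, so no target annotations are needed.
    for target_batch in target_stream:
        for source_images, source_labels in source_loader:
            adapt_step(model, source_images, source_labels, target_batch)  # hypothetical update
    return model
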