A Framework for Symmetric Part Detection in Cluttered Scenes
The role of symmetry in computer vision has waxed and waned in importance
during the evolution of the field from its earliest days. At first figuring
prominently in support of bottom-up indexing, it fell out of favor as shape
gave way to appearance and recognition gave way to detection. With a strong
prior in the form of a target object, the role of the weaker priors offered by
perceptual grouping was greatly diminished. However, as the field returns to
the problem of recognition from a large database, the bottom-up recovery of the
parts that make up the objects in a cluttered scene is critical for their
recognition. The medial axis community has long exploited the ubiquitous
regularity of symmetry as a basis for the decomposition of a closed contour
into medial parts. However, today's recognition systems are faced with
cluttered scenes, and the assumption that a closed contour exists, i.e. that
figure-ground segmentation has been solved, renders much of the medial axis
community's work inapplicable. In this article, we review a computational
framework, previously reported in Lee et al. (2013), Levinshtein et al. (2009,
2013), that bridges the representation power of the medial axis and the need to
recover and group an object's parts in a cluttered scene. Our framework is
rooted in the idea that a maximally inscribed disc, the building block of a
medial axis, can be modeled as a compact superpixel in the image. We evaluate
the method on images of cluttered scenes.
Comment: 10 pages, 8 figures
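The building block mentioned above, the maximally inscribed disc, has a simple discrete analogue: the radius of the largest disc centred at a figure pixel is its distance to the nearest background pixel, and medial (skeleton) points are local maxima of that distance map. The following is a minimal, brute-force sketch of that idea on a toy binary mask; it is not the superpixel-based framework of the paper, just the underlying medial-axis notion it builds on.

```python
import numpy as np

def distance_to_background(mask):
    """Brute-force Euclidean distance from each figure pixel to the
    nearest background pixel: the radius of the maximally inscribed
    disc centred at that pixel."""
    fg = np.argwhere(mask)
    bg = np.argwhere(~mask)
    dist = np.zeros(mask.shape)
    for (r, c) in fg:
        dist[r, c] = np.min(np.hypot(bg[:, 0] - r, bg[:, 1] - c))
    return dist

def medial_points(mask):
    """Pixels whose inscribed-disc radius is a local maximum in their
    3x3 neighbourhood: a crude discrete stand-in for medial-axis points."""
    d = distance_to_background(mask)
    pts = []
    for (r, c) in np.argwhere(mask):
        nb = d[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
        if d[r, c] >= nb.max():
            pts.append((r, c))
    return pts, d

mask = np.zeros((9, 9), dtype=bool)
mask[1:8, 1:8] = True            # a 7x7 square "object"
pts, d = medial_points(mask)
print((4, 4) in pts, d[4, 4])    # the square's centre lies on the axis
```

In a cluttered scene there is no such closed figure/ground mask to start from, which is exactly the gap the reviewed framework addresses by treating compact superpixels as approximate inscribed discs.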
Multiscale metabolic modeling of C4 plants: connecting nonlinear genome-scale models to leaf-scale metabolism in developing maize leaves
C4 plants, such as maize, concentrate carbon dioxide in a specialized
compartment surrounding the veins of their leaves to improve the efficiency of
carbon dioxide assimilation. Nonlinear relationships between carbon dioxide and
oxygen levels and reaction rates are key to their physiology but cannot be
handled with standard techniques of constraint-based metabolic modeling. We
demonstrate that incorporating these relationships as constraints on reaction
rates and solving the resulting nonlinear optimization problem yields realistic
predictions of the response of C4 systems to environmental and biochemical
perturbations. Using a new genome-scale reconstruction of maize metabolism, we
build an 18000-reaction, nonlinearly constrained model describing mesophyll and
bundle sheath cells in 15 segments of the developing maize leaf, interacting
via metabolite exchange, and use RNA-seq and enzyme activity measurements to
predict spatial variation in metabolic state by a novel method that optimizes
correlation between fluxes and expression data. Though such correlations are
known to be weak in general, here the predicted fluxes achieve high correlation
with the data, successfully capture the experimentally observed base-to-tip
transition between carbon-importing tissue and carbon-exporting tissue, and
include a nonzero growth rate, in contrast to prior results from similar
methods in other systems. We suggest that developmental gradients may be
particularly suited to the inference of metabolic fluxes from expression data.
Comment: 57 pages, 14 figures; submitted to PLoS Computational Biology; source
code available at http://github.com/ebogart/fluxtools and
http://github.com/ebogart/multiscale_c4_sourc
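The two ingredients the abstract combines, a nonlinear kinetic constraint on reaction rates and an objective that optimizes flux-expression correlation, can be sketched on a toy three-reaction branch point. Everything here (the stoichiometry, expression levels, Michaelis-Menten parameters, and the total-flux normalization) is invented for illustration and is far simpler than the 18000-reaction model in the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Toy branch point: uptake v0 splits into v1 and v2 at a single
# metabolite, so steady state requires v0 = v1 + v2.
expr = np.array([2.0, 1.5, 0.5])   # made-up enzyme expression levels
co2, vmax, km = 1.0, 1.8, 1.0      # made-up kinetic parameters

def neg_corr(v):
    """Objective: maximize Pearson correlation between the flux vector
    and the expression data (minimize its negative)."""
    return -np.corrcoef(v, expr)[0, 1]

cons = [
    {"type": "eq", "fun": lambda v: v[0] - v[1] - v[2]},   # steady state
    {"type": "eq", "fun": lambda v: v.sum() - 2.0},        # fix total flux scale
    # Michaelis-Menten-style cap on v1; with co2 held fixed this is
    # linear, but in the real model CO2/O2 levels are themselves
    # variables, which is what makes the constraints nonlinear.
    {"type": "ineq", "fun": lambda v: vmax * co2 / (co2 + km) - v[1]},
]

res = minimize(neg_corr, x0=np.array([1.0, 0.5, 0.5]),
               bounds=[(0, None)] * 3, constraints=cons, method="SLSQP")
print(res.x, -res.fun)   # fluxes track expression; correlation near 1
```

Here the optimizer drives the fluxes toward proportionality with the expression vector while respecting mass balance and the kinetic cap, which is the shape of the inference problem the paper solves at genome scale.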
Towards the Success Rate of One: Real-time Unconstrained Salient Object Detection
In this work, we propose an efficient and effective approach for
unconstrained salient object detection in images using deep convolutional
neural networks. Instead of generating thousands of candidate bounding boxes
and refining them, our network directly learns to generate the saliency map
containing the exact number of salient objects. During training, we convert the
ground-truth rectangular boxes to Gaussian distributions that better capture
the regions of interest of individual salient objects. During inference, the network
predicts Gaussian distributions centered at salient objects with an appropriate
covariance, from which bounding boxes are easily inferred. Notably, our network
performs saliency map prediction without pixel-level annotations, salient
object detection without object proposals, and salient object subitizing
simultaneously, all in a single pass within a unified framework. Extensive
experiments show that our approach outperforms existing methods on various
datasets by a large margin, and achieves more than 100 fps with VGG16 network
on a single GPU during inference.
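The box-to-Gaussian encoding and its inverse are simple enough to sketch directly: a ground-truth box becomes an axis-aligned Gaussian centred on the box, and a predicted Gaussian map is turned back into a box from its weighted mean and standard deviation. The scale factor `k` below is an assumed design choice, not taken from the paper.

```python
import numpy as np

def box_to_gaussian(x0, y0, x1, y1, shape, k=4.0):
    """Render a ground-truth box as an axis-aligned Gaussian heat map:
    mean at the box centre, std chosen so that k/2 standard deviations
    span each half-extent (k = 4 is an assumed choice)."""
    cy, cx = (y0 + y1) / 2.0, (x0 + x1) / 2.0
    sy, sx = (y1 - y0) / k, (x1 - x0) / k
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    return np.exp(-0.5 * (((ys - cy) / sy) ** 2 + ((xs - cx) / sx) ** 2))

def gaussian_to_box(heat, k=4.0):
    """Recover a box from a predicted Gaussian map via the weighted
    mean and standard deviation of the heat values."""
    ys, xs = np.mgrid[0:heat.shape[0], 0:heat.shape[1]]
    w = heat / heat.sum()
    cy, cx = (w * ys).sum(), (w * xs).sum()
    sy = np.sqrt((w * (ys - cy) ** 2).sum())
    sx = np.sqrt((w * (xs - cx) ** 2).sum())
    return (cx - k / 2 * sx, cy - k / 2 * sy,
            cx + k / 2 * sx, cy + k / 2 * sy)

heat = box_to_gaussian(30, 40, 70, 60, (100, 100))
rec = gaussian_to_box(heat)
print(rec)   # close to the original box (30, 40, 70, 60)
```

This round trip is why bounding boxes are "easily inferred" from the predicted distributions: mean and covariance carry the box geometry directly, with no proposal generation or refinement stage.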
Dictionary Learning-based Inpainting on Triangular Meshes
The problem of inpainting consists of filling missing or damaged regions in
images and videos in such a way that the filling pattern does not produce
artifacts that deviate from the original data. In addition to restoring the
missing data, the inpainting technique can also be used to remove undesired
objects. In this work, we address the problem of inpainting on surfaces through
a new method based on dictionary learning and sparse coding. Our method learns
the dictionary through the subdivision of the mesh into patches and rebuilds
the mesh via a method of reconstruction inspired by the Non-local Means method
on the computed sparse codes. One of the advantages of our method is that it is
capable of filling the missing regions while simultaneously removing noise and
enhancing important features of the mesh. Moreover, the inpainting result is
globally coherent as the representation based on the dictionaries captures all
the geometric information in the transformed domain. We present two variations
of the method: a direct one, in which the model is reconstructed and restored
directly from the representation in the transformed domain, and an adaptive one,
in which the missing regions are recreated iteratively through the successive
propagation of the sparse codes computed at the hole boundaries,
which guides the local reconstructions. The second method produces better
results for large regions because the sparse codes of the patches are adapted
according to the sparse codes of the boundary patches. Finally, we present and
analyze experimental results that demonstrate the performance of our method
compared to the literature.
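The core patch-based idea can be illustrated in one dimension. Below, a periodic signal stands in for the mesh geometry, the dictionary is simply the set of intact patches (no learning step, i.e. a degenerate 1-sparse code rather than the learned dictionary and sparse coding proper used by the paper), and the hole is filled by matching each partially observed window against the dictionary on its known samples, in the spirit of Non-local Means propagation from the hole boundary.

```python
import numpy as np

# Toy 1D stand-in for mesh patches: a periodic "height" signal.
t = np.arange(128)
clean = np.sin(2 * np.pi * t / 16)

signal = clean.copy()
hole = np.arange(60, 68)            # damaged region
known = np.ones(128, dtype=bool)
known[hole] = False
signal[hole] = 0.0

P = 8                               # patch length

# "Dictionary": every fully intact patch of the damaged signal.
atoms = np.array([signal[s:s + P] for s in range(128 - P + 1)
                  if known[s:s + P].all()])

# Inpaint: for each window overlapping the hole that still has at
# least P//2 observed samples, pick the atom that best matches the
# observed part and copy its values into the missing part; average
# the predictions where windows overlap.
acc = np.zeros(128)
cnt = np.zeros(128)
for s in range(128 - P + 1):
    m = known[s:s + P]
    if m.all() or m.sum() < P // 2:
        continue
    w = signal[s:s + P]
    errs = ((atoms[:, m] - w[m]) ** 2).sum(axis=1)
    best = atoms[errs.argmin()]
    miss = np.where(~m)[0] + s
    acc[miss] += best[~m]
    cnt[miss] += 1

recon = signal.copy()
recon[cnt > 0] = acc[cnt > 0] / cnt[cnt > 0]
print(np.max(np.abs(recon - clean)))   # near zero: the hole is filled
```

Matching only on the observed entries and propagating inward from the boundary is the same mechanism the adaptive variant uses, with proper sparse codes over a learned dictionary in place of this single-best-patch shortcut.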
Deep Learning for Single Image Super-Resolution: A Brief Review
Single image super-resolution (SISR) is a notoriously challenging ill-posed
problem, which aims to obtain a high-resolution (HR) output from one of its
low-resolution (LR) versions. To solve the SISR problem, recently powerful deep
learning algorithms have been employed and achieved the state-of-the-art
performance. In this survey, we review representative deep learning-based SISR
methods, and group them into two categories according to their major
contributions to two essential aspects of SISR: the exploration of efficient
neural network architectures for SISR, and the development of effective
optimization objectives for deep SISR learning. For each category, a baseline
is first established, and several critical limitations of the baseline are
summarized. Then representative works on overcoming these limitations are
presented based on their original contents as well as our critical
understandings and analyses, and relevant comparisons are conducted from a
variety of perspectives. Finally, we conclude this review with some vital
current challenges and future trends in SISR leveraging deep learning
algorithms.
Comment: Accepted by IEEE Transactions on Multimedia (TMM)
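To make the problem setup concrete: SISR assumes an unknown degradation mapping the HR image to its LR version, and any learned method is judged (typically by PSNR) against trivial interpolation baselines. The sketch below uses box-average downsampling and nearest-neighbour upscaling as stand-ins; the degradation model, image, and metric choice are illustrative assumptions, not a method from the survey.

```python
import numpy as np

def downsample(img, s):
    """Box-average downsampling by factor s: one simple stand-in for
    the unknown degradation in the SISR model LR = D(HR)."""
    h, w = img.shape
    return img[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def upsample_nn(img, s):
    """Nearest-neighbour upscaling: the trivial baseline every SISR
    method must beat."""
    return img.repeat(s, axis=0).repeat(s, axis=1)

def psnr(x, y, peak=1.0):
    """Peak signal-to-noise ratio in dB, the standard SISR metric."""
    mse = np.mean((x - y) ** 2)
    return np.inf if mse == 0 else 10 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
hr = rng.random((32, 32))          # toy "HR image"
lr = downsample(hr, 2)
sr = upsample_nn(lr, 2)
print(lr.shape, sr.shape, psnr(sr, hr))
```

The ill-posedness discussed above is visible even here: many distinct HR images map to the same `lr`, so recovering `hr` exactly is impossible and learned priors decide which HR candidate to output.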