Computational temporal ghost imaging
Ghost imaging is a fascinating process, where light interacting with an
object is recorded without resolution, but the shape of the object is
nevertheless retrieved, thanks to quantum or classical correlations of this
interacting light with either a computed or detected random signal. Recently,
ghost imaging has been extended to a time object, by using several thousand
copies of this periodic object. Here, we present a very simple device, inspired
by computational ghost imaging, that allows the retrieval of a single
non-reproducible, periodic or non-periodic, temporal signal. The reconstruction
is performed via a single-shot, spatially multiplexed measurement of the
spatial intensity correlations between computer-generated random images and the
same images, modulated by the temporal signal, recorded and summed on a CMOS
camera chip used with no temporal resolution. Our device allows the
reconstruction of either a single temporal signal with monochrome images or of
wavelength-multiplexed signals with color images.
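To make the correlation recipe concrete, here is a minimal numpy sketch of the conventional computational temporal ghost imaging scheme this device builds on: known random patterns modulate an unknown temporal signal, a detector with no temporal resolution records only integrated "bucket" values, and the signal is recovered from pattern-bucket correlations. The signal shape, pattern count, and estimator are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical temporal object: a single non-periodic signal sampled at T points.
T = 256
signal = np.zeros(T)
signal[60:90] = 1.0
signal[140:150] = 0.5

# N computer-generated random temporal patterns, known exactly by construction.
N = 20000
patterns = rng.random((N, T))

# Each "bucket" value integrates pattern times signal with no temporal
# resolution, as a slow detector (or a camera pixel summing over time) would.
bucket = patterns @ signal

# Correlation reconstruction: <B * I(t)> - <B><I(t)>, proportional to signal(t).
recon = (bucket[:, None] * patterns).mean(axis=0) \
        - bucket.mean() * patterns.mean(axis=0)
```

With enough patterns, `recon` approaches a rescaled copy of `signal`; the paper's contribution is to obtain the equivalent of all N bucket readings in a single shot by multiplexing the patterns spatially on a camera.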
Towards the Success Rate of One: Real-time Unconstrained Salient Object Detection
In this work, we propose an efficient and effective approach for
unconstrained salient object detection in images using deep convolutional
neural networks. Instead of generating thousands of candidate bounding boxes
and refining them, our network directly learns to generate the saliency map
containing the exact number of salient objects. During training, we convert the
ground-truth rectangular boxes to Gaussian distributions that better capture
the region of interest of each individual salient object. During inference, the network
predicts Gaussian distributions centered at salient objects with an appropriate
covariance, from which bounding boxes are easily inferred. Notably, our network
performs saliency map prediction without pixel-level annotations, salient
object detection without object proposals, and salient object subitizing
simultaneously, all in a single pass within a unified framework. Extensive
experiments show that our approach outperforms existing methods on various
datasets by a large margin, and achieves more than 100 fps with a VGG16 network
on a single GPU during inference.
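As a rough illustration of the target construction described above, the following numpy sketch renders ground-truth boxes as 2D Gaussians centered on each object; the `sigma_scale` fraction and the pixelwise-maximum combination are assumptions, not the paper's exact choices.

```python
import numpy as np

def box_to_gaussian(box, height, width, sigma_scale=0.25):
    """Render a ground-truth box (x1, y1, x2, y2) as a 2D Gaussian map whose
    standard deviation along each axis is a fixed fraction of the box size."""
    x1, y1, x2, y2 = box
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    sx = max((x2 - x1) * sigma_scale, 1.0)
    sy = max((y2 - y1) * sigma_scale, 1.0)
    ys, xs = np.mgrid[0:height, 0:width]
    return np.exp(-((xs - cx) ** 2 / (2 * sx ** 2)
                    + (ys - cy) ** 2 / (2 * sy ** 2)))

# One Gaussian per salient object; a dense training target can be formed as
# the pixelwise maximum over objects.
target = np.maximum(
    box_to_gaussian((30, 40, 120, 160), 224, 224),
    box_to_gaussian((150, 60, 200, 180), 224, 224),
)
```

At inference the process runs in reverse: a predicted Gaussian's mean and covariance directly determine a box center and extent, which is why boxes can be read off without object proposals.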
Iterative Object and Part Transfer for Fine-Grained Recognition
The aim of fine-grained recognition is to identify sub-ordinate categories in
images like different species of birds. Existing works have confirmed that, in
order to capture the subtle differences across the categories, automatic
localization of objects and parts is critical. Most approaches to object and
part localization have relied on a bottom-up pipeline, in which thousands of region
proposals are generated and then filtered by pre-trained object/part models.
This is computationally expensive and not scalable once the number of
objects/parts becomes large. In this paper, we propose a nonparametric
data-driven method for object and part localization. Given an unlabeled test
image, our approach transfers annotations from a few similar images retrieved
in the training set. In particular, we propose an iterative transfer strategy
that gradually refines the predicted bounding boxes. Based on the located
objects and parts, deep convolutional features are extracted for recognition.
We evaluate our approach on the widely-used CUB200-2011 dataset and a new and
large dataset called Birdsnap. On both datasets, we achieve better results than
many state-of-the-art approaches, including a few using oracle (manually
annotated) bounding boxes in the test images.
Comment: To appear in ICME 2017 as an oral paper.
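The transfer step is easy to sketch. The toy numpy version below retrieves the most similar training images by a global feature, averages their boxes, and then re-weights the neighbors by agreement with the current estimate; the actual pipeline re-extracts features inside the refined box at each iteration, which this sketch omits.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-8)

def transfer_box(test_feat, train_feats, train_boxes, k=10, iters=3):
    # Retrieve the k visually most similar training images...
    sims = train_feats @ test_feat
    nn = np.argsort(-sims)[:k]
    boxes = train_boxes[nn]
    est = boxes.mean(axis=0)
    # ...then iteratively re-weight their annotated boxes by how well they
    # agree with the current estimate (a stand-in for per-iteration
    # refinement with re-extracted features).
    for _ in range(iters):
        w = np.array([iou(b, est) for b in boxes]) + 1e-3
        est = (boxes * w[:, None]).sum(axis=0) / w.sum()
    return est
```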
The AAU Multimodal Annotation Toolboxes: Annotating Objects in Images and Videos
This tech report gives an introduction to two annotation toolboxes that
enable the creation of pixel and polygon-based masks as well as bounding boxes
around objects of interest. Both toolboxes support the annotation of sequential
images in the RGB and thermal modalities. Each annotated object is assigned a
classification tag, a unique ID, and one or more optional metadata tags. The
toolboxes are written in C++ with the OpenCV and Qt libraries and are operated
through a visual interface and an extensive range of keyboard shortcuts.
Pre-built binaries are available for Windows and MacOS and the tools can be
built from source under Linux as well. So far, tens of thousands of frames have
been annotated using the toolboxes.
Comment: 6 pages, 10 figures.
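For a sense of what one annotation carries, a record along the lines the report describes might look like the Python dataclass below; the field names and geometry encoding are illustrative guesses, not the toolboxes' actual export schema.

```python
from dataclasses import dataclass, field

@dataclass
class Annotation:
    """One annotated object: a class tag, a unique ID, optional metadata
    tags, and one of the supported geometries (box, polygon, or mask)."""
    object_id: int
    class_tag: str
    frame: int
    bbox: tuple[int, int, int, int] | None = None   # (x, y, w, h)
    polygon: list[tuple[int, int]] | None = None    # mask outline vertices
    meta_tags: list[str] = field(default_factory=list)

ann = Annotation(object_id=1, class_tag="person", frame=0,
                 bbox=(10, 20, 50, 80), meta_tags=["occluded"])
```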
Searching for comets on the World Wide Web: The orbit of 17P/Holmes from the behavior of photographers
We performed an image search for "Comet Holmes," using the Yahoo Web search
engine, on 2010 April 1. Thousands of images were returned. We astrometrically
calibrated---and therefore vetted---the images using the Astrometry.net system.
The calibrated image pointings form a set of data points to which we can fit a
test-particle orbit in the Solar System, marginalizing over image dates and
detecting outliers. The approach is Bayesian and the model is, in essence, a
model of how comet astrophotographers point their instruments. In this work, we
do not measure the position of the comet within each image, but rather use the
celestial position of the whole image to infer the orbit. We find very strong
probabilistic constraints on the orbit, although slightly off the JPL
ephemeris, probably due to limitations of our model. Hyperparameters of the
model constrain the reliability of date meta-data and where in the image
astrophotographers place the comet; we find that ~70 percent of the meta-data
are correct and that the comet typically appears in the central third of the
image footprint. This project demonstrates that discoveries and measurements
can be made using data of extreme heterogeneity and unknown provenance. As the
size and diversity of astronomical data sets continues to grow, approaches like
ours will become more essential. This project also demonstrates that the Web is
an enormous repository of astronomical information; and that if an object has
been given a name and photographed thousands of times by observers who post
their images on the Web, we can (re-)discover it and infer its dynamical
properties.
Comment: As published. Changes in v2: data-driven initialization rather than
JPL; added figures; clarified text.
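The robustness described above hinges on treating each image as either a reliable or an unreliable data point. A toy numpy version of such an inlier/outlier mixture likelihood is sketched below; the residual parameterization and every numeric value are made up for illustration and do not reproduce the paper's actual model.

```python
import numpy as np

def log_likelihood(residuals_deg, p_good=0.7, sigma_deg=0.5, sky_range=180.0):
    """Robust log-likelihood of pointing residuals (angular distance, in
    degrees, between the orbit-predicted comet position and where the model
    expects it in the image). Each image is an 'inlier' drawn from a Gaussian
    or an 'outlier' spread broadly over the sky; p_good plays the role of the
    meta-data-reliability hyperparameter."""
    gauss = np.exp(-0.5 * (residuals_deg / sigma_deg) ** 2) \
            / (sigma_deg * np.sqrt(2 * np.pi))
    uniform = 1.0 / sky_range
    return np.log(p_good * gauss + (1 - p_good) * uniform).sum()

# Orbit fitting would maximize this over orbital elements (marginalizing over
# image dates); wildly mispointed images simply stop dominating the fit.
print(log_likelihood(np.array([0.1, 0.3, 45.0])))
```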
Unsupervised Learning of Visual Representations using Videos
Is strong supervision necessary for learning a good visual representation? Do
we really need millions of semantically-labeled images to train a Convolutional
Neural Network (CNN)? In this paper, we present a simple yet surprisingly
powerful approach for unsupervised learning of CNNs. Specifically, we use
hundreds of thousands of unlabeled videos from the web to learn visual
representations. Our key idea is that visual tracking provides the supervision.
That is, two patches connected by a track should have similar visual
representation in deep feature space since they probably belong to the same
object or object part. We design a Siamese-triplet network with a ranking loss
function to train this CNN representation. Without using a single image from
ImageNet, just using 100K unlabeled videos and the VOC 2012 dataset, we train
an ensemble of unsupervised networks that achieves 52% mAP (no bounding box
regression). This performance comes tantalizingly close to its
ImageNet-supervised counterpart, an ensemble which achieves a mAP of 54.4%. We
also show that our unsupervised network can perform competitively in other
tasks such as surface-normal estimation.
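The tracking-as-supervision idea boils down to a triplet ranking loss over patch features. A minimal PyTorch sketch is below; the cosine distance and the margin value follow the general recipe but should be read as assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn.functional as F

def ranking_loss(anchor, tracked, random_patch, margin=0.5):
    """A patch and the patch it was tracked to should be closer in feature
    space than the patch and a random patch from another video."""
    d_pos = 1 - F.cosine_similarity(anchor, tracked)
    d_neg = 1 - F.cosine_similarity(anchor, random_patch)
    return F.relu(d_pos - d_neg + margin).mean()

# The three feature vectors per triplet come from the same (shared-weight)
# CNN applied to the first patch, the tracked patch, and a negative patch.
feats = torch.randn(3, 8, 128)  # (triplet slot, batch, feature dim)
loss = ranking_loss(feats[0], feats[1], feats[2])
```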
Boosting for Generic 2D/3D Object Recognition
Generic object recognition is an important function of the human visual system. For an artificial vision system to emulate human perceptual abilities, it should also be able to perform generic object recognition.
In this thesis, we address the generic object recognition problem and present different approaches and models which tackle different aspects of this difficult problem.
First, we present a model for generic 2D object recognition from complex 2D images. The model exploits only appearance-based information, in the form of a combination of texture and color cues, for binary classification of 2D object classes. Learning is accomplished in a weakly supervised manner using Boosting.
However, we live in a 3D world, and the ability to recognize 3D objects is very important for any vision system. Therefore, we present a model for generic recognition of 3D objects from range images. Our model makes use of a combination of simple local shape descriptors extracted from range images for recognizing 3D object categories, as shape is important information provided by range images. Moreover, we present a novel dataset for generic object recognition that provides 2D and range images of different object classes, captured using a Time-of-Flight (ToF) camera.
As the surrounding world contains thousands of different object categories, recognizing many different object classes is important as well. Therefore, we extend our generic 3D object recognition model to deal with the multi-class learning and recognition task.
Moreover, we extend the multi-class recognition model by introducing a novel model which uses a combination of appearance-based information extracted from 2D images and range-based (shape) information extracted from range images for multi-class generic 3D object recognition; promising results are obtained.
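Since Boosting is the learning engine throughout, a bare-bones discrete AdaBoost over threshold stumps is sketched below for reference; the thesis boosts over texture/color and local shape cues rather than raw feature columns, so this is a generic illustration, not the thesis's learner.

```python
import numpy as np

def adaboost_stumps(X, y, rounds=20):
    """Discrete AdaBoost with axis-aligned threshold stumps.
    X: (n, d) features; y: labels in {-1, +1}. The exhaustive stump search
    is inefficient but keeps the algorithm easy to read."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)
    ensemble = []
    for _ in range(rounds):
        best = None
        for j in range(d):
            for thr in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = np.where(sign * (X[:, j] - thr) > 0, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, sign, pred)
        err, j, thr, sign, pred = best
        alpha = 0.5 * np.log((1 - err) / (err + 1e-10))  # stump weight
        w *= np.exp(-alpha * y * pred)                   # re-weight samples
        w /= w.sum()
        ensemble.append((alpha, j, thr, sign))
    return ensemble

def predict(ensemble, X):
    score = sum(a * np.where(s * (X[:, j] - t) > 0, 1, -1)
                for a, j, t, s in ensemble)
    return np.sign(score)

# Tiny demo on separable toy data.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(2, 1, (20, 2)), rng.normal(-2, 1, (20, 2))])
y = np.array([1] * 20 + [-1] * 20)
model = adaboost_stumps(X, y)
assert (predict(model, X) == y).mean() > 0.9
```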
LSDA: Large Scale Detection Through Adaptation
A major challenge in scaling object detection is the difficulty of obtaining
labeled images for large numbers of categories. Recently, deep convolutional
neural networks (CNNs) have emerged as clear winners on object classification
benchmarks, in part due to training with 1.2M+ labeled classification images.
Unfortunately, only a small fraction of those labels are available for the
detection task. It is much cheaper and easier to collect large quantities of
image-level labels from search engines than it is to collect detection data and
label it with precise bounding boxes. In this paper, we propose Large Scale
Detection through Adaptation (LSDA), an algorithm which learns the difference
between the two tasks and transfers this knowledge to classifiers for
categories without bounding box annotated data, turning them into detectors.
Our method has the potential to enable detection for the tens of thousands of
categories that lack bounding box annotations, yet have plenty of
classification data. Evaluation on the ImageNet LSVRC-2013 detection challenge
demonstrates the efficacy of our approach. This algorithm enables us to produce
a detector for more than 7.6K categories by using available classification
data from leaf nodes in the ImageNet tree. We additionally demonstrate how to
modify our architecture to produce a fast detector (running at 2 fps for the
7.6K-category detector). Models and software are available.
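The transfer can be pictured as learning, per category, how detection fine-tuning shifts the final-layer classifier weights, then borrowing that shift for categories that only have classifiers. The numpy sketch below is a simplified reading of that idea; the nearest-neighbor choice and the restriction to final-layer weights are assumptions, since the paper also adapts shared hidden layers.

```python
import numpy as np

def adapt_classifiers(w_cls, det_ids, w_det, k=5):
    """w_cls:   (C, D) classifier weights for all C categories.
    det_ids: indices of the categories that also have detection data.
    w_det:   (len(det_ids), D) corresponding weights after detection
             fine-tuning. Categories without detection data receive the
             mean weight shift of their k most similar supervised ones."""
    delta = {c: w_det[i] - w_cls[c] for i, c in enumerate(det_ids)}
    adapted = np.empty_like(w_cls)
    for c in range(len(w_cls)):
        if c in delta:  # this category was fine-tuned directly
            adapted[c] = w_cls[c] + delta[c]
        else:           # borrow the shift from similar categories
            nearest = [d for _, d in sorted(
                ((w_cls[c] @ w_cls[d], d) for d in det_ids),
                reverse=True)[:k]]
            adapted[c] = w_cls[c] + np.mean([delta[d] for d in nearest],
                                            axis=0)
    return adapted
```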