Generalized Kernel-based Visual Tracking
In this work we generalize plain mean shift (MS) trackers and attempt to overcome
two limitations of standard MS trackers.
It is well known that modeling and maintaining a representation of a target
object is an important component of a successful visual tracker.
However, little work has been done on building a robust template model for
kernel-based MS tracking. In contrast to building a template from a single
frame, we train a robust object representation model from a large amount of
data. Tracking is viewed as a binary classification problem, and a
discriminative classification rule is learned to distinguish between the object
and background. We adopt a support vector machine (SVM) for training. The
tracker is then implemented by maximizing the classification score. An
iterative optimization scheme very similar to MS is derived for this purpose.

Comment: 12 pages
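As a rough illustration of the idea (not the authors' implementation: the features, training data, and scene below are invented for the example), the following sketch trains a linear SVM to separate object from background feature vectors and then tracks by hill-climbing the classification score with an MS-like iterative update:

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Invented training data: 8-D feature vectors (think colour histograms)
# for object patches (label 1) and background patches (label 0).
obj = rng.normal(loc=1.0, scale=0.3, size=(200, 8))
bg = rng.normal(loc=0.0, scale=0.3, size=(200, 8))
X = np.vstack([obj, bg])
y = np.array([1] * 200 + [0] * 200)

svm = LinearSVC(C=1.0).fit(X, y)

def track(extract_features, start, step=2.0, max_iter=20):
    """Hill-climb the SVM decision score over candidate window centres --
    an iterative maximisation in the spirit of mean-shift mode seeking."""
    pos = np.asarray(start, dtype=float)
    for _ in range(max_iter):
        candidates = [pos + np.array([dx, dy])
                      for dx in (-step, 0.0, step) for dy in (-step, 0.0, step)]
        feats = np.array([extract_features(c) for c in candidates])
        scores = svm.decision_function(feats)
        best = candidates[int(np.argmax(scores))]
        if np.array_equal(best, pos):  # no higher-scoring neighbour: converged
            break
        pos = best
    return pos

# Invented scene: object-like features peak at location (10, 10).
target = np.array([10.0, 10.0])
def extract_features(center):
    # Object-like near the target, background-like far away.
    closeness = np.exp(-np.linalg.norm(center - target) / 5.0)
    return np.full(8, closeness)

found = track(extract_features, start=(0.0, 0.0))
```

In the actual tracker the features would be kernel-weighted representations of image patches, and the ascent direction follows from differentiating the SVM score, which yields the mean-shift-like update.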
Gaussian Processes with Context-Supported Priors for Active Object Localization
We devise an algorithm using a Bayesian optimization framework in conjunction
with contextual visual data for the efficient localization of objects in still
images. Recent research has demonstrated substantial progress in object
localization and related tasks for computer vision. However, many current
state-of-the-art object localization procedures still suffer from inaccuracy
and inefficiency, in addition to failing to provide a principled and
interpretable system amenable to high-level vision tasks. We address these
issues with the current research.
Our method encompasses an active search procedure that uses contextual data
to generate initial bounding-box proposals for a target object. We train a
convolutional neural network to approximate an offset distance from the target
object. Next, we use a Gaussian Process to model this offset response signal
over the search space of the target. We then employ a Bayesian active search
for accurate localization of the target.
In experiments, we compare our approach to a state-of-the-art bounding-box
regression method for a challenging pedestrian localization task. Our method
exhibits a substantial improvement over this baseline regression method.

Comment: 10 pages, 4 figures
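The pipeline can be caricatured in one dimension. The sketch below is an illustrative stand-in, not the paper's system: the offset function, kernel, acquisition constants, and search space are invented, and a scikit-learn GP replaces both the CNN offset predictor and the paper's exact acquisition rule. It fits a Gaussian Process to noisy offset observations and runs a Bayesian active search that queries the minimizer of a lower-confidence-bound acquisition:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)

# Invented 1-D offset signal: noisy distance from a candidate position
# to the unknown target centre at x = 7.3.
TARGET = 7.3
def offset(x):
    return abs(x - TARGET) + rng.normal(0.0, 0.05)

grid = np.linspace(0.0, 10.0, 201).reshape(-1, 1)  # search space
X = rng.uniform(0.0, 10.0, size=(4, 1))            # seed observations
y = np.array([offset(v) for v in X.ravel()])

gp = GaussianProcessRegressor(kernel=RBF(length_scale=2.0), alpha=1e-2)
for _ in range(15):
    # Fit the GP to the offsets seen so far, then query the point that
    # minimises a lower-confidence-bound acquisition (mean - std).
    gp.fit(X, y)
    mu, sd = gp.predict(grid, return_std=True)
    x_next = float(grid[np.argmin(mu - sd), 0])
    X = np.vstack([X, [[x_next]]])
    y = np.append(y, offset(x_next))

best = float(X[np.argmin(y), 0])  # position with the smallest observed offset
```

The GP's predictive uncertainty is what makes the search "active": the acquisition trades off exploiting regions where the predicted offset is already small against exploring regions where the model is still uncertain.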
Pixelation effects in weak lensing
Weak gravitational lensing can be used to investigate both dark matter and dark energy but requires accurate measurements of the shapes of faint, distant galaxies. Such measurements are hindered by the finite resolution and pixel scale of digital cameras. We investigate the optimum choice of pixel scale for a space-based mission, using the engineering model and survey strategy of the proposed Supernova Acceleration Probe as a baseline. We do this by simulating realistic astronomical images containing a known input shear signal and then attempting to recover the signal using the Rhodes, Refregier, & Groth algorithm.

We find that the quality of shear measurement is always improved by smaller pixels. However, in practice, telescopes are usually limited to a finite number of pixels and operational life span, so the total area of a survey increases with pixel size. We therefore fix the survey lifetime and the number of pixels in the focal plane while varying the pixel scale, thereby effectively varying the survey size. In a pure trade-off of image resolution versus survey area, we find that measurements of the matter power spectrum would have minimum statistical error with a pixel scale of 0.09'' for a 0.14'' FWHM point-spread function (PSF). The pixel scale could be increased to ~0.16'' if images dithered by exactly half-pixel offsets were always available.

Some of our results do depend on our adopted shape measurement method and should be regarded as an upper limit: future pipelines may require smaller pixels to overcome systematic floors not yet accessible, and, in certain circumstances, measuring the shape of the PSF might be more difficult than those of galaxies. However, the relative trends in our analysis are robust, especially those of the surface density of resolved galaxies. Our approach thus provides a snapshot of the potential of available technology, and a practical counterpart to analytic studies of pixelation, which necessarily assume an idealized shape measurement method.
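A toy version of the measurement side of this trade-off (not the Rhodes, Refregier, & Groth algorithm: the profile, pixel scales, and unweighted-moment estimator below are invented for illustration) renders an elliptical Gaussian "galaxy" at two pixel scales by integrating the profile over square pixels, then recovers the ellipticity component e1 from second moments:

```python
import numpy as np

# Invented toy galaxy: an elliptical Gaussian with semi-axes a = 1.2, b = 0.8.
A, B = 1.2, 0.8
TRUE_E1 = (A**2 - B**2) / (A**2 + B**2)  # true e1 of the profile

def galaxy(x, y):
    return np.exp(-0.5 * ((x / A) ** 2 + (y / B) ** 2))

def render(pixel_scale, half_width=5.0, oversample=4):
    """Integrate the profile over square pixels by oversampling and binning."""
    n = int(round(2 * half_width / pixel_scale))
    m = n * oversample
    step = pixel_scale / oversample
    c = (np.arange(m) - (m - 1) / 2) * step
    sx, sy = np.meshgrid(c, c)
    img = galaxy(sx, sy).reshape(n, oversample, n, oversample).mean(axis=(1, 3))
    centres = (np.arange(n) - (n - 1) / 2) * pixel_scale
    px, py = np.meshgrid(centres, centres)
    return img, px, py

def e1(img, x, y):
    """Unweighted-second-moment ellipticity estimate from pixel data."""
    w = img / img.sum()
    qxx, qyy = np.sum(w * x * x), np.sum(w * y * y)
    return (qxx - qyy) / (qxx + qyy)

fine = e1(*render(pixel_scale=0.05))   # well-sampled image
coarse = e1(*render(pixel_scale=1.5))  # heavily pixelated image
```

On the well-sampled image the estimate lands very close to the true ellipticity, while on the heavily pixelated image it is visibly biased, which is the kind of degradation the pixel-scale trade-off has to weigh against survey area.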
Deep Convolutional Neural Networks as strong gravitational lens detectors
Future large-scale surveys with high-resolution imaging will provide us with
many new strong galaxy-scale lenses. These strong lensing systems, however,
will be buried in data volumes far beyond the capacity of human experts to
classify visually in an unbiased way. We present a new
strong gravitational lens finder based on convolutional neural networks (CNNs).
The method was applied to the Strong Lensing challenge organised by the Bologna
Lens Factory. It achieved first and third place respectively on the space-based
data-set and the ground-based data-set. The goal was to find a fully automated
lens finder for ground-based and space-based surveys which minimizes human
inspection. We compare the results of our CNN architecture and three new
variations ("invariant", "views", and "residual") on the simulated data of the
challenge. Each method has been trained separately 5 times on 17 000 simulated
images, cross-validated using 3 000 images and then applied to a 100 000 image
test set. We used two different metrics for evaluation: the area under the
receiver operating characteristic curve (AUC) and the recall with no false
positives. For ground-based data our best method achieved an AUC score of …
and a zero-false-positive recall of …; for space-based data our best method
achieved an AUC score of … and a zero-false-positive recall of … . On
space-based data, adding dihedral invariance to the CNN architecture
diminished the overall score but achieved a higher no-contamination recall.
We found that using committees of 5 CNNs produces the best recall at zero
contamination and consistently scores better AUC than a single CNN. We found
that for every variation of our CNN lens finder, we achieve AUC scores close
to … .

Comment: 9 pages, accepted to A&A
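The committee idea and the zero-contamination recall metric can be sketched with synthetic scores (the score distributions, noise level, and committee size below are invented; this is not the paper's CNN): average the scores of several noisy members, then compare AUC and the recall at the strictest threshold that admits no false positives:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)

# Invented scores: 5 committee members each see the true lens signal
# (1 for lenses, 0 for non-lenses) plus independent noise.
labels = np.concatenate([np.ones(500), np.zeros(500)])
members = [labels + rng.normal(0.0, 0.8, size=labels.size) for _ in range(5)]
committee = np.mean(members, axis=0)  # averaged committee score

def recall_at_zero_fpr(y_true, scores):
    """Fraction of positives recovered at a threshold admitting no false positives."""
    threshold = scores[y_true == 0].max()
    return float(np.mean(scores[y_true == 1] > threshold))

member_aucs = [roc_auc_score(labels, m) for m in members]
committee_auc = roc_auc_score(labels, committee)
```

Averaging shrinks the independent noise of the members, so the committee score separates the two classes more cleanly than any single member, which is what drives both the better AUC and the better recall at zero contamination.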