Rain Removal in Traffic Surveillance: Does it Matter?
Varying weather conditions, including rainfall and snowfall, are generally
regarded as a challenge for computer vision algorithms. One proposed solution
to the challenges induced by rain and snowfall is to artificially remove the
rain from images or video using rain removal algorithms. These algorithms
promise that the rain-removed image frames will improve the performance of
subsequent segmentation and tracking algorithms. However, rain
removal algorithms are typically evaluated on their ability to remove synthetic
rain on a small subset of images. Currently, their behavior is unknown on
real-world videos when integrated with a typical computer vision pipeline. In
this paper, we review the existing rain removal algorithms and propose a new
dataset that consists of 22 traffic surveillance sequences under a broad
variety of weather conditions that all include either rain or snowfall. We
propose a new evaluation protocol that evaluates the rain removal algorithms on
their ability to improve the performance of subsequent segmentation, instance
segmentation, and feature tracking algorithms under rain and snow. If
successful, the de-rained frames of a rain removal algorithm should improve
segmentation performance and increase the number of accurately tracked
features. The results show that a recent single-frame-based rain removal
algorithm increases the segmentation performance by 19.7% on our proposed
dataset, but decreases the feature tracking performance and shows mixed
results with recent instance segmentation methods. In contrast, the
best video-based rain removal algorithm improves the feature tracking accuracy
by 7.72%.
Comment: Published in IEEE Transactions on Intelligent Transportation Systems
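The evaluation protocol above boils down to comparing a downstream metric on rainy versus de-rained frames. A minimal sketch of that comparison for segmentation, using a pixel-wise F-measure; the function names and the epsilon guard are illustrative, not from the paper:

```python
import numpy as np

def f_measure(pred, gt):
    """Pixel-wise F-measure between a binary prediction and ground truth."""
    pred, gt = np.asarray(pred, bool), np.asarray(gt, bool)
    tp = np.logical_and(pred, gt).sum()
    if tp == 0:
        return 0.0
    precision = tp / pred.sum()
    recall = tp / gt.sum()
    return 2.0 * precision * recall / (precision + recall)

def derain_gain(seg_rainy, seg_derained, gt):
    """Relative change (%) in segmentation F-measure after rain removal.

    Positive values mean the rain removal step helped the downstream
    segmenter; negative values mean it hurt.
    """
    f_rain = f_measure(seg_rainy, gt)
    f_derained = f_measure(seg_derained, gt)
    return 100.0 * (f_derained - f_rain) / max(f_rain, 1e-9)
```

The same pattern extends to the tracking metric: count accurately tracked features on rainy and de-rained sequences and report the relative change.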
The image ray transform for structural feature detection
The use of analogies to physical phenomena is an exciting paradigm in computer vision that allows unorthodox approaches to feature extraction, creating new techniques with unique properties. A technique known as the "image ray transform" has been developed based upon an analogy to the propagation of light as rays. The transform analogises an image to a set of glass blocks with refractive index linked to pixel properties and then casts a large number of rays through the image. The course of these rays is accumulated into an output image. The technique can successfully extract tubular and circular features, and we show successful circle detection, ear biometrics and retinal vessel extraction. The transform has also been extended through the use of multiple rays arranged as a beam to increase robustness to noise, and we show quantitative results for fully automatic ear recognition, achieving 95.2% rank one recognition across 63 subjects.
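The ray-casting idea above can be sketched as follows. This is a simplified illustration, not the authors' implementation: intensity maps linearly to refractive index, rays are deflected only by total internal reflection (refractive bending at sub-critical angles is omitted), and all parameter values are assumptions:

```python
import numpy as np

def image_ray_transform(image, n_rays=500, n_max=40.0, max_steps=200, seed=0):
    """Cast rays through an image treated as a field of glass blocks.

    Intensity maps linearly to refractive index, so bright structures act
    as dense media; a ray heading into a lower-index region beyond the
    critical angle is reflected back, trapping rays inside bright tubular
    and circular features. Visited pixels accumulate into the output.
    """
    rng = np.random.default_rng(seed)
    img = np.asarray(image, dtype=float)
    h, w = img.shape
    n = 1.0 + (n_max - 1.0) * img / max(img.max(), 1e-9)
    gy, gx = np.gradient(n)            # surface normals from the index gradient
    acc = np.zeros_like(img)
    for _ in range(n_rays):
        x, y = rng.uniform(0, w - 1), rng.uniform(0, h - 1)
        ang = rng.uniform(0.0, 2.0 * np.pi)
        dx, dy = np.cos(ang), np.sin(ang)
        for _ in range(max_steps):
            xi, yi = int(x), int(y)
            acc[yi, xi] += 1.0
            nx2, ny2 = x + dx, y + dy
            if not (0.0 <= nx2 < w - 1 and 0.0 <= ny2 < h - 1):
                break                  # ray exits the image
            n1, n2 = n[yi, xi], n[int(ny2), int(nx2)]
            if n2 < n1:                # heading into a less dense medium
                norm = np.hypot(gx[yi, xi], gy[yi, xi])
                if norm > 1e-9:
                    ux, uy = gx[yi, xi] / norm, gy[yi, xi] / norm
                    dot = dx * ux + dy * uy
                    sin_i = np.sqrt(max(0.0, 1.0 - dot * dot))
                    if sin_i > n2 / n1:        # total internal reflection
                        dx, dy = dx - 2.0 * dot * ux, dy - 2.0 * dot * uy
                        nx2, ny2 = x + dx, y + dy
                        if not (0.0 <= nx2 < w - 1 and 0.0 <= ny2 < h - 1):
                            break
            x, y = nx2, ny2
    return acc
```

On an image containing a bright disc, rays that enter the disc keep reflecting off its boundary, so the accumulator highlights the circular feature.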
A Cosmic Watershed: the WVF Void Detection Technique
On megaparsec scales the Universe is permeated by an intricate filigree of
clusters, filaments, sheets and voids, the Cosmic Web. For the understanding of
its dynamical and hierarchical history it is crucial to identify objectively
its complex morphological components. One of the most characteristic aspects is
that of the dominant underdense Voids, the product of a hierarchical process
driven by the collapse of minor voids in addition to the merging of large ones.
In this study we present an objective void finder technique which involves a
minimum of assumptions about the scale, structure and shape of voids. Our void
finding method, the Watershed Void Finder (WVF), is based upon the Watershed
Transform, a well-known technique for the segmentation of images. Importantly,
the technique has the potential to trace the existing manifestations of a void
hierarchy. The basic watershed transform is augmented by a variety of
correction procedures to remove spurious structure resulting from sampling
noise. This study contains a detailed description of the WVF. We demonstrate
how it is able to trace and identify, relatively parameter free, voids and
their surrounding (filamentary and planar) boundaries. We test the technique on
a set of Kinematic Voronoi models, heuristic spatial models for a cellular
distribution of matter. Comparison of the WVF segmentations of low noise and
high noise Voronoi models with the quantitatively known spatial characteristics
of the intrinsic Voronoi tessellation shows that the size and shape of the
voids are successfully retrieved. WVF even manages to reproduce the full void
size distribution function.
Comment: 24 pages, 15 figures, MNRAS accepted, for full resolution, see
http://www.astro.rug.nl/~weygaert/tim1publication/watershed.pd
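The watershed step described above can be illustrated with standard SciPy tools. This is a toy stand-in for the WVF, not the authors' pipeline: Gaussian smoothing replaces the paper's noise-correction procedures, local minima of the density field seed the basins, and the watershed partitions space into void patches bounded by ridge lines (walls and filaments):

```python
import numpy as np
from scipy import ndimage as ndi

def watershed_void_finder(density, smooth_sigma=2.0):
    """Toy watershed-based void finder (illustrative, not the WVF code).

    The density field plays the role of a landscape: local minima seed
    basins, and flooding the landscape from those minima partitions it
    into void regions separated by ridges of high density.
    """
    # Step 1: smooth the sampled density to suppress shot noise
    # (a stand-in for the WVF's correction procedures).
    field = ndi.gaussian_filter(np.asarray(density, dtype=float), smooth_sigma)
    # Step 2: local minima of the smoothed field mark void centres.
    minima = field == ndi.minimum_filter(field, size=5)
    markers, n_voids = ndi.label(minima)
    # Step 3: flood the landscape from the minima; where basins meet,
    # the watershed ridge lines trace the void boundaries.
    scaled = np.interp(field, (field.min(), field.max()), (0, 255)).astype(np.uint8)
    labels = ndi.watershed_ift(scaled, markers.astype(np.int16))
    return labels, n_voids
```

Applied to a field with two density dips, the finder assigns each dip its own basin, with the ridge between them acting as the shared wall.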