Homography-based ground plane detection using a single on-board camera
This study presents a robust method for ground plane detection in vision-based systems with a non-stationary camera. The proposed method is based on the reliable estimation of the homography between ground planes in successive images. This homography is computed using a feature matching approach which, in contrast to classical approaches to on-board motion estimation, does not require explicit ego-motion calculation. Instead, a novel homography calculation method based on a linear estimation framework is presented. This framework provides predictions of the ground plane transformation matrix that are dynamically updated with new measurements. The method is especially suited to challenging environments, in particular traffic scenarios, in which information is scarce and the homography computed from the images is often inaccurate or erroneous. The proposed estimation framework is able to remove erroneous measurements and to correct inaccurate ones, hence producing a reliable homography estimate at each instant. It is based on evaluating the difference between the predicted and observed transformations, measured by the spectral norm of the associated difference matrix. Moreover, an example is provided of how to use the information extracted from ground plane estimation to achieve object detection and tracking. The method has been successfully demonstrated for the detection of moving vehicles in traffic environments.
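The gating rule described in the abstract, comparing predicted and observed homographies via the spectral norm of their difference, can be sketched as follows. This is a minimal illustration: the scale normalization and the threshold value `tau` are assumptions, not the paper's exact formulation.

```python
import numpy as np

def accept_measurement(H_pred, H_meas, tau=0.1):
    """Accept an observed ground-plane homography only if it is close to
    the predicted one, as measured by the spectral norm (largest singular
    value) of the difference matrix."""
    # Homographies are defined only up to scale; fix the scale before
    # comparing (here by normalizing the bottom-right entry to 1).
    Hp = H_pred / H_pred[2, 2]
    Hm = H_meas / H_meas[2, 2]
    d = np.linalg.norm(Hp - Hm, ord=2)  # spectral norm of the difference
    return d <= tau, d
```

Measurements flagged as erroneous by such a test would be discarded in favor of the prediction, while mildly inaccurate ones can be blended with it.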
Color homography
We show the surprising result that colors across a change in viewing
condition (changing light color, shading and camera) are related by a
homography. Our homography color correction application delivers improved
color fidelity compared with linear least-squares.
Comment: Accepted by Progress in Colour Studies 201
SuperPoint: Self-Supervised Interest Point Detection and Description
This paper presents a self-supervised framework for training interest point
detectors and descriptors suitable for a large number of multiple-view geometry
problems in computer vision. As opposed to patch-based neural networks, our
fully-convolutional model operates on full-sized images and jointly computes
pixel-level interest point locations and associated descriptors in one forward
pass. We introduce Homographic Adaptation, a multi-scale, multi-homography
approach for boosting interest point detection repeatability and performing
cross-domain adaptation (e.g., synthetic-to-real). Our model, when trained on
the MS-COCO generic image dataset using Homographic Adaptation, is able to
repeatedly detect a much richer set of interest points than the initial
pre-adapted deep model and any other traditional corner detector. The final
system gives rise to state-of-the-art homography estimation results on HPatches
when compared to LIFT, SIFT and ORB.
Comment: Camera-ready version for CVPR 2018 Deep Learning for Visual SLAM Workshop (DL4VSLAM2018).
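Homographic Adaptation aggregates detections over many random warps of an image. A minimal sketch of the sampling and point-warping machinery follows; the transformation ranges and the composition used here are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def sample_homography(rng, max_angle=0.2, max_shift=0.1, max_persp=1e-3):
    """Sample a random homography as the composition of a small rotation,
    translation and perspective component (illustrative ranges)."""
    a = rng.uniform(-max_angle, max_angle)
    R = np.array([[np.cos(a), -np.sin(a), 0.0],
                  [np.sin(a),  np.cos(a), 0.0],
                  [0.0,        0.0,       1.0]])
    T = np.eye(3)
    T[:2, 2] = rng.uniform(-max_shift, max_shift, size=2)
    P = np.eye(3)
    P[2, :2] = rng.uniform(-max_persp, max_persp, size=2)
    return R @ T @ P

def warp_points(H, pts):
    """Apply a 3x3 homography to an (N, 2) array of 2-D points."""
    ph = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return ph[:, :2] / ph[:, 2:3]

# Detections found in a warped image are mapped back to the original
# frame with the inverse homography before being aggregated.
rng = np.random.default_rng(0)
H = sample_homography(rng)
pts = np.array([[10.0, 5.0], [0.0, 0.0]])
recovered = warp_points(np.linalg.inv(H), warp_points(H, pts))
```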
Detecting shadows and low-lying objects in indoor and outdoor scenes using homographies
Many computer vision applications apply background suppression techniques for the detection and segmentation of moving objects in a scene. While these algorithms tend to work well in controlled conditions, they often fail when applied to unconstrained real-world environments. This paper describes a system that detects and removes erroneously segmented foreground regions that are close to a ground plane. These regions include shadows, changing background objects and other low-lying objects such as leaves and rubbish. The system uses a set-up of two or more cameras and requires no 3D reconstruction or depth analysis of the regions; therefore, a strong camera calibration of the set-up is not necessary. A geometric constraint called a homography is exploited to determine whether foreground points are on or above the ground plane. The system takes advantage of the fact that image regions off the homography plane will not correspond after a homography transformation. Experimental results using real-world scenes from a pedestrian tracking application illustrate the effectiveness of the proposed approach.
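The plane-induced constraint this system relies on can be sketched as a simple transfer-error test between the two camera views. This is a hypothetical helper: the pixel tolerance and the point representation are assumptions.

```python
import numpy as np

def on_ground_plane(H, p1, p2, tol=2.0):
    """Test whether a matched foreground point lies on the ground plane.

    A point on the plane observed at pixel p1 in camera 1 must map to its
    match p2 in camera 2 under the inter-camera ground-plane homography H;
    points above the plane exhibit parallax and fail to correspond."""
    q = H @ np.array([p1[0], p1[1], 1.0])
    q = q[:2] / q[2]  # back to inhomogeneous pixel coordinates
    return float(np.linalg.norm(q - np.asarray(p2, dtype=float))) <= tol
```

Foreground pixels passing this test (shadows, leaves, other low-lying regions) would be removed from the segmentation.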
A robust and efficient video representation for action recognition
This paper introduces a state-of-the-art video representation and applies it
to efficient action recognition and detection. We first propose to improve the
popular dense trajectory features by explicit camera motion estimation. More
specifically, we extract feature point matches between frames using SURF
descriptors and dense optical flow. The matches are used to estimate a
homography with RANSAC. To improve the robustness of homography estimation, a
human detector is employed to remove outlier matches from the human body as
human motion is not constrained by the camera. Trajectories consistent with the
homography are considered as due to camera motion, and thus removed. We also
use the homography to cancel out camera motion from the optical flow. This
results in significant improvement on motion-based HOF and MBH descriptors. We
further explore the recent Fisher vector as an alternative feature encoding
approach to the standard bag-of-words histogram, and consider different ways to
include spatial layout information in these encodings. We present a large and
varied set of evaluations, considering (i) classification of short basic
actions on six datasets, (ii) localization of such actions in feature-length
movies, and (iii) large-scale recognition of complex events. We find that our
improved trajectory features significantly outperform previous dense
trajectories, and that Fisher vectors are superior to bag-of-words encodings
for video recognition tasks. In all three tasks, we show substantial
improvements over state-of-the-art results.
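The camera-motion cancellation step, subtracting the flow induced by the estimated homography from the observed optical flow, can be sketched as follows. Function names are ours, and the paper's exact implementation may differ.

```python
import numpy as np

def camera_flow(H, h, w):
    """Per-pixel displacement induced by camera motion modelled as a
    homography H: where each pixel moves under H, minus its position."""
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs, ys, np.ones_like(xs)], axis=-1).astype(float)
    warped = pts @ H.T
    warped = warped[..., :2] / warped[..., 2:3]
    return warped - np.stack([xs, ys], axis=-1).astype(float)

def compensate_flow(flow, H):
    """Subtract the homography-induced camera flow from the observed
    optical flow, leaving (approximately) only object motion."""
    h, w = flow.shape[:2]
    return flow - camera_flow(H, h, w)
```

Descriptors such as HOF and MBH computed on the compensated flow then respond to object motion rather than camera motion.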
Two-dimensional homography-based correction of positional errors in widefield MRT images
A steradian of the southern sky has been imaged at 151.5 MHz using the
Mauritius Radio Telescope (MRT). These images show systematics in positional
errors of sources when compared to source positions in the Molonglo Reference
Catalogue (MRC). We have applied two-dimensional homography to correct for
systematic positional errors in the image domain and thereby avoid
re-processing the visibility data. Positions of bright (above 15σ)
point sources common to the MRT catalogue and the MRC are used to set up an
over-determined system to solve for the homography matrix. After correction, the
errors are found to be within 10% of the beamwidth for these bright sources and
the systematics are eliminated from the images. This technique will be of
relevance to the new generation radio telescopes where, owing to huge data
rates, only images after a certain integration would be recorded as opposed to
raw visibilities. It is also interesting to note how our investigations pointed
to possible errors in the array geometry. The analysis of positional errors of
sources showed that MRT images are stretched in declination by ~1 part in 1000.
This translates to a compression of the baseline scale in the visibility
domain. The array geometry was re-estimated using the astrometry principle. The
estimates show an error of ~1 mm/m, which results in an error of about half a
wavelength at 150 MHz for a 1 km north-south baseline. The estimates also
indicate that the east-west arm is inclined by an angle of ~40 arcsec to the
true east-west direction.
Comment: 9 pages, 8 figures, accepted for publication in MNRAS.
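The over-determined solve for the homography matrix from matched source positions can be sketched with the standard direct linear transform (DLT) formulation. This is an illustrative sketch: coordinate normalization and conditioning, which matter in practice, are omitted.

```python
import numpy as np

def fit_homography(src, dst):
    """Least-squares (DLT) estimate of the 3x3 homography mapping src to
    dst from an over-determined set of N >= 4 point matches.

    Each match contributes two linear equations in the 9 entries of H;
    the solution is the right singular vector associated with the
    smallest singular value of the stacked system."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]
```

Applied in the image domain, the fitted H can then be used to resample source positions and remove the systematic errors without touching the visibilities.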