The brightness clustering transform and locally contrasting keypoints
In recent years a new wave of feature descriptors has been presented to the computer vision community: ORB, BRISK and FREAK, amongst others. These new descriptors reduce time and memory consumption in the processing and storage stages of tasks such as image matching or visual odometry, enabling real-time applications. The problem now is the lack of fast interest point detectors with good repeatability to use with these new descriptors. We present a new blob detector which can be implemented in real time and is faster than most of the currently used feature detectors. The detection is achieved with an innovative non-deterministic low-level operator called the Brightness Clustering Transform (BCT). The BCT can be thought of as a coarse-to-fine search through scale spaces for the true derivative of the image; it also mimics the trans-saccadic perception of human vision. We call the new algorithm the Locally Contrasting Keypoints detector, or LOCKY. Showing good repeatability and robustness to the image transformations included in the Oxford dataset, LOCKY is amongst the fastest affine-covariant feature detectors.
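A minimal Python/numpy sketch of a coarse-to-fine, voting-style search of the kind the abstract describes: random rectangles are dropped on the image, each repeatedly descends into its brightest quadrant via an integral image, and the terminal positions accumulate votes. The rectangle sampling, descent rule and vote accumulator are illustrative assumptions (and assume a grayscale image large enough for the sampled rectangle sizes), not the authors' exact BCT/LOCKY algorithm.

```python
import numpy as np

def integral_image(img):
    # Summed-area table so any rectangle sum costs O(1).
    return np.cumsum(np.cumsum(img.astype(np.float64), axis=0), axis=1)

def rect_sum(ii, y0, x0, y1, x1):
    # Sum of img[y0:y1, x0:x1] from the integral image (exclusive upper bounds).
    total = ii[y1 - 1, x1 - 1]
    if y0 > 0:
        total -= ii[y0 - 1, x1 - 1]
    if x0 > 0:
        total -= ii[y1 - 1, x0 - 1]
    if y0 > 0 and x0 > 0:
        total += ii[y0 - 1, x0 - 1]
    return total

def bct_votes(img, n_rects=5000, min_size=4, seed=0):
    # Coarse-to-fine search: drop random rectangles, repeatedly descend into
    # the brightest quadrant, and vote where the descent terminates.
    rng = np.random.default_rng(seed)
    h, w = img.shape
    ii = integral_image(img)
    votes = np.zeros((h, w), dtype=np.int32)
    for _ in range(n_rects):
        rh = rng.integers(min_size * 2, h // 2)
        rw = rng.integers(min_size * 2, w // 2)
        y0 = rng.integers(0, h - rh)
        x0 = rng.integers(0, w - rw)
        y1, x1 = y0 + rh, x0 + rw
        while (y1 - y0) > min_size and (x1 - x0) > min_size:
            ym, xm = (y0 + y1) // 2, (x0 + x1) // 2
            quads = [(y0, x0, ym, xm), (y0, xm, ym, x1),
                     (ym, x0, y1, xm), (ym, xm, y1, x1)]
            sums = [rect_sum(ii, *q) for q in quads]
            y0, x0, y1, x1 = quads[int(np.argmax(sums))]
        votes[(y0 + y1) // 2, (x0 + x1) // 2] += 1
    return votes
```

Local maxima of the vote map would then be taken as candidate blob centres; a dark-blob pass could be obtained by running the same search on the inverted image.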
Semi-independent Stereo Visual Odometry for Different Field of View Cameras
This paper presents a pipeline for stereo visual odometry using cameras with different fields of view. It gives a proof of concept of how a constraint on the respective field of view of each camera can lead to both an accurate 3D reconstruction and a robust pose estimation. Indeed, at a fixed resolution, a narrow field of view has a higher angular resolution and preserves image texture details. On the other hand, a wide field of view allows features to be tracked over longer periods, since the overlap between two successive frames is more substantial. We propose a semi-independent stereo system in which each camera individually performs temporal multi-view optimization, while their initial parameters are still jointly optimized in an iterative framework. Furthermore, the concept of lead and follow cameras is introduced to adaptively propagate information between the cameras. We evaluate the method qualitatively on two indoor datasets, and quantitatively on a synthetic dataset to allow comparison across different fields of view.
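A toy sketch of the lead-and-follow idea as we read it: both cameras refine a shared 6-DoF pose from the same initialization, the camera with the lower residual is taken as the lead, and its estimate re-initializes the other camera before a second pass. The refine_pose routine and the residual functions are placeholders standing in for each camera's multi-view optimization; none of the names or steps below are from the paper.

```python
import numpy as np

def refine_pose(initial_pose, residual_fn, n_iters=50, step=0.1, seed=0):
    # Toy gradient-free refinement: perturb the 6-DoF pose vector and keep
    # improvements (placeholder for a camera's temporal multi-view optimization).
    rng = np.random.default_rng(seed)
    pose, best = initial_pose.copy(), residual_fn(initial_pose)
    for _ in range(n_iters):
        cand = pose + step * rng.standard_normal(6)
        err = residual_fn(cand)
        if err < best:
            pose, best = cand, err
    return pose, best

def semi_independent_step(pose_init, residual_narrow, residual_wide):
    # Each camera refines the shared pose with its own residuals; the camera
    # with the lower error becomes the "lead" and its estimate re-initializes
    # the "follow" camera before a second refinement.
    pose_n, err_n = refine_pose(pose_init, residual_narrow)
    pose_w, err_w = refine_pose(pose_init, residual_wide)
    if err_n <= err_w:
        lead_pose, follow_fn = pose_n, residual_wide
    else:
        lead_pose, follow_fn = pose_w, residual_narrow
    follow_pose, _ = refine_pose(lead_pose, follow_fn)
    # A fused estimate; a real pipeline would instead re-enter joint optimization.
    return 0.5 * (lead_pose + follow_pose)

# Toy usage: both residuals pull toward slightly different "true" poses.
true_pose = np.array([0.1, 0.0, 0.5, 0.0, 0.02, 0.0])
res_n = lambda p: float(np.sum((p - true_pose) ** 2))
res_w = lambda p: float(np.sum((p - (true_pose + 0.01)) ** 2))
print(semi_independent_step(np.zeros(6), res_n, res_w))
```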
Bilateral Functions for Global Motion Modeling
This paper proposes modeling motion in a bilateral domain that augments spatial information with the motion itself. We use the bilateral domain to reformulate a piecewise-smooth constraint as a continuous global modeling constraint. The resulting model can be robustly computed from highly noisy scattered feature points using a global minimization. We demonstrate how the model can reliably obtain large numbers of good-quality correspondences over wide baselines while keeping outliers to a minimum.
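A small numpy sketch of what a bilateral (position + motion) weighting can look like, assuming sparse correspondences given as point positions and flow vectors: weights decay with both spatial distance and motion difference, and matches far from the bilateral estimate are flagged as outliers. The function names and thresholds are ours; this illustrates the bilateral domain, not the paper's global minimization.

```python
import numpy as np

def bilateral_motion_filter(pts, flows, sigma_s=50.0, sigma_m=5.0):
    # pts: (N, 2) feature positions, flows: (N, 2) motion vectors.
    # Each feature's smoothed motion is a weighted average over all features,
    # with weights decaying in both spatial distance and motion difference.
    d_sp = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    d_mo = np.linalg.norm(flows[:, None, :] - flows[None, :, :], axis=-1)
    w = np.exp(-0.5 * (d_sp / sigma_s) ** 2) * np.exp(-0.5 * (d_mo / sigma_m) ** 2)
    return (w @ flows) / w.sum(axis=1, keepdims=True)

def inlier_mask(pts, flows, thresh=3.0):
    # Matches whose motion deviates strongly from the bilateral estimate
    # are treated as outliers.
    smoothed = bilateral_motion_filter(pts, flows)
    resid = np.linalg.norm(flows - smoothed, axis=1)
    return resid < thresh
```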