
    Camera Calibration from Dynamic Silhouettes Using Motion Barcodes

    Computing the epipolar geometry between cameras with very different viewpoints is often problematic, as matching points are hard to find. In such cases, it has been proposed to use information from dynamic objects in the scene to suggest point and line correspondences. We propose a speed-up of about two orders of magnitude, as well as an increase in robustness and accuracy, for methods that compute epipolar geometry from dynamic silhouettes. This improvement is based on a new temporal signature: the motion barcode for lines. A motion barcode is a binary temporal sequence for a line, indicating for each frame the existence of at least one foreground pixel on that line. The motion barcodes of two corresponding epipolar lines are very similar, so the search for corresponding epipolar lines can be restricted to lines having similar barcodes. The use of motion barcodes leads to increased speed, accuracy, and robustness in computing the epipolar geometry.
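    The barcode itself is simple to compute. Below is a minimal sketch under our own assumptions, not the paper's code: `masks` is a list of per-frame binary foreground masks, `line_pts` is an N x 2 integer array of (x, y) pixels on a candidate line, and the similarity score is plain normalized correlation of the two binary sequences (the paper's exact matching score may differ).

```python
import numpy as np

def motion_barcode(masks, line_pts):
    """Binary sequence: 1 if any foreground pixel lies on the line in a frame."""
    xs, ys = line_pts[:, 0], line_pts[:, 1]
    return np.array([m[ys, xs].any() for m in masks], dtype=float)

def barcode_similarity(b1, b2):
    """Normalized correlation between two motion barcodes."""
    a, b = b1 - b1.mean(), b2 - b2.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0
```

    Candidate epipolar-line pairs can then be pruned by keeping only pairs whose barcode similarity exceeds a threshold, before any geometric verification is attempted; this pruning is where the reported speed-up comes from.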

    View point robust visual search technique

    In this thesis, we explore visual search techniques for images taken from different viewpoints and enhance their matching capability under viewpoint changes. We propose homography-based back-projection as a post-processing stage for Compact Descriptors for Visual Search (CDVS), the new MPEG standard; moreover, we define affine feature detection based on an affine-adapted scale space, which steers the Gaussian scale space to capture features from affine-transformed images; we also develop the corresponding gradient-based affine descriptor. Using these proposed techniques, the robustness of image retrieval to affine transformations is significantly improved.

    The first chapter of this thesis introduces the background on visual search. In the second chapter, we propose a homography-based back-projection used as the post-processing stage of CDVS to improve resilience to viewpoint changes. The theory behind this proposal is that each perspective projection of the image of a 2D object can be approximated by an affine transformation, and each pair of such affine transformations is related by a homography matrix. Given that matrix, the image can be back-projected to simulate the image from another viewpoint; truly matching images can then be declared as matches, because the perspective distortion has been reduced by the back-projection. Accurate homography estimation between images from different viewpoints requires at least 4 correspondences, which can be provided by the CDVS pipeline. In this way, the homography-based back-projection can be used to scrutinize image pairs that lack enough matched keypoints: if a homography relation holds between them, the perspective distortion can be reduced by exploiting the few available correspondences. In our experiments, this technique proved quite effective, especially for images of 2D objects.

    The third chapter introduces the scale space, which is also the kernel of feature detection in scale-invariant visual search techniques. The scale space, built from a series of Gaussian-blurred images, represents image structures at different levels of detail. Because the Gaussian-smoothed images in the scale space are not invariant to affine transformations, neither is the resulting feature detection; this is why scale-invariant visual search techniques are sensitive to affine transformations. We therefore propose an affine-adapted scale space, which employs affine-steered Gaussian filters to smooth the images. This scale space adapts to different affine transformations and represents image structures from different viewpoints well, so features from different viewpoints can be captured reliably. In practice, scale-invariant visual search techniques employ a pyramid structure to speed up construction. Following the affine Gaussian scale-space principles, we also propose two structures for building the affine Gaussian scale space. The affine Gaussian scale-space structure is similar to the pyramid structure because of similar sampling and cascading properties. Conversely, the affine Laplacian of Gaussian (LoG) structure is completely different: the Laplacian operator is hard to deform affinely, so unlike the general LoG construction, which is obtained by a simple Laplacian operation on the scale space, the affine LoG can only be obtained by affine LoG convolution and cascade implementations on the affine scale space.
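    The affine-steered Gaussian smoothing at the core of this scale space can be illustrated with a short sketch. This is a minimal illustration under our own assumptions, not the thesis code: the filter is steered by the 2x2 linear part `A` of an assumed affine map, the kernel covariance is taken as sigma^2 * A A^T, and the kernel radius and normalization are our choices.

```python
import numpy as np
from scipy.ndimage import convolve

def affine_gaussian_kernel(A, sigma, radius=None):
    """Anisotropic Gaussian kernel with covariance sigma^2 * A A^T.

    A is the 2x2 linear part of the affine map used to steer the filter;
    with A = identity this reduces to the ordinary isotropic Gaussian.
    """
    cov = (sigma ** 2) * (A @ A.T)           # steered covariance matrix
    cov_inv = np.linalg.inv(cov)
    if radius is None:
        radius = int(np.ceil(3 * sigma * np.linalg.norm(A, 2)))
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    pts = np.stack([xs, ys], axis=-1)        # (H, W, 2) pixel offsets
    quad = np.einsum('...i,ij,...j->...', pts, cov_inv, pts)
    kernel = np.exp(-0.5 * quad)
    return kernel / kernel.sum()             # normalize to unit mass

def affine_scale_space(image, A, sigmas=(1.6, 2.26, 3.2)):
    """One octave of an affine-adapted scale space (a sketch)."""
    return [convolve(image.astype(float), affine_gaussian_kernel(A, s))
            for s in sigmas]
```

    With `A = np.eye(2)` the kernel reduces to the standard Gaussian, so the ordinary scale space is the special case that this construction generalizes.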
    Using our proposed structures, both the affine Gaussian scale space and the affine LoG can be constructed. We also explore the affine scale-space implementation in the frequency domain, studying the spectrum of Gaussian image smoothing under affine transformation, and propose two structures for it. Generally speaking, the frequency-domain implementation is more robust to affine transformations, at the expense of a higher computational complexity.

    For affine-invariant visual search it also makes sense to adopt an affine descriptor. In the fourth chapter, we propose an affine-invariant feature descriptor based on the affine gradient. The current state-of-the-art feature descriptors, including SIFT and the Gradient Location and Orientation Histogram (GLOH), are based on histograms of the image gradient around the detected features; if the image gradient is calculated as the difference of adjacent pixels, it is not affine invariant. We therefore first propose an affine gradient, which contributes affine invariance to the descriptor. This affine gradient is calculated directly as the derivative of the affine-Gaussian-blurred images. To simplify the processing, we also create the corresponding affine Gaussian derivative filters for the different detected scales, so the affine gradient can be generated quickly. With this affine gradient, we apply the same scheme as the SIFT descriptor to generate the gradient histogram; by normalizing the histogram, the affine descriptor is formed. This descriptor is not only affine invariant but also rotation invariant, because the direction of the area used to form the histogram is determined by the main direction of the gradient around the feature. In practice, this affine descriptor is fully affine invariant and performs extremely well for image matching. In the concluding chapter, we draw some conclusions and describe future work.
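    A compressed sketch of the descriptor pipeline follows: smoothed patch, then gradient, then weighted orientation histogram, then L2 normalization. It is ours, not the thesis's: finite differences stand in for the analytic affine Gaussian derivative filters, and the 4x4 subregion grid and dominant-orientation rotation of a full SIFT-style descriptor are omitted for brevity.

```python
import numpy as np

def affine_gradient(smoothed):
    """Gradient of an affine-Gaussian-smoothed patch (finite differences
    stand in for the analytic Gaussian-derivative filters)."""
    gy, gx = np.gradient(smoothed)
    mag = np.hypot(gx, gy)                    # gradient magnitude
    ori = np.arctan2(gy, gx) % (2 * np.pi)    # orientation in [0, 2*pi)
    return mag, ori

def orientation_histogram(mag, ori, bins=8):
    """Magnitude-weighted, L2-normalized orientation histogram for one patch."""
    hist, _ = np.histogram(ori, bins=bins, range=(0, 2 * np.pi), weights=mag)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist
```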

    Algorithms for trajectory integration in multiple views

    This thesis addresses the problem of deriving a coherent and accurate localization of moving objects from partial visual information when data are generated by cameras placed at different view angles with respect to the scene. The framework is built around applications of scene monitoring with multiple cameras. Firstly, we demonstrate how a geometry-based solution exploits the relationships between corresponding feature points across views and improves the accuracy of object localization. Then, we improve the estimation of object locations with geometric transformations that account for lens distortion. Additionally, we study the integration of the partial visual information generated by each individual sensor and its combination into a single frame of observation that considers object association and data fusion. Our approach is fully image-based, relies only on 2D constructs, and does not require any complex computation in 3D space. We exploit the continuity and coherence of objects' motion when crossing cameras' fields of view, and we work under the assumptions of a planar ground plane and a wide baseline (i.e. cameras' viewpoints are far apart). The main contributions are: i) the development of a framework for distributed visual sensing that accounts for inaccuracies in the geometry of multiple views; ii) the reduction of trajectory mapping errors using a statistics-based homography estimation; iii) the integration of a polynomial method for correcting inaccuracies caused by the cameras' lens distortion; iv) a global trajectory reconstruction algorithm that associates and integrates the fragments of trajectories generated by each camera.
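    To make contribution ii) concrete, here is a minimal sketch of mapping image trajectories onto a common ground plane with a robustly estimated homography. It uses OpenCV's RANSAC-based findHomography and perspectiveTransform; the source of the correspondences and the RANSAC threshold are our assumptions, and this is not the thesis's exact statistical estimator.

```python
import numpy as np
import cv2

def ground_plane_homography(img_pts, plane_pts, ransac_thresh=3.0):
    """Robust homography from image points to ground-plane coordinates."""
    H, inliers = cv2.findHomography(
        np.float32(img_pts), np.float32(plane_pts),
        cv2.RANSAC, ransac_thresh)
    return H, inliers.ravel().astype(bool)

def map_trajectory(H, trajectory):
    """Project a per-frame image trajectory (N x 2 array) onto the ground plane."""
    pts = np.float32(trajectory).reshape(-1, 1, 2)   # shape cv2 expects
    return cv2.perspectiveTransform(pts, H).reshape(-1, 2)
```

    Trajectories mapped this way from several cameras share one coordinate frame, which is what makes the subsequent association and fusion steps possible.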

    Exploring Motion Signatures for Vision-Based Tracking, Recognition and Navigation

    As cameras become more and more popular in intelligent systems, algorithms and systems for understanding video data become increasingly important, with a broad range of applications including object detection, tracking, scene understanding, and robot navigation. Beyond stationary information, video data contains rich motion information about the environment. Biological visual systems, such as human and animal eyes, are very sensitive to motion information, which has inspired active research on vision-based motion analysis in recent years. The main focus of motion analysis has been on low-level motion representations of pixels and image regions, but motion signatures can benefit a broader range of applications if further in-depth analysis techniques are developed. In this dissertation, we discuss how to exploit motion signatures to solve problems in two applications: object recognition and robot navigation.

    First, we use bird species recognition as the application for exploring motion signatures in object recognition, beginning with a study of the periodic wingbeat motion of flying birds. To analyze the wing motion of a flying bird, we establish kinematic models for bird wings and obtain the wingbeat periodicity in image frames after perspective projection. Time series of salient extremities on bird images are extracted, and the wingbeat frequency is acquired for species classification. Physical experiments show that the frequency-based recognition method is robust to segmentation errors and to measurement loss of up to 30%. In addition to the wing motion, the body motion of the bird is analyzed to extract the flying velocity in 3D space, and an interacting multiple-model approach is designed to capture the combined object motion patterns under different environment conditions. The proposed systems and algorithms are tested in physical experiments, and the results show a false positive rate of around 20% with a false negative rate close to zero.

    Second, we explore motion signatures for vision-based vehicle navigation. Motion vectors (MVs) encoded in Moving Picture Experts Group (MPEG) videos provide rich information about motion in the environment, which can be used to reconstruct the vehicle's ego-motion and the structure of the scene. However, MVs suffer from a high noise level. To handle this challenge, an error propagation model for MVs is first proposed, and several steps, including MV merging, plane-at-infinity elimination, and planar region extraction, are designed to further reduce the noise. The extracted planes are used as landmarks in an extended Kalman filter (EKF) for simultaneous localization and mapping; results show that the algorithm performs localization and plane mapping with a relative trajectory error below 5.1%. Exploiting the fact that MVs encode both environment information and moving obstacles, we further propose to track moving objects simultaneously with localization and mapping, so that the two critical navigation functionalities, localization and obstacle avoidance, are performed in a single framework. MVs are labeled as stationary or moving according to their consistency with geometric constraints; the extracted planes are thus separated into moving objects and the stationary scene, and multiple EKFs track the static scene and the moving objects simultaneously. In physical experiments, we show a detection rate for moving objects of 96.6% and a mean absolute localization error below 3.5 meters.
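    The wingbeat-frequency step lends itself to a compact illustration. The following is a minimal sketch under our own assumptions (a uniformly sampled 1D time series of one wing extremity and a plain FFT periodogram), not the dissertation's kinematics-model pipeline.

```python
import numpy as np

def wingbeat_frequency(extremity_series, fps):
    """Dominant oscillation frequency (Hz) of a wing-extremity time series.

    extremity_series: 1D array, e.g. the vertical image coordinate of a
    salient wing extremity in each frame; fps: video frame rate.
    """
    x = np.asarray(extremity_series, dtype=float)
    x = x - x.mean()                           # remove the DC component
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fps)
    return freqs[np.argmax(spectrum[1:]) + 1]  # skip the zero-frequency bin
```

    A species classifier can then threshold or cluster these frequencies; a dominant spectral peak degrades gracefully when samples are missing, which is consistent with the robustness to measurement loss reported above.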