6 research outputs found

    A High-Precision Calibration Method for Stereo Vision System

    Get PDF

    Reconfigurable Forward Homography Estimation System for Real-Time Applications

    Get PDF
    Image processing and computer vision algorithms extensively use projections, such as homography, as one of their processing steps. Systems for homography calculation usually treat homography as an inverse problem and provide an exact solution; however, such systems cannot meet the inherently tight real-time constraints when processing higher-resolution images. Look-up table based systems provide an option for forward homography solutions, but they require large amounts of memory. Recent compressed look-up table methods reduce the memory requirements at the expense of a lower peak signal-to-noise ratio. In this work, we present a forward homography estimation algorithm that provides higher image quality than compressed look-up table methods. The algorithm is based on bounding the homography error and discarding the pixels outside the determined bound. The presented FPGA implementation of the estimation system requires a small amount of hardware and no memory storage. The prototype system projects an image frame onto a spherical surface at 295 Mpixels/s, which is, to our knowledge, currently the fastest homography system.
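    The forward-mapping idea the abstract contrasts with inverse (backward) warping can be sketched as follows. This is a minimal NumPy illustration of forward homography projection only; the function name, the nearest-pixel rounding, and the grayscale assumption are our own choices and do not reflect the paper's error-bounding scheme or its FPGA design:

    ```python
    import numpy as np

    def forward_homography(src, H, out_shape):
        """Forward-map a grayscale image through a 3x3 homography H.

        Each source pixel (x, y) is projected to H @ (x, y, 1), normalised
        by the homogeneous coordinate, rounded to the nearest pixel, and
        written into the output frame if it lands inside the bounds.
        """
        h, w = src.shape
        ys, xs = np.mgrid[0:h, 0:w]
        pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
        proj = H @ pts
        proj = proj / proj[2]                  # homogeneous normalisation
        px = np.rint(proj[0]).astype(int)
        py = np.rint(proj[1]).astype(int)
        inside = (px >= 0) & (px < out_shape[1]) & (py >= 0) & (py < out_shape[0])
        out = np.zeros(out_shape, dtype=src.dtype)
        out[py[inside], px[inside]] = src.ravel()[inside]
        return out
    ```

    Unlike backward warping, which visits every output pixel and solves for its source, forward mapping visits every source pixel, which is why out-of-bound pixels can simply be skipped.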

    Image blending using graph cut method for image mosaicing

    Get PDF
    In this research work, a feature-based image mosaicing technique with image blending using the graph cut method is proposed. Image mosaicing algorithms fall into two broad categories: direct (intensity-based) methods and feature-based methods. Direct methods need a good initialization, whereas feature-based methods do not require initialization during registration. Feature-based techniques follow these primary steps: feature extraction, feature matching, transformation model estimation, image resampling and transformation, and image blending. Harris corner detection, SIFT, and SURF are feature detection algorithms commonly used for image mosaicing, each with its own advantages and limitations depending on the application. The proposed method employs the Harris corner detection algorithm: features are detected and feature descriptors are formed around the corners. The descriptors from one image are matched against those of the other image for the best closeness, and only the matched features are kept; the rest are discarded. The transformation model is estimated from the matched features and the image is warped accordingly. After the image is warped onto a common mosaic plane, the last step is to remove the intensity seam. The graph cut method with a minimum-cut/maximum-flow algorithm is used for image blending, and a new method for optimising the cut in the graph cut is also proposed in this paper.
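    The Harris corner detection step mentioned above can be illustrated with a minimal NumPy sketch of the Harris corner response. The `k` value and the 3x3 box smoothing window are conventional choices, not taken from the paper, which does not specify its parameters here:

    ```python
    import numpy as np

    def harris_response(img, k=0.04):
        """Harris corner response R = det(M) - k * trace(M)^2.

        M is the structure tensor of the image gradients, smoothed here
        with a simple 3x3 box filter; positive R indicates a corner,
        negative R an edge, and R near zero a flat region.
        """
        img = img.astype(float)
        Ix = np.gradient(img, axis=1)
        Iy = np.gradient(img, axis=0)

        def box3(a):
            # 3x3 box smoothing with edge-replicated padding
            p = np.pad(a, 1, mode="edge")
            return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                       for i in range(3) for j in range(3)) / 9.0

        Sxx, Syy, Sxy = box3(Ix * Ix), box3(Iy * Iy), box3(Ix * Iy)
        det = Sxx * Syy - Sxy * Sxy
        trace = Sxx + Syy
        return det - k * trace ** 2
    ```

    Corners are then taken as local maxima of R above a threshold, and descriptors are built from the image patches around them.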

    Height inspection of wafer bumps without explicit 3D reconstruction

    Get PDF
    by Dong, Mei. Thesis (M.Phil.), Chinese University of Hong Kong, 2007. Includes bibliographical references (leaves 83-90). Abstracts in English and Chinese.
    Contents: 1. Introduction (Bump Height Inspection; Our Height Inspection System; Thesis Outline). 2. Background (Wafer Bumps; Common Defects of Wafer Bumps; Traditional Methods for Bump Inspection). 3. Biplanar Disparity Method (Problem Nature; System Overview; Biplanar Disparity Matrix D; Planar Homography; Homography Estimation; Harris Corner Detector; Experiments: Synthetic and Real Image; Conclusion and Problems). 4. Paraplanar Disparity Method (The Parallel Constraint; Homography Estimation; Experiments: Synthetic and Real Image). 5. Conclusion and Future Work (Summary of the Contributions; Future Work). Publication Related to This Work. Bibliography.

    Method of on road vehicle tracking

    Get PDF

    The Extraction and Use of Image Planes for Three-dimensional Metric Reconstruction

    Get PDF
    The three-dimensional (3D) metric reconstruction of a scene from two-dimensional images is a fundamental problem in Computer Vision. The major bottleneck in the process of retrieving such structure lies in the task of recovering the camera parameters. These parameters can be calculated either through a pattern-based calibration procedure, which requires an accurate knowledge of the scene, or using a more flexible approach, known as camera autocalibration, which exploits point correspondences across images. While pattern-based calibration requires the presence of a calibration object, autocalibration constraints are often cast into nonlinear optimization problems that are sensitive to both image noise and initialization. In addition, autocalibration fails for some particular camera motions. To overcome these problems, we propose to combine scene and autocalibration constraints and address in this thesis (a) the problem of extracting geometric information of the scene from uncalibrated images, (b) the problem of obtaining a robust estimate of the affine calibration of the camera, and (c) the problem of upgrading and refining the affine calibration into a metric one. In particular, we propose a method for identifying the major planar structures in a scene from images and another method to recognize parallel pairs of planes whenever these are available. The identified parallel planes are then used to obtain a robust estimate of both the affine and metric 3D structure of the scene without resorting to the traditional, error-prone calculation of vanishing points. We also propose a refinement method which, unlike existing ones, is capable of simultaneously incorporating plane parallelism and perpendicularity constraints in the autocalibration process. Our experiments demonstrate that the proposed methods are robust to image noise and provide satisfactory results.
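    A core building block in plane-based work like this is estimating the homography induced by a scene plane between two views from point correspondences. A minimal sketch of the standard direct linear transform (DLT), assuming noise-free correspondences; the thesis's actual robust estimation and autocalibration machinery is considerably more involved:

    ```python
    import numpy as np

    def dlt_homography(src_pts, dst_pts):
        """Estimate the 3x3 homography mapping src_pts -> dst_pts by DLT.

        src_pts, dst_pts: (N, 2) arrays of N >= 4 point correspondences.
        Each correspondence contributes two linear equations in the nine
        entries of H; the solution is the right singular vector of the
        stacked system with the smallest singular value.
        """
        A = []
        for (x, y), (u, v) in zip(src_pts, dst_pts):
            A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
            A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
        _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
        H = Vt[-1].reshape(3, 3)
        return H / H[2, 2]     # fix the arbitrary scale
    ```

    In practice the points would be normalised first and the estimate embedded in a robust loop, since raw DLT is sensitive to noise and outliers, which is precisely the sensitivity the thesis's combined scene-plus-autocalibration constraints aim to mitigate.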