152 research outputs found

    Enhanced Tracking Aerial Image by Applying Frame Extraction Technique

    Get PDF
    An image registration method is introduced that registers images taken from different views of a 3-D scene in the presence of occlusion. The method withstands considerable occlusion and homogeneous image areas; its only requirements are that the ground be locally flat and that sufficient ground cover be visible in the frames being registered. A fusion technique is used to compensate for blurred frames. Earlier systems sometimes failed at object recognition and could not report the correct area, path, and location of a tracked object; with object recognition, the proposed system recovers these from motion imagery, static images, video, and CCTV footage. Errors that occlusion would otherwise introduce are handled by the registration technique. The method is applicable to investigation departments that track illegal operations such as smuggling. Because a conventional fixed camera cannot return clear imagery of such activity, drones and aircraft are used to capture long-distance, multi-view images

    Matching Local Invariant Features with Contextual Information: an Experimental Evaluation

    Get PDF
    The main advantage of using local invariant features is their local character which yields robustness to occlusion and varying background. Therefore, local features have proved to be a powerful tool for finding correspondences between images, and have been employed in many applications. However, the local character limits the descriptive capability of features descriptors, and local features fail to resolve ambiguities that can occur when an image shows multiple similar regions. Considering some global information will clearly help to achieve better performances. The question is which information to use and how to use it. Context can be used to enrich the description of the features, or used in the matching step to filter out mismatches. In this paper, we compare different recent methods which use context for matching and show that better results are obtained if contextual information is used during the matching process. We evaluate the methods in two applications: wide baseline matching and object recognition, and it appears that a relaxation based approach gives the best results
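The pipeline the paper evaluates can be illustrated with a minimal sketch: match descriptors by nearest neighbour with a ratio test, then use contextual information to filter mismatches. The median-displacement consensus below is a deliberately crude stand-in for the relaxation-based filtering the paper finds best; all function names and thresholds are illustrative assumptions, not the paper's method.

```python
import numpy as np

def match_with_context(desc1, pts1, desc2, pts2, ratio=0.8):
    """Nearest-neighbour descriptor matching followed by a simple
    contextual filter: a candidate match survives only if its
    displacement vector agrees with the consensus displacement
    (a crude stand-in for relaxation-based mismatch filtering)."""
    # pairwise descriptor distances (n1 x n2)
    d = np.linalg.norm(desc1[:, None, :] - desc2[None, :, :], axis=2)
    nn = np.argmin(d, axis=1)
    two_best = np.sort(d, axis=1)[:, :2]
    # Lowe-style ratio test keeps only locally unambiguous matches
    keep = two_best[:, 0] < ratio * two_best[:, 1]
    cand = [(int(i), int(nn[i])) for i in np.flatnonzero(keep)]
    # contextual check: compare each displacement with the median one
    disp = np.array([pts2[j] - pts1[i] for i, j in cand])
    med = np.median(disp, axis=0)
    tol = 0.25 * np.linalg.norm(med) + 1e-6
    return [m for m, v in zip(cand, disp) if np.linalg.norm(v - med) <= tol]
```

Even this weak global constraint removes ambiguous matches between repeated similar regions that purely local descriptors cannot distinguish.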

    Automatic visual recognition using parallel machines

    Get PDF
    Invariant features and quick matching algorithms are two major concerns in the area of automatic visual recognition. The former reduces the size of an established model database, and the latter shortens the computation time. This dissertation discusses both line invariants under perspective projection and a parallel implementation of a dynamic programming technique for shape recognition. The feasibility of using parallel machines is demonstrated through the dramatically reduced time complexity. In this dissertation, our algorithms are implemented on the AP1000 MIMD parallel machine. For processing an object with n features, the time complexity of the proposed parallel algorithm is O(n), while that of a uniprocessor is O(n^2). Two applications, one for shape matching and the other for chain-code extraction, are used to demonstrate the usefulness of our methods. Invariants of four general lines under perspective projection are also discussed. In contrast to approaches based on epipolar geometry, we investigate the invariants under isotropy subgroups. Theoretically, two independent invariants can be found for four general lines in 3D space. In practice, we show how to obtain these two invariants from the projective images of four general lines without the need for camera calibration. A projective invariant recognition system based on a hypothesis-generation-testing scheme is run on the hypercube parallel architecture. Object recognition is achieved by matching scene projective invariants to model projective invariants, a process called transfer. A hypothesis-generation-testing scheme is then implemented on the hypercube parallel architecture
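The parallel speed-up argument can be sketched without an AP1000: distribute the per-model matching work across concurrent workers so wall-clock time divides by the worker count. The contour score below is an illustrative stand-in for the dissertation's dynamic-programming matcher, and thread-level concurrency stands in for the MIMD machine; both are assumptions for the sketch.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def contour_score(scene, model):
    """Mean nearest-point distance between two contours, a simple
    stand-in for the dissertation's dynamic-programming matcher."""
    d = np.linalg.norm(scene[:, None, :] - model[None, :, :], axis=2)
    return float(d.min(axis=1).mean())

def recognize(scene, models, workers=4):
    """Score the scene against every model concurrently; with one
    worker per model the wall-clock cost divides by the worker count,
    a thread-level analogue of the processor-per-feature speed-up."""
    with ThreadPoolExecutor(max_workers=workers) as ex:
        scores = list(ex.map(lambda m: contour_score(scene, m), models))
    return int(np.argmin(scores))
```

The returned index identifies the model whose contour best matches the scene.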

    A single-lobe photometric stereo approach for heterogeneous material

    Get PDF
    Shape from shading with multiple light sources is an active research area, and a diverse range of approaches have been proposed in recent decades. However, devising a robust reconstruction technique still remains a challenging goal, as the image acquisition process is highly nonlinear. Recent Photometric Stereo variants rely on simplifying assumptions in order to make the problem solvable: light propagation is still commonly assumed to be uniform, and the Bidirectional Reflectance Distribution Function is assumed to be diffuse, with little attention paid to specular materials. In this work, we introduce a well-posed formulation based on partial differential equations (PDEs) for a unified reflectance function that can model both diffuse and specular reflections. We base our derivation on ratios of images, which makes the model independent of photometric invariants and yields a well-posed differential problem based on a system of quasi-linear PDEs with discontinuous coefficients. In addition, we directly solve a differential problem for the unknown depth, thus avoiding the intermediate step of approximating the normal field. A variational approach is presented ensuring robustness to noise and outliers (such as black shadows), and this is confirmed with a wide range of experiments on both synthetic and real data, where we compare favorably to the state of the art. Roberto Mecca is a Marie Curie fellow of the “Istituto Nazionale di Alta Matematica” (Italy) for a project shared between the Department of Engineering, University of Cambridge, and the Department of Mathematics, University of Bologna
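The diffuse baseline that this paper generalises is classical least-squares Photometric Stereo: under the Lambertian assumption, intensity is linear in the scaled normal, so normals and albedo follow from one linear solve per pixel. The sketch below shows only that baseline, not the paper's ratio-of-images PDE formulation; the function name and interfaces are assumptions.

```python
import numpy as np

def lambertian_ps(I, L):
    """Classical least-squares Photometric Stereo under the diffuse
    (Lambertian) assumption.  I is a k x npix intensity matrix taken
    under k known light directions L (k x 3); since I = L @ (rho * n),
    a linear solve recovers G = rho * n per pixel.  Returns unit
    normals (3 x npix) and albedo (npix,)."""
    G, *_ = np.linalg.lstsq(L, I, rcond=None)   # G = albedo * normal
    rho = np.linalg.norm(G, axis=0)
    N = G / np.maximum(rho, 1e-12)
    return N, rho
```

The paper avoids exactly this intermediate normal field by solving a differential problem for depth directly, and drops the diffuse-only assumption.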

    Robust feature matching across widely separated color images

    Full text link

    Geometric and photometric affine invariant image registration

    Get PDF
    This thesis presents a solution to the correspondence problem for the registration of wide-baseline images taken from uncalibrated cameras. We propose an affine invariant descriptor that combines the geometry and photometry of the scene to find correspondences between the two views. The geometric affine invariant component of the descriptor is based on the affine arc-length metric, whereas the photometry is analysed by invariant colour moments. A graph structure represents the spatial distribution of the primitive features: nodes correspond to detected high-curvature points, whereas arcs represent connectivities established by extracted contours. After matching, we refine the search for correspondences using a maximum likelihood robust algorithm. We have evaluated the system over synthetic and real data. A limitation of the method is the propagation of errors introduced by approximations in the system. BAE Systems; Selex Sensors and Airborne Systems
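The geometric component named in the abstract, the affine arc-length, can be sketched directly: the element ds = |x'y'' - y'x''|^(1/3) dt scales by the constant |det A|^(1/3) under an affine map A, so the cumulative length normalised to [0, 1] gives an affine-invariant contour parameterisation. This is a minimal illustration of that one property, not the thesis's full descriptor.

```python
import numpy as np

def affine_arclength(curve):
    """Normalised affine arc-length of a planar contour (n x 2).
    Because ds scales by the constant |det A|^(1/3) under an affine
    map, the cumulative length normalised to [0, 1] is identical for
    a contour and any affinely warped copy of it."""
    d1 = np.gradient(curve, axis=0)   # first derivative along the contour
    d2 = np.gradient(d1, axis=0)      # second derivative
    ds = np.abs(d1[:, 0] * d2[:, 1] - d1[:, 1] * d2[:, 0]) ** (1.0 / 3.0)
    s = np.cumsum(ds)
    return s / s[-1]
```

Sampling a photometric quantity at fixed fractions of this parameterisation yields features that two affinely related views can share.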

    Multimodal Three Dimensional Scene Reconstruction, The Gaussian Fields Framework

    Get PDF
    The focus of this research is on building 3D representations of real-world scenes and objects using different imaging sensors: primarily range acquisition devices (such as laser scanners and stereo systems) that allow the recovery of 3D geometry, and multi-spectral image sequences, including visual and thermal IR images, that provide additional scene characteristics. The crucial technical challenge that we addressed is the automatic point-set registration task. In this context, our main contribution is the development of an optimization-based method at the core of which lies a unified criterion that solves simultaneously for the dense point correspondence and transformation recovery problems. The new criterion has a straightforward expression in terms of the datasets and the alignment parameters and was used primarily for 3D rigid registration of point-sets. However, it also proved useful for feature-based multimodal image alignment. We derived our method from simple Boolean matching principles by approximation and relaxation. One of the main advantages of the proposed approach, as compared to the widely used class of Iterative Closest Point (ICP) algorithms, is convexity in the neighborhood of the registration parameters and continuous differentiability, allowing for the use of standard gradient-based optimization techniques. Physically, the criterion is interpreted in terms of a Gaussian force field exerted by one point-set on the other. This formulation proved useful for controlling and increasing the region of convergence, allowing for more autonomy in correspondence tasks. Furthermore, the criterion can be computed with linear complexity using recently developed Fast Gauss Transform numerical techniques. In addition, we introduced a new local feature descriptor, derived from visual saliency principles, which significantly enhanced the performance of the registration algorithm.
The resulting technique was subjected to a thorough experimental analysis that highlighted its strengths and showed its limitations. Our current applications are in the field of 3D modeling for inspection, surveillance, and biometrics. However, since this matching framework can be applied to any type of data that can be represented as N-dimensional point-sets, the scope of the method reaches many more pattern analysis applications
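The Gaussian-force-field interpretation suggests a concrete form for the criterion: a sum of Gaussian affinities between every transformed model point and every scene point, smooth and differentiable in the alignment parameters, so standard gradient ascent applies. The 2-D translation-only sketch below is an assumed minimal instance (the dissertation solves full rigid registration and accelerates the sums with the Fast Gauss Transform); names and step sizes are illustrative.

```python
import numpy as np

def gaussian_field_energy(P, Q, t, sigma=1.0):
    """Sum of Gaussian affinities between the translated point-set
    P + t and the scene set Q.  Unlike ICP's hard closest-point
    assignment, this criterion is smooth and differentiable in t."""
    diff = (P + t)[:, None, :] - Q[None, :, :]
    return float(np.exp(-(diff ** 2).sum(axis=2) / sigma ** 2).sum())

def register_translation(P, Q, sigma=1.0, lr=0.01, iters=500):
    """Gradient ascent on the criterion over a translation t, a 2-D
    stand-in for the rigid registration solved in the dissertation."""
    t = np.zeros(P.shape[1])
    for _ in range(iters):
        diff = (P + t)[:, None, :] - Q[None, :, :]           # n x m x d
        w = np.exp(-(diff ** 2).sum(axis=2) / sigma ** 2)    # affinities
        grad = (-2.0 / sigma ** 2) * (w[:, :, None] * diff).sum(axis=(0, 1))
        t = t + lr * grad
    return t
```

Widening sigma widens the basin of attraction, which is the "controlling and increasing the region of convergence" knob described above.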

    The Application and Analysis of Automated Triangulation of Video Imagery by Successive Relative Orientation

    Get PDF
    The purpose of this thesis is the analysis and evaluation of methods to orient a strip of images using an automated approach. Automatic orientation of strips of video frame imagery would facilitate the construction of three-dimensional models with less demand on a human operator for tedious measurement. Often one has no control points, so only relative orientation is possible. The relative orientation process gives camera parameters such as attitudes and selected baseline components, and it can be implemented using either collinearity or coplanarity equations. To automate point selection, pass and/or tie points were detected by the Colored Harris Laplace corner detector along a strip of images and matched by cross correlation across multiple scales. However, the matches from cross correlation still include outliers; therefore, the Random Sample Consensus (RANSAC) method with the essential matrix was applied to retain only inlier point pairs. Relative orientation was then performed for this series of video imagery using the coplanarity condition. However, there is no guarantee that the three rays for a single point will intersect in a single point; therefore, for all photos subsequent to the first one, the scale restraint equation was applied along with the coplanarity equation to enforce this three-ray intersection. At this point, the Kalman Filtering algorithm was introduced to address the problem of uncompensated systematic error accumulation. Kalman Filtering is more parsimonious of computing effort than Simultaneous Least Squares, and it gives superior results compared with Cantilever Least Squares models by including trajectory information. To conform with accepted photogrammetric standards, the camera was calibrated with selected frames extracted from the video stream. For the calibration, minimal constraints were applied.
Coplanarity and scale restraint equations in relative orientation were also used as the initial approximation for the nonlinear bundle block adjustment that accomplished camera calibration. For the calibration imagery, the main building of the bell tower at the University of Texas was used as the object because it has many three-dimensional features with an open view, and the data could be acquired at infinity focus distance. Two further calibrations were performed with targets placed inside a laboratory room. The automated relative orientation experiment was carried out with one terrestrial strip, one aerial strip, and one simulated strip. The real data were acquired by a high-definition camcorder; both terrestrial and aerial data were acquired at the Purdue University campus, the terrestrial data from a moving vehicle and the aerial data from a Cessna aircraft. The results from the aerial and simulated cases were evaluated by control points. The three estimation strategies are stripwise Simultaneous, Kalman Filtering, and Cantilever, all employing coplanarity equations. For the aerial and simulated cases, an absolute comparison was made between the three experimental techniques and the bundle block adjustment. In all cases, the relative solutions were transformed to ground coordinates by a rigid-body, 7-parameter transformation. In retrospect, the aerial strip was too short (8 photographs) to demonstrate the compensation of strip-formation errors, so a simulated strip (30 photographs) was used for this purpose. Absolute accuracy for the aerial and simulated approaches was evaluated by ground control points, precision was evaluated by the error ellipsoid at each intersected point, and memory occupancy was measured to compare the resource requirements of each approach.
When considering computing resources and absolute accuracy, the Kalman Filter solution is superior to the Simultaneous and Cantilever methods
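The outlier-rejection step described above follows RANSAC's hypothesise-and-verify loop: repeatedly fit a model to a minimal sample of matches and keep the hypothesis with the most supporting inliers. The thesis instantiates this with the essential matrix; the sketch below substitutes a one-pair translation model so the loop stays short, and all names and thresholds are assumptions.

```python
import numpy as np

def ransac_inliers(p1, p2, thresh=1.0, iters=100, seed=0):
    """RANSAC outlier rejection on putative point matches p1[i] <-> p2[i].
    A translation hypothesised from a single random pair stands in for
    the thesis's essential-matrix model; the hypothesis explaining the
    most matches wins, and its inlier mask is returned."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(p1), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(p1))        # minimal sample for a translation
        t = p2[i] - p1[i]                # hypothesised model
        resid = np.linalg.norm(p2 - (p1 + t), axis=1)
        inl = resid < thresh
        if inl.sum() > best.sum():
            best = inl
    return best
```

Because each hypothesis needs only a minimal sample, a few gross mismatches cannot drag the winning model away from the consensus motion.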

    Image Matching based on Curvilinear Regions

    Get PDF

    A 3-D Vision-Based Man–Machine Interface For Hand-Controlled Telerobot

    Full text link