Machine-Vision Aids for Improved Flight Operations
The development of machine-vision-based pilot aids to help reduce night approach and landing accidents is explored. The techniques developed are motivated by the desire to use the available navigation information sources, such as the airport lighting layout, attitude sensors and the Global Positioning System, to derive more precise aircraft position and orientation information. The fact that the airport lighting geometry is known and that images of the airport lighting can be acquired by a camera has led to the synthesis of machine-vision-based algorithms for estimating runway-relative aircraft position and orientation. The main contribution of this research is the synthesis of seven navigation algorithms based on two broad families of solutions. The first family consists of techniques that reconstruct the airport lighting layout from the camera image and then estimate the aircraft position components by comparing the reconstructed lighting layout geometry with the known model of the airport lighting layout geometry. The second family comprises techniques that synthesize an image of the airport lighting layout using a camera model and estimate the aircraft position and orientation by comparing this synthetic image with the actual image of the airport lighting acquired by the camera. Algorithms 1 through 4 belong to the first family of solutions, while Algorithms 5 through 7 belong to the second. Algorithms 1 and 2 are parameter optimization methods, Algorithms 3 and 4 are feature correspondence methods, and Algorithms 5 through 7 are Kalman-filter-centered algorithms. Results of computer simulation are presented to demonstrate the performance of all seven algorithms developed.
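The second family of methods lends itself to a compact illustration. The sketch below is not one of the seven algorithms from the thesis; it only shows the underlying idea of synthesizing an image of known runway lights with a pinhole camera model and refining the aircraft position estimate until the synthesized and measured light images agree. The function names, camera-frame convention and Gauss-Newton refinement are assumptions introduced purely for illustration.

```python
import numpy as np

def rotation_matrix(roll, pitch, yaw):
    """Runway-to-camera rotation from Euler angles in radians (assumed convention)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def project_lights(lights_runway, position, attitude, focal_length):
    """Synthesize image coordinates of known runway lights with a pinhole camera model."""
    R = rotation_matrix(*attitude)
    points_cam = (lights_runway - position) @ R.T      # light positions in the camera frame
    return focal_length * points_cam[:, :2] / points_cam[:, 2:3]  # perspective projection

def refine_position(lights_runway, measured_pixels, position0, attitude, focal_length,
                    iterations=20, step=1e-3):
    """Gauss-Newton style refinement of the aircraft position so that the synthesized
    light image matches the measured one (orientation held fixed for brevity)."""
    position = position0.astype(float).copy()
    for _ in range(iterations):
        predicted = project_lights(lights_runway, position, attitude, focal_length)
        residual = (measured_pixels - predicted).ravel()
        # numerical Jacobian of the image residual w.r.t. the three position components
        J = np.zeros((residual.size, 3))
        for k in range(3):
            dp = np.zeros(3); dp[k] = step
            pred_k = project_lights(lights_runway, position + dp, attitude, focal_length)
            J[:, k] = ((measured_pixels - pred_k).ravel() - residual) / step
        delta, *_ = np.linalg.lstsq(J, -residual, rcond=None)  # least-squares position update
        position += delta
    return position
```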
A real-time low-cost vision sensor for robotic bin picking
This thesis presents an integrated approach to a vision sensor for bin picking. The vision system that has been devised consists of three major components. The first addresses the implementation of a bifocal range sensor, which estimates depth by measuring the relative blurring between two images captured with different focal settings. A key element in the success of this approach is that it overcomes some of the limitations associated with other related implementations, and the experimental results indicate that the precision offered by the sensor discussed in this thesis is sufficient for a large variety of industrial applications. The second component deals with the implementation of an edge-based segmentation technique, which is applied in order to detect the boundaries of the objects that define the scene. An important issue related to this segmentation technique is minimising the errors in the edge-detected output, an operation that is carried out by analysing the information associated with the singular edge points. The last component addresses object recognition and pose estimation using the information resulting from the application of the segmentation algorithm. The recognition stage consists of matching the primitives derived from the scene regions, while the pose estimation is addressed using an appearance-based approach augmented with a range data analysis. The developed system is suitable for real-time operation, and in order to demonstrate the validity of the proposed approach it has been examined under varying real-world scenes.
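As a rough illustration of the first component, the sketch below builds a per-pixel relative-sharpness map from two differently focused images and converts it to depth through a calibration table. This is a generic depth-from-defocus outline, not the bifocal sensor described in the thesis; the function names, the Laplacian-energy measure and the calibration table are all assumptions.

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def relative_blur_map(near_focus, far_focus, window=15, eps=1e-6):
    """Per-pixel ratio of local high-frequency energy between two images taken
    with different focal settings; used here as a stand-in for the relative
    blur measure (the thesis's exact measure may differ)."""
    hf_near = uniform_filter(laplace(near_focus.astype(float)) ** 2, window)
    hf_far = uniform_filter(laplace(far_focus.astype(float)) ** 2, window)
    return (hf_near + eps) / (hf_far + eps)

def depth_from_ratio(ratio, calibration):
    """Map the sharpness ratio to depth using a hypothetical calibration table:
    (ratio, depth) pairs measured on targets at known distances, with the
    ratio values sorted in ascending order."""
    ratios, depths = calibration
    return np.interp(ratio, ratios, depths)
```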
Automatic recognition of three dimensional planar objects by Hough transform type operations
This thesis describes an investigation into the recognition, from range data, of three-dimensional objects with plane surfaces. A Hough transform type operation, adapted for three dimensions and based on a voting scheme, is used to identify objects.
First, all available edges of the object present in the scene are extracted. Then, two edges of the scene object and two lines of a model are taken at a time. These are pruned and potential matching pairs are selected. Next, the geometric transformations necessary to take them into a fixed position in space are calculated. The matrices resulting from successful matches are computed and stored. The presence of an object similar to a model results in the repeated generation of the same matrices, so recognition is achieved by choosing the model with the most frequently occurring matrix.
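A hedged sketch of such a pairwise voting scheme follows. It is not the thesis's algorithm: lines are assumed to be given as (midpoint, unit direction) pairs, the rotation is recovered with a Kabsch/SVD step, and each hypothesized transform is quantized so that repeated hypotheses accumulate votes in the same bin. The names, tolerances and quantization step are illustrative assumptions, and direction-sign ambiguity is ignored for brevity.

```python
import numpy as np
from itertools import combinations
from collections import Counter

def rotation_from_directions(model_dirs, scene_dirs):
    """Least-squares rotation aligning two model line directions with the
    corresponding scene line directions (Kabsch/SVD)."""
    H = model_dirs.T @ scene_dirs
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    return Vt.T @ D @ U.T

def vote_for_model(scene_lines, model_lines, angle_tol=0.05, quantum=0.02):
    """Hough-style voting over rigid transforms. Each line is a (midpoint,
    unit direction) pair; every compatible scene-pair/model-pair match
    hypothesizes a transform, and the most frequently generated transform
    (and its vote count) is returned."""
    votes = Counter()
    for (ps1, ds1), (ps2, ds2) in combinations(scene_lines, 2):
        for (pm1, dm1), (pm2, dm2) in combinations(model_lines, 2):
            # pruning: the angle between the two lines must agree in scene and model
            if abs(abs(ds1 @ ds2) - abs(dm1 @ dm2)) > angle_tol:
                continue
            R = rotation_from_directions(np.vstack([dm1, dm2]), np.vstack([ds1, ds2]))
            t = 0.5 * (ps1 + ps2) - R @ (0.5 * (pm1 + pm2))
            # quantise the transform so near-identical hypotheses fall into one bin
            key = tuple(np.round(np.hstack([R.ravel(), t]) / quantum).astype(int))
            votes[key] += 1
    return votes.most_common(1)[0] if votes else None
```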
In order to extract edges, a vision system is designed and set up, in which a stripe of light generated by a projector is employed together with a camera. A procedure to calibrate the system and extract three-dimensional information is devised. Objects are then scanned and, from the images taken, the coordinates of edge points are computed. Next, the edge points are linked, the edges of the object are extracted, and the recognition algorithm is applied.
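For such a stripe-of-light arrangement, the 3D coordinates of an illuminated point can be recovered by intersecting the camera ray through its pixel with the calibrated plane of the light stripe. The short sketch below shows this ray-plane triangulation under a simple pinhole model with assumed parameter names; the thesis's actual calibration procedure may differ.

```python
import numpy as np

def triangulate_stripe_point(pixel, focal_length, principal_point, plane_normal, plane_d):
    """Recover the 3D point (camera frame) where the light stripe is seen at
    `pixel`: intersect the viewing ray with the stripe plane n . X = d,
    where n and d come from calibration (assumed available)."""
    u = pixel[0] - principal_point[0]
    v = pixel[1] - principal_point[1]
    ray = np.array([u / focal_length, v / focal_length, 1.0])  # viewing-ray direction
    depth = plane_d / (plane_normal @ ray)                     # scale at which the ray meets the plane
    return depth * ray
```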
The system is tested on objects of varying complexity. Recognition is performed in two different categories: first, objects are placed on a specific face; then they are recognised in arbitrary position and orientation. For each object, the results and implications of the recognition algorithm are investigated. A modified version of the recognition algorithm using two and three connected lines is tested and compared with the previous experiments.