Ranging of Aircraft Using Wide-baseline Stereopsis
The purpose of this research was to investigate the efficacy of wide-baseline stereopsis as a method of ranging aircraft, specifically as a possible sense-and-avoid solution for Unmanned Aerial Systems. Two studies were performed: the first was an experimental pilot study examining the ability of humans to range in-flight aircraft, and the second a wide-baseline stereopsis study ranging in-flight aircraft using a 14.32-meter baseline and two 640 x 480 pixel charge-coupled device cameras. An experimental research design was used in both studies. Humans in the pilot study ranged aircraft with a mean absolute error of 50.34%. The wide-baseline stereo system ranged aircraft within 2 kilometers with a mean absolute error of 17.62%. A t-test showed a significant difference between the mean absolute error of the humans in the pilot study and that of the wide-baseline stereo system. The results suggest that the wide-baseline system is both more consistent and more accurate than humans.
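The ranging principle behind this study is the standard pinhole stereo relation Z = f·B/d. A minimal sketch: the 14.32 m baseline is taken from the abstract, while the focal length and disparity are illustrative assumptions, not values reported in the study.

```python
# Depth from disparity for a wide-baseline stereo rig:
# Z = f * B / d, with f the focal length in pixels, B the baseline
# in metres, and d the disparity in pixels.

def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Return the range (metres) implied by a pixel disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Illustrative numbers: the study's 14.32 m baseline, with a
# hypothetical 800 px focal length for a 640 x 480 sensor.
z = stereo_depth(focal_px=800.0, baseline_m=14.32, disparity_px=5.73)
print(f"range = {z:.0f} m")  # prints "range = 1999 m", near the 2 km test limit
```

The relation also shows why a wide baseline matters: for a fixed disparity quantisation error, range error grows with Z²/(f·B), so a larger B directly improves long-range accuracy.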
Design and Implementation of a Precision Three-Dimensional Binocular Image Tracker for Departing Aircraft
Abstract
This dissertation presents the results of the conceptualization, design, and implementation of a novel, low-cost Binocular Tracking System for departing aircraft. The design is unique in its use of commercial off-the-shelf (COTS) components and in the distinct modular algorithms developed for tracking aircraft.
Recent economic pressures and changing Federal Aviation Administration (FAA) regulations have raised serious concern that obstacle clearance requirements are not being met during commercial aircraft departures. Moreover, local airport procedures do not always align with the requirements for Terminal Instrument Procedures (TERPs) established by the FAA. The flight track data collected by this system is being used by the FAA to assess the magnitude of the problem and determine steps to align airport and TERPs procedures, while also mitigating obstacle clearance violations and thus the risk of departing aircraft encountering an obstacle.
Each of the binocular tracking systems uses three cameras. One camera is directed towards the runway, initializes the tracking algorithms, and identifies the type of aircraft. The other two cameras form the binocular tracking system. These dual cameras are aligned in a vergent stereo configuration across the departure path to provide maximum overlap between their fields of view and thus superior depth resolution.
The modular tracking algorithms allow a large volume of tracking data to be accumulated, providing the FAA with information on departing aircraft. This dissertation discusses the details of the binocular tracking system's conceptualization, design, and implementation, including hardware and software development of the tracking system. It also covers system setup, data collection, processing, and error analysis of the system's performance in the field.
A proposal for automatic fruit harvesting by combining a low cost stereovision camera and a robotic arm
This paper proposes the development of an automatic fruit harvesting system combining a low-cost stereovision camera, mounted on the gripper tool, with a robotic arm. The stereovision camera is used to estimate the size, distance, and position of the fruits, whereas the robotic arm is used to mechanically pick up the fruits. The low-cost stereovision system was tested in laboratory conditions with a small reference object, an apple, and a pear at 10 different intermediate distances from the camera. The average distance error ranged from 4% to 5%, and the average diameter error was up to 30% for the small object and from 2% to 6% for the pear and the apple. The stereovision system was attached to the gripper tool in order to obtain the relative distance, orientation, and size of the fruit. The harvesting stage requires the initial fruit location, computation of the inverse kinematics of the robotic arm to place the gripper tool in front of the fruit, and a final pickup approach that iteratively adjusts the vertical and horizontal position of the gripper tool in a closed visual loop. The complete system was tested in controlled laboratory conditions with uniform illumination applied to the fruits. As future work, the system will be tested and improved in conventional outdoor farming conditions.
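The size-and-distance estimation described above follows directly from the pinhole model: depth comes from stereo disparity, and metric diameter from the fruit's apparent pixel width at that depth. A minimal sketch under assumed (not reported) camera parameters:

```python
# Fruit distance and diameter from a stereo pair under a pinhole model.
# All camera parameters below are hypothetical, for illustration only.

def fruit_depth_m(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth from disparity: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

def fruit_diameter_m(width_px: float, depth_m: float, focal_px: float) -> float:
    """Metric size from apparent pixel width: D = w * Z / f."""
    return width_px * depth_m / focal_px

# Hypothetical rig: 6 cm baseline, 700 px focal length.
depth = fruit_depth_m(700.0, 0.06, 60.0)     # 0.70 m to the fruit
diam = fruit_diameter_m(75.0, depth, 700.0)  # 0.075 m, an apple-sized object
print(depth, diam)
```

Note how the diameter estimate inherits the depth error multiplicatively, which is consistent with the abstract's observation that diameter errors (up to 30% for small objects) exceed distance errors (4-5%).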
Camera positioning for 3D panoramic image rendering
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University London. Virtual camera realisation and the proposition of a trapezoidal camera architecture are the two broad contributions of this thesis. Firstly, multiple cameras and their arrangement constitute a critical component affecting the integrity of visual content acquisition for multi-view video. Currently, linear, convergent, and divergent arrays are the prominent camera topologies adopted. However, the large number of cameras required and their synchronisation are two of the prominent challenges usually encountered. The use of virtual cameras can significantly reduce the number of physical cameras used with respect to any of the known camera structures, hence adequately reducing some of the other implementation issues. This thesis explores the use of image-based rendering, with and without geometry, in implementations leading to the realisation of virtual cameras. The virtual camera implementation was carried out from the perspective of a depth map (geometry) and the use of multiple image samples (no geometry). Prior to the virtual camera realisation, the generation of depth maps was investigated using region match measures widely known for solving the image point correspondence problem. The constructed depth maps were compared with those generated using the dynamic programming approach. In both the geometry and no-geometry approaches, the virtual cameras lead to the rendering of views from a textured depth map, the construction of a 3D panoramic image of a scene by stitching multiple image samples and performing superposition on them, and the computation of a virtual scene from a stereo pair of panoramic images. The quality of these rendered images was assessed through either objective or subjective analysis in the Imatest software. Furthermore, metric reconstruction of a scene was performed by re-projection of the pixel points from multiple image samples with a single centre of projection, using a sparse bundle adjustment algorithm. The statistical summary obtained after the application of this algorithm provides a gauge for the efficiency of the optimisation step. The optimised data was then visualised in the Meshlab software environment, providing the reconstructed scene. Secondly, with any of the well-established camera arrangements, all cameras are usually constrained to the same horizontal plane. Therefore, occlusion becomes an extremely challenging problem, and a robust camera set-up is required to resolve the hidden parts of scene objects. To adequately meet the visibility condition for scene objects, given that occlusion of the same scene objects can occur, a multi-plane camera structure is highly desirable. Therefore, this thesis also explores a trapezoidal camera structure for image acquisition. The approach here is to assess the feasibility and potential of several physical cameras of the same model being sparsely arranged on the edges of an efficient trapezoid graph. This is implemented in both Matlab and Maya. The depth maps rendered in Matlab are of better quality.
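The depth-map generation via region match measures mentioned in the abstract can be illustrated with a basic sum-of-absolute-differences block matcher over a rectified stereo pair. This is a simplified sketch, not the thesis's implementation; the window size and disparity range are arbitrary choices.

```python
import numpy as np

def sad_disparity(left: np.ndarray, right: np.ndarray,
                  max_disp: int = 16, half_win: int = 2) -> np.ndarray:
    """Dense disparity for a rectified grayscale pair via SAD block
    matching: for each left-image window, search leftward in the right
    image and keep the shift with the lowest matching cost."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half_win, h - half_win):
        for x in range(half_win + max_disp, w - half_win):
            patch = left[y - half_win:y + half_win + 1,
                         x - half_win:x + half_win + 1].astype(np.int32)
            costs = [
                np.abs(patch - right[y - half_win:y + half_win + 1,
                                     x - d - half_win:x - d + half_win + 1]
                       .astype(np.int32)).sum()
                for d in range(max_disp)
            ]
            disp[y, x] = int(np.argmin(costs))  # winner-take-all match
    return disp
```

Dynamic-programming matchers, which the thesis compares against, replace this per-pixel winner-take-all step with an optimal path through the cost matrix along each scanline, enforcing ordering and smoothness.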
Towards System Agnostic Calibration of Optical See-Through Head-Mounted Displays for Augmented Reality
This dissertation examines the developments and progress of spatial calibration procedures for Optical See-Through (OST) Head-Mounted Display (HMD) devices for visual Augmented Reality (AR) applications. Rapid developments in commercial AR systems have created an explosion of OST device options, not only for research and industrial purposes but for the consumer market as well. This expansion in hardware availability is matched by a need for intuitive, standardized calibration procedures that are easily completed by novice users and readily applicable across the largest range of hardware options. This demand for robust, uniform calibration schemes is the driving motive behind the original contributions offered within this work. A review of prior surveys and a canonical description of AR and OST display developments is provided before narrowing the contextual scope to the research questions evolving within the calibration domain. Both established and state-of-the-art calibration techniques and their general implementations are explored, along with prior user study assessments and the prevailing evaluation metrics and practices employed within them. The original contributions begin with a user study comparing and contrasting the accuracy and precision of an established manual calibration method against a state-of-the-art semi-automatic technique. This is the first formal evaluation of any non-manual approach and provides insight into the current usability limitations of present techniques and the complexities of next-generation methods yet to be solved. The second study investigates the viability of a user-centric approach to OST HMD calibration through a novel adaptation of manual calibration to consumer-level hardware.
Additional contributions describe the development of a complete demonstration application incorporating user-centric methods, a novel strategy for visualizing both calibration results and registration error from the user's perspective, and a robust, intuitive presentation style for binocular manual calibration. The final study further investigates the accuracy differences observed between user-centric and environment-centric methodologies. The dissertation concludes with a summary of the contribution outcomes and their impact on existing AR systems and research endeavors, as well as a short look ahead into future extensions and paths that continued calibration research should explore.
On the popularization of digital close-range photogrammetry: a handbook for new users.
National Technical University of Athens -- Master's thesis. Interdisciplinary-Interdepartmental Postgraduate Studies Programme (D.P.M.S.) "Geoinformatics".
Colour and spatial pattern discrimination in human vision
A Depth-Based Computer Vision Approach to Unmanned Aircraft System Landing with Optimal Positioning
High traffic congestion in cities can make it difficult to deliver appropriate aid to people in need of emergency services. Developing an autonomous aerial medical evacuation system of the required size can help mitigate this constraint. The aerial system must be capable of vertical takeoff and landing to reach highly congested areas and areas that traditional aircraft cannot access. In general, the most challenging limitation of any proposed solution is the landing sequence. Several techniques have been developed over the years to land aircraft autonomously; however, very little attention has been devoted to operating strictly within highly congested urban-type environments. The goal of this research is to develop a possible solution for autonomous landing based on computer vision capture systems. By utilizing modern computer vision approaches involving depth estimation through binocular stereo vision, a depth map can be constructed. If the vision system is mounted on the bottom of an autonomous aerial system, the map represents the area below the aircraft and can be used to determine a possible landing zone. In this work, neural networks are used to isolate the ground in the computer-vision height map. Then, out of the entire visible ground area, a potential landing position is estimated. An optimization routine is developed to identify the optimal landing position within the visible area, namely the largest identifiable open area near the desired landing location. Web cameras, processed on a desktop, formed the basis of the computer vision system. The algorithms were tested and verified in a simulation effort, proving the feasibility of the approach. In addition, the system was tested on a scaled-down city scene and was able to determine an optimal landing zone.
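The search for the "largest identifiable open area" in a ground mask can be illustrated with the classic largest-square dynamic program: each cell records the side of the biggest all-free square ending there. This is a stand-in sketch for the abstract's optimization routine, not the authors' actual method.

```python
# Landing-zone selection on a binary ground mask (1 = flat, free ground).
# DP recurrence: side[r][c] = 1 + min(up, left, up-left) when the cell
# is free, so the global maximum marks the widest clear square.

def largest_open_square(free):
    """Return (side, row, col) of the bottom-right corner of the
    largest all-free square in a 2-D 0/1 grid."""
    rows, cols = len(free), len(free[0])
    side = [[0] * cols for _ in range(rows)]
    best = (0, 0, 0)
    for r in range(rows):
        for c in range(cols):
            if free[r][c]:
                side[r][c] = 1 if r == 0 or c == 0 else 1 + min(
                    side[r - 1][c], side[r][c - 1], side[r - 1][c - 1])
                if side[r][c] > best[0]:
                    best = (side[r][c], r, c)
    return best

mask = [
    [1, 1, 0, 1],
    [1, 1, 1, 1],
    [0, 1, 1, 1],
    [1, 1, 1, 1],
]
print(largest_open_square(mask))  # prints (3, 3, 3): a 3x3 clear patch
```

In practice the mask would come from thresholding the stereo height map after the neural-network ground segmentation, and candidate squares would additionally be scored by distance to the desired touchdown point.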
The Mark 3 Haploscope
A computer-operated binocular vision testing device was developed as one part of a system designed for NASA to evaluate the visual function of astronauts during spaceflight. This particular device, called the Mark 3 Haploscope, employs semi-automated psychophysical test procedures to measure visual acuity, stereopsis, phoria, fixation disparity, refractive state and accommodation/convergence relationships. Test procedures are self-administered and can be used repeatedly without subject memorization. The Haploscope was designed as one module of the complete NASA Vision Testing System. However, it is capable of stand-alone operation. Moreover, the compactness and portability of the Haploscope make possible its use in a broad variety of testing environments.
Automatic Real-Time Pose Estimation of Machinery from Images
The automatic positioning of machines is an important aspect of automation in a large number of application areas. Today, this is often done using classic geodetic sensors such as Global Navigation Satellite Systems (GNSS) and robotic total stations. In this work, a stereo camera system was developed that localizes a machine at high frequency and serves as an alternative to the aforementioned sensors. For this purpose, algorithms were developed that detect active markers on the machine in a stereo image pair, find stereo point correspondences, and estimate the pose of the machine from these. Theoretical influences and accuracies for different systems were estimated with a Monte Carlo simulation, on the basis of which the stereo camera system was designed. Field measurements were used to evaluate the actually achievable accuracies and the robustness of the prototype system, and a comparison with reference measurements from a laser tracker is presented. The estimated object pose achieved accuracies higher than [Formula: see text] for the translation components and higher than [Formula: see text] for the rotation components. As a result, 3D point accuracies higher than [Formula: see text] were achieved for the machine. For the first time, a prototype could be developed that represents a powerful, image-based alternative to the classical geodetic sensors for machine localization.
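The final step described above, estimating a machine's pose from matched 3-D marker points, is commonly solved with the Kabsch/SVD method for rigid alignment. A self-contained sketch of that standard technique (the abstract does not state which solver the authors used):

```python
import numpy as np

def estimate_pose(model_pts: np.ndarray, observed_pts: np.ndarray):
    """Rigid pose (R, t) mapping known marker coordinates in the machine
    frame to their triangulated 3-D positions: minimises
    sum ||R m_i + t - o_i||^2 via the Kabsch/SVD method.
    Both inputs are (N, 3) arrays of corresponding points, N >= 3."""
    mu_m = model_pts.mean(axis=0)
    mu_o = observed_pts.mean(axis=0)
    # Cross-covariance of the centred point sets.
    H = (model_pts - mu_m).T @ (observed_pts - mu_o)
    U, _, Vt = np.linalg.svd(H)
    # Sign correction guards against a reflection solution.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_o - R @ mu_m
    return R, t
```

With noise-free correspondences the recovery is exact; with triangulation noise, the same least-squares solution averages the marker errors, which is one reason multi-marker targets improve pose accuracy.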