
    Design and Implementation of a Precision Three-Dimensional Binocular Image Tracker for Departing Aircraft

    Abstract: This dissertation presents the conceptualization, design, and implementation of a novel, low-cost binocular tracking system for departing aircraft. The system is unique in its use of commercial off-the-shelf (COTS) components and in the distinct modular algorithms developed for tracking aircraft. Recent economic pressures and changing Federal Aviation Administration (FAA) regulations have raised serious concern that obstacle clearance requirements are not being met on commercial aircraft departures. Moreover, local airport procedures do not always align with the requirements for Terminal Instrument Procedures (TERPs) established by the FAA. The flight track data collected by this system is being used by the FAA to assess the magnitude of the problem and to determine steps to align airport and TERPs procedures, thereby mitigating obstacle clearance violations and the risk of a departing aircraft encountering an obstacle. Each binocular tracking system uses three cameras. One camera is directed towards the runway, initializes the tracking algorithms, and identifies the type of aircraft. The other two cameras form the binocular tracker: they are aligned in a vergent stereo configuration across the departure path to maximize the overlap in their fields of view and thus achieve superior depth resolution. The modular tracking algorithms allow a large volume of tracking data to be accumulated, providing the FAA with information on departing aircraft. This dissertation discusses the details of the binocular tracking system's conceptualization, design, and implementation, including hardware and software development, as well as system setup, data collection, processing, and an error analysis of the system's performance in the field.
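    The depth advantage of a vergent stereo pair comes from intersecting the bearing rays of the two converged cameras. As an illustrative sketch (not code from the dissertation; the geometry and values are assumptions), each camera reports the target's bearing angle relative to the baseline axis, and the target position is recovered by ray intersection:

```python
import math

def triangulate_bearings(baseline_m, theta_left, theta_right):
    """Intersect two bearing rays measured from the baseline axis.

    The left camera sits at x = 0, the right camera at x = baseline_m,
    both at z = 0.  Each ray is the line z = tan(theta) * (x - x_cam).
    Returns the target position (x, z) in metres.
    """
    tl = math.tan(theta_left)
    tr = math.tan(theta_right)
    if math.isclose(tl, tr):
        raise ValueError("rays are parallel; no intersection")
    x = tr * baseline_m / (tr - tl)
    return x, tl * x

# A target at (1, 2) seen over a 2 m baseline:
# the left camera sees bearing atan2(2, 1), the right sees atan2(2, -1).
x, z = triangulate_bearings(2.0, math.atan2(2, 1), math.atan2(2, -1))
```

    A wider baseline and greater field-of-view overlap improve the conditioning of this intersection, which is the geometric reason the abstract cites vergent alignment as the source of superior depth resolution.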

    Stereoscopic vision in vehicle navigation.

    Traffic sign (TS) detection and tracking is one of the main tasks of an autonomous vehicle addressed in the field of computer vision. An autonomous vehicle must have vision-based recognition of the road so it can follow the rules like every other vehicle on the road. Besides, TS detection and tracking can be used to give feedback to the driver, which can significantly increase safety in making driving decisions. For successful TS detection and tracking, changes in weather and lighting conditions should be considered. Also, the camera is in motion, which results in image distortion and motion blur. In this work, a fast and robust method is proposed for tracking stop signs in videos taken with stereoscopic cameras mounted on the car. Using the camera parameters and the detected sign, the distance between the stop sign and the vehicle is calculated. This calculated distance can be widely used in building visual driver-assistance systems.
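    For a rectified stereo pair, the distance calculation the abstract describes reduces to the classic disparity relation Z = f·B/d. A minimal sketch, with made-up camera parameters (not those of the paper):

```python
def stereo_distance(focal_px, baseline_m, disparity_px):
    """Distance to a matched feature in a rectified stereo pair:
    Z = focal_length * baseline / disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Hypothetical rig: 700 px focal length, 12 cm baseline,
# stop sign matched across the pair with a 14 px disparity.
distance = stereo_distance(700.0, 0.12, 14.0)  # -> 6.0 metres
```

    Because depth is inversely proportional to disparity, range error grows quadratically with distance, which is why sign-range estimates of this kind are most useful at the short ranges relevant to driver assistance.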

    Image processing techniques for the perception of automotive environments with applications to pedestrian detection

    This work describes the experience of the ARGO Project, which started in 1996 at the University of Parma, Italy, building on previous experience within the European PROMETHEUS Project. In 1997 the ARGO prototype vehicle was equipped with sensors and actuators, and the first version of the GOLD software system, able to locate lane markings and generic obstacles on the vehicle's path, was installed. In June 1998 the vehicle underwent a major test (the MilleMiglia in Automatico, a 2000 km tour on Italian highways) in order to evaluate the complete equipment.

    Data Fusion of Laser Range Finder and Video Camera

    For this project, a technique for fusing data from sensors is developed in order to detect, track, and classify objects against a static background. The proposed method utilizes a single video camera and a laser range finder to determine the range of specified targets or objects and to classify those targets. The module aims to detect objects or obstacles and provide the distance from the module to the target in real time using live video. Data fusion of the measurements collected from the laser range finder and the video camera is performed in MATLAB. Background subtraction is used in this project to perform object detection.
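    Background subtraction against a static background, as used here, amounts to thresholding the per-pixel difference between the current frame and a background model. A toy sketch in plain Python (the project's implementation is in MATLAB; the grids below are illustrative intensity values, not real data):

```python
def background_subtract(frame, background, threshold=30):
    """Binary foreground mask: 1 where |frame - background| > threshold."""
    return [[1 if abs(p - b) > threshold else 0
             for p, b in zip(f_row, b_row)]
            for f_row, b_row in zip(frame, background)]

background = [[10, 10, 10],
              [10, 10, 10]]
frame      = [[10, 90, 10],
              [10, 95, 12]]
mask = background_subtract(frame, background)  # -> [[0, 1, 0], [0, 1, 0]]
```

    Connected regions of the mask give the detected objects; the laser range finder then supplies the range to each region's direction, which is the fusion step the abstract describes.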

    Vehicle Tracking and Motion Estimation Based on Stereo Vision Sequences

In this dissertation, a novel approach for estimating trajectories of road vehicles such as cars, vans, or motorbikes, based on stereo image sequences, is presented. Moving objects are detected and reliably tracked in real time from within a moving car. The resulting information on the pose and motion state of other moving objects with respect to the ego-vehicle is an essential basis for future driver assistance and safety systems, e.g., for collision prediction. The focus of this contribution is on oncoming traffic, while most existing work in the literature addresses tracking the lead vehicle. The overall approach is generic and scalable to a variety of traffic scenes, including inner-city, country road, and highway scenarios. A considerable part of this thesis addresses oncoming traffic at urban intersections. The parameters to be estimated include the 3D position and orientation of an object relative to the ego-vehicle, as well as the object's shape, dimensions, velocity, acceleration, and rotational velocity (yaw rate). The key idea is to derive these parameters from a set of tracked 3D points on the object's surface, which are registered to a time-consistent object coordinate system, by means of an extended Kalman filter. Combining the rigid 3D point cloud model with the dynamic model of a vehicle is one main contribution of this thesis. Vehicle tracking at intersections requires covering a wide range of object dynamics, since vehicles can turn quickly. Three different approaches for tracking objects during highly dynamic turn maneuvers, up to extreme maneuvers such as skidding, are presented and compared. These approaches allow for an online adaptation of the filter parameter values, avoiding manual parameter tuning that would otherwise depend on the dynamics of the tracked object in the scene. This is the second main contribution.
Further contributions include two initialization methods, robust outlier handling, a probabilistic approach for assigning new points to a tracked object, as well as mid-level fusion of the vision-based approach with a radar sensor. The overall system is systematically evaluated on both simulated and real-world data. The experimental results show that the proposed system is able to accurately estimate the object pose and motion parameters in a variety of challenging situations, including night scenes, quick turn maneuvers, and partial occlusions. The limits of the system are also carefully investigated.
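The vehicle dynamic model combined with the point-cloud filter can be illustrated with a constant-turn-rate-and-velocity (CTRV) prediction step, a common choice for intersection turn maneuvers. This is a generic sketch of such a model, not the exact motion model used in the thesis:

```python
import math

def ctrv_predict(x, y, yaw, v, yaw_rate, dt):
    """One CTRV prediction step for the state (x, y, yaw, v, yaw_rate):
    the vehicle moves along a circular arc of radius v / yaw_rate."""
    if abs(yaw_rate) < 1e-9:
        # Straight-line limit of the turn model.
        x += v * math.cos(yaw) * dt
        y += v * math.sin(yaw) * dt
    else:
        r = v / yaw_rate  # turn radius
        x += r * (math.sin(yaw + yaw_rate * dt) - math.sin(yaw))
        y += r * (math.cos(yaw) - math.cos(yaw + yaw_rate * dt))
    return x, y, yaw + yaw_rate * dt, v, yaw_rate

# Straight driving: 10 m/s along x for 1 s moves 10 m forward.
state_straight = ctrv_predict(0.0, 0.0, 0.0, 10.0, 0.0, 1.0)
# Quarter-circle left turn of radius 1 m completed in 1 s.
state_turn = ctrv_predict(0.0, 0.0, 0.0, math.pi / 2, math.pi / 2, 1.0)
```

In an extended Kalman filter, this function would serve as the nonlinear process model; the online adaptation described above then amounts to adjusting the process noise assigned to the velocity and yaw-rate states depending on the observed dynamics.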

    Comprehensive review of vision-based fall detection systems

    Vision-based fall detection systems have experienced fast development over the last few years. To trace the course of this evolution and help new researchers, the main audience of this paper, a comprehensive review was made of all articles regarding this area published in the main scientific databases during the last five years. After a selection process, detailed in the Materials and Methods section, eighty-one systems were thoroughly reviewed. Their characterization and classification techniques were analyzed and categorized. Their performance data were also studied, and comparisons were made to determine which classification methods work best in this field. The evolution of artificial vision technology, very positively influenced by the incorporation of artificial neural networks, has allowed fall characterization to become more robust to noise resulting from illumination changes or occlusion. Classification has also taken advantage of these networks, and the field is beginning to use robots to make these systems mobile. However, the datasets used to train them lack real-world data, raising doubts about their performance on real elderly falls. In addition, there is no evidence of strong connections between the elderly and the research community.

    Vision-based localization methods under GPS-denied conditions

    This paper reviews vision-based localization methods in GPS-denied environments and classifies the mainstream methods into Relative Vision Localization (RVL) and Absolute Vision Localization (AVL). For RVL, we discuss the broad application of optical flow in feature-extraction-based Visual Odometry (VO) solutions and introduce advanced optical flow estimation methods. For AVL, we review recent advances in Visual Simultaneous Localization and Mapping (VSLAM) techniques, from optimization-based methods to Extended Kalman Filter (EKF) based methods. We also introduce the application of offline map registration and lane vision detection schemes to achieve absolute visual localization. This paper compares the performance and applications of mainstream methods for visual localization and provides suggestions for future studies.
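    The optical-flow step at the core of the VO methods surveyed is typically solved per window by least squares over image gradients (the Lucas-Kanade normal equations). A self-contained sketch on synthetic gradients (all values illustrative, not drawn from the paper):

```python
def lucas_kanade_flow(Ix, Iy, It):
    """Solve the Lucas-Kanade normal equations over one window:
    minimise sum_i (Ix_i*u + Iy_i*v + It_i)^2 for the flow (u, v)."""
    sxx = sum(ix * ix for ix in Ix)
    sxy = sum(ix * iy for ix, iy in zip(Ix, Iy))
    syy = sum(iy * iy for iy in Iy)
    sxt = sum(ix * it for ix, it in zip(Ix, It))
    syt = sum(iy * it for iy, it in zip(Iy, It))
    det = sxx * syy - sxy * sxy
    if abs(det) < 1e-12:
        raise ValueError("aperture problem: gradients are degenerate")
    u = (sxy * syt - syy * sxt) / det
    v = (sxy * sxt - sxx * syt) / det
    return u, v

# Synthetic window whose true flow is (1, -2): by brightness constancy,
# each temporal gradient satisfies It_i = -(Ix_i*u + Iy_i*v).
u, v = lucas_kanade_flow([1, 0, 1], [0, 1, 1], [-1, 2, 1])
```

    The degenerate-determinant check corresponds to the aperture problem: a window whose gradients all point one way cannot constrain the flow component along the edge, which is why VO pipelines track corner-like features.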