7 research outputs found

    Automatic Vehicle Trajectory Extraction by Aerial Remote Sensing

    Research on road users' behaviour typically depends on the availability of detailed observational data, particularly when the interest is in driving behaviour modelling. Among this type of data, vehicle trajectories are an important source of information for traffic flow theory, driving behaviour modelling, innovation in traffic management, and safety and environmental studies. Recent developments in sensing technologies and image processing algorithms have reduced the resources (time and cost) required for detailed traffic data collection, making site-based and vehicle-based naturalistic driving observation more feasible. For testing the core models of a traffic microsimulation application for safety assessment, vehicle trajectories were collected by remote sensing on a typical Portuguese suburban motorway. Multiple short flights over a stretch of an urban motorway allowed the collection of several partial vehicle trajectories. In this paper, the technical details of each step of the methodology are presented: image collection, image processing, vehicle identification and vehicle tracking. To collect the images, a high-resolution camera was mounted on an aircraft's gyroscopic platform. The camera was connected to a DGPS for extraction of the camera position and allowed the collection of high-resolution images at a low frame rate (one image every 2 s). After generic image orthorectification using the flight details and the terrain model, computer vision techniques were used for fine rectification: the scale-invariant feature transform (SIFT) algorithm was used for detection and description of image features, and the random sample consensus (RANSAC) algorithm for feature matching. Vehicle detection was carried out by median-based background subtraction. After computation of the detected foreground and shadow detection using a spectral ratio technique, region segmentation was used to identify candidates for vehicle positions. Finally, vehicles were tracked using a k-shortest disjoint paths algorithm. This approach allows an entire set of trajectories to be optimized against all possible position candidates using motion-based optimization. Besides the importance of a new trajectory dataset that allows the development of new behavioural models and the validation of existing ones, this paper also describes the application of state-of-the-art algorithms and methods that significantly reduce the resources needed for such data collection.
    Keywords: Vehicle trajectory extraction, Driver behaviour, Remote sensing
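    The paper does not reproduce code for these steps; the sketch below is a minimal, illustrative take on the fine-rectification stage (SIFT features matched between an already orthorectified reference frame and a new frame, with RANSAC rejecting outlier matches before warping), assuming OpenCV and NumPy. Function names, parameters and thresholds are assumptions, not the authors' implementation.

        import cv2
        import numpy as np

        def fine_rectify(reference, frame, ratio=0.75, ransac_thresh=5.0):
            """Warp `frame` onto `reference` using SIFT matches filtered by RANSAC.

            Illustrative sketch only; the paper's actual parameters are not given.
            """
            sift = cv2.SIFT_create()
            kp_ref, des_ref = sift.detectAndCompute(reference, None)
            kp_frm, des_frm = sift.detectAndCompute(frame, None)

            # Keep only unambiguous correspondences (Lowe's ratio test).
            matcher = cv2.BFMatcher(cv2.NORM_L2)
            good = []
            for pair in matcher.knnMatch(des_frm, des_ref, k=2):
                if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
                    good.append(pair[0])

            src = np.float32([kp_frm[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
            dst = np.float32([kp_ref[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

            # RANSAC discards remaining outliers and estimates the homography.
            H, _ = cv2.findHomography(src, dst, cv2.RANSAC, ransac_thresh)

            h, w = reference.shape[:2]
            return cv2.warpPerspective(frame, H, (w, h))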

    A ROBUST GA/KNN BASED HYPOTHESIS VERIFICATION SYSTEM FOR VEHICLE DETECTION

    Vehicle detection is an important issue in driver assistance systems and self-guided vehicles that include
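    The abstract is truncated at this point. As a purely generic illustration of the KNN half of such a hypothesis-verification stage (candidate regions produced by hypothesis generation are accepted or rejected by a k-nearest-neighbour classifier trained on labelled vehicle/non-vehicle feature vectors), a minimal scikit-learn sketch might look as follows. The feature extraction and the GA-based tuning of k and of the feature subset are not shown, and every name here is an assumption rather than the paper's method.

        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier

        # Hypothetical training data: one feature vector per labelled candidate
        # region (e.g. HOG or intensity statistics); 1 = vehicle, 0 = non-vehicle.
        X_train = np.load("train_features.npy")
        y_train = np.load("train_labels.npy")

        # In the paper, k and the feature subset would be optimized by the GA.
        verifier = KNeighborsClassifier(n_neighbors=5)
        verifier.fit(X_train, y_train)

        def verify_hypothesis(candidate_features):
            """Return True if the KNN classifier accepts the candidate as a vehicle."""
            return bool(verifier.predict(candidate_features.reshape(1, -1))[0] == 1)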

    A Study on Recent Developments and Issues with Obstacle Detection Systems for Automated Vehicles

    This paper reviews current developments and discusses some critical issues with obstacle detection systems for automated vehicles. The concept of autonomous driving is a key driver of future mobility. Obstacle detection systems play a crucial role in implementing and deploying autonomous driving on our roads and city streets. The current review looks at technology and existing systems for obstacle detection. Specifically, we look at the performance of LIDAR, RADAR, vision cameras, ultrasonic sensors, and infrared (IR) sensors and review their capabilities and behaviour in a number of different situations: during daytime, at night, in extreme weather conditions, in urban areas, in the presence of smooth surfaces, in situations where emergency service vehicles need to be detected and recognised, and in situations where potholes need to be observed and measured. It is suggested that combining different technologies for obstacle detection gives a more accurate representation of the driving environment. In particular, when looking at technological solutions for obstacle detection in extreme weather conditions (rain, snow, fog) and in some specific situations in urban areas (shadows, reflections, potholes, insufficient illumination), the current developments, although already quite advanced, appear to be not sophisticated enough to guarantee 100% precision and accuracy, hence further valiant effort is needed.
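    The review advocates combining sensing technologies without prescribing a particular fusion algorithm; the sketch below shows just one hypothetical way to merge bounding-box detections from two sensors (say, a camera and LIDAR projected into the image plane) by overlap matching and confidence averaging. All names, thresholds and weights are illustrative assumptions, not a method from the paper.

        from dataclasses import dataclass

        @dataclass
        class Detection:
            box: tuple      # (x1, y1, x2, y2) in image coordinates
            score: float    # sensor-specific confidence in [0, 1]

        def iou(a, b):
            """Intersection-over-union of two axis-aligned boxes."""
            x1, y1 = max(a[0], b[0]), max(a[1], b[1])
            x2, y2 = min(a[2], b[2]), min(a[3], b[3])
            inter = max(0, x2 - x1) * max(0, y2 - y1)
            union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
            return inter / union if inter else 0.0

        def fuse(camera_dets, lidar_dets, iou_thresh=0.5):
            """Average the confidences of detections confirmed by both sensors and
            halve the confidence of unmatched ones, so that a sensor degraded by
            weather (e.g. camera in fog) cannot veto the other."""
            fused, used = [], set()
            for c in camera_dets:
                best, best_iou = None, iou_thresh
                for j, l in enumerate(lidar_dets):
                    overlap = iou(c.box, l.box)
                    if j not in used and overlap >= best_iou:
                        best, best_iou = j, overlap
                if best is not None:
                    used.add(best)
                    fused.append(Detection(c.box, 0.5 * (c.score + lidar_dets[best].score)))
                else:
                    fused.append(Detection(c.box, 0.5 * c.score))
            fused.extend(Detection(l.box, 0.5 * l.score)
                         for j, l in enumerate(lidar_dets) if j not in used)
            return fused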

    Motion tracking on embedded systems: vision-based vehicle tracking using image alignment with symmetrical function.

    Cheung, Lap Chi. Thesis (M.Phil.), Chinese University of Hong Kong, 2007. Includes bibliographical references (leaves 91-95). Abstracts in English and Chinese.
    Table of contents:
    1. INTRODUCTION
        1.1 Background
            1.1.1 Introduction to Intelligent Vehicle
            1.1.2 Typical Vehicle Tracking Systems for Rear-end Collision Avoidance
            1.1.3 Passive vs. Active Vehicle Tracking
            1.1.4 Vision-based Vehicle Tracking Systems
            1.1.5 Characteristics of Computing Devices on Vehicles
        1.2 Motivation and Objectives
        1.3 Major Contributions
            1.3.1 A 3-phase Vision-based Vehicle Tracking Framework
            1.3.2 Camera-to-vehicle Distance Measurement by Single Camera
            1.3.3 Real Time Vehicle Detection
            1.3.4 Real Time Vehicle Tracking using Simplified Image Alignment
        1.4 Evaluation Platform
        1.5 Thesis Organization
    2. RELATED WORK
        2.1 Stereo-based Vehicle Tracking
        2.2 Motion-based Vehicle Tracking
        2.3 Knowledge-based Vehicle Tracking
        2.4 Commercial Systems
    3. 3-PHASE VISION-BASED VEHICLE TRACKING FRAMEWORK
        3.1 Introduction to the 3-phase Framework
        3.2 Vehicle Detection
            3.2.1 Overview of Vehicle Detection
            3.2.2 Locating the Vehicle Center - Symmetrical Measurement
            3.2.3 Locating the Vehicle Roof and Bottom
            3.2.4 Locating the Vehicle Sides - Over-complete Haar Transform
        3.3 Vehicle Template Tracking - Image Alignment
            3.3.5 Overview of Vehicle Template Tracking
            3.3.6 Goal of Image Alignment
            3.3.7 Alternative Image Alignment - Compositional Image Alignment
            3.3.8 Efficient Image Alignment - Inverse Compositional Algorithm
        3.4 Vehicle Template Update
            3.4.1 Situation of Vehicle Lost
            3.4.2 Template Filling by Updating the Positions of Vehicle Features
        3.5 Experiments and Discussions
            3.5.1 Experiment Setup
            3.5.2 Successful Tracking Percentage
        3.6 Comparing with Other Tracking Methodologies
            3.6.1 1-phase Vision-based Vehicle Tracking
            3.6.2 Image Correlation
            3.6.3 Continuously Adaptive Mean Shift
    4. CAMERA-TO-VEHICLE DISTANCE MEASUREMENT BY SINGLE CAMERA
        4.1 The Principle of Law of Perspective
        4.2 Distance Measurement by Single Camera
    5. REAL TIME VEHICLE DETECTION
        5.1 Introduction
        5.2 Timing Analysis of Vehicle Detection
        5.3 Symmetrical Measurement Optimization
            5.3.1 Diminished Gradient Image for Symmetrical Measurement
            5.3.2 Replacing Division by Multiplication Operations
        5.4 Over-complete Haar Transform Optimization
            5.4.1 Characteristics of Over-complete Haar Transform
            5.4.2 Pre-computation of Haar Block
        5.5 Summary
    6. REAL TIME VEHICLE TRACKING USING SIMPLIFIED IMAGE ALIGNMENT
        6.1 Introduction
        6.2 Timing Analysis of Original Image Alignment
        6.3 Simplified Image Alignment
            6.3.1 Reducing the Number of Parameters in Affine Transformation
            6.3.2 Size Reduction of Image Alignment Matrices
        6.4 Experiments and Discussions
            6.4.1 Successful Tracking Percentage
            6.4.2 Timing Improvement
    7. CONCLUSIONS
    8. BIBLIOGRAPHY
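    Chapter 4 of the thesis measures camera-to-vehicle distance with a single camera using the law of perspective, but its derivation is not reproduced in this listing. A common flat-road pinhole-camera formulation of that idea (not necessarily identical to the thesis's) estimates range from the image row at which the tracked vehicle meets the road, as in the sketch below; all parameter values are illustrative assumptions.

        def distance_from_row(y_contact, f_pixels, horizon_row, camera_height_m):
            """Estimate range to a vehicle from the image row of its road-contact point.

            Flat-road pinhole model: Z = f * H / (y - y0), where f is the focal
            length in pixels, H the camera height above the road, y0 the horizon
            row and y the row of the vehicle's bottom edge.
            """
            dy = y_contact - horizon_row
            if dy <= 0:
                raise ValueError("contact point must lie below the horizon")
            return f_pixels * camera_height_m / dy

        # Example: f = 800 px, camera 1.3 m above the road, horizon at row 240,
        # vehicle bottom edge at row 300  ->  about 17.3 m ahead.
        print(distance_from_row(300, 800, 240, 1.3))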

    Driver Behavior Analysis Based on Real On-Road Driving Data in the Design of Advanced Driving Assistance Systems

    The number of vehicles on the roads increases every day. According to the National Highway Traffic Safety Administration (NHTSA), the overwhelming majority of serious crashes (over 94 percent) are caused by human error. The broad aim of this research is to develop a driver behavior model using real on-road data in the design of Advanced Driving Assistance Systems (ADASs). For several decades, these systems have been a focus of many researchers and vehicle manufacturers seeking to increase vehicle and road safety and to assist drivers in different driving situations. Some studies have concentrated on drivers as the main actor in most driving circumstances. The way a driver monitors the traffic environment partially indicates the level of driver awareness. As an objective, we carry out a quantitative and qualitative analysis of driver behavior to identify the relationship between a driver's intention and his/her actions. The RoadLAB project developed an instrumented vehicle equipped with an On-Board Diagnostics system (OBD-II), a stereo imaging system, and a non-contact eye tracker to record synchronized data on the driver's cephalo-ocular behavior, the vehicle itself, and the traffic environment. We analyze several behavioral features of the drivers to identify potentially relevant relationships between driver behavior and the anticipation of the next driver maneuver, and to reach a better understanding of driver behavior while driving. Moreover, we detect and classify road lanes in urban and suburban areas, as they provide contextual information. Our experimental results show that our proposed models reached an F1 score of 84% for driver maneuver prediction and an accuracy of 94% for lane type classification.
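    For reference, the reported figures follow the standard definitions of F1 and accuracy; the sketch below (with invented confusion-matrix counts, not the RoadLAB results) simply shows how such scores are computed for a binary task.

        def f1_and_accuracy(tp, fp, fn, tn):
            """Standard binary-classification metrics from confusion-matrix counts."""
            precision = tp / (tp + fp)
            recall = tp / (tp + fn)
            f1 = 2 * precision * recall / (precision + recall)
            accuracy = (tp + tn) / (tp + fp + fn + tn)
            return f1, accuracy

        # Illustrative counts only (not the paper's data):
        f1, acc = f1_and_accuracy(tp=84, fp=16, fn=16, tn=884)
        print(f"F1 = {f1:.2f}, accuracy = {acc:.2f}")   # F1 = 0.84, accuracy = 0.97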

    A new classification approach based on geometrical model for human detection in images

    In recent years, object detection and classification have gained increasing attention, and several human detection algorithms are used to locate and recognize human objects in images. Image processing and analysis based on human shape is an active research topic due to its wide applicability in real-world applications. In this research, we present a new shape-based classification approach to categorise a detected object as human or non-human in images. The classification is based on a geometrical model which contains a set of parameters related to the object's upper portion; based on these geometric parameters, our approach can classify the detected object as human or non-human. In general, the classification process is based on generating a geometrical model from unique geometrical relations between the upper-portion shape points (neck, head, shoulders) of humans; this observation rests on analysing the change in the histogram of x-coordinate values of the human upper-portion shape. To represent the change in x-coordinate values, histograms with mathematical smoothing functions are used to avoid small angles. As a result, four parameters were observed for human objects and used to build the classifier; by applying these four parameters of the geometrical model, our classification approach can distinguish human objects from other objects. The proposed approach has been tested and compared with machine learning approaches such as Artificial Neural Networks (ANN), Support Vector Machines (SVM), and Random Forest, a well-known ensemble of decision trees, using 358 images of various objects from the INRIA dataset (human and non-human objects in digital images). In terms of accuracy, the comparison shows that the proposed approach achieved the highest accuracy rate (93.85%), with the lowest miss detection rate (11.245%) and false discovery rate (9.34%), demonstrating the efficiency of the presented approach.
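    The four geometric parameters themselves are not specified in this abstract, so the sketch below only illustrates the preprocessing the approach describes: building a histogram of the x-coordinates of the upper-portion contour points and smoothing it before any parameters are measured. The bin count and smoothing window are assumptions, not values from the paper.

        import numpy as np

        def smoothed_x_histogram(contour_points, n_bins=64, window=5):
            """Histogram of the x-coordinates of upper-portion contour points,
            smoothed with a moving average to suppress small, noisy angles.

            contour_points: array of shape (N, 2) holding (x, y) pixel coordinates.
            """
            xs = np.asarray(contour_points)[:, 0]
            hist, edges = np.histogram(xs, bins=n_bins)
            kernel = np.ones(window) / window          # moving-average smoothing
            smoothed = np.convolve(hist, kernel, mode="same")
            return smoothed, edges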