
    A System for the Acquisition and Analysis of Image Sequences to Model Longitudinal Driving Behavior

    No full text
    Traffic causes serious problems in many societies. Considerable amounts of energy, money and time are wasted in traffic jams, and even more serious are car accidents and casualties in traffic (more than 40,000 deaths per year in the USA (Hitti (2005)) and 791 deaths in 2007 in the Netherlands (Ministerie van Verkeer en Waterstaat (2008))). To alleviate these problems, traffic is continuously and intensively controlled and studied by authorities and researchers. Ongoing research on traffic leads to the development of new infrastructure facilities and rules. The behavior of drivers in different traffic situations, however, is still not sufficiently known. Driving behavior consists of the reactions of a driver to the actions of surrounding vehicles. The stimuli can be the distance of the follower from the leader and the speeds of the follower and leaders when the leaders change their behavior, for example by changing lane or braking. The response of the follower to these actions may then be predictable with stimulus-response models. This can be used, for example, to regulate traffic by advising a safe distance for the driver to keep when leading vehicles change their behavior. The findings on driving dynamics can be used in dynamic traffic management and in advanced driver assistance systems such as adaptive cruise control. Dynamic traffic management controls the efficient use of infrastructure and predicts the effect of future changes to the infrastructure. It thus allows more effective management and better traffic information, for example in ramp metering, incident management and travel information (travel time, route recommendation). To gain insight into drivers' behavior, it is therefore indispensable to reconstruct extended trajectories of observed vehicles over a long stretch of freeway, for a long time, at high frequency and under the same or nearly the same conditions, while satisfying positioning and speed accuracy requirements. 
Current systems for measuring the trajectories of each individual vehicle at any time are not sufficiently mature. This thesis describes a system designed and implemented to collect many long and detailed vehicle trajectories under the same or nearly the same conditions of weather, traffic, road and surroundings, in order to improve stimulus-response models of longitudinal driving behavior. We evaluated possible platforms and sensors; a helicopter was chosen for its widespread use in many applications, its flexibility and its hovering capability. A camera with a resolution of 1392x1040 pixels and a frame rate of 15 frames per second was selected to record image sequences. The image sequence contains the locations of vehicles as a function of time. We established that for our purpose the helicopter should hover over a freeway looking in the nadir direction, with the camera mounted horizontally below the helicopter. The helicopter height is the main platform parameter affecting the characteristics of the recorded data: a greater height means that a longer stretch of road is observed, but at the price of a larger ground sampling distance (GSD). For the camera, the combination of the number of CCD elements and the field of view (FOV) determines the GSD at a given height. A larger frame rate provides more detail about the vehicle trajectories. The system configuration, i.e. the set of camera and platform parameters, is determined by taking into account also the ground slope, so as to meet the given requirements. The proposed system acquires image sequences from a helicopter hovering over freeways. These image sequences contain both camera and vehicle motion; the vehicle motion is therefore contaminated by the camera motion. Current methods to remove the camera motion from an entire image sequence, i.e. to perform image-sequence stabilization, require the determination of the scene position, which is a complicated and computationally expensive task. 
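The trade-off between flight height, field of view and ground sampling distance described above can be made concrete with a small sketch. The numeric values below (hover height, FOV) are assumed for illustration only and are not the actual flight parameters of the thesis; only the 1392-pixel sensor width comes from the text.

```python
import math

def ground_sampling_distance(height_m, fov_deg, n_pixels):
    """Ground sampling distance (m/pixel) for a nadir-looking camera.

    The swath on the ground spans 2 * h * tan(FOV / 2); dividing by the
    number of CCD elements across that direction gives the GSD.
    """
    swath = 2.0 * height_m * math.tan(math.radians(fov_deg) / 2.0)
    return swath / n_pixels

def observed_road_stretch(height_m, fov_deg):
    """Length of road visible along one image axis (m)."""
    return 2.0 * height_m * math.tan(math.radians(fov_deg) / 2.0)

# Assumed example: 300 m hover height, 40 degree FOV, 1392 pixels
# along the road direction.
print(ground_sampling_distance(300, 40, 1392))  # ~0.157 m/pixel
print(observed_road_stretch(300, 40))           # ~218 m of road
```

Doubling the height doubles both the observed stretch and the GSD, which is exactly the trade-off the abstract describes.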
We therefore decided to develop a new procedure to stabilize the entire image sequence. We investigated in which cases corresponding pixels, projected from a scene object to the images in a sequence, are related by the camera motion only, without requiring the scene position. This relation is a so-called projective transformation, or homography. As a result, the relation between corresponding fixed points of two consecutive images, and between corresponding road-area points in any two images, is expressed by a homography. Knowing this homography, every point in one image can be transformed to the corresponding point in the other image. Only a few reliable corresponding points are needed to establish each homography relation. Because a very large number of images of different areas is involved, automatic and robust extraction of corresponding points is required. The difficulties in finding correspondences are caused by pixels of moving objects, changing illumination conditions, image noise, repeated patterns, scene elements with sparse structure, large transformations that change the perspective, and scene occlusions. Robust registration methods are therefore investigated, designed and developed to handle these difficulties. Two different groups of methods were investigated for the automatic and robust registration of two consecutive images: feature-based and featureless methods. Two feature-based methods are introduced, KLT-RANSAC and SIFT-RANSAC, which differ in the procedure used to find corresponding points between two consecutive images. In KLT, points are first extracted in one image and then tracked in the second image. In SIFT, points are extracted from both images and then matched. These corresponding points are used to estimate the parameters of the homography model between two consecutive images with RANSAC, a robust parameter estimator. In the featureless method, a DE-based registration procedure is designed and developed. 
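The RANSAC step shared by both feature-based methods can be sketched generically. This is not the thesis implementation: the KLT/SIFT feature extraction is omitted (point correspondences are taken as given), the homography is fitted with a plain direct linear transform, and all names and parameter values are illustrative.

```python
import numpy as np

def fit_homography(src, dst):
    """Direct linear transform: H such that dst ~ H @ src (homogeneous).
    src, dst: (N, 2) arrays of corresponding points, N >= 4."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, pts):
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

def ransac_homography(src, dst, n_iter=500, thresh=2.0, rng=None):
    """Robust estimation: repeatedly fit H to 4 random correspondences
    and keep the model that makes the most points inliers, so wrong
    matches (moving vehicles, repeated patterns) are rejected."""
    if rng is None:
        rng = np.random.default_rng(0)
    best_H, best_inliers = None, 0
    for _ in range(n_iter):
        idx = rng.choice(len(src), 4, replace=False)
        H = fit_homography(src[idx], dst[idx])
        err = np.linalg.norm(apply_homography(H, src) - dst, axis=1)
        inliers = int((err < thresh).sum())
        if inliers > best_inliers:
            best_H, best_inliers = H, inliers
    return best_H
```

In a real pipeline the inlier set would be used for a final least-squares refit; the sketch stops at the best minimal-sample model.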
Without explicitly extracting any corresponding points, this method uses all the image pixels. The parameters of the homography model between two consecutive images are obtained by minimizing the difference between one image and the transformed second image. We found that the registration of consecutive images is robust with any of the registration methods we used. In contrast, the registration of arbitrary image pairs, over the whole image area or only over the road area, is not robust because of the large number of wrong corresponding points. Consequently, a new framework was designed to stabilize the image sequence on the road area. This framework includes two steps: registration of two consecutive images, and stabilization of the entire image sequence on the road area. After two consecutive images have been registered, the entire sequence is registered on the road area. The homography between two consecutive images is taken to be the homography corresponding to the road surface. The product of this homography and the road-surface homography between the previous image and a reference image (here the first image in the sequence) provides another homography, which registers the current image to the reference image on the road area. Such a registration contains errors from both homographies, so the road areas are registered only approximately. Points are therefore identified in the road area of the reference image only, and matched in the transformed current image within a very small search area. These corresponding points are used to calculate the homography parameters between the reference image and the transformed current image. The product of this homography with the approximate homography gives the homography relating the road area of the current image to that of the reference image. The same procedure is applied iteratively to the entire sequence. 
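The chaining and refinement described above amounts to a few matrix products. The sketch below is illustrative only (function names are my own); it shows why a correction step is needed: the chained product accumulates the small errors of every consecutive homography.

```python
import numpy as np

def chain_to_reference(consecutive_Hs):
    """Compose consecutive homographies into image-to-reference ones.

    consecutive_Hs[k] maps points of image k+1 into image k, so the
    running product maps each image into image 0 (the reference).
    Errors in each factor accumulate along the chain, which is why the
    framework refines each chained homography by re-matching points in
    the reference image's road area.
    """
    to_ref = [np.eye(3)]
    for H in consecutive_Hs:
        to_ref.append(to_ref[-1] @ H)
    return to_ref

def refine(approx_H, correction_H):
    """Left-multiply by the small correction homography estimated
    between the reference image and the approximately transformed
    current image."""
    return correction_H @ approx_H
```

For pure translations the composition is easy to check by hand: shifting by (1, 0) and then by (0, 2) must equal a single shift by (1, 2).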
As a result, the road area in the image sequence is registered. The image sequence is stabilized on the road area in an automatic, robust and reliable way, independently of differences in image content, traffic and environmental conditions. In addition, the new framework provides an error-free sequential registration without requiring computationally expensive bundle adjustment, and the procedure is fast. No assumption is made about the camera motion or the scene. After removing the camera motion from the entire sequence, vehicles are identified and tracked. Vehicles were identified by first distinguishing between background and vehicle pixels in each image, by analyzing the gray-level temporal profile of each pixel. Background pixels are assumed to be discriminated by their higher frequency in the gray-level histogram. After identifying the vehicle and background pixels, clusters of vehicle pixels are grouped into "blobs". A blob and each of its pixels are then tracked using an optical flow method. The resulting displacements are contaminated by errors, which are removed by analyzing the histogram of the displacements: the most frequent value corresponds to the true vehicle displacement, which is the same for all pixels of a vehicle (a rigid object). The positions of the vehicles are updated after tracking. Vehicles are identified in each image to detect new vehicles and to update the boundaries of all vehicles. The vehicle extraction method detects many similar vehicles that contain only few details and move differently, in particular vehicles with low speed and low contrast. Both the vehicle extraction and tracking methods are sequential and require only a very small amount of memory. The results are reliable thanks to the temporal connectivity in the vehicle extraction and the spatio-temporal connectivity in the tracking method. 
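The two histogram-based ideas above, the most frequent gray level of a pixel's temporal profile as its background value, and the most frequent displacement as a blob's true motion, can be sketched as follows. This is a minimal illustration under assumed names and thresholds, not the thesis code.

```python
import numpy as np

def background_model(stack):
    """Per-pixel background estimate: the most frequent gray level in
    each pixel's temporal profile (stack: (T, H, W) uint8 array)."""
    T, H, W = stack.shape
    bg = np.empty((H, W), np.uint8)
    for i in range(H):
        for j in range(W):
            counts = np.bincount(stack[:, i, j], minlength=256)
            bg[i, j] = counts.argmax()
    return bg

def vehicle_mask(frame, bg, thresh=25):
    """A pixel is foreground (vehicle) when it deviates from the
    background mode by more than `thresh` gray levels (assumed value)."""
    return np.abs(frame.astype(int) - bg.astype(int)) > thresh

def dominant_displacement(flows):
    """Most frequent integer displacement among a blob's per-pixel
    optical-flow vectors; erroneous vectors fall outside the mode
    because all pixels of a rigid vehicle move identically."""
    rounded = np.round(flows).astype(int)
    vecs, counts = np.unique(rounded, axis=0, return_counts=True)
    return vecs[counts.argmax()]
```

Because `background_model` only needs per-pixel counts, it can be maintained incrementally frame by frame, which matches the sequential, low-memory character claimed for the method.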
Trajectories obtained with the system described above can be used to gain insight into drivers' behavior and to improve predictive models. The system can be used for dynamic traffic management through real-time trajectory extraction with a hardware implementation of the procedure. If real-time extraction of trajectories is not required, the trajectories can be improved by analyzing them and feeding the results back to the vehicle identification system. Integration of the results from spatio-temporal analysis can further improve the identification system.

    Vehicle detection from an image sequence collected by a hovering helicopter

    No full text
    This paper addresses the problem of vehicle detection from an image sequence in difficult cases. Difficulties are notably caused by relatively small vehicles, vehicles that appear with low contrast, and vehicles that drive at low speed. The image sequence considered here was recorded by a hovering helicopter and stabilized prior to the vehicle detection step. A practical algorithm is designed and implemented for this vehicle detection task. Each pixel is first identified as either a background (road) or a foreground (vehicle) pixel by analyzing its gray-level temporal profile in a sequential way. Secondly, a vehicle is identified as a cluster of foreground pixels. The results of this new method are demonstrated on a test image sequence featuring both very congested and smoothly flowing traffic. It is shown that for both traffic situations the method is able to successfully detect low-contrast, small and slow vehicles.
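The second step, identifying a vehicle as a cluster of foreground pixels, can be illustrated with a simple 4-connected flood fill. This is a generic sketch (names, connectivity and the minimum-size threshold are assumptions), not necessarily the paper's clustering procedure.

```python
import numpy as np

def extract_blobs(mask, min_pixels=4):
    """Group foreground pixels of a boolean mask into 4-connected
    blobs; each sufficiently large blob is one candidate vehicle."""
    labels = np.zeros(mask.shape, int)
    blobs, next_label = [], 1
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            if mask[i, j] and labels[i, j] == 0:
                frontier = [(i, j)]        # flood fill from this seed
                labels[i, j] = next_label
                pixels = []
                while frontier:
                    y, x = frontier.pop()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < mask.shape[0]
                                and 0 <= nx < mask.shape[1]
                                and mask[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = next_label
                            frontier.append((ny, nx))
                if len(pixels) >= min_pixels:
                    blobs.append(pixels)
                next_label += 1
    return blobs
```

The `min_pixels` threshold discards isolated noise pixels while keeping even small vehicles, which is the regime the paper targets.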


    INFLUENCE OF DOMAIN SHIFT FACTORS ON DEEP SEGMENTATION OF THE DRIVABLE PATH OF AN AUTONOMOUS VEHICLE

    No full text
    One of the biggest challenges for an autonomous vehicle (and hence the WEpod) is to see the world as humans would see it. This understanding is the basis for a successful and reliable future of autonomous vehicles. Real-world data and semantic segmentation are generally used to achieve full understanding of the surroundings. However, a pretrained segmentation network deployed on a new, previously unseen domain will not reach the performance it attains on the domain it was trained on, due to the differences between the domains. Although research has been done on mitigating this domain shift, the factors that cause these differences have not yet been fully explored. We fill this gap by investigating several such factors. A base network was created by a two-step fine-tuning procedure on a convolutional neural network (SegNet) pretrained on CityScapes (a dataset for semantic segmentation). The first tuning step uses RobotCar (a road scenery dataset recorded in Oxford, UK); this network is then fine-tuned a second time on KITTI (a road scenery dataset recorded in Germany). With this base network, experiments are carried out to determine the importance of factors such as horizon line, colour and training order for a successful domain adaptation, in this case from the KITTI and RobotCar domains to the WEpod domain. For evaluation, ground-truth labels are created in a weakly-supervised setting. Training on greyscale images instead of RGB images had a negative influence, with drops in IoU of up to 23.9% on WEpod test images. The training order is a main contributor to domain adaptation, with an increase in IoU of 4.7%. This shows that the target domain (WEpod) is more closely related to RobotCar than to KITTI.
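The IoU numbers quoted above follow the standard per-class intersection-over-union between predicted and ground-truth label maps. A minimal sketch of that metric (the standard definition, not code from the paper):

```python
import numpy as np

def per_class_iou(pred, target, n_classes):
    """Per-class IoU between two integer label maps of equal shape:
    |pred==c AND target==c| / |pred==c OR target==c| for each class c.
    Returns NaN for a class absent from both maps."""
    ious = []
    for c in range(n_classes):
        p, t = pred == c, target == c
        union = (p | t).sum()
        ious.append((p & t).sum() / union if union else float("nan"))
    return ious
```

A drop of 23.9 percentage points in this score, as reported for greyscale training, therefore means the overlap between predicted and true drivable-path regions shrank by roughly a quarter of the image area involved.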

    Outline extraction of motorway from helicopter image sequence

    No full text
    Behavior analysis of each individual driver during congestion is still a problem in microscopic traffic analysis. To address this problem, image sequences of traffic data are collected by helicopter. After detection and tracking of the vehicles, detailed data for microscopic traffic analysis are obtained. To increase the speed and reliability of vehicle detection and tracking, road detection is proposed: the search area for vehicle detection is limited to the road boundaries, while other moving objects such as vegetation or pedestrians are excluded. Road detection from aerial helicopter images is done with a line detection method based on edge detection and morphological operations. Unwanted lines and other errors introduced during line detection are removed by thresholding, based on gray value and connected-component labeling. After this improvement step, some parts of the road lines are still missing, due to physical obstacles and the previous operations. With a Hough transform, line fragments are connected to each other to form elongated road lines. The image sequence data are taken with a Basler A101f digital camera installed on a helicopter. Results show that most road lines are correctly detected and connected, but some minor parts of the lines are still not completely connected. Results can be improved by using a better technique for detecting local maxima in the parameter space, which clearly shows the number and positions of the lines.
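The Hough transform's ability to connect line fragments across gaps, the key property used above, can be shown with a minimal voting implementation. This is a generic illustration (bin counts and names are assumptions, and edge detection and morphology are omitted, an edge mask is taken as given):

```python
import numpy as np

N_RHO, N_THETA = 200, 180

def hough_lines(edge_mask):
    """Vote each edge pixel into (rho, theta) space using
    rho = x*cos(theta) + y*sin(theta). Peaks in the accumulator
    correspond to long straight lines such as road boundaries, even
    when the edge map has gaps, because all fragments of one line
    vote into the same cell."""
    H, W = edge_mask.shape
    diag = np.hypot(H, W)
    thetas = np.linspace(0, np.pi, N_THETA, endpoint=False)
    acc = np.zeros((N_RHO, N_THETA), int)
    for y, x in zip(*np.nonzero(edge_mask)):
        rhos = x * np.cos(thetas) + y * np.sin(thetas)
        idx = np.round((rhos + diag) / (2 * diag) * (N_RHO - 1)).astype(int)
        acc[idx, np.arange(N_THETA)] += 1
    return acc, thetas, diag

def strongest_line(acc, thetas, diag):
    """Global maximum of the accumulator as a (rho, theta) pair."""
    r_idx, t_idx = np.unravel_index(acc.argmax(), acc.shape)
    rho = r_idx / (N_RHO - 1) * 2 * diag - diag
    return rho, thetas[t_idx]
```

The abstract's closing remark maps directly onto this sketch: better local-maxima detection in `acc` (e.g. non-maximum suppression over a neighborhood rather than a single global `argmax`) recovers multiple lines more reliably.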