879 research outputs found

    Vehicle-component identification based on multiscale textural couriers

    This paper presents a novel method for identifying vehicle components in a monocular traffic image sequence. In the proposed method, the vehicles are first divided into multiscale regions based on the center of gravity of the foreground vehicle mask and the calibrated camera parameters. From these multiscale regions, textural couriers are generated from the localized variances of the foreground vehicle image. A new scale-space model is then built on the textural couriers to provide a topological structure of the vehicle. Within this model, key feature points of the vehicle can be distinctly described from the topological structure, which determines the regions that are homogeneous in texture; vehicle components are then identified by segmenting these key feature points. Since no motion information is required to segment the vehicles prior to recognition, the proposed system can be used in situations where extensive observation time is not available or motion information is unreliable. The method can be used in real-world systems such as vehicle-shape reconstruction, vehicle classification, and vehicle recognition. It was demonstrated and tested on 200 different vehicle samples captured in routine outdoor traffic images and achieved an average error rate of 6.8% over a variety of vehicles and traffic scenes. © 2006 IEEE.
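
The localized-variance step the abstract describes can be illustrated with a minimal pure-Python sketch. The function name, window size, and pixel values below are illustrative assumptions, not the paper's exact parameters:

```python
def local_variance(image, win=3):
    """Per-pixel variance over a win x win neighbourhood (clipped at borders).

    High values mark textured or edge regions; near-zero values mark
    texturally homogeneous regions, as used to group vehicle components.
    """
    h, w = len(image), len(image[0])
    r = win // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Gather the neighbourhood, clipping at the image borders.
            vals = [image[j][i]
                    for j in range(max(0, y - r), min(h, y + r + 1))
                    for i in range(max(0, x - r), min(w, x + r + 1))]
            mean = sum(vals) / len(vals)
            out[y][x] = sum((v - mean) ** 2 for v in vals) / len(vals)
    return out
```

A flat patch yields zero variance everywhere, while a patch containing an intensity edge yields a positive response near that edge, which is the cue the scale-space model builds on.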

    Robot navigation in vineyards based on the visual vanishing point concept

    One of the biggest challenges of autonomous navigation for agricultural robots is path following across large maps and varied terrain. An important ability is following corridors or vine rows, which are frequent situations with some complexity given the outline of real vegetation. One method to locate and guide the robot between vineyard rows is vanishing-point detection on the vine rows, used to obtain a reference point and send adequate velocity commands to the motors. This detection will be implemented using conventional image-processing algorithms and Deep Learning techniques, which will need to be adapted for use in a ROS 2 context
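
The reference-point idea can be sketched as a least-squares intersection of detected row lines: once line segments along the vine rows are available (e.g. from an edge detector plus a Hough transform), their common intersection estimates the vanishing point. This pure-Python sketch assumes segments are given as point pairs; the function name is illustrative, not from the abstract:

```python
def vanishing_point(segments):
    """Least-squares intersection of 2-D line segments ((x1, y1), (x2, y2)).

    Each segment defines a line a*x + b*y = c with unit normal (a, b).
    Minimising the summed squared distances to all lines gives a 2x2
    normal-equation system solved in closed form.
    """
    Saa = Sab = Sbb = Sac = Sbc = 0.0
    for (x1, y1), (x2, y2) in segments:
        a, b = y2 - y1, x1 - x2          # normal of the segment's line
        norm = (a * a + b * b) ** 0.5
        a, b = a / norm, b / norm
        c = a * x1 + b * y1
        Saa += a * a; Sab += a * b; Sbb += b * b
        Sac += a * c; Sbc += b * c
    det = Saa * Sbb - Sab * Sab          # non-zero if lines are not all parallel
    return ((Sac * Sbb - Sbc * Sab) / det,
            (Sbc * Saa - Sac * Sab) / det)
```

The horizontal offset of the returned point from the image centre could then drive the angular-velocity command toward the motors.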

    A method for vehicle count in the presence of multiple-vehicle occlusions in traffic images

    This paper proposes a novel method for accurately counting the number of vehicles that are involved in multiple-vehicle occlusions, based on the resolvability of each occluded vehicle, as seen in a monocular traffic image sequence. Assuming that the occluded vehicles are segmented from the road background by a previously proposed vehicle segmentation method and that a deformable model is geometrically fitted onto the occluded vehicles, the proposed method first deduces the number of vertices per individual vehicle from the camera configuration. Second, a contour description model is utilized to describe the direction of the contour segments with respect to their vanishing points, from which individual contour descriptions and the vehicle count are determined. Third, it assigns a resolvability index to each occluded vehicle based on a resolvability model, from which each occluded vehicle model is resolved and the vehicle dimensions are measured. The proposed method has been tested on 267 sets of real-world monocular traffic images containing 3074 vehicles with multiple-vehicle occlusions and is found to be 100% accurate in calculating vehicle count, in comparison with human inspection. By comparing the estimated dimensions of the resolved generalized deformable model of the vehicle with the actual dimensions published by the manufacturers, the root-mean-square errors for the width, length, and height estimations are found to be 48, 279, and 76 mm, respectively. © 2007 IEEE.

    Visual and Camera Sensors

    This book includes 13 papers published in the Special Issue ("Visual and Camera Sensors") of the journal Sensors. The goal of this Special Issue was to invite high-quality, state-of-the-art research papers dealing with challenging issues in visual and camera sensors

    Three-dimensional model-based human detection in crowded scenes

    In this paper, the problem of human detection in crowded scenes is formulated as a maximum a posteriori problem in which, given a set of candidates, predefined 3-D human shape models are matched with image evidence, provided by foreground extraction and probability of boundary, to estimate the human configuration. The optimal solution is obtained by decomposing the mutually related candidates into unoccluded and occluded ones in each iteration, according to a graph description of the candidate relations, and then matching models only for the unoccluded candidates. A candidate validation and rejection process based on minimum description length and local occlusion reasoning is carried out after each iteration of model matching. The advantage of the proposed optimization procedure is that its computational cost is much smaller than that of global optimization methods, while its performance is comparable to theirs. The proposed method achieves a detection rate about 2% higher on a subset of images of the Caviar data set than the best result reported in previous works. We also demonstrate the performance of the proposed method on another challenging data set. © 2011 IEEE.

    Automatic Pipeline Surveillance Air-Vehicle

    This thesis presents the development of a vision-based system for aerial pipeline Right-of-Way surveillance using optical/infrared sensors mounted on Unmanned Aerial Vehicles (UAVs). The aim of the research is to develop a highly automated, on-board system for detecting and following pipelines while simultaneously detecting any third-party interference. The proposed approach of using a UAV platform could potentially reduce the cost of monitoring and surveying pipelines when compared to manned aircraft. The main contributions of this thesis are the development of the image-analysis algorithms, the overall system architecture, and validation in hardware based on a scaled-down test environment. To evaluate the performance of the system, the algorithms were coded in the Python programming language. A small-scale test rig of the pipeline structure, together with expected third-party interference, was set up to simulate the operational environment and capture/record data for algorithm testing and validation. The pipeline endpoints are identified by transforming the 16-bit depth data of the explored environment into 3D point-cloud world coordinates. Then, using the Random Sample Consensus (RANSAC) approach, the foreground and background are separated in the transformed 3D point cloud to extract the plane that corresponds to the ground. Simultaneously, the boundaries of the explored environment are detected in the 16-bit depth data using a Canny detector. These boundaries are then transformed into a 3D point cloud and filtered based on the real height of the pipeline, for fast and accurate measurement, using the Euclidean distance of each boundary point relative to the ground plane extracted previously. The filtered boundaries, transformed back into 16-bit depth data, are used to detect the straight lines of the object boundary (Hough lines) with a Hough transform method.
The pipeline is verified by estimating a centre-line segment from the 3D point cloud of each pair of Hough line segments (transformed into 3D). The corresponding pipeline point cloud is then filtered for linearity within the width of the pipeline using Euclidean distance in the foreground point cloud, and the segment length of the detected centre line is refined to match the exact pipeline segment by extending it along the filtered point cloud of the pipeline. Third-party interference is detected based on four parameters, namely: foreground depth data; pipeline depth data; pipeline endpoint locations in the 3D point cloud; and Right-of-Way distance. The techniques include detection, classification, and localization algorithms. Finally, a waypoint-based navigation system was implemented for the air vehicle to fly over course waypoints generated online, with a heading-angle demand to follow the pipeline structure in real time based on online identification of the pipeline endpoints relative to the camera frame
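
The RANSAC ground-plane step described above can be sketched in a few lines of pure Python. This is a minimal illustration, not the thesis implementation: the function name, thresholds, and iteration count are assumptions, and the synthetic data stand in for the depth-derived point cloud:

```python
import random

def ransac_plane(points, iters=200, thresh=0.05, seed=0):
    """Fit a dominant plane n·p = d to 3-D points by RANSAC.

    Repeatedly samples three points, forms the plane through them via a
    cross product, and keeps the plane with the most inliers (points
    within `thresh` of the plane). Returns ((n, d), inliers).
    """
    rng = random.Random(seed)
    best_inliers, best_plane = [], None
    for _ in range(iters):
        p1, p2, p3 = rng.sample(points, 3)
        u = [p2[i] - p1[i] for i in range(3)]
        v = [p3[i] - p1[i] for i in range(3)]
        # Plane normal = u x v.
        n = (u[1] * v[2] - u[2] * v[1],
             u[2] * v[0] - u[0] * v[2],
             u[0] * v[1] - u[1] * v[0])
        norm = sum(c * c for c in n) ** 0.5
        if norm < 1e-9:
            continue  # degenerate (collinear) sample
        n = tuple(c / norm for c in n)
        d = sum(n[i] * p1[i] for i in range(3))
        inliers = [p for p in points
                   if abs(sum(n[i] * p[i] for i in range(3)) - d) < thresh]
        if len(inliers) > len(best_inliers):
            best_inliers, best_plane = inliers, (n, d)
    return best_plane, best_inliers
```

On a point cloud containing a flat ground patch plus points raised by the pipeline's height, the returned inlier set is the ground plane, and the remaining (off-plane) points are the candidates filtered by Euclidean distance in the steps above.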