
    Monovision-based vehicle detection, distance and relative speed measurement in urban traffic

    This study presents a monovision-based system for on-road vehicle detection and for computing distance and relative speed in urban traffic. Many works have dealt with monovision vehicle detection, but only a few provide the distance to the vehicle, which is essential for the control of an intelligent transportation system. The proposed system uses a single camera, reducing the monetary cost compared with stereovision and RADAR-based technologies. The algorithm is divided into three major stages. For vehicle detection, the authors use a combination of two features: the shadow underneath the vehicle and horizontal edges. They propose a new method for shadow thresholding based on grey-scale histogram assessment of a region of interest on the road. In the second and third stages, vehicle hypothesis verification and distance are obtained by means of the number plate, whose dimensions and shape are standardised in each country. Analysis of consecutive frames is used to calculate the relative speed of the detected vehicle. Experimental results showed excellent performance in vehicle and number plate detection and in distance measurement, in terms of accuracy and robustness in complex traffic scenarios and under different lighting conditions.
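    As a rough illustration of the geometry involved (not the authors' implementation), the sketch below estimates distance from the apparent width of a number plate of standardised size via the pinhole model, and derives relative speed from two consecutive frames; the plate width, focal length, and function names are assumptions.

```python
# Minimal sketch (not the authors' code): pinhole-camera distance from a
# detected number plate of standardised width, and relative speed from
# two consecutive frames. All numbers below are illustrative assumptions.

EU_PLATE_WIDTH_M = 0.520      # assumed standard plate width for the country
FOCAL_LENGTH_PX = 1400.0      # camera focal length in pixels (from calibration)

def distance_from_plate(plate_width_px: float) -> float:
    """Distance to the plate plane via similar triangles: Z = f * W / w."""
    return FOCAL_LENGTH_PX * EU_PLATE_WIDTH_M / plate_width_px

def relative_speed(width_px_t0: float, width_px_t1: float, dt_s: float) -> float:
    """Relative speed (m/s) from the change in estimated distance between frames.
    Positive values mean the lead vehicle is pulling away."""
    d0 = distance_from_plate(width_px_t0)
    d1 = distance_from_plate(width_px_t1)
    return (d1 - d0) / dt_s

if __name__ == "__main__":
    # Plate appears 65 px wide in one frame and 70 px wide 0.04 s later (25 fps).
    print(f"distance now: {distance_from_plate(70):.2f} m")
    print(f"relative speed: {relative_speed(65, 70, 0.04):.2f} m/s")
```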

    Vehicle Distance Detection Using Monocular Vision and Machine Learning

    With the development of new cutting-edge technology, autonomous vehicles (AVs) have become a central topic for much of the automotive industry. For an AV to be used safely on public roads, it needs to be able to perceive its surrounding environment and make decisions in real time. A fully capable AV is not yet available for general public use, but advanced driver assistance systems (ADAS) have already been integrated into everyday vehicles, and it is expected that these systems will evolve and work together to form the fully autonomous vehicles of the future. This thesis focuses on combining ADAS with artificial intelligence (AI) models. Since neural networks (NNs) can be unpredictable, the main aspect of this thesis is investigating which neural network architecture is most accurate at perceiving the distance between vehicles, and whether AI can safely be used as a central processor for an AV. The ADAS created in this thesis relies on monocular vision and machine learning. A dataset of 200,000 images was used to train a neural network (NN) model that detects whether an image contains a license plate with 96.75% accuracy. A sliding window classifies each sub-section of the image; if a sub-section is identified as a license plate, the algorithm stores it. The stored sub-images are passed through a heatmap threshold to help minimize false detections. Once the license plate is detected, the final algorithm calculates the distance to the corresponding vehicle and outputs the result to the user, achieving a distance accuracy of up to 1 meter. This ADAS is intended to be usable by the public and easily integrated into future AV systems.
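    The sketch below outlines the sliding-window and heatmap-thresholding idea described above in simplified form; the window size, stride, threshold, and the placeholder classifier are assumptions, not the thesis implementation.

```python
# Minimal sketch (assumed pipeline, not the thesis code): slide a window over
# the image, score each crop with a plate/no-plate classifier, accumulate the
# scores into a heatmap, and threshold the heatmap to suppress false positives.
import numpy as np

def sliding_window_heatmap(image: np.ndarray, classify, win=(64, 128), stride=32):
    """`classify(crop) -> probability that the crop shows a license plate`.
    Returns a per-pixel heatmap of accumulated detection scores."""
    h, w = image.shape[:2]
    heat = np.zeros((h, w), dtype=np.float32)
    wh, ww = win
    for y in range(0, h - wh + 1, stride):
        for x in range(0, w - ww + 1, stride):
            score = classify(image[y:y + wh, x:x + ww])
            heat[y:y + wh, x:x + ww] += score
    return heat

def threshold_heatmap(heat: np.ndarray, thresh: float = 2.0) -> np.ndarray:
    """Keep only regions supported by several overlapping high-score windows."""
    return (heat >= thresh).astype(np.uint8)

if __name__ == "__main__":
    # Dummy classifier stands in for the trained NN model described above.
    img = np.random.rand(480, 640).astype(np.float32)
    dummy = lambda crop: float(crop.mean() > 0.5)
    mask = threshold_heatmap(sliding_window_heatmap(img, dummy))
    print("candidate plate pixels:", int(mask.sum()))
```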

    Multi-Object Tracking based Roadside Parking Behavior Recognition

    Roadside parking spaces can alleviate the shortage of parking spaces, but current charging methods for roadside parking have shortcomings. Popular approaches include manual charging, geomagnetic detection, and parking meters, all of which have limitations such as high cost, difficult deployment, and low public acceptance. To address these shortcomings, this thesis proposes a scheme based on deep learning and image recognition: vehicles are detected and tracked, license plates are recognized, parking behavior is classified, and parking periods are recorded through a monocular camera. The scheme has the advantages of convenient deployment, low labor cost, high efficiency, and high accuracy. The main work of this thesis is as follows: (1) based on the You Only Look Once (YOLO) algorithm, a trapezoidal convolution algorithm is proposed to detect objects and to improve detection efficiency for vehicles that appear far away and small in the image; (2) a one-stage license plate recognition scheme based on YOLO is proposed to simplify the license plate recognition process; (3) drawing on the characteristics of vehicles, a feature extraction model, called the horizontal and vertical separation model, is proposed and combined with the deep Simple Online and Real-time Tracking (SORT) object tracking framework to track vehicles and improve tracking efficiency; (4) a Long Short-Term Memory (LSTM) model classifies vehicle behavior into three types: park, leave, and no behavior; (5) these modules are integrated, and the engineering code is extensively debugged, to realize a complete Roadside Parking Behavior Recognition (RPBR) system.
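    As a minimal sketch of step (4) only, the following PyTorch snippet classifies a tracked vehicle's per-frame features into park, leave, or no behavior with a small LSTM; the feature dimension, hidden size, and class ordering are assumptions rather than the thesis model.

```python
# Minimal sketch (assumed architecture, not the thesis model): classify a
# tracked vehicle's trajectory into {park, leave, no behavior} with an LSTM.
import torch
import torch.nn as nn

class ParkingBehaviorLSTM(nn.Module):
    def __init__(self, feat_dim=4, hidden=64, num_classes=3):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)   # park / leave / no behavior

    def forward(self, tracks):                       # tracks: (B, T, feat_dim)
        _, (h_n, _) = self.lstm(tracks)              # use the final hidden state
        return self.head(h_n[-1])                    # logits: (B, num_classes)

if __name__ == "__main__":
    # One batch of 8 tracks, each 30 frames of (x, y, w, h) bounding-box features.
    model = ParkingBehaviorLSTM()
    logits = model(torch.randn(8, 30, 4))
    print(logits.argmax(dim=1))                      # predicted behavior per track
```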

    A method for vehicle count in the presence of multiple-vehicle occlusions in traffic images

    This paper proposes a novel method for accurately counting the number of vehicles involved in multiple-vehicle occlusions, based on the resolvability of each occluded vehicle as seen in a monocular traffic image sequence. Assuming that the occluded vehicles are segmented from the road background by a previously proposed vehicle segmentation method and that a deformable model is geometrically fitted onto the occluded vehicles, the proposed method first deduces the number of vertices per individual vehicle from the camera configuration. Second, a contour description model is utilized to describe the direction of the contour segments with respect to their vanishing points, from which the individual contour descriptions and the vehicle count are determined. Third, it assigns a resolvability index to each occluded vehicle based on a resolvability model, from which each occluded vehicle model is resolved and the vehicle dimensions are measured. The proposed method has been tested on 267 sets of real-world monocular traffic images containing 3074 vehicles with multiple-vehicle occlusions and is found to be 100% accurate in calculating the vehicle count, in comparison with human inspection. By comparing the estimated dimensions of the resolved generalized deformable model of the vehicle with the actual dimensions published by the manufacturers, the root-mean-square errors for width, length, and height estimation are found to be 48, 279, and 76 mm, respectively. © 2007 IEEE.
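    The reported dimension accuracy can be illustrated with a short sketch of the evaluation metric only (not the resolvability algorithm itself): the per-axis root-mean-square error between estimated and manufacturer-published vehicle dimensions. The example values are illustrative assumptions.

```python
# Minimal sketch of the evaluation metric only (not the resolution algorithm):
# root-mean-square error between model-estimated vehicle dimensions and the
# manufacturer-published dimensions, per axis, in millimetres.
import numpy as np

def dimension_rmse(estimated_mm: np.ndarray, published_mm: np.ndarray) -> np.ndarray:
    """estimated_mm, published_mm: (N, 3) arrays of (width, length, height).
    Returns the per-axis RMSE, comparable to the reported 48/279/76 mm figures."""
    err = estimated_mm - published_mm
    return np.sqrt((err ** 2).mean(axis=0))

if __name__ == "__main__":
    est = np.array([[1790.0, 4520.0, 1460.0], [1750.0, 4300.0, 1500.0]])
    pub = np.array([[1760.0, 4620.0, 1430.0], [1735.0, 4250.0, 1495.0]])
    print(dimension_rmse(est, pub))   # RMSE for (width, length, height)
```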

    Autonomous Grasping Using Novel Distance Estimator

    This paper introduces a novel distance estimator using monocular vision for autonomous underwater grasping. The presented method is also applicable to topside grasping operations. The estimator is developed for robot manipulators with a monocular camera placed near the gripper. Because the camera is attached near the gripper, images can be captured from different positions whose relative displacement can be measured, which makes it possible to estimate the relative distance to an object of unknown size with good precision. The manipulator used in the presented work is the SeaArm-2, a fully electric, small, modular underwater manipulator. The manipulator is unique in its monocular camera integrated into the end-effector module, and its design facilitates the use of different end-effector tools. The camera is used for supervision, object detection, and tracking. The distance estimator was validated in a laboratory setting through autonomous grasping experiments, in which the manipulator was able to search for, find, estimate the relative distance to, grasp, and retrieve the relevant object in 12 out of 12 trials.
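    A minimal sketch of the underlying geometric idea, under the simplifying assumption that the camera moves a known distance straight towards the object between two images: with a pinhole model the apparent size is inversely proportional to distance, so two measurements suffice to recover the range to an object of unknown size. The function name and numbers are illustrative, not the paper's estimator.

```python
# Minimal sketch (assumed geometry, not the paper's estimator): estimate the
# distance to an object of unknown size from two images taken at camera
# positions separated by a known displacement `delta` along the optical axis.
# With a pinhole model the apparent size s is proportional to 1/distance, so
#   s1 = k / d1,  s2 = k / (d1 - delta)  =>  d1 = delta * s2 / (s2 - s1).

def distance_from_two_views(size_px_far: float, size_px_near: float, delta_m: float) -> float:
    """size_px_far: apparent object size before moving; size_px_near: after
    moving `delta_m` metres straight towards the object (so it appears larger)."""
    if size_px_near <= size_px_far:
        raise ValueError("object should appear larger after approaching it")
    return delta_m * size_px_near / (size_px_near - size_px_far)

if __name__ == "__main__":
    # Object grows from 80 px to 100 px after advancing 0.10 m towards it.
    print(f"initial distance: {distance_from_two_views(80, 100, 0.10):.3f} m")
```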

    A Study on Recent Developments and Issues with Obstacle Detection Systems for Automated Vehicles

    This paper reviews current developments and discusses some critical issues with obstacle detection systems for automated vehicles. The concept of autonomous driving is the driving force behind future mobility, and obstacle detection systems play a crucial role in implementing and deploying autonomous driving on our roads and city streets. The current review examines the technology and existing systems for obstacle detection. Specifically, we look at the performance of LIDAR, RADAR, vision cameras, ultrasonic sensors, and IR sensors and review their capabilities and behaviour in a number of different situations: during daytime, at night, in extreme weather conditions, in urban areas, in the presence of smooth surfaces, in situations where emergency service vehicles need to be detected and recognised, and in situations where potholes need to be observed and measured. It is suggested that combining different technologies for obstacle detection gives a more accurate representation of the driving environment. In particular, for obstacle detection in extreme weather conditions (rain, snow, fog) and in some specific urban situations (shadows, reflections, potholes, insufficient illumination), the current solutions, although already quite advanced, do not appear sophisticated enough to guarantee 100% precision and accuracy, hence further substantial effort is needed.

    Cost-effective visual odometry system for vehicle motion control in agricultural environments

    In precision agriculture, innovative cost-effective technologies and new improved solutions, aimed at making operations and processes more reliable, robust and economically viable, are still needed. In this context, robotics and automation play a crucial role, with particular reference to unmanned vehicles for crop monitoring and site-specific operations. However, unstructured and irregular working environments, such as agricultural scenarios, require specific solutions for the positioning and motion control of autonomous vehicles. In this paper, a reliable and cost-effective monocular visual odometry system, properly calibrated for the localisation and navigation of tracked vehicles on agricultural terrains, is presented. The main contribution of this work is the design and implementation of an enhanced image processing algorithm based on the cross-correlation approach. It was specifically developed to use simplified hardware and a low-complexity mechanical system without compromising performance. By providing sub-pixel results, the presented algorithm makes it possible to exploit low-resolution images, thus obtaining high accuracy in motion estimation with short computing time. The results, in terms of odometry accuracy and processing time, achieved during the in-field experimentation campaign on several terrains, proved the effectiveness of the proposed method and its suitability for automatic control solutions in precision agriculture applications.
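    The sketch below shows one common way to obtain a sub-pixel shift between consecutive ground images by cross-correlation with a parabolic peak fit; it is in the spirit of the approach described above but is not the authors' algorithm, and the FFT-based correlation and all parameters are assumptions.

```python
# Minimal sketch (not the authors' algorithm): estimate the 2-D shift between
# two consecutive ground images by cross-correlation, with a parabolic fit
# around the correlation peak to obtain sub-pixel resolution.
import numpy as np

def subpixel_shift(prev: np.ndarray, curr: np.ndarray):
    """Returns the (dy, dx) displacement of `curr` relative to `prev` in pixels."""
    a = prev - prev.mean()
    b = curr - curr.mean()
    # corr[d] = sum_x a(x) * b(x + d): peaks at the displacement of curr.
    corr = np.real(np.fft.ifft2(np.conj(np.fft.fft2(a)) * np.fft.fft2(b)))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape

    def refine(c0, c1, c2):          # 1-D parabolic interpolation around the peak
        denom = c0 - 2 * c1 + c2
        return 0.0 if denom == 0 else 0.5 * (c0 - c2) / denom

    off_y = refine(corr[(dy - 1) % h, dx], corr[dy, dx], corr[(dy + 1) % h, dx])
    off_x = refine(corr[dy, (dx - 1) % w], corr[dy, dx], corr[dy, (dx + 1) % w])
    # Wrap shifts larger than half the image size back to negative values.
    dy = dy - h if dy > h // 2 else dy
    dx = dx - w if dx > w // 2 else dx
    return dy + off_y, dx + off_x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((128, 128))
    shifted = np.roll(img, (3, 5), axis=(0, 1))      # known integer shift
    print(subpixel_shift(img, shifted))              # approximately (3.0, 5.0)
```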

    The Bubble Box: Towards an Automated Visual Sensor for 3D Analysis and Characterization of Marine Gas Release Sites

    Several acoustic and optical techniques have been used for characterizing natural and anthropogenic gas leaks (carbon dioxide, methane) from the ocean floor. Single-camera methods for bubble stream observation have become an important tool here, as they help estimate flux and bubble sizes under certain assumptions. However, they record only a projection of a bubble into the camera and therefore cannot capture the full 3D shape, which is particularly important for larger, non-spherical bubbles. The unknown distance of the bubble from the camera (making it appear larger or smaller than expected) as well as refraction at the camera interface introduce additional uncertainties. In this article, we introduce our wide-baseline stereo-camera deep-sea sensor, the Bubble Box, which overcomes these limitations by observing bubbles from two orthogonal directions using calibrated cameras. Besides the setup and hardware of the system, we discuss appropriate calibration and the automated processing steps (deblurring, detection, tracking, and 3D fitting) that are crucial to arrive at a 3D ellipsoidal shape and a rise speed for each bubble. The obtained values for single bubbles can be aggregated into statistical bubble size distributions or fluxes for extrapolation based on diffusion and dissolution models and large-scale acoustic surveys. We demonstrate and evaluate the wide-baseline stereo measurement model using a controlled test setup with ground truth information.
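    As a simplified illustration (not the Bubble Box processing chain), the sketch below assumes the two orthogonal views directly provide the projected bubble extents in metric units, from which ellipsoid semi-axes, volume, and rise speed follow; all variable names and numbers are illustrative.

```python
# Minimal sketch (assumed simplification, not the Bubble Box pipeline): with two
# calibrated, orthogonal views, the projected bubble extents give three
# ellipsoid semi-axes; volume and rise speed follow directly.
import math

def ellipsoid_from_orthogonal_views(w_front_m, h_front_m, w_side_m):
    """Semi-axes (a, b, c) from front-view width/height and side-view width.
    The vertical extent is assumed consistent between the two views."""
    return w_front_m / 2, w_side_m / 2, h_front_m / 2

def ellipsoid_volume(a, b, c):
    return 4.0 / 3.0 * math.pi * a * b * c           # m^3

def rise_speed(z0_m, z1_m, dt_s):
    """Vertical rise speed from bubble centre heights in two consecutive frames."""
    return (z1_m - z0_m) / dt_s

if __name__ == "__main__":
    a, b, c = ellipsoid_from_orthogonal_views(0.006, 0.004, 0.005)
    print(f"volume: {ellipsoid_volume(a, b, c) * 1e9:.1f} mm^3")
    print(f"rise speed: {rise_speed(0.10, 0.11, 0.04):.2f} m/s")
```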

    Fusion of Data from Heterogeneous Sensors with Distributed Fields of View and Situation Evaluation for Advanced Driver Assistance Systems

    In order to develop a driver assistance system for pedestrian protection, pedestrians in the environment of a truck are detected by radars and a camera and are tracked across distributed fields of view using a Joint Integrated Probabilistic Data Association filter. A robust approach to predicting the system vehicle's trajectory is presented; it serves the computation of a probabilistic collision risk based on reachable sets, in which different sources of uncertainty are taken into account.
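    A minimal sketch of a probabilistic collision risk computed by Monte-Carlo sampling over a pedestrian track with Gaussian uncertainty and a predicted ego-vehicle corridor; this is a simplified stand-in for the reachable-set computation described above, and all names, shapes, and parameters are assumptions.

```python
# Minimal sketch (assumed simplification, not the paper's reachable-set method):
# Monte-Carlo collision risk between the predicted ego-vehicle corridor and a
# tracked pedestrian with Gaussian position/velocity uncertainty.
import numpy as np

def collision_risk(ped_mean, ped_cov, ego_corridor, horizon_s=2.0, dt=0.1, n=5000, rng=None):
    """ped_mean: [x, y, vx, vy]; ped_cov: 4x4 covariance from the tracker.
    ego_corridor(t) -> (x_min, x_max, y_min, y_max) of the ego vehicle at time t.
    Returns the fraction of sampled pedestrian trajectories entering the corridor."""
    if rng is None:
        rng = np.random.default_rng(0)
    samples = rng.multivariate_normal(ped_mean, ped_cov, size=n)   # (n, 4)
    hit = np.zeros(n, dtype=bool)
    for t in np.arange(dt, horizon_s + dt, dt):
        px = samples[:, 0] + samples[:, 2] * t       # constant-velocity prediction
        py = samples[:, 1] + samples[:, 3] * t
        x0, x1, y0, y1 = ego_corridor(t)
        hit |= (px >= x0) & (px <= x1) & (py >= y0) & (py <= y1)
    return hit.mean()

if __name__ == "__main__":
    # Ego vehicle drives straight ahead at 10 m/s; its footprint is 2 m x 5 m.
    corridor = lambda t: (10.0 * t, 10.0 * t + 5.0, -1.0, 1.0)
    ped = np.array([15.0, 3.0, 0.0, -1.5])           # pedestrian crossing from the left
    cov = np.diag([0.5, 0.5, 0.2, 0.2])
    print(f"collision risk: {collision_risk(ped, cov, corridor):.2%}")
```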