
    Vision-Based Control of a Full-Size Car by Lane Detection

    Get PDF
    Autonomous driving is an area of increasing investment for researchers and auto manufacturers. Integration has already begun for self-driving cars in urban environments. An essential aspect of navigation in these areas is the ability to sense and follow lane markers. This thesis focuses on the development of a vision-based control platform using lane detection to control a full-sized electric vehicle with only a monocular camera. An open-source, integrated solution is presented for automation of a stock vehicle. Aspects of reverse engineering, system identification, and low-level control of the vehicle are discussed. This work also details methods for lane detection and the design of a non-linear vision-based control strategy
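The thesis's actual non-linear controller is not reproduced in the abstract; as a minimal sketch of the signal flow it describes (monocular camera, lane fit, steering command), the following fits left and right lane lines to already-extracted marker pixels and derives a simple proportional steering command from the lane-center offset. The function names, the proportional law, and the gain are illustrative assumptions, not the thesis's design.

```python
import numpy as np

def fit_lane_lines(left_pts, right_pts):
    """Fit x = a*y + b lines to detected left/right lane-marker pixels.

    Points are (x, y) image coordinates; fitting x as a function of y
    avoids infinite slopes for near-vertical lane lines.
    """
    left = np.polyfit(left_pts[:, 1], left_pts[:, 0], 1)
    right = np.polyfit(right_pts[:, 1], right_pts[:, 0], 1)
    return left, right

def steering_command(left, right, img_width, img_height, gain=0.005):
    """Proportional steering from the lane-center offset at the image bottom.

    (The thesis designs a non-linear vision-based controller; a plain
    proportional law is used here only to illustrate the pipeline.)
    """
    y = img_height - 1
    lane_center = (np.polyval(left, y) + np.polyval(right, y)) / 2.0
    offset = lane_center - img_width / 2.0  # pixels; + means lane center is right of image center
    return gain * offset                    # steering command, arbitrary units
```

With the vehicle centered between the two lane lines the commanded steering is zero; a lateral shift of the lane produces a correcting command of the same sign.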

    Computer Vision Based Structural Identification Framework for Bridge Health Monitoring

    Get PDF
    The objective of this dissertation is to develop a comprehensive Structural Identification (St-Id) framework, including damage assessment, for bridge-type structures by using cameras and computer vision technologies. Traditional St-Id frameworks rely on conventional sensors. In this study, the input and output data employed in the St-Id system are acquired by a series of vision-based measurements. The following novelties are proposed, developed and demonstrated in this project: a) vehicle load (input) modeling using computer vision, b) bridge response (output) measurement using a fully non-contact approach based on video/image processing, c) image-based structural identification using input-output measurements and new damage indicators. The input (loading) data due to vehicles, such as vehicle weights and vehicle locations on the bridge, are estimated by employing computer vision algorithms (detection, classification, and localization of objects) on video images of the vehicles. Meanwhile, the output data, structural displacements, are obtained by defining and tracking image key-points at measurement locations. Subsequently, the input and output data sets are analyzed to construct a novel type of damage indicator, named the Unit Influence Surface (UIS). Finally, a new damage detection and localization framework is introduced that does not require a dense network of sensors, but only a much smaller number of them. The main research significance is the first-time development of algorithms that transform measured video images into a form that is highly damage- and change-sensitive for bridge assessment within the context of Structural Identification with input and output characterization. The study exploits a unique attribute of computer vision systems: the signal is continuous in space. This requires new adaptations and transformations that can handle computer vision data/signals for structural engineering applications.
This research will significantly advance current sensor-based structural health monitoring with computer vision techniques, leading to practical applications for damage detection of complex structures with a novel approach. By using computer vision algorithms and cameras as special sensors for structural health monitoring, this study proposes an advanced approach to bridge monitoring through which certain types of data that cannot be collected by conventional sensors, such as vehicle loads and locations, can be obtained practically and accurately
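The core idea above, vision-derived loads (input) plus vision-derived displacements (output) yielding an influence function, can be sketched for the one-dimensional case as a least-squares problem. The discretization, node interpolation, and function names below are illustrative assumptions; the dissertation's Unit Influence Surface is a two-dimensional generalization not reproduced here.

```python
import numpy as np

def estimate_influence_line(load_positions, weights, displacements, n_nodes, span):
    """Least-squares estimate of a bridge influence line.

    Each observation k pairs a vehicle of known weight w_k at known
    position x_k (both obtained from video in the framework described
    above) with a measured displacement d_k = w_k * I(x_k). Discretizing
    I at n_nodes points with linear interpolation gives a linear system
    A @ I = d solved in the least-squares sense.
    """
    nodes = np.linspace(0.0, span, n_nodes)
    A = np.zeros((len(load_positions), n_nodes))
    for k, (x, w) in enumerate(zip(load_positions, weights)):
        j = np.clip(np.searchsorted(nodes, x) - 1, 0, n_nodes - 2)
        t = (x - nodes[j]) / (nodes[j + 1] - nodes[j])   # position within segment
        A[k, j] = w * (1.0 - t)                          # interpolation weights
        A[k, j + 1] = w * t
    influence, *_ = np.linalg.lstsq(A, np.asarray(displacements), rcond=None)
    return nodes, influence
```

Changes in the recovered influence line between inspection epochs can then serve as a damage indicator, which is the role the UIS plays in the dissertation.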

    Vision-based vehicle detection and tracking in intelligent transportation system

    Get PDF
    This thesis aims to realize vision-based vehicle detection and tracking in the Intelligent Transportation System. First, it introduces the methods for vehicle detection and tracking. Next, it establishes the sensor fusion framework of the system, including the dynamic model and sensor model. Then, it simulates the traffic scene at a crossroad with a driving simulator, where the research target is a single car and the traffic scene is ideal. The YOLO neural network is applied to the image sequence for vehicle detection. The Kalman filter, extended Kalman filter, and particle filter methods are utilized and compared for vehicle tracking. The following part is the practical experiment, where multiple vehicles appear at the same time and the traffic scene is real life with various interference factors. The YOLO neural network combined with OpenCV is adopted to realize real-time vehicle detection. The Kalman filter and extended Kalman filter are applied for vehicle tracking; an identification algorithm is proposed to solve occlusion of the vehicles. The effects of process noise and measurement noise are analysed using a variable-controlling approach. Additionally, perspective transformation is illustrated and implemented to transfer coordinates from the image plane to the ground plane. If vision-based vehicle detection and tracking can be realized and popularized in daily life, vehicle information can be shared among infrastructure, vehicles, and users, so as to build interactions inside the Intelligent Transportation System
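The detector-plus-Kalman-filter pipeline described above can be sketched with a standard constant-velocity Kalman filter over detected box centers. The state layout, noise values, and class name are illustrative assumptions; the thesis's exact models (and its extended/particle-filter variants) are not reproduced.

```python
import numpy as np

class KalmanTracker:
    """Constant-velocity Kalman filter for image-plane vehicle tracking.

    State: [x, y, vx, vy]; measurement: a detected box center [x, y]
    (e.g. from a YOLO detection, as in the pipeline described above).
    """
    def __init__(self, x0, y0, dt=1.0, q=1e-2, r=1.0):
        self.x = np.array([x0, y0, 0.0, 0.0])
        self.P = np.eye(4) * 10.0                     # initial uncertainty
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)
        self.Q = np.eye(4) * q                        # process noise
        self.R = np.eye(2) * r                        # measurement noise

    def step(self, z):
        # Predict with the constant-velocity motion model
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update with measurement z = [x, y]
        y = np.asarray(z, float) - self.H @ self.x    # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)      # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]                             # filtered position
```

Between frames, the prediction step also supplies a search location that helps re-associate a vehicle after brief occlusion, which is the role of the identification algorithm mentioned above.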

    A Vision Based Lane Marking Detection, Tracking and Vehicle Detection on Highways

    Get PDF
    Changing road conditions are an important issue in automated vehicle navigation applications, largely because lane markings vary greatly in appearance under factors such as heavy traffic and the changing daylight conditions over the course of a day. A lane detection system is an essential component of many automated vehicle systems. In this paper, we address these issues through lane detection and vehicle detection algorithms that handle challenging scenarios such as lane endings and curves, worn lane markings, and lane changes. Left and right lane boundaries are detected independently to handle merging and splitting lanes effectively using a robust algorithm. Vehicle detection is another issue in automated vehicle navigation. Various vehicle detection approaches have been implemented, but it is hard to find a fast and reliable algorithm for applications such as collision warning or lane-change assistance. Vision-based vehicle detection can also improve collision-warning performance when it is combined with a lane marking detection algorithm. In collision-warning applications, it is important to know whether an obstacle is in the same lane as the ego vehicle or not

    Complete Solution for Vehicle Re-ID in Surround-view Camera System

    Full text link
    Vehicle re-identification (Re-ID) is a critical component of the autonomous driving perception system, and research in this area has accelerated in recent years. However, there is as yet no perfect solution to the vehicle re-identification issue associated with the car's surround-view camera system. Our analysis identifies two significant issues in this scenario: i) it is difficult to identify the same vehicle across many image frames due to the unique construction of the fisheye camera; ii) the appearance of the same vehicle seen through the surround-view system's several cameras is rather different. To overcome these issues, we propose an integrated vehicle Re-ID solution. On the one hand, we provide a technique for determining the consistency of the tracking-box drift with respect to the target. On the other hand, we combine a Re-ID network based on the attention mechanism with spatial constraints to increase performance in situations involving multiple cameras. Finally, our approach combines state-of-the-art accuracy with real-time performance. We will soon make the source code and annotated fisheye dataset available.
    Comment: 11 pages, 10 figures. arXiv admin note: substantial text overlap with arXiv:2006.1650
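The combination of an appearance affinity with a spatial constraint can be sketched as below: cosine similarity between Re-ID embeddings, gated by the candidates' distance in a common ground frame. The function, threshold values, and gating scheme are illustrative assumptions; the paper's attention-based network and its exact spatial constraint are not reproduced here.

```python
import numpy as np

def match_vehicles(query_emb, query_pos, gallery_embs, gallery_pos,
                   max_dist=5.0, sim_threshold=0.5):
    """Rank gallery vehicles against a query: appearance plus a spatial gate.

    Appearance affinity is the cosine similarity of Re-ID embeddings;
    candidates farther than max_dist (e.g. metres in a ground frame
    shared by the surround-view cameras) are excluded. Returns the best
    gallery index, or -1 if no candidate passes both gates.
    """
    q = query_emb / np.linalg.norm(query_emb)
    g = gallery_embs / np.linalg.norm(gallery_embs, axis=1, keepdims=True)
    sims = g @ q                                             # cosine similarities
    dists = np.linalg.norm(gallery_pos - query_pos, axis=1)  # spatial distances
    sims[dists > max_dist] = -np.inf                         # spatial gate
    best = int(np.argmax(sims))
    return best if sims[best] >= sim_threshold else -1
```

The spatial gate is what prevents a visually identical but physically distant vehicle from being matched, which is the failure mode multi-camera fisheye setups are prone to.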

    Machine Learning based Vehicle Counting and Detection System

    Get PDF
    Vehicle detection is an instance of object identification in computer vision, the study of how machines perceive in place of humans. The primary purpose of a vehicle detection system is to identify one or multiple vehicles within input images and a live video feed. A dataset is used to train image processing algorithms for tasks like detection and tracking. To pinpoint the defects and strengths of each image processing system, assessment criteria are used to develop, train, test, and compare them. To recognize, track, and count vehicles in images and videos, image processing algorithms such as the CNN-based YOLOv3 and SVM are implemented. The main goal of this work is to develop a system that can intelligently identify and track automobiles in still images and video footage. The results demonstrated that CNN-based YOLOv3 does a good job of detecting and tracking vehicles
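Once detections are associated into per-vehicle tracks, counting reduces to detecting when a track crosses a virtual line. A minimal sketch of that counting step (the function name, track format, and single counting line are illustrative assumptions, not the paper's implementation):

```python
def count_line_crossings(tracks, line_y):
    """Count vehicles whose tracked center crosses a virtual counting line.

    tracks: {track_id: [(x, y), ...]} per-frame box centers produced by
    the detector/tracker (e.g. YOLOv3 boxes associated over time).
    A vehicle is counted once, the first time its y coordinate passes
    line_y in either direction.
    """
    count = 0
    for track_id, centers in tracks.items():
        for (x0, y0), (x1, y1) in zip(centers, centers[1:]):
            if (y0 - line_y) * (y1 - line_y) < 0:  # sign change => crossing
                count += 1
                break                               # count each vehicle once
    return count
```

Counting per track rather than per detection is what keeps a slow vehicle that lingers near the line from being counted repeatedly.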

    Sensor fusion methodology for vehicle detection

    Get PDF
    A novel sensor fusion methodology is presented, which provides intelligent vehicles with augmented environment information and knowledge, enabled by a vision-based system, a laser sensor, and a global positioning system. The presented approach achieves safer roads through data fusion techniques, especially on single-lane carriageways, where casualties are higher than in other road classes, and focuses on the interplay between vehicle drivers and intelligent vehicles. The system is based on the reliability of the laser scanner for obstacle detection, the use of camera-based identification techniques, and advanced tracking and data association algorithms, i.e. the Unscented Kalman Filter and Joint Probabilistic Data Association. The achieved results foster the implementation of the sensor fusion methodology in forthcoming Intelligent Transportation Systems
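The elementary fusion step beneath schemes like the one described, combining camera and laser estimates of the same obstacle weighted by their covariances, can be sketched as below. This is the textbook information-form fusion rule, shown for illustration; the paper's full Unscented Kalman Filter with Joint Probabilistic Data Association is not reproduced here.

```python
import numpy as np

def fuse_estimates(z_cam, R_cam, z_laser, R_laser):
    """Covariance-weighted fusion of two position estimates of one obstacle.

    Each sensor reports a position z with covariance R; the fused
    estimate weights each by its inverse covariance (information),
    so the more certain sensor dominates.
    """
    I_cam = np.linalg.inv(R_cam)
    I_las = np.linalg.inv(R_laser)
    P = np.linalg.inv(I_cam + I_las)           # fused covariance
    z = P @ (I_cam @ z_cam + I_las @ z_laser)  # fused estimate
    return z, P
```

With a camera four times noisier than the laser scanner, the fused position lands much closer to the laser measurement, reflecting the paper's reliance on the laser for obstacle detection.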

    Automatic expert system based on images for accuracy crop row detection in maize fields

    Get PDF
    This paper proposes an automatic expert system for accurate crop row detection in maize fields based on images acquired from a vision system. Different applications in maize, particularly those based on site-specific treatments, require the identification of the crop rows. The vision system is designed with a defined geometry and installed onboard a mobile agricultural vehicle, i.e. subject to vibrations, turns and uncontrolled movements. Crop rows can be estimated by applying geometrical parameters under image perspective projection. Because of the above undesired effects, the estimation most often turns out to be inaccurate compared to the real crop rows. The proposed expert system exploits human knowledge, which is mapped into two modules based on image processing techniques. The first is intended for separating green plants (crops and weeds) from the rest (soil, stones and others). The second is based on the system geometry: the expected crop lines are mapped onto the image, and a correction is then applied through the well-tested and robust Theil–Sen estimator in order to adjust them to the real ones. Its performance compares favorably against the classical Pearson product–moment correlation coefficient
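The Theil–Sen estimator named above is simple to state: the line slope is the median of the slopes over all point pairs, which makes it robust to outliers such as weed patches mistaken for crop-row pixels. A straightforward O(n²) sketch (the paper's surrounding correction pipeline is not reproduced):

```python
import numpy as np

def theil_sen(x, y):
    """Theil-Sen line fit: median of slopes over all point pairs.

    Because the median ignores a minority of wild pairwise slopes, a
    handful of outlying points barely perturbs the fitted line, unlike
    ordinary least squares.
    """
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    slopes = []
    n = len(x)
    for i in range(n):
        for j in range(i + 1, n):
            if x[j] != x[i]:                      # skip vertical pairs
                slopes.append((y[j] - y[i]) / (x[j] - x[i]))
    slope = np.median(slopes)
    intercept = np.median(y - slope * x)          # median residual intercept
    return slope, intercept
```

A single gross outlier among ten collinear points leaves the recovered slope and intercept essentially unchanged, which is the property the crop-line correction relies on.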

    Aerial Manipulation Using a Novel Unmanned Aerial Vehicle Cyber-Physical System

    Full text link
    Unmanned Aerial Vehicles (UAVs) are attaining ever more maneuverability and sensing ability as a promising teleoperation platform for intelligent interaction with the environment. This work presents a novel 5-degree-of-freedom (DoF) unmanned aerial vehicle (UAV) cyber-physical system for aerial manipulation. The UAV's body is capable of exerting powerful propulsion force in the longitudinal direction, decoupling the translational dynamics from the rotational dynamics on the longitudinal plane. A high-level impedance control law is proposed to drive the vehicle for trajectory tracking and interaction with the environment. In addition, a vision-based real-time target identification and tracking method, integrating a YOLO v3 real-time object detector with feature tracking and morphological operations, is proposed for onboard implementation with the support of model compression techniques, eliminating the latency caused by wireless video transmission and the heavy computation burden of traditional teleoperation platforms.
    Comment: Newsletter of IEEE Technical Committee on Cyber-Physical System
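An impedance control law, as mentioned above, makes the vehicle along one axis behave like a chosen mass-spring-damper: M·ẍ + D·ẋ + K·(x − x_ref) = f_ext. The sketch below integrates that target dynamics directly to show the resulting behaviour; the gains, the single-axis simplification, and the Euler integration are illustrative assumptions, not the paper's design.

```python
def simulate_impedance(x_ref, f_ext, M=2.0, D=8.0, K=10.0, dt=0.01, steps=2000):
    """Simulate the target impedance dynamics along one axis:

        M*x_dd + D*x_d + K*(x - x_ref) = f_ext

    An impedance controller commands thrust so the closed loop matches
    this mass-spring-damper; here we integrate the target dynamics with
    explicit Euler to illustrate the behaviour it prescribes.
    """
    x, v = 0.0, 0.0
    for _ in range(steps):
        a = (f_ext - D * v - K * (x - x_ref)) / M  # target acceleration
        v += a * dt
        x += v * dt
    return x
```

With no external force the position converges to the reference; a constant contact force produces a steady offset of f_ext / K, which is exactly the compliant behaviour wanted for aerial manipulation.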