12 research outputs found

    Automatic highway monitoring using a system for detection, tracking and extraction of basic vehicle features with computer vision techniques

    Get PDF
    This project aims to implement a system that uses computer vision techniques to detect, track and extract basic features of vehicles travelling on roads or highways. To test the implemented system, videos recorded on expressways or highways with cameras are used as a test bench. The results obtained with the proposed system provide basic information about the vehicles, such as the direction of their movement, the lane in which they travel and the size of the bounding box that contains them, among others. The information generated by this project could serve as a basis for monitoring traffic flow on roads or highways, which would later facilitate traffic control tasks for public authorities and private companies. It is reasonable to expect that in the near future the system could be reused or adapted to perform vehicle tracking in urban areas. Additionally, its total or partial use could also be considered for similar projects in tunnels, video surveillance, emergency call systems, toll systems and other related applications
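    As a rough illustration of the pipeline the abstract describes (detection, tracking and bounding-box extraction from highway video), the following is a minimal sketch using OpenCV background subtraction; the video file name and all thresholds are illustrative assumptions, not details taken from the project.

```python
# Minimal sketch (not the project's implementation): detecting moving vehicles
# in highway video with OpenCV background subtraction and reporting the
# bounding box of each candidate. File name and thresholds are illustrative.
import cv2

cap = cv2.VideoCapture("highway.mp4")          # hypothetical input video
backsub = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=32)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = backsub.apply(frame)                       # foreground mask
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,     # remove small noise
                            cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5)))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) < 800:                  # ignore tiny blobs
            continue
        x, y, w, h = cv2.boundingRect(c)              # bounding-box size
        print(f"vehicle candidate at ({x},{y}), box {w}x{h}")
cap.release()
```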

    Monovision-based vehicle detection, distance and relative speed measurement in urban traffic

    Get PDF
    This study presents a monovision-based system for on-road vehicle detection and computation of distance and relative speed in urban traffic. Many works have dealt with monovision vehicle detection, but only a few of them provide the distance to the vehicle, which is essential for the control of an intelligent transportation system. The proposed system uses a single camera, reducing the monetary cost of stereovision and RADAR-based technologies. The algorithm is divided into three major stages. For vehicle detection, the authors use a combination of two features: the shadow underneath the vehicle and horizontal edges. They propose a new method for shadow thresholding based on the grey-scale histogram assessment of a region of interest on the road. In the second and third stages, the vehicle hypothesis verification and the distance are obtained by means of its number plate, whose dimensions and shape are standardised in each country. The analysis of consecutive frames is employed to calculate the relative speed of the detected vehicle. Experimental results showed excellent performance in both vehicle and number plate detection and in the distance measurement, in terms of accuracy and robustness in complex traffic scenarios and under different lighting conditions
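    The distance method described above relies on the standardized size of the number plate. A minimal sketch of that idea under a pinhole-camera assumption follows; the focal length, plate width and frame interval are illustrative values, not taken from the paper.

```python
# Minimal sketch, not the authors' formulation: estimating distance from the
# detected number plate under a pinhole-camera assumption, and relative speed
# from two consecutive frames. Focal length and plate width are illustrative.
FOCAL_PX = 1200.0        # assumed camera focal length in pixels
PLATE_WIDTH_M = 0.52     # assumed standardized plate width (e.g. EU plates)

def plate_distance(plate_width_px: float) -> float:
    """Distance to the vehicle from the apparent plate width in pixels."""
    return FOCAL_PX * PLATE_WIDTH_M / plate_width_px

def relative_speed(width_px_prev: float, width_px_curr: float,
                   frame_dt: float) -> float:
    """Relative speed (m/s); positive when the vehicle is moving away."""
    return (plate_distance(width_px_curr) - plate_distance(width_px_prev)) / frame_dt

# Example: the plate shrinks from 80 px to 78 px between frames 1/25 s apart
print(relative_speed(80.0, 78.0, 1 / 25))
```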

    Video based vehicle detection for advance warning Intelligent Transportation System

    Full text link
    Video-based vehicle detection and surveillance technologies are an integral part of Intelligent Transportation Systems (ITS), owing to their non-intrusiveness and capability of capturing global and specific vehicle behavior data. The initial goal of this thesis is to develop an efficient advance warning ITS system for detection of congestion at work zones and special events based on video detection. The goals accomplished by this thesis are: (1) the advance warning ITS system was successfully developed using off-the-shelf components, and (2) an improved vehicle detection and tracking algorithm was developed and evaluated. The advance warning ITS system developed includes off-the-shelf equipment such as Autoscope (a video-based vehicle detector), digital video recorders, RF transceivers, high-gain Yagi antennas, variable message signs and interface processors. The video-based detection system used requires calibration and fine tuning of configuration parameters for accurate results. Therefore, an in-house video-based vehicle detection system was developed using the Harris corner detection algorithm to eliminate the need for complex calibration and contrast modifications. The algorithm was implemented using the OpenCV library on an Arcom Olympus Windows XP Embedded development kit running the WinXPE operating system. The algorithm's performance is evaluated for accuracy in vehicle speed and count. The performance of the proposed algorithm is equivalent or superior to that of the Autoscope system without any modifications to calibration and illumination adjustments
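    As a hedged illustration of the Harris-corner approach mentioned above (not the thesis' actual implementation), the following sketch marks a detection zone as occupied when enough Harris corners appear in it; the zone coordinates and thresholds are assumptions.

```python
# Minimal sketch, assuming OpenCV: Harris-corner feature extraction on a
# detection zone of a traffic frame, as a stand-in for the in-house detector
# described above. The zone coordinates and thresholds are illustrative only.
import cv2
import numpy as np

frame = cv2.imread("workzone_frame.png")          # hypothetical frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Restrict processing to a user-defined detection zone (a virtual loop)
zone = gray[300:420, 100:540]

# Harris corner response; strong corners typically cluster on vehicles
response = cv2.cornerHarris(np.float32(zone), blockSize=2, ksize=3, k=0.04)
corners = np.argwhere(response > 0.01 * response.max())

# A simple occupancy rule: enough corners in the zone -> a vehicle is present
vehicle_present = len(corners) > 50
print("vehicle present:", vehicle_present, "with", len(corners), "corners")
```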

    GOLD: a parallel real-time stereo vision system for generic obstacle and lane detection

    Full text link

    Computer vision in autonomous driving

    Get PDF
    Master's in Computer and Telematics Engineering. Icaro is a vehicle developed at the University of Aveiro, the result of a multidisciplinary effort spanning areas such as informatics, electronics and mechanics. The development of this vehicle started in 1997 with the aim of participating in the international Shell Eco-Marathon competition. A new platform, Hammershark, is currently under development with the goal of creating an innovative vehicle, both in terms of building techniques and final capabilities, in order to surpass Icaro. One of the most innovative features it is expected to achieve is autonomous driving, supported by computer vision among other methods of sensory perception. This dissertation addresses the topic of computer vision for autonomous driving, describing in general terms the techniques commonly used to achieve this goal. The work is therefore not intended specifically for use in the Shell Eco-Marathon, but in all types of environments. The aim is to develop an exploratory study and synthesis of knowledge on the subject, proposing solutions whenever possible. The main tasks of a computer-vision-based autonomous driving system can be divided into two main categories: detection of the lateral limits of the roadway and detection of obstacles. These two tasks are complemented by an auxiliary one, identifying the area that defines the roadway itself, so as to facilitate the detection of its boundaries and of the obstacles located on it. The work is thus divided into these three categories, studying various methodologies and proposing, whenever possible, solutions that can achieve the goals
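    As an illustration of the lane-boundary detection task discussed above, here is a minimal sketch using Canny edges and a probabilistic Hough transform with OpenCV; this is a generic technique rather than one necessarily proposed in the dissertation, and the file name and thresholds are illustrative.

```python
# Minimal sketch, assuming OpenCV and not tied to the dissertation's method:
# detecting roadway side limits with Canny edges and a probabilistic Hough
# transform. The image name and all thresholds are illustrative.
import cv2
import numpy as np

frame = cv2.imread("road_frame.png")                  # hypothetical frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)
edges = cv2.Canny(blurred, 50, 150)

# Keep only the lower half of the image, where the roadway usually appears
mask = np.zeros_like(edges)
mask[edges.shape[0] // 2:, :] = 255
edges = cv2.bitwise_and(edges, mask)

# Fit line segments to the remaining edges; long segments approximate lane limits
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                        minLineLength=60, maxLineGap=20)
for x1, y1, x2, y2 in (lines.reshape(-1, 4) if lines is not None else []):
    cv2.line(frame, (int(x1), int(y1)), (int(x2), int(y2)), (0, 255, 0), 2)
cv2.imwrite("road_frame_lanes.png", frame)
```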

    A Robust Object Detection System for Driverless Vehicles through Sensor Fusion and Artificial Intelligence Techniques

    Get PDF
    Since the early 1990s, various research domains have been concerned with the concept of autonomous driving, leading to the widespread implementation of numerous advanced driver assistance features. However, fully automated vehicles have not yet been introduced to the market. The process of autonomous driving can be outlined through the following stages: environment perception, ego-vehicle localization, trajectory estimation, path planning, and vehicle control. Environment perception is partially based on computer vision algorithms that can detect and track surrounding objects. Object detection performed by autonomous vehicles is considered challenging for several reasons, such as the presence of multiple dynamic objects in the same scene, interaction between objects, real-time speed requirements, and diverse weather conditions (e.g., rain, snow, fog). Although many studies have been conducted on object detection for autonomous vehicles, it remains a challenging task, and improving its performance in diverse driving scenes is an ongoing field of research. This thesis aims to develop novel methods for the detection and 3D localization of surrounding dynamic objects in driving scenes under different rainy weather conditions. Firstly, owing to the frequent occurrence of rain and its negative effect on object detection performance, a real-time lightweight deraining network is proposed; it operates on single images in real time. Rain streaks and the accumulation of rain streaks introduce distinct visual degradation effects to captured images. The proposed deraining network effectively removes both rain streaks and accumulated rain streaks from images. It makes use of the progressive operation of two main stages: rain streak removal and rain streak accumulation removal. The rain streak removal stage is based on a Residual Network (ResNet) to maintain real-time performance and avoid adding computational complexity; furthermore, recursive computation is applied so that network parameters are shared. Meanwhile, distant rain streaks accumulate and induce a distortion similar to fogging, which can therefore be mitigated in a way similar to defogging. This stage relies on a transmission-guided lightweight network (TGL-Net). The proposed deraining network was evaluated on five datasets with synthetic rain of different properties and two further datasets with real rainy scenes. Secondly, an emphasis has been put on proposing a novel sensory system that achieves real-time detection of multiple dynamic objects in driving scenes. The proposed sensory system combines a monocular camera and a 2D Light Detection and Ranging (LiDAR) sensor in a complementary fusion approach. YOLOv3, a baseline real-time object detection algorithm, has been used to detect and classify objects in images captured by the camera; detected objects are surrounded by bounding boxes to localize them within the frames. Since objects present in a driving scene are dynamic and usually occlude each other, an algorithm has been developed to differentiate objects whose bounding boxes overlap. Moreover, the locations of bounding boxes within frames (in pixels) are converted into real-world angular coordinates. A 2D LiDAR was used to obtain depth measurements while maintaining low computational requirements, in order to save resources for other autonomous-driving-related operations.
A novel technique has been developed and tested for processing and mapping 2D LiDAR measurements to the corresponding bounding boxes. The detection accuracy of the proposed system was manually evaluated in different real-time scenarios. Finally, the effectiveness of the proposed deraining network was validated in terms of its impact on object detection in de-rained images. Results of the proposed deraining network were compared to existing baseline deraining networks, showing that the running time of the proposed network is 2.23× faster than the average running time of the baseline networks while achieving a 1.2× improvement when tested on different synthetic datasets. Moreover, tests on the LiDAR measurements showed an average error of ±0.04 m in real driving scenes. Deraining and object detection were also tested jointly, demonstrating that performing deraining ahead of object detection yielded a 1.45× improvement in object detection precision
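    The conversion of bounding-box pixel locations into angular coordinates and their matching with 2D LiDAR beams can be sketched as follows; the field of view, image width and scan values are illustrative assumptions, and this is not the thesis' exact mapping technique.

```python
# Minimal sketch under stated assumptions (not the thesis' implementation):
# converting the horizontal centre of a camera bounding box into an angular
# coordinate and picking the closest 2D-LiDAR beam to read the object depth.
# Camera horizontal FOV, image width and the scan values are illustrative.
import math

IMAGE_WIDTH_PX = 1280
HORIZONTAL_FOV_DEG = 90.0     # assumed camera horizontal field of view

def bbox_to_angle(x_min: float, x_max: float) -> float:
    """Angle of the bounding-box centre relative to the optical axis (deg)."""
    centre = 0.5 * (x_min + x_max)
    offset = centre - IMAGE_WIDTH_PX / 2            # pixels from image centre
    return offset / IMAGE_WIDTH_PX * HORIZONTAL_FOV_DEG

def depth_from_lidar(angle_deg: float, scan: list[tuple[float, float]]) -> float:
    """Range of the LiDAR beam whose angle is closest to the requested one.

    `scan` is a list of (angle_deg, range_m) pairs from a 2D LiDAR.
    """
    return min(scan, key=lambda beam: abs(beam[0] - angle_deg))[1]

# Example: a detected vehicle occupying pixels 700..900 and a toy scan
scan = [(-10.0, 12.1), (0.0, 9.8), (8.0, 7.4), (15.0, 6.9)]
angle = bbox_to_angle(700, 900)                     # about 11.25 degrees
print(f"object at {angle:.1f} deg, depth {depth_from_lidar(angle, scan)} m")
```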

    Dense object segmentation in stereo image sequences

    Get PDF
    The scene segmentation presented in this work decomposes a stereo image sequence into a set of image regions that a human can unambiguously interpret as independently moving traffic objects. A distinctive feature of the method is that the individual subtasks of three-dimensional reconstruction, motion estimation and segmentation are described in a joint model and solved in an interleaved order
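    The three-dimensional reconstruction subtask mentioned above can be illustrated with a plain stereo disparity computation; this is ordinary semi-global matching in OpenCV, not the joint model of the dissertation, and the file names and matcher parameters are illustrative.

```python
# Minimal sketch, assuming OpenCV: dense disparity from a rectified stereo
# pair as the 3D-reconstruction subtask mentioned above. This is ordinary
# semi-global matching, not the dissertation's joint model; file names and
# matcher parameters are illustrative.
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # hypothetical pair
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64,
                                blockSize=9)
disparity = matcher.compute(left, right).astype(float) / 16.0  # fixed-point scale

# With calibration, depth = focal_length_px * baseline_m / disparity
print("disparity range:", disparity.min(), disparity.max())
```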

    Vehicle verification using computer vision techniques

    Full text link
    In this work, vehicle verification systems based on learning methods are proposed. First, a study of the state of the art is carried out to identify the current problems in the field. Then, the architecture of the systems is presented, divided into two stages: feature extraction and classification. For the first stage, a brief overview is given of the types of features to be implemented (symmetry, edges, principal component analysis (PCA) and histograms of oriented gradients (HOG)). The classification stage consists of a theoretical explanation of the classifiers used in the system. Subsequently, these systems are developed and improvements are introduced for each of them. For the symmetry-based system, two different methods are proposed; the second method is improved by differentiating between axes composed of one and two pixels, together with a penalty on the symmetry values to achieve greater separation between the classes. Regarding the edge-based system, only vertical edges are used, and the performance of reduced feature vectors is analyzed. The correlation matrix is used to develop the PCA-based system. For the HOG-based system, the descriptor parameters appropriate to the particular case of vehicles are studied, and efficient descriptors based on this configuration are proposed, which can be implemented in real-time systems. Finally, the results obtained in the previous step are analyzed for each of the methods, and their main characteristics and limitations are described
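    As a rough illustration of the HOG-based verification stage described above, the following sketch extracts HOG features from a candidate window and verifies it with a linear SVM; the window size, HOG parameters and toy training data are illustrative, not the thesis' configuration.

```python
# Minimal sketch, assuming OpenCV and scikit-learn: extracting HOG features
# from a candidate window and verifying it with a linear SVM. The window
# size, HOG parameters and training data are illustrative, not the thesis'.
import cv2
import numpy as np
from sklearn.svm import LinearSVC

hog = cv2.HOGDescriptor((64, 64), (16, 16), (8, 8), (8, 8), 9)

def describe(window_bgr: np.ndarray) -> np.ndarray:
    """HOG feature vector of a 64x64 candidate window."""
    gray = cv2.cvtColor(cv2.resize(window_bgr, (64, 64)), cv2.COLOR_BGR2GRAY)
    return hog.compute(gray).ravel()

# Toy training set: real vehicle / non-vehicle windows would go here
train_windows = [np.random.randint(0, 255, (64, 64, 3), np.uint8) for _ in range(4)]
train_labels = [1, 1, 0, 0]                      # 1 = vehicle, 0 = background
clf = LinearSVC().fit([describe(w) for w in train_windows], train_labels)

candidate = np.random.randint(0, 255, (64, 64, 3), np.uint8)
print("verified as vehicle:", bool(clf.predict([describe(candidate)])[0]))
```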