
    Desarrollo de un sistema para la interpretación y predicción de la situación del tráfico mediante Deep Learning

    Semantic scene understanding is a key aspect of multiple artificial intelligence applications, from Intelligent Transportation Systems to robotics. In this Master's Thesis, we design, develop and evaluate a system that, based on the semantic segmentation of images obtained through a convolutional neural network, makes it possible to carry out the different tasks comprising scene understanding: classification, object detection and the semantic segmentation itself, in a simple yet efficient manner. In addition, we propose a solution focused on intelligent vehicles which, using semantic segmentation, estimates the speed at which the vehicle should be driven. To this end, we have built a new database on which this challenging new problem can be evaluated. The results confirm that it is possible and beneficial to rely on semantic segmentation to successfully perform the different tasks.
    Máster Universitario en Ingeniería de Telecomunicación (M125
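    The abstract's central idea is that a single per-pixel segmentation map is enough to derive the other scene-understanding tasks. The sketch below illustrates that reduction under simple assumptions (one connected region per class, axis-aligned boxes); the function name and the toy map are ours, not the thesis's implementation, and in the actual system the map would come from a CNN.

    ```python
    import numpy as np

    def scene_tasks_from_segmentation(seg_map, class_names):
        """Derive image-level classification and coarse object detection
        from a semantic segmentation map (H x W array of class ids).
        Illustrative sketch only; assumes one region per class."""
        present = np.unique(seg_map)
        # Classification: the set of classes that appear in the scene.
        classification = [class_names[c] for c in present]
        # Detection: a bounding box per class, from its pixel mask extents.
        detections = {}
        for c in present:
            ys, xs = np.nonzero(seg_map == c)
            detections[class_names[c]] = (int(xs.min()), int(ys.min()),
                                          int(xs.max()), int(ys.max()))
        return classification, detections

    # Toy 4x4 map: 0 = road everywhere, 1 = a car in the top-right area.
    seg = np.zeros((4, 4), dtype=int)
    seg[1:3, 2:4] = 1
    labels, boxes = scene_tasks_from_segmentation(seg, {0: "road", 1: "car"})
    ```

    The point of the reduction is that detection and classification become cheap post-processing once the (expensive) segmentation has been computed.
    
    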

    Boosting Multi-Vehicle Tracking with a Joint Object Detection and Viewpoint Estimation Sensor

    In this work, we address the problem of multi-vehicle detection and tracking for traffic monitoring applications. We present a novel intelligent visual sensor for tracking-by-detection with simultaneous pose estimation. Essentially, we adapt an Extended Kalman Filter (EKF) to work not only with the detections of the vehicles but also with their estimated coarse viewpoints, directly obtained with the vision sensor. We show that enhancing the tracking with observations of the vehicle pose results in a better estimation of the vehicles' trajectories. For the simultaneous object detection and viewpoint estimation task, we present and evaluate two independent solutions. One is based on a fast GPU implementation of a Histogram of Oriented Gradients (HOG) detector with Support Vector Machines (SVMs). For the second, we modify and train the Faster R-CNN deep learning model so as to recover from it not only the object localization but also an estimate of its pose. Finally, we publicly release a challenging dataset, the GRAM Road Traffic Monitoring (GRAM-RTM), which has been especially designed for evaluating multi-vehicle tracking approaches within the context of traffic monitoring applications. It comprises more than 700 unique vehicles annotated across more than 40,300 frames of three videos. We expect GRAM-RTM to become a benchmark in vehicle detection and tracking, providing the computer vision and intelligent transportation systems communities with a standard set of images, annotations and evaluation procedures for multi-vehicle tracking. We present a thorough experimental evaluation of our approaches on GRAM-RTM, which will be useful for establishing further comparisons. The results obtained confirm that the simultaneous integration of vehicle localizations and pose estimations as observations in an EKF improves the tracking results.
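    The key idea, feeding the EKF an observation vector that contains the coarse viewpoint in addition to the detected position, can be sketched as a single predict/update step. The state layout [px, py, v, theta], the motion model and all noise values below are our assumptions for illustration, not the paper's exact parameterisation.

    ```python
    import numpy as np

    def ekf_step(x, P, z, dt=0.1, q=0.1, r_pos=1.0, r_pose=0.2):
        """One EKF cycle with state [px, py, v, theta] and observation
        z = [px, py, theta]: the detector's box centre plus the coarse
        viewpoint estimate. Sketch only; parameters are assumptions."""
        px, py, v, th = x
        # Predict: constant speed along the current heading (nonlinear).
        x_pred = np.array([px + v * np.cos(th) * dt,
                           py + v * np.sin(th) * dt, v, th])
        # Jacobian of the motion model, linearised at the current state.
        F = np.array([[1, 0, np.cos(th) * dt, -v * np.sin(th) * dt],
                      [0, 1, np.sin(th) * dt,  v * np.cos(th) * dt],
                      [0, 0, 1, 0],
                      [0, 0, 0, 1]])
        P_pred = F @ P @ F.T + q * np.eye(4)
        # Update: position AND viewpoint are observed, which is the
        # enhancement over a plain position-only tracking-by-detection EKF.
        H = np.array([[1, 0, 0, 0],
                      [0, 1, 0, 0],
                      [0, 0, 0, 1]])
        R = np.diag([r_pos, r_pos, r_pose])
        y = z - H @ x_pred                       # innovation
        S = H @ P_pred @ H.T + R                 # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
        x_new = x_pred + K @ y
        P_new = (np.eye(4) - K @ H) @ P_pred
        return x_new, P_new

    # One step: vehicle believed at the origin heading along +x; the sensor
    # reports a nearby position and a coarse viewpoint of 0.2 rad.
    x, P = ekf_step(np.array([0.0, 0.0, 10.0, 0.0]), np.eye(4),
                    np.array([1.0, 0.1, 0.2]))
    ```

    Because the viewpoint row of H ties the observed pose directly to the heading state, a pose observation pulls the estimated heading toward the measured viewpoint instead of leaving it to be inferred from position differences alone.
    
    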