7 research outputs found

    Optimized HOG for on-road video based vehicle verification

    Get PDF
    Vision-based object detection from a moving platform is particularly challenging in the field of advanced driver assistance systems (ADAS). In this context, onboard vision-based vehicle verification strategies become critical, facing challenges derived from the variability of vehicle appearance, illumination, and vehicle speed. In this paper, an optimized HOG configuration for onboard vehicle verification is proposed, which considers not only its spatial and orientation resolution but also descriptor processing strategies and classification. An in-depth analysis of the optimal HOG settings for onboard vehicle verification is presented in the context of SVM classification with different kernels. In contrast to many existing approaches, the evaluation is performed on a public, heterogeneous database of vehicle and non-vehicle images from different areas of the road, yielding excellent verification rates that outperform similar approaches in the literature.
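    As a rough illustration of the kind of pipeline described above, the sketch below extracts a HOG descriptor from a candidate image patch and verifies it with an SVM. The cell size, block size and kernel are illustrative placeholders, not the optimized settings reported in the paper, and the helper names (hog_descriptor, train_verifier, verify) are hypothetical.

        # Minimal HOG + SVM vehicle-verification sketch (illustrative settings,
        # not the paper's optimized configuration).
        import numpy as np
        from skimage.feature import hog
        from sklearn.svm import SVC

        def hog_descriptor(patch):
            """Compute a HOG descriptor for a 64x64 grayscale candidate patch."""
            return hog(
                patch,
                orientations=9,            # orientation resolution
                pixels_per_cell=(8, 8),    # spatial resolution
                cells_per_block=(2, 2),
                block_norm="L2-Hys",
            )

        def train_verifier(patches, labels, kernel="rbf"):
            """patches: list of 64x64 grayscale arrays; labels: 1 = vehicle, 0 = non-vehicle."""
            features = np.array([hog_descriptor(p) for p in patches])
            clf = SVC(kernel=kernel, C=1.0, gamma="scale")
            clf.fit(features, labels)
            return clf

        def verify(clf, patch):
            """Return True if the classifier labels the patch as a vehicle."""
            return clf.predict(hog_descriptor(patch).reshape(1, -1))[0] == 1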

    Nighttime Driver Behavior Prediction Using Taillight Signal Recognition via CNN-SVM Classifier

    Full text link
    This paper aims to enhance the ability to predict nighttime driving behavior by identifying the taillights of both human-driven and autonomous vehicles. The proposed model incorporates a customized detector designed to accurately detect front-vehicle taillights on the road. At the beginning of the detector, a learnable pre-processing block is implemented, which extracts deep features from input images and calculates the data rarity for each feature. In the next step, drawing inspiration from soft attention, a weighted binary mask is designed that guides the model to focus more on predetermined regions. This research utilizes Convolutional Neural Networks (CNNs) to extract distinguishing characteristics from these areas, then reduces dimensionality using Principal Component Analysis (PCA). Finally, a Support Vector Machine (SVM) is used to predict the behavior of the vehicles. To train and evaluate the model, a large-scale dataset is collected from two types of dash-cams and Insta360 cameras capturing the rear view of Ford Motor Company vehicles. This dataset includes over 12k frames captured during both daytime and nighttime hours. To address the limited nighttime data, a unique pixel-wise image processing technique is implemented to convert daytime images into realistic night images. The experiments demonstrate that the proposed methodology can accurately categorize vehicle behavior with 92.14% accuracy, 97.38% specificity, 92.09% sensitivity, 92.10% F1-measure, and a Cohen's kappa statistic of 0.895. Further details are available at https://github.com/DeepCar/Taillight_Recognition.
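    A minimal sketch of the CNN-feature, PCA and SVM stages mentioned above is given below. A pretrained ResNet-18 stands in for the paper's custom taillight network, and the input size, number of PCA components and SVM kernel are assumed values, not those used in the paper.

        # Sketch of a CNN-feature -> PCA -> SVM classification stage.
        # ResNet-18 is a stand-in backbone; dimensions and kernel are illustrative.
        import torch
        import torchvision.models as models
        from sklearn.decomposition import PCA
        from sklearn.svm import SVC

        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        backbone.fc = torch.nn.Identity()   # expose the 512-d feature vector
        backbone.eval()

        @torch.no_grad()
        def cnn_features(batch):
            """batch: (N, 3, 224, 224) normalized image tensors -> (N, 512) features."""
            return backbone(batch).cpu().numpy()

        def train_behavior_classifier(train_batch, labels, n_components=64):
            feats = cnn_features(train_batch)
            pca = PCA(n_components=n_components).fit(feats)     # reduce dimensionality
            svm = SVC(kernel="rbf").fit(pca.transform(feats), labels)
            return pca, svm

        def predict(pca, svm, batch):
            return svm.predict(pca.transform(cnn_features(batch)))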

    Emerging research directions in computer science: contributions from the young informatics faculty in Karlsruhe

    Get PDF
    In order to build better human-friendly human-computer interfaces, such interfaces need to be enabled with capabilities to perceive the user: his location, identity, activities and, in particular, his interaction with others and the machine. Only with these perception capabilities can smart systems (for example, human-friendly robots or smart environments) become possible. In my research I'm thus focusing on the development of novel techniques for the visual perception of humans and their activities, in order to facilitate perceptive multimodal interfaces, humanoid robots and smart environments. My work includes research on person tracking, person identification, recognition of pointing gestures, estimation of head orientation and focus of attention, as well as audio-visual scene and activity analysis. Application areas are human-friendly humanoid robots, smart environments, content-based image and video analysis, as well as safety- and security-related applications. This article gives a brief overview of my ongoing research activities in these areas.

    Real-time vehicle detection using low-cost sensors

    Get PDF
    Improving road safety and reducing the number of accidents is one of the top priorities for the automotive industry. As human driving behaviour is one of the top causal factors of road accidents, research is working towards removing control from the human driver by automating functions and finally introducing a fully Autonomous Vehicle (AV). A Collision Avoidance System (CAS) is one of the key safety systems for an AV, as it ensures all potential threats ahead of the vehicle are identified and appropriate action is taken. This research focuses on the task of vehicle detection, which is the basis of a CAS, and attempts to produce an effective vehicle detector based on data coming from a low-cost monocular camera. Developing a robust CAS based on low-cost sensors is crucial to bringing down the cost of safety systems and, in this way, increasing their adoption rate by end users. In this work, detectors are developed based on the two main approaches to vehicle detection using a monocular camera. The first is the traditional image processing approach, where visual cues are used to generate potential vehicle locations and, at a second stage, verify the existence of vehicles in an image. The second approach is based on a Convolutional Neural Network, a computationally expensive method that unifies the detection process in a single pipeline. The goal is to determine which method is more appropriate for real-time applications. Following the first approach, a vehicle detector based on the combination of HOG features and SVM classification is developed; the detector attempts to optimise performance by modifying the detection pipeline and improving run-time performance. For the CNN-based approach, six different network models are developed and trained end to end using collected data, each with a different network structure and parameters, in an attempt to determine which combination produces the best results. The evaluation of the different vehicle detectors produced some interesting findings: the first approach did not manage to produce a working detector, while the CNN-based approach produced a high-performing vehicle detector with an 85.87% average precision and a very low miss rate. The detector performed well in different operational environments (motorway, urban and rural roads) and the results were validated on an external dataset. Additional testing indicated the detector is suitable as a basis for safety applications such as a CAS, with a run-time performance of 12 FPS and potential for further improvement.
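    The sketch below illustrates the first, image-processing approach in its simplest form: a sliding window is scanned across the frame and each candidate patch is verified by a previously trained HOG + SVM classifier. The window size, stride and decision threshold are illustrative assumptions, not the settings used in this work.

        # Minimal sliding-window detection sketch for a two-stage HOG + SVM detector.
        from skimage.feature import hog

        def detect_vehicles(frame_gray, clf, win=64, stride=16, threshold=0.0):
            """Return (x, y, score) for windows the SVM scores as vehicles."""
            detections = []
            h, w = frame_gray.shape
            for y in range(0, h - win + 1, stride):
                for x in range(0, w - win + 1, stride):
                    patch = frame_gray[y:y + win, x:x + win]
                    desc = hog(patch, orientations=9, pixels_per_cell=(8, 8),
                               cells_per_block=(2, 2), block_norm="L2-Hys")
                    score = clf.decision_function(desc.reshape(1, -1))[0]
                    if score > threshold:
                        detections.append((x, y, score))
            return detections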

    Verificación de vehículos mediante técnicas de visión artificial (Vehicle verification using computer vision techniques)

    Full text link
    In this work, vehicle verification systems based on learning methods are proposed. First, a study of the state of the art is carried out to identify the current problems in the field. The architecture of the systems is then presented, divided into two stages: feature extraction and classification. For the first stage, a brief summary is given of the features to be implemented: symmetry, edges, principal component analysis (PCA) and histograms of oriented gradients (HOG). The classification stage consists of a theoretical explanation of the classifiers used in the system. Subsequently, the systems are developed and improvements are introduced for each of them. For the symmetry-based system, two different methods are proposed; the second adds a distinction between symmetry axes composed of one or two pixels, together with a penalty on the symmetry values to achieve greater separation between classes. For the edge-based system, only vertical edges are used, and the performance of reduced feature vectors is analysed. The correlation matrix is used to develop the PCA-based system. For the HOG-based system, the descriptor parameters best suited to the particular case of vehicles are studied, and efficient descriptors based on this configuration are proposed that can be implemented in real-time systems. Finally, the results obtained are analysed for each method, and their main characteristics and limitations are described.
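    As an illustration of the PCA variant mentioned above, the sketch below computes a projection basis from the correlation matrix of flattened training patches. The number of retained components and the helper names are assumptions for illustration only, not the configuration used in this work.

        # Sketch of PCA derived from the correlation matrix of flattened patches.
        import numpy as np

        def pca_from_correlation(X, n_components=20):
            """X: (n_samples, n_features) flattened patches -> (mean, std, basis)."""
            mu = X.mean(axis=0)
            sigma = X.std(axis=0) + 1e-8
            R = np.corrcoef((X - mu) / sigma, rowvar=False)   # correlation matrix
            eigvals, eigvecs = np.linalg.eigh(R)               # ascending eigenvalues
            order = np.argsort(eigvals)[::-1][:n_components]   # keep top components
            return mu, sigma, eigvecs[:, order]

        def project(patch_vec, mu, sigma, basis):
            """Project one flattened patch onto the retained principal components."""
            return ((patch_vec - mu) / sigma) @ basis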

    Primena inteligentnih sistema mašinske vizije autonomnog upravljanja železničkim vozilima (Application of intelligent machine vision systems for autonomous control of railway vehicles)

    Get PDF
    The railway is an important mode of transport with a significant economic impact on industry and people's everyday life. Due to its capacities and complex infrastructure, constant development and improvement are necessary. Railway automation requires intelligent systems as an essential part of an autonomous railway vehicle. From the point of view of safe traffic, any object on the rail track and/or in its vicinity represents a potential obstacle, and visibility plays a very important role in the correct and timely detection of objects on the railway infrastructure; a key element of an autonomous railway vehicle is therefore a system for detecting obstacles on the railway infrastructure under conditions of reduced visibility. The subject of this doctoral dissertation is the application of intelligent machine vision systems in autonomous train operation. To detect obstacles on a section of railway infrastructure in conditions of reduced visibility, a thermal imaging camera and a night vision system are integrated into the system, coupled with an advanced image-processing algorithm developed with artificial intelligence tools. In addition, the distance from the machine vision system to the detected object is estimated. The operation of the system was tested in a series of field experiments at different locations, under different visibility and weather conditions, using realistic scenarios.
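    One simple way the distance to a detected object could be approximated from a single camera is the pinhole model sketched below; the focal length and assumed object height are placeholder values, and this is not necessarily the estimation method used in the dissertation.

        # Illustrative monocular distance estimate via the pinhole camera model.
        def estimate_distance(bbox_height_px, focal_length_px=1200.0,
                              object_height_m=1.5):
            """Distance [m] ~ f * H_real / h_pixels for an upright object."""
            if bbox_height_px <= 0:
                raise ValueError("bounding-box height must be positive")
            return focal_length_px * object_height_m / bbox_height_px

        # Example: a 60 px tall detection at f = 1200 px, H = 1.5 m -> 30 m away.
        print(estimate_distance(60))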