277 research outputs found

    Vehicle detection using improved region convolution neural network for accident prevention in smart roads

    This paper explores the vehicle detection problem and introduces an improved region-based convolutional neural network. The vehicle data (a set of images) is first collected, and noise (a set of outlier images) is removed using a SIFT extractor. The region-based convolutional neural network is then used to detect the vehicles. We propose a new hyper-parameter optimization model based on evolutionary computation that can be used to tune the parameters of the deep learning framework. The proposed solution was tested on the well-known Boxy vehicle detection dataset, which contains more than 200,000 vehicle images and 1,990,000 annotated vehicles. The results are very promising and show superiority over many current state-of-the-art solutions in terms of runtime and accuracy.
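The evolutionary hyper-parameter search described in the abstract can be sketched roughly as follows. This is a minimal genetic-algorithm sketch, not the paper's actual configuration: the search space, the synthetic `fitness` function (standing in for "train the detector and measure validation accuracy"), and all population settings are illustrative assumptions.

```python
import random

random.seed(0)  # for reproducibility of the sketch

# Hypothetical search space for two detector hyper-parameters.
SPACE = {"learning_rate": (1e-5, 1e-1), "anchor_scale": (4.0, 64.0)}

def sample():
    return {k: random.uniform(lo, hi) for k, (lo, hi) in SPACE.items()}

def fitness(params):
    # Stand-in for training + validation; a synthetic score that peaks
    # at plausible values (lr = 1e-3, anchor_scale = 16).
    lr_term = -abs(params["learning_rate"] - 1e-3)
    anchor_term = -abs(params["anchor_scale"] - 16.0) / 100.0
    return lr_term + anchor_term

def evolve(generations=20, pop_size=12, elite=4, mut_scale=0.2):
    pop = [sample() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:elite]                      # elitist selection
        children = []
        while len(children) < pop_size - elite:
            a, b = random.sample(parents, 2)
            child = {k: random.choice((a[k], b[k])) for k in SPACE}  # crossover
            for k, (lo, hi) in SPACE.items():                        # mutation
                child[k] = min(hi, max(lo, child[k] * random.uniform(1 - mut_scale,
                                                                     1 + mut_scale)))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

In practice each fitness evaluation is a full training run, so population sizes and generation counts are kept small relative to classical genetic algorithms.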

    Cityscapes 3D: Dataset and Benchmark for 9 DoF Vehicle Detection

    Detecting vehicles and representing their position and orientation in three-dimensional space is a key technology for autonomous driving. Recently, methods for 3D vehicle detection based solely on monocular RGB images have gained popularity. To facilitate this task, as well as to compare and drive state-of-the-art methods, several new datasets and benchmarks have been published. Ground-truth annotations of vehicles are usually obtained from lidar point clouds, which often induces errors due to imperfect calibration or synchronization between the two sensors. To this end, we propose Cityscapes 3D, which extends the original Cityscapes dataset with 3D bounding box annotations for all types of vehicles. In contrast to existing datasets, our 3D annotations were labeled using stereo RGB images only and capture all nine degrees of freedom. This leads to a pixel-accurate reprojection in the RGB image and a longer annotation range compared to lidar-based approaches. To ease multitask learning, we provide a pairing of 2D instance segments with 3D bounding boxes. In addition, we complement the Cityscapes benchmark suite with 3D vehicle detection based on the new annotations, as well as the metrics presented in this work. The dataset and benchmark are available online.

    Comment: 2020 "Scalability in Autonomous Driving" CVPR Workshop.
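The nine degrees of freedom mentioned above are three for position, three for box extents, and three for orientation. A minimal sketch of building such a box and reprojecting its corners into the image with a pinhole camera is shown below; the intrinsics, box values, and Z-Y-X angle convention are assumptions for illustration, not Cityscapes 3D's actual conventions.

```python
import numpy as np

def rotation(yaw, pitch, roll):
    """Rotation matrix from the three orientation angles (Z-Y-X order assumed)."""
    cz, sz = np.cos(yaw), np.sin(yaw)
    cy, sy = np.cos(pitch), np.sin(pitch)
    cx, sx = np.cos(roll), np.sin(roll)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    return Rz @ Ry @ Rx

def box_corners(center, size, angles):
    """8 corners of a 9-DoF box: 3 center coords + 3 extents + 3 angles."""
    l, w, h = size
    # Corner offsets in the box's own frame, then rotate and translate.
    offsets = np.array([[sx * l / 2, sy * w / 2, sz * h / 2]
                        for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)])
    return offsets @ rotation(*angles).T + np.asarray(center)

def project(points, K):
    """Pinhole projection of Nx3 camera-frame points with intrinsics K."""
    uvw = points @ K.T
    return uvw[:, :2] / uvw[:, 2:3]

K = np.array([[1000.0, 0, 640], [0, 1000.0, 360], [0, 0, 1]])  # assumed intrinsics
corners = box_corners(center=(2.0, 0.5, 15.0), size=(4.5, 1.8, 1.5),
                      angles=(0.3, 0.0, 0.0))
pixels = project(corners, K)
```

The pixel-accurate reprojection claimed in the abstract amounts to these projected corners landing exactly on the vehicle silhouette in the RGB image.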

    Hybrid RESNET and Regional Convolution Neural Network Framework for Accident Estimation in Smart Roads

    This work tackles road safety and proposes an intelligent deep learning framework that includes outlier detection, vehicle detection, and accident estimation. The road state is first collected, and an intelligent filter, based on a SIFT extractor and a Chinese restaurant process, is used to remove noise. An extended region-based convolutional neural network is then applied to identify the vehicles closest to the given driver. A residual network uses the vehicle detection output to make a binary classification of whether the current road state might cause an accident. Finally, we propose a novel model for optimizing hyper-parameters in deep learning methodologies using evolutionary computation. The proposed solution has been tested on benchmark vehicle detection and accident estimation datasets. The results are very promising and show superiority over many current state-of-the-art solutions in terms of runtime and accuracy, with the proposed solution improving the accident estimation rate by more than 5% over conventional methods.
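The residual network (ResNet) used for the binary accident classification is built from blocks with skip connections. A minimal numpy sketch of one residual block is given below, with a toy two-layer ReLU transformation standing in for the block's convolutions; the shapes and random weights are illustrative only.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """y = F(x) + x: the skip connection adds the input back onto the
    learned transformation F, which eases optimization of deep stacks."""
    return relu(x @ w1) @ w2 + x

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 8))
w1 = rng.normal(scale=0.1, size=(8, 8))
w2 = rng.normal(scale=0.1, size=(8, 8))
y = residual_block(x, w1, w2)

# With zero weights the block reduces to the identity mapping,
# which is what makes very deep residual stacks trainable.
identity = residual_block(x, np.zeros((8, 8)), np.zeros((8, 8)))
```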

    Towards Explainability of UAV-Based Convolutional Neural Networks for Object Classification

    Teaming of autonomous systems using trust and trustworthiness is the focus of Autonomy Teaming and TRAjectories for Complex Trusted Operational Reliability (ATTRACTOR), a new NASA Convergent Aeronautical Solutions (CAS) Project. One critical research element of ATTRACTOR is explainability of the decision-making across relevant subsystems of an autonomous system. The ability to explain why an autonomous system makes a decision is needed to establish a basis of trustworthiness to safely complete a mission. Convolutional Neural Networks (CNNs) are popular visual object classifiers that have achieved high levels of classification performance without clear insight into the mechanisms of their internal layers and features. To explore the explainability of the internal components of CNNs, we reviewed three feature visualization methods in a layer-by-layer approach, using aviation-related images as inputs. Our approach is to analyze the key components of a classification event in order to generate component labels for features of the classified image at different layer depths. For example, an airplane has wings, engines, and landing gear. These could possibly be identified somewhere in the hidden layers of the classification, and these descriptive labels could be provided to a human or machine teammate while conducting a shared mission, to engender trust. Each descriptive feature may also be decomposed into a combination of primitives such as shapes and lines. We expect that knowing the combination of shapes and parts that creates a classification will enable trust in the system and provide insight into creating better structures for the CNN.
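One simple, widely used visualization technique in this family (not necessarily one of the three the abstract reviews) is occlusion sensitivity: slide a mask over the input and record how much the class score drops, so regions the classifier depends on light up. The sketch below uses a toy brightness-based scorer in place of a real CNN logit; image size, patch size, and the scorer are all assumptions.

```python
import numpy as np

def occlusion_map(image, score_fn, patch=4):
    """Slide a dark patch over the image; large score drops mark
    regions the classifier depends on."""
    base = score_fn(image)
    heat = np.zeros_like(image, dtype=float)
    h, w = image.shape
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = 0.0
            heat[y:y + patch, x:x + patch] = base - score_fn(occluded)
    return heat

# Toy "classifier": the score is the mean brightness of the image
# centre, standing in for a CNN class logit.
def toy_score(img):
    return img[6:10, 6:10].mean()

img = np.zeros((16, 16))
img[6:10, 6:10] = 1.0          # bright "object" in the centre
heat = occlusion_map(img, toy_score)
```

Applied layer by layer to intermediate activations rather than the final score, the same idea helps localize which parts (wings, engines, landing gear) drive each feature.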


    A Concept for Deployment and Evaluation of Unsupervised Domain Adaptation in Cognitive Perception Systems

    Recent developments in deep learning enable perception systems to acquire knowledge about a predefined operational domain, a so-called domain, in a data-driven manner. These supervised learning methods are driven by the advent of large-scale annotated datasets and ever more powerful processors, and show unmatched performance on perception tasks in a wide range of application areas. However, supervised-trained neural networks are limited by the amount of available annotated data, which in turn restricts their operational domain. Supervised learning relies heavily on manual data annotation. Given the steadily increasing availability of large amounts of unannotated data, the use of unsupervised domain adaptation is crucial. Existing methods for unsupervised domain adaptation are mostly unsuited to ensuring the required deployment of a neural network in an additional domain. Moreover, existing metrics are often insufficient for a validation oriented toward the application of domain-adapted neural networks. The main contribution of this dissertation consists of new concepts for unsupervised domain adaptation. Based on a categorization of domain transitions and on knowledge representations available a priori from a supervised-trained neural network, unsupervised domain adaptation on unannotated data is made possible. To address the continuous deployment of neural networks for perception applications, novel methods were developed specifically for the unsupervised extension of a neural network's operational domain.
    Exemplary use cases from automotive vision show how the novel methods, combined with newly developed metrics, contribute to the continuous deployment of neural networks on unannotated data. Furthermore, the implementations of all developed methods and algorithms are presented and made publicly available. In particular, the novel methods were successfully applied to unsupervised domain adaptation from daytime to nighttime object detection in automotive vision.
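A common recipe for unsupervised domain adaptation on unannotated data (a generic illustration, not necessarily the dissertation's method) is self-training with pseudo-labels: a source-trained model labels the target samples it is most confident about, then is refit on them. The toy sketch below uses a nearest-centroid classifier on 1-D data with an artificial domain shift; all data and thresholds are made up.

```python
import numpy as np

def centroid_fit(X, y):
    return np.array([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(X, centroids):
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1), d.min(axis=1)

rng = np.random.default_rng(0)
# Source domain: two classes clustered around -2 and +2, labeled.
Xs = np.concatenate([rng.normal(-2, 0.5, (50, 1)), rng.normal(2, 0.5, (50, 1))])
ys = np.array([0] * 50 + [1] * 50)
# Target domain: same classes shifted by +1 (the "domain gap"), unlabeled.
Xt = np.concatenate([rng.normal(-1, 0.5, (50, 1)), rng.normal(3, 0.5, (50, 1))])
yt_true = np.array([0] * 50 + [1] * 50)   # held out, used only for evaluation

centroids = centroid_fit(Xs, ys)           # "source-trained" model
for _ in range(3):                         # self-training rounds
    pseudo, dist = predict(Xt, centroids)
    confident = dist < np.median(dist)     # keep the most confident half
    centroids = centroid_fit(Xt[confident], pseudo[confident])

adapted_pred, _ = predict(Xt, centroids)
accuracy = (adapted_pred == yt_true).mean()
```

The evaluation labels `yt_true` exist only to measure the result; the adaptation itself never sees them, which is exactly the setting the dissertation targets.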

    Vehicle 3D Pose Estimation from Traffic Cameras

    The goal of this bachelor thesis is to create a method for 3D pose estimation of vehicles from traffic cameras. Existing methods for car detection and vehicle pose estimation are described. Part of the thesis was to build a dataset for training and for experiments on the proposed car pose estimation method. The proposed method uses a convolutional neural network to regress the car base in the image. The car pose is then projected into the road plane using a homography. The experiments summarize the training and evaluation of the pose estimation method and the accuracy of manual vehicle annotation.
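The ground-plane projection step described above can be sketched with a homography H that maps image points to road-plane coordinates in homogeneous form. The matrix below is a made-up example purely for illustration; in practice H is estimated from camera calibration or from point correspondences on the road surface.

```python
import numpy as np

def image_to_road(u, v, H):
    """Map an image point (e.g. the regressed car-base point) to
    road-plane coordinates: x' = H x in homogeneous coordinates."""
    p = H @ np.array([u, v, 1.0])
    return p[:2] / p[2]          # dehomogenize

# Illustrative homography only; a real one comes from calibration.
H = np.array([[0.02,   0.001, -10.0],
              [0.0005, 0.05,  -15.0],
              [0.0,    0.001,   1.0]])

xy = image_to_road(640.0, 480.0, H)   # road-plane position in metres
```

The division by the third homogeneous coordinate is what makes the mapping perspective-correct: points lower in the image (closer to the camera) map to nearer road positions.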

    An intra-vehicular wireless multimedia sensor network for smartphone-based low-cost advanced driver-assistance systems

    Advanced driver-assistance systems (ADAS) are more prevalent in high-end vehicles than in low-end vehicles. Wired vision-sensor solutions for ADAS already exist, but are costly and do not cater for low-end vehicles. General ADAS use wired harnessing for communication; a wireless approach eliminates the need for cable harnessing, and therefore the practicality of a novel wireless ADAS solution was tested. A low-cost alternative is proposed that extends a smartphone's sensor perception using a camera-based wireless sensor network. This paper presents the design of a low-cost ADAS alternative that uses an intra-vehicle wireless sensor network structured by a Wi-Fi Direct topology, with a smartphone as the processing platform. The proposed system makes ADAS features accessible to cheaper vehicles and investigates the possibility of using a wireless network to communicate ADAS information in an intra-vehicle environment. Other smartphone ADAS approaches make use of a smartphone's onboard sensors; this paper instead shows essential ADAS features, lane detection and collision detection, implemented in a smartphone application using wireless sensor data. The smartphone's processing power was harnessed for a generic object detector based on a convolutional neural network, using the sensor network's video streams. The network's performance was analysed to ensure that detection could run in real time. A low-cost CMOS camera sensor network combined with a smartphone over Wi-Fi Direct thus forms an intra-vehicle wireless network serving as a low-cost advanced driver-assistance system.

    DATA AVAILABILITY STATEMENT: Publicly available datasets were analysed in this study. The data can be found at https://github.com/TuSimple/tusimple-benchmark and https://boxy-dataset.com/boxy/ (accessed on 25 November 2021). Published in Sensors (https://www.mdpi.com/journal/sensors).
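The collision-detection feature described above typically reduces to a time-to-collision (TTC) estimate from successive range readings to the lead vehicle. The sketch below is a generic illustration of that idea, not the paper's implementation; the distances, sample interval, and warning threshold are assumed values.

```python
def time_to_collision(d_prev, d_curr, dt):
    """Estimate TTC from two successive distance readings to the lead
    vehicle; returns None when the gap is not closing."""
    closing_speed = (d_prev - d_curr) / dt
    if closing_speed <= 0:
        return None                      # constant or growing gap: no risk
    return d_curr / closing_speed

def should_warn(d_prev, d_curr, dt, threshold_s=2.0):
    """Raise a warning when the estimated TTC falls below the threshold."""
    ttc = time_to_collision(d_prev, d_curr, dt)
    return ttc is not None and ttc < threshold_s

# A 20 m gap shrinking to 19 m over 0.1 s closes at 10 m/s, i.e. TTC 1.9 s.
warn = should_warn(20.0, 19.0, 0.1)
```

In the wireless setting, the sensing-to-warning latency of the Wi-Fi Direct link must be small relative to the TTC threshold for such a warning to remain useful, which is why the paper analyses whether the network sustains real-time detection.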