
    Analyzing Performance Effects of Neural Networks Applied to Lane Recognition under Various Environmental Driving Conditions

    Acknowledgments: The authors would like to thank the Université du Québec à Trois-Rivières and the Institut de recherche sur l’hydrogène for their collaboration and assistance.

    Lane detection is an essential module for the safe navigation of autonomous vehicles (AVs). Estimating the vehicle’s position and trajectory on the road is critical; however, several environmental variables can affect this task. State-of-the-art lane detection methods utilize convolutional neural networks (CNNs) as feature extractors, obtaining relevant features through training with multiple kernel layers. This makes them vulnerable to statistical changes in the input data and to noise affecting its spatial characteristics. In this paper, we compare six different CNN architectures to analyze the effect of various adverse conditions, including harsh weather, illumination variations, and shadows/occlusions, on lane detection. Among these adverse conditions, harsh weather in general, and snowy night conditions in particular, degrade performance by a large margin: the networks’ average detection accuracy decreased by 75.2%, and the root mean square error (RMSE) increased by 301.1%. Overall, the results show a noticeable drop in the networks’ accuracy for all adverse conditions, because the stochastic distribution of the features changes in each state.

    Funding: Natural Sciences and Engineering Research Council of Canada; Canada Research Chair
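The abstract reports its error increase as RMSE over lane positions. As a reminder of the metric only (not code from the paper), a minimal sketch computing RMSE between predicted and ground-truth lane x-coordinates at fixed image rows; the coordinate values are hypothetical:

```python
import math

def rmse(predicted, ground_truth):
    """Root mean square error between predicted and ground-truth values."""
    assert len(predicted) == len(ground_truth) and predicted
    return math.sqrt(
        sum((p - g) ** 2 for p, g in zip(predicted, ground_truth)) / len(predicted)
    )

# Hypothetical lane x-coordinates (in pixels) sampled at fixed row positions.
pred = [100.0, 105.0, 111.0]
gt = [102.0, 104.0, 110.0]
print(round(rmse(pred, gt), 3))  # sqrt((4 + 1 + 1) / 3) = sqrt(2) ≈ 1.414
```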

    Towards lightweight convolutional neural networks for object detection

    We propose a model with a larger spatial size of feature maps and evaluate it on an object detection task. To choose the best feature-extraction network for our model, we compare several popular lightweight networks. We then conduct a set of experiments with channel-reduction algorithms in order to accelerate execution. Our vehicle detection models are accurate and fast, and therefore suited to embedded vision applications. At only 1.5 GFLOPs, our best model achieves 93.39 AP on the validation subset of the challenging DETRAC dataset. The smallest of our models is the first to achieve real-time inference speed on a CPU, with a reasonable accuracy drop to 91.43 AP.

    Comment: Submitted to the International Workshop on Traffic and Street Surveillance for Safety and Security (IWT4S) in conjunction with the 14th IEEE International Conference on Advanced Video and Signal based Surveillance (AVSS 2017)
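The abstract mentions channel-reduction algorithms for acceleration. As an illustration of the general idea only (a magnitude-based heuristic, not necessarily the authors' algorithm), a sketch that ranks a convolutional layer's output channels by the L1 norm of their filters and keeps the top fraction; the filter values are hypothetical:

```python
def prune_channels(weights, keep_ratio=0.5):
    """Keep the output channels whose filters have the largest L1 norm.

    weights: one entry per output channel; each entry is a flat list of
    filter coefficients. Returns the surviving filters and their indices.
    """
    norms = [sum(abs(w) for w in filt) for filt in weights]
    n_keep = max(1, int(len(weights) * keep_ratio))
    # Indices of the n_keep highest-norm channels, in original order.
    ranked = sorted(range(len(weights)), key=lambda i: norms[i], reverse=True)
    keep = sorted(ranked[:n_keep])
    return [weights[i] for i in keep], keep

# Four hypothetical channels; the two with the largest magnitudes survive.
filters = [[0.1, -0.1], [1.0, 2.0], [0.0, 0.05], [0.5, -0.5]]
pruned, kept = prune_channels(filters, keep_ratio=0.5)
print(kept)  # -> [1, 3]
```

Fewer channels means fewer multiply-accumulates per layer, which is how such reductions lower the GFLOPs count.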

    Domain Adaptation with Joint Learning for Generic, Optical Car Part Recognition and Detection Systems (Go-CaRD)

    Systems for the automatic recognition and detection of automotive parts are crucial in several emerging research areas in the development of intelligent vehicles. They enable, for example, the detection and modelling of interactions between humans and the vehicle. In this paper, we quantitatively and qualitatively explore the efficacy of deep learning architectures for the classification and localisation of 29 interior and exterior vehicle regions on three novel datasets. Furthermore, we experiment with joint and transfer learning approaches across datasets and point out potential applications of our systems. Our best network architecture achieves an F1 score of 93.67% for recognition, while our best localisation approach, utilising state-of-the-art backbone networks, achieves a mAP of 63.01% for detection. The MuSe-CAR-Part dataset, which is based on a large variety of human-car interactions in videos, the weights of the best models, and the code are publicly available to academic parties for benchmarking and future research.

    Comment: Demonstration and instructions to obtain data and models: https://github.com/lstappen/GoCar
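The recognition result above is reported as an F1 score. As a reminder of the metric only (not code from the paper), F1 is the harmonic mean of precision and recall; the precision/recall values below are hypothetical:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall (0.0 when both are zero)."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical values: precision 0.8, recall 0.6.
print(round(f1_score(0.8, 0.6), 4))  # -> 0.6857
```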