1,655 research outputs found

    Optimization of deep learning algorithms for an autonomous RC vehicle

    Get PDF
    Master's dissertation in Informatics Engineering. This dissertation aims to evaluate and improve the performance of deep learning (DL) algorithms for driving a vehicle autonomously, using a remote-controlled (RC) car as a testbed. The RC vehicle was built from a 1:10 scale remote-controlled car fitted with an embedded system and a video camera to capture and process real-time image data. Two embedded systems were comparatively evaluated: a homogeneous system, a Raspberry Pi 4, and a heterogeneous system, an NVidia Jetson Nano. The Raspberry Pi 4, with an advanced 4-core ARM device, supports multiprocessing, while the Jetson Nano, also with a 4-core ARM device, has an integrated accelerator: a 128 CUDA-core NVidia GPU. The captured video is processed with convolutional neural networks (CNNs), which interpret image data of the vehicle's surroundings and predict critical data, such as lane view and steering angle, enabling the vehicle to drive on its own along a predefined path. To improve the driving performance of the RC vehicle, this work analysed the programmed DL algorithms, namely different computer vision approaches for object detection and image classification, aiming to explore DL techniques and improve their performance at the inference phase. The work also analysed the computational efficiency of the control software while running intense and complex deep learning tasks on the embedded devices, and fully explored the advanced characteristics and instructions provided by the two embedded systems in the vehicle. Different machine learning (ML) libraries and frameworks were analysed and evaluated: TensorFlow, TensorFlow Lite, Arm NN, PyArmNN and TensorRT. They play a key role in deploying the relevant algorithms and fully engaging the hardware's capabilities. The original algorithm was successfully optimized, and both embedded systems could comfortably handle this workload.
To understand the computational limits of both devices, an additional, heavier DL algorithm was developed to detect traffic signs. The homogeneous system, the Raspberry Pi 4, could not deliver feasible low-latency values, so traffic signs could not be detected in real-time. However, a large performance improvement was achieved with the heterogeneous system, the Jetson Nano, whose CUDA cores processed the additional workload.
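The real-time feasibility question above comes down to per-frame inference latency. The sketch below shows one way to benchmark a model on an embedded board and check it against a frame-rate budget; the 20 FPS target and the function names are illustrative assumptions, not figures from the dissertation.

```python
# Hypothetical latency benchmark for an embedded inference loop.
# `infer` stands in for any model call (TFLite, Arm NN, TensorRT, ...).
import time

def measure_fps(infer, frames, warmup=2):
    """Run `infer` over `frames`, skipping `warmup` runs to absorb
    lazy initialization, and return (mean latency in s, frames/s)."""
    for frame in frames[:warmup]:
        infer(frame)                      # warm caches / lazy init
    timings = []
    for frame in frames[warmup:]:
        start = time.perf_counter()
        infer(frame)
        timings.append(time.perf_counter() - start)
    mean = sum(timings) / len(timings)
    return mean, 1.0 / mean

def meets_realtime(fps, target_fps=20.0):
    """Assumed budget: a small RC car at modest speed needs roughly
    20 processed frames per second end to end."""
    return fps >= target_fps
```

Running this with the same model on both boards makes the Raspberry Pi 4 vs. Jetson Nano comparison a single-number check rather than a subjective judgment.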

    Delicar: A smart deep learning based self driving product delivery car in perspective of Bangladesh

    Get PDF
    The rapid expansion of a country's economy is highly dependent on timely product distribution, which is hampered by severe traffic congestion. Additional staff are also required to accompany a delivery vehicle while it transports documents or records to another destination. This study proposes Delicar, a self-driving product delivery vehicle that can drive itself on the road and report its current geographical location to the authority in real-time through a map. The equipped camera module captures the road image and transfers it to the computer via socket server programming. The Raspberry Pi sends the camera image and waits for the steering angle value. The image is fed to a pre-trained deep learning model that predicts the steering angle for that situation. The steering angle value is then passed to the Raspberry Pi, which instructs the L298 motor driver which direction the wheels should follow; based on this direction, the L298 produces forward, left, right, or backward movement. A 3-cell 12 V LiPo battery supplies power to the Raspberry Pi and the L298 motor driver, and a buck converter regulates a 5 V / 3 A supply for the Raspberry Pi. The NVidia CNN architecture was followed, containing nine layers, including five convolution layers and three dense layers, to develop the steering angle predictive model. GeoIP2 (a Python library) retrieves the longitude and latitude from the equipped system's IP address to report the live geographical position to the authorities, and Folium is then used to depict the geographical location. Moreover, the system's infrastructure is very low-cost and easy to install.
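The last step of the pipeline above, turning a predicted steering angle into an L298 command, can be sketched as follows. The angle convention, dead zone, and pin assignment are assumptions for illustration; the paper's actual wiring and thresholds are not given here.

```python
# Hypothetical mapping from a predicted steering angle to an L298
# motor-driver command, as in the Delicar control step.

def steering_to_command(angle_deg, dead_zone=5.0):
    """Return 'left', 'right', or 'forward' for a steering angle in
    degrees, assuming negative means left and positive means right."""
    if angle_deg < -dead_zone:
        return "left"
    if angle_deg > dead_zone:
        return "right"
    return "forward"

def l298_pins(command):
    """Map a command to (IN1, IN2, IN3, IN4) logic levels for the two
    motor channels -- an illustrative pin assignment, not the paper's."""
    table = {
        "forward":  (1, 0, 1, 0),
        "backward": (0, 1, 0, 1),
        "left":     (0, 0, 1, 0),   # stop left wheels, drive right wheels
        "right":    (1, 0, 0, 0),   # drive left wheels, stop right wheels
    }
    return table[command]
```

The dead zone keeps small prediction noise from producing constant left/right jitter on a straight road.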

    Autonomous Driving with a Simulation Trained Convolutional Neural Network

    Get PDF
    Autonomous vehicles will help society if they can easily support a broad range of driving environments, conditions, and vehicles. Achieving this requires reducing the complexity of the algorithmic system, easing the collection of training data, and verifying operation using real-world experiments. Our work addresses these issues by utilizing a reflexive neural network that translates images into steering and throttle commands. This network is trained using simulation data from Grand Theft Auto V, which we augment to reduce the number of simulation hours driven. We then validate our work using an RC car system through numerous tests. Our system successfully drives 98 of 100 laps of a track with multiple road types and difficult turns; it also successfully avoids collisions with another vehicle in 90% of the trials.
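Augmenting driving data to cut simulation hours often relies on transforms like the one sketched below: mirroring a frame and negating its steering label doubles the dataset for free. This is a commonly used augmentation, assumed here for illustration; the abstract does not specify the paper's exact transforms. The image is a nested list of pixel rows; a real pipeline would use NumPy, but the idea is identical.

```python
# Horizontal-flip augmentation for steering data (illustrative sketch).

def flip_example(image, steering):
    """Mirror the image left-to-right and negate the steering angle,
    turning one recorded example into its symmetric counterpart."""
    flipped = [list(reversed(row)) for row in image]
    return flipped, -steering
```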

    F1/10: An Open-Source Autonomous Cyber-Physical Platform

    Get PDF
    In 2005 DARPA labeled the realization of viable autonomous vehicles (AVs) a grand challenge; a short time later the idea became a moonshot that could change the automotive industry. Today, the question of safety stands between reality and a solved problem. Given the right platform, the CPS community is poised to offer unique insights. However, testing the limits of safety and performance on real vehicles is costly and hazardous, and the use of such vehicles is also outside the reach of most researchers and students. In this paper, we present F1/10: an open-source, affordable, and high-performance 1/10-scale autonomous vehicle testbed. The F1/10 testbed carries a full suite of sensors, as well as perception, planning, control, and networking software stacks that are similar to full-scale solutions. We demonstrate key examples of the research enabled by the F1/10 testbed and show how the platform can be used to augment research and education in autonomous systems, making autonomy more accessible.

    END-TO-END LEARNING UTILIZING TEMPORAL INFORMATION FOR VISION-BASED AUTONOMOUS DRIVING

    Get PDF
    End-to-end learning models trained with conditional imitation learning (CIL) have demonstrated their capabilities in driving autonomously in dynamic environments. The performance of such models is limited, however, as most of them fail to utilize the temporal information that resides in a sequence of observations. In this work, we explore the use of temporal information with a recurrent network to improve driving performance. We propose a model that combines a pre-trained, deeper convolutional neural network, to better capture image features, with a long short-term memory network, to better exploit temporal information. Experimental results indicate that the proposed model achieves performance gains in several tasks of the CARLA benchmark compared to state-of-the-art models. In particular, in the most challenging task, navigation in dynamic environments, we achieve a 96% success rate while other CIL-based models reach 82-92% in training conditions; we also achieve 88% while other CIL-based models reach 42-90% in new-town and new-weather conditions. A subsequent ablation study shows that all the major features of the proposed model are essential for improving performance. We therefore believe that this work contributes significantly towards safe, efficient, clean autonomous driving for future smart cities.
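The temporal side of such a CNN+LSTM model can be sketched as a fixed-length window of the most recent per-frame feature vectors, handed to the recurrent network at every step. The window length and padding policy below are illustrative assumptions, not details from the paper.

```python
# Rolling window of per-frame features for a recurrent driving model
# (illustrative sketch; feature extraction itself is out of scope).
from collections import deque

class FrameWindow:
    def __init__(self, length=8):
        self.frames = deque(maxlen=length)  # oldest features drop off

    def push(self, features):
        """Append the newest frame's feature vector."""
        self.frames.append(features)

    def sequence(self):
        """Return the window as a list, oldest first, padded by
        repeating the earliest frame until the window is full, so the
        recurrent network always sees a fixed-length sequence."""
        seq = list(self.frames)
        while seq and len(seq) < self.frames.maxlen:
            seq.insert(0, seq[0])
        return seq
```

Padding by repetition at start-up is one simple choice; zero-padding or delaying the first prediction until the window fills are common alternatives.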