
    Does Familiarity breed inattention? Why drivers crash on the roads they know best

    This paper describes our research into the nature of everyday driving, with a particular emphasis on the processes that govern driver behaviour in familiar, well-practiced situations. The research examined the development and maintenance of proceduralised driving habits in a high-fidelity driving simulator by paying 29 participants to drive a simulated road regularly over three months of testing. A range of measures, including detection task performance and driving performance, was collected over the course of 20 sessions. Performance from a yoked control group, who experienced the same road scenarios in a single session, was also measured. The data showed the development of stereotyped driving patterns and changes in what drivers noticed, indicative of inattentional blindness and “driving without awareness”. Extended practice also resulted in increased sensitivity for detecting changes to foveal road features associated with vehicle guidance, and in better performance on an embedded vehicle detection task (detection of a specific vehicle type). The changes in attentional focus and driving performance resulting from extended practice help explain why drivers are at increased risk of crashing on roads they know well. Identifying the features of familiar roads that attract driver attention, even when drivers are operating without awareness, can inform new interventions and designs for safer roads. The data also shed new light on a range of previous driver behaviour research, including a “Tandem Model” that incorporates both the explicit and implicit processes involved in driving performance.

    Dynamic Vision Sensor integration on FPGA-based CNN accelerators for high-speed visual classification

    Deep learning is a cutting-edge approach that is being applied in many fields. For vision applications, Convolutional Neural Networks (CNNs) achieve remarkable accuracy on classification tasks. Numerous hardware accelerators have appeared in recent years to improve on CPU- or GPU-based solutions. This technology is commonly prototyped and tested on FPGAs before being considered for ASIC fabrication for mass production. The use of typical commercial cameras (30 fps) limits the capabilities of these systems for high-speed applications. Dynamic vision sensors (DVS), which emulate the behavior of a biological retina, are gaining importance for such applications because of their nature: the information is represented by a continuous stream of spikes, and the frames to be processed by the CNN are constructed by collecting a fixed number of these spikes (called events). The faster an object moves, the more events the DVS produces, and thus the higher the equivalent frame rate. Therefore, using a DVS allows a frame to be computed at the maximum speed a CNN accelerator can offer. In this paper we present a VHDL/HLS description of a pipelined FPGA design able to collect events from an Address-Event-Representation (AER) DVS retina and obtain a normalized histogram to be used by a particular CNN accelerator, called NullHop. VHDL is used to describe the circuit, and HLS for the computation blocks, which perform the normalization of a frame needed by the CNN. The results outperform previous implementations of frame collection and normalization using ARM processors running at 800 MHz on a Zynq7100, in both latency and power consumption. A measured 67% speedup factor is presented for a Roshambo CNN real-time experiment running at a 160 fps peak rate.
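    As a minimal sketch of the frame-construction idea described above, the following Python snippet accumulates a fixed number of AER events into a 2D histogram and normalizes it. The resolution, the events-per-frame count, and the (x, y, polarity) event format are illustrative assumptions, not the paper's NullHop interface.

```python
import numpy as np

def events_to_frame(events, width=128, height=128, events_per_frame=2048):
    """Accumulate a fixed number of AER events into a normalized histogram frame.

    `events` yields (x, y, polarity) tuples as produced by a DVS. The frame is
    ready once `events_per_frame` events have been collected, so faster motion
    (more events) yields a higher effective frame rate.
    """
    frame = np.zeros((height, width), dtype=np.float32)
    count = 0
    for x, y, polarity in events:
        frame[y, x] += 1.0 if polarity else -1.0  # signed per-pixel event count
        count += 1
        if count == events_per_frame:
            break
    lo, hi = frame.min(), frame.max()
    if hi > lo:
        frame = (frame - lo) / (hi - lo)  # normalize to [0, 1] for the CNN input
    return frame
```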

    Synchronous wearable wireless body sensor network composed of autonomous textile nodes

    A novel, fully autonomous, wearable, wireless sensor network is presented, where each flexible textile node performs cooperative synchronous acquisition and distributed event detection. Computationally efficient situational-awareness algorithms are implemented on the low-power microcontroller present on each flexible node. The detected events are wirelessly transmitted to a base station, both directly and forwarded by other on-body nodes. For each node, a dual-polarized textile patch antenna serves as a platform for the flexible electronic circuitry. The system is therefore particularly suitable for comfortable and unobtrusive integration into garments. At the same time, polarization diversity can be exploited to improve the reliability and energy efficiency of the wireless transmission. Extensive experiments in realistic conditions have demonstrated that this new autonomous, body-centric, textile-antenna wireless sensor network is able to correctly detect different operating conditions of a firefighter during an intervention. By relying on four network nodes integrated into the protective garment, this functionality is implemented locally, on the body, and in real time. In addition, the received sensor data are reliably transferred to a central access point at the command post for more detailed and more comprehensive real-time visualization. This information provides coordinators and commanders with situational awareness of the entire rescue operation. A statistical analysis of measured on-body node-to-node as well as off-body person-to-person channels is included, confirming the reliability of the communication system.
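    The abstract does not give the situational-awareness algorithms themselves, so the sketch below only illustrates the general pattern of a computationally cheap, on-node classifier: a windowed accelerometer feature mapped to a coarse operating condition that is then broadcast. The thresholds, state labels, window size, and radio interface are all hypothetical.

```python
import math
from collections import deque

WINDOW = 50  # samples per decision window (e.g., 1 s at 50 Hz) -- illustrative

def motion_energy(samples):
    """Mean deviation of acceleration magnitude from gravity, in g units."""
    return sum(abs(math.sqrt(x*x + y*y + z*z) - 1.0)
               for x, y, z in samples) / len(samples)

def classify(samples, still_thr=0.05, walk_thr=0.4):
    """Map a window of accelerometer samples to a coarse operating condition."""
    e = motion_energy(samples)
    if e < still_thr:
        return "MOTIONLESS"  # potential distress if sustained
    return "WALKING" if e < walk_thr else "RUNNING"

window = deque(maxlen=WINDOW)

def on_sample(x, y, z, radio):
    """Called per accelerometer sample; emits an event once per full window."""
    window.append((x, y, z))
    if len(window) == WINDOW:
        event = classify(window)
        radio.send(event)  # hypothetical radio; other on-body nodes may forward
        window.clear()
```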

    A Novel Translation, Rotation, and Scale-Invariant Shape Description Method for Real-Time Speed-Limit Sign Recognition

    Speed-limit sign (SLS) recognition is an important function for realizing advanced driver-assistance systems (ADAS). This paper presents a novel design of an image-based SLS recognition algorithm, which can efficiently detect and recognize an SLS in real time. To improve the robustness of the proposed SLS algorithm, this paper also proposes a new shape description method that describes the detected SLS using centroid-to-contour (CtC) distances of the sign content. The proposed CtC descriptor is invariant to translation, rotation, and scale variations of the SLS in the image. This advantage increases the recognition rate of a linear support vector machine classifier. The proposed SLS recognition method was implemented and tested on an ARM-based embedded platform. Experimental results validate the SLS recognition accuracy and real-time performance of the proposed method.
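    A hedged sketch of a centroid-to-contour descriptor in the spirit described above: measuring distances from the centroid gives translation invariance, dividing by the maximum distance gives scale invariance, and taking the FFT magnitude of the distance signature is one common way to obtain rotation invariance. The paper's exact normalization may differ; point counts and OpenCV usage here are illustrative.

```python
import cv2
import numpy as np

def ctc_descriptor(binary_img, n_points=64):
    """Centroid-to-contour (CtC) distance descriptor for a binarized sign shape."""
    contours, _ = cv2.findContours(binary_img, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    contour = max(contours, key=cv2.contourArea).squeeze(1)  # (N, 2) points

    m = cv2.moments(contour)
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]  # centroid -> translation inv.

    # Sample the contour uniformly and measure centroid-to-contour distances.
    idx = np.linspace(0, len(contour) - 1, n_points).astype(int)
    pts = contour[idx].astype(np.float64)
    dists = np.hypot(pts[:, 0] - cx, pts[:, 1] - cy)

    dists /= dists.max()                   # scale invariance
    spectrum = np.abs(np.fft.rfft(dists))  # rotation ~ cyclic shift -> FFT magnitude
    return spectrum[1:].astype(np.float32) # drop DC term; feed to a linear SVM
```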

    Optimization of deep learning algorithms for an autonomous RC vehicle

    Master's dissertation in Informatics Engineering. This dissertation aims to evaluate and improve the performance of deep learning (DL) algorithms for driving a vehicle autonomously, using a remote-controlled (RC) car as a testbed. The RC vehicle was built from a 1:10-scale remote-controlled car fitted with an embedded system and a video camera to capture and process real-time image data. Two different embedded systems were comparatively evaluated: a homogeneous system, a Raspberry Pi 4, and a heterogeneous system, an NVidia Jetson Nano. The Raspberry Pi 4, with an advanced 4-core ARM device, supports multiprocessing, while the Jetson Nano, also with a 4-core ARM device, has an integrated accelerator, a 128 CUDA-core NVidia GPU. The captured video is processed with convolutional neural networks (CNNs), which interpret image data of the vehicle's surroundings and predict critical data, such as lane view and steering angle, providing the mechanisms for the vehicle to drive on its own, following a predefined path. To improve the driving performance of the RC vehicle, this work analysed the programmed DL algorithms, namely different computer vision approaches for object detection and image classification, aiming to explore DL techniques and improve their performance at the inference phase. The work also analysed the computational efficiency of the control software while running intense and complex deep learning tasks on the embedded devices, and fully explored the advanced characteristics and instructions provided by the two embedded systems in the vehicle. Different machine learning (ML) libraries and frameworks were analysed and evaluated: TensorFlow, TensorFlow Lite, Arm NN, PyArmNN and TensorRT. They play a key role in deploying the relevant algorithms and fully engaging the hardware capabilities. The original algorithm was successfully optimized, and both embedded systems could easily handle this workload. To understand the computational limits of both devices, an additional, heavier DL algorithm was developed to detect traffic signs. The homogeneous system, the Raspberry Pi 4, could not deliver feasible low-latency values, hence detection of traffic signs was not possible in real time. However, a great performance improvement was achieved with the heterogeneous system, the Jetson Nano, by enabling its CUDA cores to process the additional workload.
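    As a hedged illustration of the kind of embedded inference loop such a system runs (not the dissertation's actual code), the Python sketch below uses TensorFlow Lite, one of the evaluated frameworks, to predict a steering angle from a camera frame. The model file name, input size, and single-scalar output layout are assumptions.

```python
import cv2
import numpy as np
import tflite_runtime.interpreter as tflite  # lightweight runtime for Pi/Jetson

# Hypothetical model file; the dissertation's actual network is not given here.
interpreter = tflite.Interpreter(model_path="steering_model.tflite",
                                 num_threads=4)  # exploit all 4 ARM cores
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

def predict_steering(frame_bgr):
    """One inference step: camera frame in, predicted steering angle out."""
    h, w = int(inp["shape"][1]), int(inp["shape"][2])
    x = cv2.resize(frame_bgr, (w, h)).astype(np.float32) / 255.0  # preprocess
    interpreter.set_tensor(inp["index"], x[np.newaxis])           # add batch dim
    interpreter.invoke()
    return float(interpreter.get_tensor(out["index"])[0, 0])
```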

    VANET Applications: Hot Use Cases

    Current challenges for car manufacturers are to make roads safe, to achieve free-flowing traffic with few congestions, and to reduce pollution through effective fuel use. To reach these goals, many improvements are made in-car, but more and more approaches rely on connected cars with communication capabilities between cars, with an infrastructure, or with IoT devices. Monitoring and coordinating vehicles then makes it possible to compute intelligent modes of transportation. Connected cars have introduced a new way of thinking about cars: not only as a means for a driver to go from A to B, but as smart cars, a user extension like the smartphone today. In this report, we introduce concepts and specific vocabulary in order to classify current innovations and ideas on the emerging topic of the smart car. We present a graphical categorization showing this evolution as a function of societal evolution. Different perspectives are adopted: a vehicle-centric view, a vehicle-network view, and a user-centric view, described by simple and complex use cases and illustrated by a list of emerging and current projects from the academic and industrial worlds. We identified an empty space in innovation between the user and his car: paradoxically, even though they are both in interaction, they are separated by different application uses. The future challenge is to interweave the user's social concerns with intelligent and efficient driving.