
    Computer Vision System-On-Chip Designs for Intelligent Vehicles

    Intelligent vehicle technologies are growing rapidly; through sensors and on-board intelligence, they can enhance road safety, improve transport efficiency, and aid driver operations. The advanced driver assistance system (ADAS) is a common platform for intelligent vehicle technologies. Many sensors, such as LiDAR, radar, and cameras, have been deployed on intelligent vehicles. Among these, optical cameras are the most widely used due to their low cost and easy installation. However, most computer vision algorithms are complicated and computationally slow, making them difficult to deploy on power-constrained systems. This dissertation investigates several mainstream ADAS applications and proposes efficient digital circuit implementations for them. It presents three ways of software/hardware algorithm partitioning for three ADAS applications: lane detection, traffic sign classification, and traffic light detection. By using an FPGA to offload the critical parts of each algorithm, the entire computer vision system runs in real time while maintaining low power consumption and a high detection rate. Following the advent of deep learning in computer vision, we also present two deep-learning-based hardware implementations on application-specific integrated circuits (ASICs) that achieve even lower power consumption and higher accuracy.

    The real-time lane detection system is implemented on the Xilinx Zynq platform, which combines a dual-core ARM processor with FPGA fabric, integrating the software programmability of an ARM processor with the hardware programmability of an FPGA. For the lane detection task, the FPGA handles the majority of the work: region-of-interest extraction, edge detection, image binarization, and the Hough transform. The ARM processor then takes the Hough transform results and highlights lanes using the Hough peaks algorithm. The entire system processes a 1080p video stream at a constant 69.4 frames per second, achieving real-time capability.

    An efficient system-on-chip (SoC) design that classifies up to 48 traffic signs in real time is also presented. The traditional histogram of oriented gradients (HoG) combined with a support vector machine (SVM) proves very effective for traffic sign classification, with an average accuracy of 93.77%. The biggest challenge is the low execution efficiency of HoG on embedded processors. By dividing the HoG algorithm into three fully pipelined stages and leveraging extra on-chip memory to store intermediate results, we achieve a throughput of 115.7 frames per second at 1080p resolution. The proposed generic HoG hardware implementation can also be used as a standalone IP core in other computer vision systems.

    A real-time traffic light detection system demonstrates an efficient hardware implementation of traditional grass-fire blob detection. The traditional grass-fire method traverses the input image multiple times to compute connected blobs. In digital circuits, five extra on-chip block memories are used to store intermediate results, so all connected-blob information can be obtained in a single pass over the image. The proposed hardware-friendly blob detection runs at 72.4 frames per second on 1080p video input. Applying HoG + SVM as the feature extractor and classifier, we obtain a 92.11% recall rate and a 99.29% precision rate on red lights, and a 94.44% recall rate and a 98.27% precision rate on green lights.
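    As a software illustration of the one-pass idea, the following is a minimal Python sketch of connected-component blob detection that uses a small label-equivalence (union-find) table in place of the extra on-chip memories; the function names and data structures here are illustrative assumptions, not the circuit's actual design.

        # Minimal single-pass blob detection sketch (software stand-in for the
        # hardware design; names and data structures are illustrative only).
        # The image is traversed once in raster order; only the small label
        # table is post-processed to merge boxes of connected labels.
        import numpy as np

        def find(parent, x):
            # Follow parents up to the root label of an equivalence class.
            while parent[x] != x:
                x = parent[x]
            return x

        def detect_blobs(binary):
            """Return {root_label: (x0, y0, x1, y1)} for 4-connected blobs."""
            h, w = binary.shape
            row_above = [0] * w            # line buffer: previous row's labels
            parent, bbox = [0], {}         # equivalence table, per-label boxes
            next_label = 1
            for y in range(h):
                row = [0] * w
                for x in range(w):
                    if not binary[y, x]:
                        continue
                    up = row_above[x]
                    left = row[x - 1] if x > 0 else 0
                    if up == 0 and left == 0:          # a new blob starts here
                        lab = next_label
                        parent.append(lab)
                        bbox[lab] = [x, y, x, y]
                        next_label += 1
                    else:                              # reuse a neighbour label
                        lab = min(l for l in (up, left) if l > 0)
                        for other in (up, left):       # record equivalences
                            if other and find(parent, other) != find(parent, lab):
                                parent[find(parent, other)] = find(parent, lab)
                    row[x] = lab
                    b = bbox[lab]                      # grow this label's box
                    b[0], b[1] = min(b[0], x), min(b[1], y)
                    b[2], b[3] = max(b[2], x), max(b[3], y)
                row_above = row
            merged = {}                                # one box per root label
            for lab, (x0, y0, x1, y1) in bbox.items():
                r = find(parent, lab)
                m = merged.get(r)
                merged[r] = (x0, y0, x1, y1) if m is None else (
                    min(m[0], x0), min(m[1], y0), max(m[2], x1), max(m[3], y1))
            return merged

    The line buffer row_above mirrors how streaming hardware keeps only the previous row of labels, which is why a single image traverse suffices.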
    Convolutional neural networks (CNNs) are revolutionizing computer vision through learnable, layer-by-layer feature extraction; however, they are usually slow to train and computationally expensive at inference. In this dissertation, we study the implementation of the principal component analysis network (PCANet), which strikes a balance between algorithmic robustness and computational complexity. Compared to a regular CNN, PCANet requires only one training iteration and typically has at most a few tens of convolutions per layer. Compared to hand-crafted feature extraction methods, PCANet better reflects the variance in the training dataset and adapts better to difficult conditions. PCANet achieves accuracy rates of 96.8% and 93.1% on road marking detection and traffic light detection, respectively. Implemented in Synopsys 32 nm process technology, the proposed chip can classify 724,743 32-by-32 image candidates in one second while consuming only 0.5 W.

    Finally, the binary neural network (BNN) is adopted as a potential detector for intelligent vehicles. The BNN constrains all activations and weights to +1 or -1. Compared to a CNN with the same network configuration, the BNN achieves 50 times better resource usage with only 1-2% accuracy loss. Taking car detection and pedestrian detection as examples, the BNN achieves an average accuracy of over 95%. Furthermore, a BNN accelerator implemented in Synopsys 32 nm process technology is presented. Its elastic architecture allows it to process any number of convolutional layers with high throughput. The BNN accelerator consumes only 0.6 W and does not rely on external memory for storage.
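    To make the +1/-1 arithmetic concrete, here is a minimal sketch of the standard XNOR-popcount trick that binary networks exploit: once ±1 values are encoded as single bits, a dot product reduces to an XNOR and a popcount. This is a generic illustration under our own encoding assumptions, not the accelerator's actual datapath.

        # Binarized dot product: encode +1 as bit 1 and -1 as bit 0; then
        # elementwise multiply becomes XNOR and the sum becomes a popcount.
        # Generic illustration of the XNOR-popcount trick, not the actual
        # datapath of the BNN accelerator described above.

        def binarize(vec):
            """Pack a list of +1/-1 values into an integer bit mask."""
            bits = 0
            for i, v in enumerate(vec):
                if v > 0:
                    bits |= 1 << i
            return bits

        def bin_dot(a_bits, w_bits, n):
            """Dot product of two n-element {+1, -1} vectors from bit masks."""
            matches = bin(~(a_bits ^ w_bits) & ((1 << n) - 1)).count("1")
            return 2 * matches - n     # matched bits are +1s, the rest are -1s

        a = [+1, -1, +1, +1]
        w = [+1, +1, -1, +1]
        assert bin_dot(binarize(a), binarize(w), 4) == sum(x * y for x, y in zip(a, w))

    A single XNOR gate per weight plus a popcount tree replaces a full multiplier array, which is why binary layers are so cheap in digital logic.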


    A machine learning approach for digital image restoration

    This paper addresses image restoration in the sense of detecting individual photographs within a scanned document such as a photo album or scrapbook. The primary use case of this research is to accelerate the cropping process for the employees of Cinetis, a company based in Martigny, Switzerland, that specializes in the digitization of old media formats. We first summarize the state of the art in this field of research, including explanations of the various feature and document detection techniques and algorithms used by digital companies.

    Underwater Localization in Complex Environments

    The ability of an autonomous underwater vehicle (AUV) to localize itself in a complex environment, as well as to detect relevant environmental features, is of crucial importance for successful navigation. This is particularly challenging in underwater environments due to the rapid attenuation suffered by signals from global positioning systems and other radio-frequency signals, together with dispersion and reflection, thus requiring a filtering process. A complex environment is defined here as a scenario with objects detached from the walls; for example, an object can have a certain orientation variability, so its position is not always known. Examples of such scenarios are a harbour, a tank, or even a dam reservoir, where there are walls and, within those walls, an AUV may need to localize itself with respect to the other vehicles in the area and position itself relative to one of them in order to observe, analyse, or scan it. Autonomous vehicles employ many different types of sensors for localization and for perceiving their environments, and they depend on on-board computers to perform autonomous driving tasks. This dissertation addresses a concrete problem: locating a cable suspended in the water column in a known region of the sea and navigating with respect to it.
Although the cable's position in the world is well known, the cable's dynamics do not allow knowing exactly where it is. Therefore, for the vehicle to localize itself with respect to the cable so that the cable can be inspected, the localization has to be based on optical and acoustic sensors. This study explores the processing and analysis of optical and acoustic images, using data acquired by a camera and by a mechanically scanned imaging sonar (MSIS), respectively, in order to extract relevant environmental features that enable estimation of the vehicle's location. The points of interest extracted from each of the sensors feed a position estimator implemented as an Extended Kalman Filter (EKF), which estimates the position of the cable; the filter's feedback is in turn used to improve the point-of-interest extraction processes.
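    For reference, one EKF predict/update cycle of the kind described above might look like the following Python sketch; the 2D constant-position state and the range-bearing measurement model are simplifying assumptions chosen for illustration, not the dissertation's actual formulation.

        # One EKF predict/update cycle for estimating a 2D position from a
        # range-bearing fix. The constant-position motion model and a sensor
        # at the origin are simplifying assumptions for illustration only.
        import numpy as np

        def ekf_step(x, P, z, R, Q):
            """x: state [px, py]; P: covariance; z: measurement [range, bearing];
            R: measurement noise covariance; Q: process noise covariance."""
            # Predict: constant-position model, so F = I and only noise grows P.
            x_pred = x
            P_pred = P + Q

            # Measurement model h(x): range and bearing from the sensor origin.
            px, py = x_pred
            r = np.hypot(px, py)
            h = np.array([r, np.arctan2(py, px)])
            # Jacobian of h evaluated at the predicted state.
            H = np.array([[px / r, py / r],
                          [-py / r**2, px / r**2]])

            # Update: innovation, Kalman gain, corrected state and covariance.
            y = z - h
            S = H @ P_pred @ H.T + R
            K = P_pred @ H.T @ np.linalg.inv(S)
            x_new = x_pred + K @ y
            P_new = (np.eye(2) - K @ H) @ P_pred
            return x_new, P_new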

    Active Collaborative Ensemble Tracking

    A discriminative ensemble tracker employs multiple classifiers, each of which casts a vote on each of the obtained samples. The votes are then aggregated in an attempt to localize the target object. Such a method relies on collective competence and on the diversity of the ensemble to approach the target/non-target classification task from different views. However, by updating the whole ensemble with a shared set of samples and their final labels, this diversity is lost or reduced to the diversity provided by the underlying features or the internal dynamics of the classifiers. Additionally, the classifiers do not exchange information with each other while striving to serve the collective goal, i.e., better classification. In this study, we propose an active collaborative information exchange scheme for ensemble tracking. This not only orchestrates the different classifiers toward a common goal but also provides an intelligent update mechanism that keeps the diversity of the classifiers and mitigates the shortcomings of one with the others. The data exchange is optimized with regard to an ensemble uncertainty utility function, and the ensemble is updated via co-training. The evaluations demonstrate promising results realized by the proposed algorithm for real-world online tracking.
    Comment: AVSS 2017 submission
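    As a loose sketch of the collaborative update idea (a simplified stand-in for the uncertainty-utility optimization proposed in the paper), two ensemble members can exchange pseudo-labels on samples where one is confident and the other is not, then update via partial fitting:

        # Simplified co-training exchange between two ensemble members: each
        # classifier pseudo-labels the samples it is confident about but its
        # peer is uncertain about, and the peer updates on them. A loose
        # illustration, not the paper's uncertainty-utility optimization.
        import numpy as np
        from sklearn.linear_model import SGDClassifier

        def cotrain_step(clf_a, clf_b, X_unlabeled, conf_thr=0.9, unc_thr=0.6):
            for teacher, learner in ((clf_a, clf_b), (clf_b, clf_a)):
                p_teacher = teacher.predict_proba(X_unlabeled).max(axis=1)
                p_learner = learner.predict_proba(X_unlabeled).max(axis=1)
                # samples the teacher is sure about but the learner is not
                mask = (p_teacher > conf_thr) & (p_learner < unc_thr)
                if mask.any():
                    pseudo = teacher.predict(X_unlabeled[mask])
                    learner.partial_fit(X_unlabeled[mask], pseudo)

        # Usage: pretrain both members on a small labeled pool ("log_loss"
        # enables predict_proba), then exchange labels on unlabeled frames.
        X0 = np.array([[0.0], [1.0], [2.0], [3.0]]); y0 = np.array([0, 0, 1, 1])
        clf_a = SGDClassifier(loss="log_loss").fit(X0, y0)
        clf_b = SGDClassifier(loss="log_loss").fit(X0 + 0.1, y0)
        cotrain_step(clf_a, clf_b, np.array([[0.5], [2.5]]))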