Toolflows for Mapping Convolutional Neural Networks on FPGAs: A Survey and Future Directions
In the past decade, Convolutional Neural Networks (CNNs) have demonstrated
state-of-the-art performance in various Artificial Intelligence tasks. To
accelerate the experimentation and development of CNNs, several software
frameworks have been released, primarily targeting power-hungry CPUs and GPUs.
In this context, reconfigurable hardware in the form of FPGAs constitutes a
potential alternative platform that can be integrated in the existing deep
learning ecosystem to provide a tunable balance between performance, power
consumption and programmability. In this paper, a survey of the existing
CNN-to-FPGA toolflows is presented, comprising a comparative study of their key
characteristics which include the supported applications, architectural
choices, design space exploration methods and achieved performance. Moreover,
major challenges and objectives introduced by the latest trends in CNN
algorithmic research are identified and presented. Finally, a uniform
evaluation methodology is proposed, aiming at the comprehensive, complete and
in-depth evaluation of CNN-to-FPGA toolflows.
Comment: Accepted for publication at the ACM Computing Surveys (CSUR) journal, 201
Real-time on-board obstacle avoidance for UAVs based on embedded stereo vision
In order to improve usability and safety, modern unmanned aerial vehicles
(UAVs) are equipped with sensors to monitor the environment, such as
laser-scanners and cameras. One important aspect in this monitoring process is
to detect obstacles in the flight path in order to avoid collisions. Since a
large number of consumer UAVs suffer from tight weight and power constraints,
our work focuses on obstacle avoidance based on a lightweight stereo camera
setup. We use disparity maps, which are computed from the camera images, to
locate obstacles and to automatically steer the UAV around them. For disparity
map computation we optimize the well-known semi-global matching (SGM) approach
for the deployment on an embedded FPGA. The disparity maps are then converted
into simpler representations, the so-called U-/V-maps, which are used for
obstacle detection. Obstacle avoidance is based on a reactive approach which
finds the shortest path around the obstacles as soon as they have a critical
distance to the UAV. One of the fundamental goals of our work was the reduction
of development costs by closing the gap between application development and
hardware optimization. Hence, we aimed at using high-level synthesis (HLS) for
porting our algorithms, which are written in C/C++, to the embedded FPGA. We
evaluated our implementation of the disparity estimation on the KITTI Stereo
2015 benchmark. The integrity of the overall real-time reactive obstacle
avoidance algorithm has been evaluated by using Hardware-in-the-Loop testing in
conjunction with two flight simulators.
Comment: Accepted in the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Scienc
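The U-/V-map idea mentioned above can be sketched in a few lines: a U-map histograms disparities per image column and a V-map per image row, so that obstacles show up as compact clusters. This is an illustrative NumPy version of the general technique, not the authors' FPGA implementation; all sizes and values are made up.

```python
# Illustrative sketch: building U- and V-maps from a dense disparity map.
# Obstacles at a common depth produce peaks in both histograms.
import numpy as np

def uv_maps(disparity, max_disp=64):
    """Return (u_map, v_map) disparity histograms for an integer disparity map."""
    h, w = disparity.shape
    d = np.clip(disparity.astype(int), 0, max_disp - 1)
    u_map = np.zeros((max_disp, w), dtype=np.int32)  # disparity vs. column
    v_map = np.zeros((h, max_disp), dtype=np.int32)  # row vs. disparity
    for row in range(h):
        for col in range(w):
            u_map[d[row, col], col] += 1
            v_map[row, d[row, col]] += 1
    return u_map, v_map

# Toy scene: a 4x4 disparity map with a "near" block (disparity 10)
# occupying the right half of the image.
disp = np.zeros((4, 4), dtype=int)
disp[:, 2:] = 10
u, v = uv_maps(disp, max_disp=16)
# Columns 2-3 accumulate all four rows at disparity 10 in the U-map.
```

In a real pipeline the histograms would be thresholded to segment obstacle candidates before the reactive path planner runs.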
Real-Time Dense Stereo Matching With ELAS on FPGA Accelerated Embedded Devices
For many applications in low-power real-time robotics, stereo cameras are the
sensors of choice for depth perception as they are typically cheaper and more
versatile than their active counterparts. Their biggest drawback, however, is
that they do not directly sense depth maps; instead, these must be estimated
through data-intensive processes. Therefore, appropriate algorithm selection
plays an important role in achieving the desired performance characteristics.
Motivated by applications in space and mobile robotics, we implement and
evaluate an FPGA-accelerated adaptation of the ELAS algorithm. Despite offering
one of the best trade-offs between efficiency and accuracy, ELAS has only been
shown to run at 1.5-3 fps on a high-end CPU. Our system preserves all
intriguing properties of the original algorithm, such as the slanted plane
priors, but can achieve a frame rate of 47 fps whilst consuming under 4 W of
power. Unlike previous FPGA-based designs, we take advantage of both components
on the CPU/FPGA System-on-Chip to showcase the strategy necessary to accelerate
more complex and computationally diverse algorithms for such low power,
real-time systems.
Comment: 8 pages, 7 figures, 2 table
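The depth estimates such stereo pipelines produce come from the standard pinhole-stereo relation Z = f·B/d. A minimal sketch, with assumed calibration values (the focal length and baseline below are illustrative, not from the paper):

```python
# Hedged sketch: converting a disparity estimate into metric depth for a
# rectified stereo pair. Z = f * B / d, with focal length f in pixels,
# baseline B in metres, and disparity d in pixels.
def disparity_to_depth(d_px, focal_px=700.0, baseline_m=0.12):
    """Depth in metres for a disparity of d_px pixels."""
    if d_px <= 0:
        return float("inf")  # zero/invalid disparity maps to "infinitely far"
    return focal_px * baseline_m / d_px

# With the assumed calibration, a 42-pixel disparity corresponds to 2 m,
# and halving the disparity doubles the depth.
near = disparity_to_depth(42.0)  # 700 * 0.12 / 42 = 2.0 m
far = disparity_to_depth(21.0)   # 4.0 m
```

The inverse relationship is why disparity algorithms like ELAS spend their accuracy budget on small disparities: a one-pixel error matters far more for distant points.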
Efficient Neural Network Implementations on Parallel Embedded Platforms Applied to Real-Time Torque-Vectoring Optimization Using Predictions for Multi-Motor Electric Vehicles
The combination of machine learning and heterogeneous embedded platforms enables new potential for developing sophisticated control concepts applicable to the fields of vehicle dynamics and ADAS. This interdisciplinary work provides enabler solutions, ultimately implementing fast predictions using neural networks (NNs) on field-programmable gate arrays (FPGAs) and graphics processing units (GPUs), and applies them to a challenging application: torque vectoring on a multi-electric-motor vehicle for enhanced vehicle dynamics. The foundation motivating this work is provided by discussing multiple domains of the technological context as well as the constraints of the automotive field, which contrast with the attractiveness of exploiting the capabilities of new embedded platforms to apply advanced control algorithms to complex control problems. In this particular case, we target enhanced vehicle dynamics on a multi-motor electric vehicle, benefiting from the greater degrees of freedom and controllability offered by such powertrains. Considering the constraints of the application and the implications of the selected multivariable optimization challenge, we propose an NN to provide batch predictions for real-time optimization. This leads to the major contribution of this work: efficient NN implementations on two intrinsically parallel embedded platforms, a GPU and an FPGA, following an analysis of the theoretical and practical implications of their different operating paradigms, in order to efficiently harness their computing potential while gaining insight into their peculiarities. The achieved results exceed expectations and provide a representative illustration of the strengths and weaknesses of each kind of platform.
Consequently, having shown the applicability of the proposed solutions, this work also contributes valuable enablers for further developments following similar fundamental principles. Some of the results presented in this work are related to activities within the 3Ccar project, which has received funding from the ECSEL Joint Undertaking under grant agreement No. 662192. This Joint Undertaking received support from the European Union's Horizon 2020 research and innovation programme and Germany, Austria, Czech Republic, Romania, Belgium, United Kingdom, France, Netherlands, Latvia, Finland, Spain, Italy, Lithuania. This work was also partly supported by the project ENABLES3, which received funding from the ECSEL Joint Undertaking under grant agreement No. 692455-2
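The batch-prediction idea above, evaluating many candidate control inputs through one network pass so that the hardware's parallelism is saturated, can be sketched as a batched feed-forward inference. The layer sizes, weights, and batch size below are illustrative assumptions, not the paper's network:

```python
# Minimal sketch of batched MLP inference of the kind mapped onto GPU/FPGA:
# one matrix multiply per layer evaluates the whole batch of candidate
# inputs at once, instead of looping over samples.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((8, 4)), np.zeros(8)  # input dim 4 -> hidden 8
W2, b2 = rng.standard_normal((2, 8)), np.zeros(2)  # hidden 8 -> output 2

def forward(x_batch):
    """x_batch: (batch, 4) array of candidate control inputs."""
    h = np.tanh(x_batch @ W1.T + b1)  # hidden activations for every sample
    return h @ W2.T + b2              # batched predictions, shape (batch, 2)

preds = forward(rng.standard_normal((64, 4)))  # 64 predictions in one pass
```

On intrinsically parallel platforms the matrix products map directly onto the compute fabric, which is the property the work exploits for real-time multivariable optimization.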
An FPGA-based infant monitoring system
We have designed an automated visual surveillance system for monitoring sleeping infants. The low-level image processing is implemented on an embedded Xilinx Virtex-II XC2V6000 FPGA and quantifies the level of scene activity using a specially designed background subtraction algorithm. We present our algorithm and show how we have optimised it for this platform.
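A generic background-subtraction activity measure in the spirit of the abstract can be sketched as a running-average background model with a per-pixel difference threshold. This is an illustrative scheme with assumed parameters, not the authors' FPGA design:

```python
# Illustrative sketch: running-average background subtraction producing a
# scalar "scene activity" value per frame (fraction of changed pixels).
import numpy as np

def update_activity(frame, background, alpha=0.05, thresh=20):
    """Return (activity_fraction, updated_background) for one 8-bit frame."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    mask = diff > thresh                                   # pixels that changed
    background = (1 - alpha) * background + alpha * frame  # slow adaptation
    return mask.mean(), background

bg = np.zeros((4, 4))
frame = np.zeros((4, 4))
frame[0, :] = 100                    # a bright moving region in one row
activity, bg = update_activity(frame, bg)
# 4 of 16 pixels exceed the threshold -> activity 0.25
```

The slow adaptation rate keeps a stationary infant in the background model while transient motion still registers as activity.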
Performance analysis of massively parallel embedded hardware architectures for retinal image processing
This paper examines the implementation of a retinal vessel tree extraction technique on different hardware platforms and architectures. Retinal vessel tree extraction is a representative application of those found in the domain of medical image processing. The low signal-to-noise ratio of the images leads to a large number of low-level tasks in order to meet the accuracy requirements. In some applications, this might compromise computing speed. This paper focuses on assessing the performance of a retinal vessel tree extraction method on different hardware platforms. In particular, the method is mapped onto a massively parallel SIMD (MP-SIMD) chip, a massively parallel processor array (MPPA) and a field-programmable gate array (FPGA). This work is funded by Xunta de Galicia under the projects 10PXIB206168PR and 10PXIB206037PR and the program Maria BarbeitoS
Real-time implementation of 3D LiDAR point cloud semantic segmentation in an FPGA
Master's dissertation in Informatics Engineering. In the last few years, the automotive industry has relied heavily on deep learning applications for perception solutions. With data-heavy sensors, such as LiDAR, becoming a standard, the task of developing low-power, real-time applications has become increasingly challenging. To obtain maximum computational efficiency, one can no longer focus solely on the software aspect of such applications while disregarding the underlying hardware.
In this thesis, a hardware-software co-design approach is used to implement an inference application
leveraging the SqueezeSegV3, a LiDAR-based convolutional neural network, on the Versal ACAP VCK190
FPGA. The development of the proposed solution is carefully driven by automotive requirements, with real-time performance and low power consumption as the target metrics.
A first experiment validates the suitability of Xilinx’s Vitis-AI tool for the deployment of deep
convolutional neural networks on FPGAs. Both the ResNet-18 and SqueezeNet neural networks are
deployed to the Zynq UltraScale+ MPSoC ZCU104 and Versal ACAP VCK190 FPGAs. The results show
that both networks far exceed the real-time requirements while consuming little power.
Compared to an NVIDIA RTX 3090 GPU, the performance per watt during inference of the two
networks is 12x and 47.8x higher on the Zynq UltraScale+ MPSoC ZCU104 and 15.1x and 26.6x
higher on the Versal ACAP VCK190 FPGA. These results are obtained with no drop in accuracy
in the quantization step.
A second experiment builds upon the results of the first by deploying a real-time application containing
the SqueezeSegV3 model using the Semantic-KITTI dataset. A framerate of 11 Hz is achieved with a peak
power consumption of 78 Watts. The quantization step results in a minimal accuracy and IoU degradation
of 0.7 and 1.5 points respectively. A smaller version of the same model is also deployed achieving a
framerate of 19 Hz and a peak power consumption of 76 Watts. The application performs semantic
segmentation over the entire point cloud with a field of view of 360°.
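The quantization step discussed above can be illustrated with a generic symmetric int8 post-training scheme. This is a sketch of the general technique, not Xilinx's actual Vitis-AI algorithm; the weights are made up:

```python
# Hedged sketch: symmetric post-training weight quantization to int8, the
# kind of step performed before FPGA deployment. One scale per tensor.
import numpy as np

def quantize_int8(w):
    """Map float weights to int8 with a single symmetric scale."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.27, 0.01, 1.27], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)  # per-weight reconstruction error is at most scale/2
```

Because the representable grid is coarse relative to float32, some accuracy or IoU loss (like the 0.7 and 1.5 points reported above) is the typical cost of such schemes.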
The future of computing beyond Moore's Law
Moore's Law is a techno-economic model that has enabled the information technology industry to double the performance and functionality of digital electronics roughly every 2 years within a fixed cost, power and area. Advances in silicon lithography have enabled this exponential miniaturization of electronics, but, as transistors reach atomic scale and fabrication costs continue to rise, the classical technological driver that has underpinned Moore's Law for 50 years is failing and is anticipated to flatten by 2025. This article provides an updated view of what a post-exascale system will look like and the challenges ahead, based on our most recent understanding of technology roadmaps. It also discusses the tapering of historical improvements, and how it affects options available to continue scaling of successors to the first exascale machine. Lastly, this article covers the many different opportunities and strategies available to continue computing performance improvements in the absence of historical technology drivers. This article is part of a discussion meeting issue 'Numerical algorithms for high-performance computational science'.
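The "doubling every 2 years" figure implies a specific compound rate, which a quick back-of-the-envelope check makes concrete:

```python
# Arithmetic behind the doubling claim: the implied compound annual
# improvement rate, and the cumulative factor over the 50 years cited.
annual_rate = 2 ** (1 / 2) - 1     # sqrt(2) - 1, roughly 41% per year
fifty_year_factor = 2 ** (50 / 2)  # 25 doublings, about 33.5 million-fold
```

Twenty-five doublings over five decades is the scale of improvement whose loss the article argues must now be recovered by other means.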