Design and Optimization of Residual Neural Network Accelerators for Low-Power FPGAs Using High-Level Synthesis
Residual neural networks are widely used in computer vision tasks. They
enable the construction of deeper and more accurate models by mitigating the
vanishing gradient problem. Their main innovation is the residual block, which
allows the output of one layer to bypass one or more intermediate layers and be
added to the output of a later layer. Their complex structure and the buffering
required by the residual block make them difficult to implement on
resource-constrained platforms. We present a novel design flow, optimized for ResNets, for implementing
deep learning models on field-programmable gate arrays (FPGAs), using a strategy
that reduces their buffering overhead to obtain a resource-efficient
implementation of the residual layer. Our high-level synthesis (HLS)-based flow
encompasses a thorough set of design principles and optimization strategies,
exploiting standard techniques such as temporal reuse and loop merging in novel
ways to efficiently map ResNet models, and potentially other
skip-connection-based NN architectures, onto FPGAs. The models
are quantized to 8-bit integers for both weights and activations, 16-bit for
biases, and 32-bit for accumulations. The experimental results are obtained on
the CIFAR-10 dataset using ResNet8 and ResNet20 implemented with Xilinx FPGAs
using HLS on the Ultra96-V2 and Kria KV260 boards. Compared to the
state-of-the-art on the Kria KV260 board, our ResNet20 implementation achieves
2.88X speedup with 0.5% higher accuracy of 91.3%, while ResNet8 accuracy
improves by 2.8% to 88.7%. The throughputs of ResNet8 and ResNet20 are 12971
FPS and 3254 FPS on the Ultra96 board, and 30153 FPS and 7601 FPS on the Kria
KV260, respectively. Both implementations Pareto-dominate state-of-the-art
solutions in accuracy, throughput, and energy.
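The mixed-precision quantization scheme described in the abstract (8-bit weights and activations, 16-bit biases, 32-bit accumulation) can be sketched in plain NumPy. The layer shapes, random data, and scale factors below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def quantize(x, bits, scale):
    """Uniform symmetric quantization to a signed integer grid."""
    qmin, qmax = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    return np.clip(np.round(x / scale), qmin, qmax)

# Hypothetical layer data and scales -- illustrative only, not from the paper.
rng = np.random.default_rng(0)
w = rng.normal(size=(16, 8)).astype(np.float32)   # weights
a = rng.normal(size=(8,)).astype(np.float32)      # input activations
b = rng.normal(size=(16,)).astype(np.float32)     # biases
sw, sa = 0.05, 0.1                                # assumed per-tensor scales

wq = quantize(w, 8, sw).astype(np.int8)           # 8-bit weights
aq = quantize(a, 8, sa).astype(np.int8)           # 8-bit activations
bq = quantize(b, 16, sw * sa).astype(np.int16)    # 16-bit biases at product scale

# Widen the int8 operands to int32 before the dot product so the
# accumulation happens at 32-bit precision, as in the accelerator.
acc = wq.astype(np.int32) @ aq.astype(np.int32) + bq.astype(np.int32)
out = acc * (sw * sa)                             # dequantize for inspection
```

Keeping biases at the product scale `sw * sa` lets them be added directly to the int32 accumulator without rescaling, a common convention in integer-only inference.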
On the Reliability of Neural Networks Implemented on SRAM-based FPGAs for Low-cost Satellites
Recent developments in neural network inference frameworks on Field-Programmable Gate Arrays (FPGAs) enable the rapid deployment of neural network applications on low-power FPGA devices. FPGAs are a promising platform for implementing neural network capabilities on board satellites thanks to the high energy efficiency of quantised neural networks on FPGAs. Furthermore, the reconfigurability of FPGAs allows neural network accelerators to share the FPGA with other onboard computer systems for reduced hardware complexity. However, the reliability against radiation-induced upsets of existing neural network inference frameworks on commercial FPGA devices has not previously been studied.
The reliability of neural network applications on FPGA is complicated by the perceptrons’ inherent algorithm-based fault tolerance, quantisation techniques, the varying sensitivity of non-neural layers like pooling layers, the architecture of the accelerator, and the software stack. This thesis explores the effect of single event upsets (SEUs) in potential spaceborne FPGA-based neural network applications using fully connected and convolutional networks, on applications using binary, 4-bit and 8-bit quantisation levels, and on applications created from both FINN and Vitis AI frameworks. We study the failure modes in neural network applications caused by SEUs, including loss of accuracy, reduction of throughput/timeout, and catastrophic system failure on FPGA SoC.
We conducted fault injection experiments on fully connected and convolutional neural networks (CNNs) trained for classifying images from the MNIST handwritten digits dataset and the Airbus ship detection dataset. We found that SEUs have an insignificant impact on fully connected binary networks trained on the MNIST dataset. However, the more complex CNN applications created from the FINN and Vitis-AI frameworks showed much higher sensitivity to SEUs and had more failure modes, including loss of accuracy, hardware hang-up, and even catastrophic failure in the OS of SoC devices due to erroneous driver behaviour. We found that the SEU cross-section of model-specific neural network accelerators like FINN can be reduced significantly by quantising the network to a lower precision. We also studied the efficacy of fault-tolerant design techniques, including full TMR and partial TMR, on the binary neural network and the FINN accelerator.
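The fault injection methodology can be illustrated with a minimal software bit-flip model over a quantized weight memory. Modelling an SEU as a single XOR-flipped bit in a weight array is a deliberate simplification of configuration-memory injection on real hardware; the array contents and helper name below are hypothetical:

```python
import random
import numpy as np

def inject_seu(mem, bit_width=8, rng=None):
    """Flip one random bit in a flat integer memory array,
    emulating a single-event upset in on-chip weight storage."""
    rng = rng or random.Random(0)
    flat = mem.ravel()               # view onto the underlying buffer
    idx = rng.randrange(flat.size)   # which word the particle strike hits
    bit = rng.randrange(bit_width)   # which bit within that word
    flat[idx] ^= np.uint8(1 << bit)  # XOR flips exactly one bit
    return idx, bit

# Hypothetical 8-bit quantized weight memory (illustrative contents).
weights = np.arange(16, dtype=np.uint8).reshape(4, 4)
golden = weights.copy()              # reference copy for comparison
idx, bit = inject_seu(weights)

corrupted = int(np.count_nonzero(weights != golden))  # exactly one word differs
```

Running the corrupted network against a golden (uninjected) copy and comparing outputs is the basic loop behind the accuracy-loss and failure-mode measurements the thesis describes.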
Dissent, Posner-Style: Judge Richard A. Posner's First Decade of Dissenting Opinions, 1981-1991 - Toward an Aesthetics of Judicial Dissenting Style
The threefold purpose and structure of this Article is as follows. First, in Part II, before plunging into Judge Posner’s dissenting opinions, I search for a preliminary description of the praxis of modern American dissenting opinion style by drawing upon previous legal scholarship and examples of judicial dissents; this discussion will include an examination of some relevant scholarly writings on opinion style by Judge Richard A. Posner himself. In Part III, I analyze the published dissenting opinions written by Judge Posner during 1981-1991, evaluating the stylistics of these dissents, including his sophisticated use of rhetorical devices. Finally, in Part IV, I offer some conclusions about Judge Posner’s early dissenting opinion style, and comment on the implications of my study for understanding the aesthetics of dissenting opinions.
A study of the criteria teachers use when selecting learning material
This study investigates the criteria teachers use when selecting and evaluating learning support material, in particular, English second language textbooks. The study seeks to determine what informs the criteria that teachers use for selection. The study is conducted against the backdrop of Curriculum 2005 (C2005) and outlines the C2005 revision process and the subsequent introduction of the Revised National Curriculum Statement (RNCS). Through a series of focus group interviews, the researcher explores the criteria teachers use for evaluation. Many of the teachers in this study did not have clearly articulated criteria; rather, they drew on implicit criteria and mentioned favoured qualities or attributes that they looked for in a textbook. In addition, the teachers in the focus groups used criteria that had been ‘told’ rather than ‘owned’ and had not developed their own sets of criteria. This research concludes that teachers are caught between two conflicting sets of criteria: those of their pre-service training and those of the new curriculum, which is currently being mediated to them through brief orientations. Drawing on recent literature, the researcher argues that in order to shift deep-seated literacy practices, teacher training needs to be prolonged, in-depth and ongoing.