Low-Power Computer Vision: Improve the Efficiency of Artificial Intelligence
Energy efficiency is critical for running computer vision on battery-powered systems such as mobile phones or UAVs (unmanned aerial vehicles, or drones). This book collects the methods that have won the annual IEEE Low-Power Computer Vision Challenge since 2015. The winners share their solutions and provide insight into how to improve the efficiency of machine-learning systems.
Analog Computing using 1T1R Crossbar Arrays
The memristor is a novel passive electronic device and a promising candidate for next-generation non-volatile memory and analog computing. This study explores analog computing based on memristors. Because commercial electrical testing instruments for these emerging devices and crossbar arrays are not available, we designed and built testing circuits to implement analog and parallel computing operations. With the setup developed in this study, we successfully demonstrated image-processing functions using large memristor crossbar arrays. We further designed and experimentally demonstrated the first memristor-based field-programmable analog array (FPAA), which we configured as an audio equalizer and a frequency classifier as exemplary applications of such a memristive FPAA (memFPAA).
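To make the crossbar idea concrete, here is a minimal NumPy sketch of the operation a 1T1R crossbar performs natively: a one-step analog vector-matrix multiply, in which row voltages drive column currents through the crosspoint conductances. The array size, conductance range, and read voltages below are illustrative assumptions, not the hardware parameters of this study.

```python
import numpy as np

# Toy model of analog computing on a memristor crossbar. Each crosspoint
# stores a conductance G[i, j]; applying voltages V[i] to the rows produces
# column currents I[j] = sum_i V[i] * G[i, j] (Ohm's law plus Kirchhoff's
# current law), so an entire vector-matrix multiply happens in one step.
rng = np.random.default_rng(0)

rows, cols = 64, 64
G_min, G_max = 1e-6, 1e-4                 # assumed device conductance range (S)
G = rng.uniform(G_min, G_max, size=(rows, cols))
V = rng.uniform(0.0, 0.2, size=rows)      # assumed read voltages (V)

I = V @ G                                 # column currents: the analog result
print("first five column currents (A):", I[:5])
```

Signed weights in such arrays are typically realized with a differential pair of devices per weight (W = G+ − G−); the image-processing and memFPAA demonstrations build on this same read operation.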
Fluidic gates simulated with lattice Boltzmann method under different Reynolds numbers
Fluidic devices use fluid as a medium for information transfer and computation: Boolean values are represented by the presence or absence of fluid jets in the input and output channels. The velocity of a fluid is one of the parameters determining the Reynolds number of the flow, which characterizes whether the flow is laminar, transitional, or turbulent. Using the lattice Boltzmann method, we study the behaviour of fluidic gates over a range of Reynolds numbers. On designs of AND and OR gates, we show that the fluidic gates remain functional even at Reynolds numbers as low as 100. The gates can be cascaded into functional logic circuits.
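How the Reynolds number enters such a simulation can be seen in a minimal D2Q9 BGK lattice Boltzmann kernel. The sketch below uses a plain periodic grid rather than the paper's gate geometries, and the grid size, jet speed, and target Reynolds number are assumptions chosen only to show how Re sets the relaxation time.

```python
import numpy as np

# Minimal D2Q9 BGK lattice Boltzmann kernel on a periodic grid (no gate
# geometry). The Reynolds number Re = u0 * L / nu fixes the kinematic
# viscosity nu, which in turn fixes the BGK relaxation time tau.
nx, ny = 200, 50          # grid size (lattice units)
u0, Re = 0.1, 100.0       # jet speed and target Reynolds number (assumed)
nu = u0 * ny / Re         # kinematic viscosity from the chosen Re
tau = 3.0 * nu + 0.5      # BGK relaxation time

# D2Q9 velocity set and weights.
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, ux, uy):
    cu = 3.0 * (c[:, 0, None, None] * ux + c[:, 1, None, None] * uy)
    usq = 1.5 * (ux**2 + uy**2)
    return w[:, None, None] * rho * (1 + cu + 0.5 * cu**2 - usq)

# Start from a uniform jet along x.
rho = np.ones((nx, ny))
ux = np.full((nx, ny), u0)
uy = np.zeros((nx, ny))
f = equilibrium(rho, ux, uy)

for step in range(1000):
    rho = f.sum(axis=0)                          # macroscopic density
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f += (equilibrium(rho, ux, uy) - f) / tau    # BGK collision
    for i in range(9):                           # streaming (periodic)
        f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)

print("mean speed after 1000 steps:", np.hypot(ux, uy).mean())
```

Lowering Re at fixed u0 and L raises nu and hence tau, making the flow more viscous; the paper's observation is that the gates still switch correctly in this laminar regime.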
Large-Scale Optical Neural Networks based on Photoelectric Multiplication
Recent success in deep neural networks has generated strong interest in hardware accelerators to improve speed and energy consumption. This paper presents a new type of photonic accelerator based on coherent detection that is scalable to large networks and can be operated at high (GHz) speeds and very low (sub-aJ) energies per multiply-and-accumulate (MAC), using the massive spatial multiplexing enabled by standard free-space optical components. In contrast to previous approaches, both weights and inputs are optically encoded, so the network can be reprogrammed and trained on the fly. Simulations of the network using models for digit and image classification reveal a "standard quantum limit" for optical neural networks, set by photodetector shot noise. This bound, which can be as low as 50 zJ/MAC, suggests that performance below the thermodynamic (Landauer) limit for digital irreversible computation is theoretically possible in this device. The proposed accelerator can implement both fully-connected and convolutional networks. We also present a scheme for back-propagation and training that can be performed in the same hardware. This architecture will enable a new class of ultra-low-energy processors for deep learning.
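The shot-noise bound can be illustrated with a toy calculation: a photodetector that collects n photons reports a Poisson-distributed count, so its relative error scales as 1/sqrt(n), and the photon budget per MAC sets both the energy and the readout precision. The wavelength and photon budgets below are assumptions for illustration, not the paper's full noise analysis.

```python
import numpy as np

# Toy illustration of the "standard quantum limit": shot noise makes a
# detected photon count Poisson-distributed, so relative error ~ 1/sqrt(n).
# The 1550 nm wavelength and photon budgets are assumptions.
rng = np.random.default_rng(1)

h, c, lam = 6.626e-34, 2.998e8, 1.55e-6
E_photon = h * c / lam                     # ~1.28e-19 J per photon at 1550 nm

N = 1000                                   # MACs summed onto one detector
for photons_per_mac in [0.1, 1.0, 10.0, 100.0]:
    n_mean = photons_per_mac * N           # mean detected photon count
    counts = rng.poisson(n_mean, size=10_000)
    rel_noise = counts.std() / counts.mean()   # ~ 1 / sqrt(n_mean)
    print(f"{photons_per_mac:6.1f} photons/MAC: "
          f"~{photons_per_mac * E_photon:.2e} J/MAC, "
          f"relative shot noise ~ {rel_noise:.3f}")
```

At this wavelength a single photon carries about 1.3e-19 J, i.e. roughly 130 zJ, so sub-photon-per-MAC operation is the regime where bounds like the quoted 50 zJ/MAC arise.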
Integer Echo State Networks: Hyperdimensional Reservoir Computing
We propose an approximation of Echo State Networks (ESNs) that can be efficiently implemented on digital hardware based on the mathematics of hyperdimensional computing. The reservoir of the proposed Integer Echo State Network (intESN) is a vector containing only n-bit integers (where n < 8 is normally sufficient for satisfactory performance). The recurrent matrix multiplication is replaced with an efficient cyclic shift operation. The intESN architecture is verified on typical reservoir-computing tasks: memorizing a sequence of inputs, classifying time series, and learning dynamic processes. The architecture yields dramatic improvements in memory footprint and computational efficiency, with minimal performance loss.
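A minimal sketch of the intESN state update described above follows: the reservoir is an integer vector, the recurrent matrix multiply becomes a cyclic shift, and clipping keeps every element within a few bits. The dimensionality, clipping limit, and random bipolar codebook for a discrete input alphabet are illustrative assumptions; the paper's training and readout stages are omitted.

```python
import numpy as np

# Integer echo state network (intESN) update, sketched per the abstract:
# state <- clip(cyclic_shift(state) + bipolar_code(input), -kappa, kappa).
rng = np.random.default_rng(2)

D = 1000            # reservoir dimensionality (assumed)
kappa = 7           # clipping limit: elements fit in 4-bit signed integers
n_symbols = 4       # size of a discrete input alphabet (assumed)

# Random bipolar (+1/-1) codebook: one hyperdimensional vector per symbol.
codebook = rng.choice([-1, 1], size=(n_symbols, D)).astype(np.int8)

def step(state, symbol):
    """One intESN update: cyclic shift + add input code + clip."""
    recurrent = np.roll(state, 1)          # replaces the recurrent matmul
    return np.clip(recurrent + codebook[symbol], -kappa, kappa).astype(np.int8)

# Drive the reservoir with a short random symbol sequence.
state = np.zeros(D, dtype=np.int8)
sequence = rng.integers(0, n_symbols, size=20)
for s in sequence:
    state = step(state, s)

# Recent symbols dominate the state: the code of a symbol added `age` steps
# ago appears cyclically shifted by (age - 1) and still correlates with the
# state, which is what a linear readout would exploit for recall tasks.
for age in range(1, 4):
    sym = sequence[-age]
    sim = (state.astype(int) * np.roll(codebook[sym], age - 1)).sum() / D
    print(f"symbol {age} steps ago: similarity {sim:+.2f}")
```

Because the state never leaves the integer range [-kappa, kappa], the whole update needs only shifts, additions, and comparisons, which is the source of the memory and efficiency gains claimed in the abstract.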