Open the box of digital neuromorphic processor: Towards effective algorithm-hardware co-design
Sparse, event-driven spiking neural network (SNN) algorithms are ideal candidates for energy-efficient edge computing. Yet, as SNN algorithms grow more complex, it is difficult to benchmark and optimize their computational cost properly without hardware in the loop. Although digital neuromorphic processors have been widely adopted to benchmark SNN algorithms, their black-box nature is problematic for algorithm-hardware co-optimization. In this work, we open the black box of the digital neuromorphic processor for algorithm designers by presenting the neuron processing instruction set and detailed energy consumption of the SENeCA neuromorphic architecture. For convenient benchmarking and optimization, we provide the energy cost of the essential neuromorphic components in SENeCA, including neuron models and learning rules. Moreover, we exploit SENeCA's hierarchical memory and demonstrate an advantage over existing neuromorphic processors. We show the energy efficiency of SNN algorithms for video processing and online learning, and demonstrate the potential of our work for optimizing algorithm designs. Overall, we present a practical approach that enables algorithm designers to accurately benchmark SNN algorithms and paves the way towards effective algorithm-hardware co-design.
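As a rough illustration of the event-driven cost model such benchmarking relies on, the sketch below pairs a leaky integrate-and-fire update with a synaptic-operation count. The per-operation energy values (`e_synop`, `e_update`) are placeholders, not SENeCA's published numbers, and the function names are invented for this sketch.

```python
import numpy as np

def lif_step(v, spikes_in, weights, v_th=1.0, leak=0.9):
    """One leaky integrate-and-fire step; returns new potentials and output spikes."""
    i_syn = weights @ spikes_in            # accumulate weighted input spikes
    v = leak * v + i_syn                   # leaky integration
    spikes_out = (v >= v_th).astype(float)
    v = np.where(spikes_out > 0, 0.0, v)   # reset membrane after a spike
    return v, spikes_out

def energy_estimate(spikes_in, n_neurons, e_synop=1.0, e_update=0.5):
    """Event-driven cost: pay per active input synapse plus per neuron update.
    Energy unit costs here are illustrative placeholders, not measured values."""
    n_synops = int(spikes_in.sum()) * n_neurons
    return n_synops * e_synop + n_neurons * e_update
```

Because only active spikes contribute synaptic operations, sparser activity directly lowers the estimated energy, which is the property that makes hardware-in-the-loop cost numbers so valuable for SNN algorithm design.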
To develop an efficient variable speed compressor motor system
This research presents a new method for improving the energy efficiency of a Variable Speed Drive (VSD) for induction motors. The principles of VSDs are reviewed, with emphasis on the efficiency and power losses associated with operating a variable speed compressor motor drive, particularly at low speed. The efficiency of an induction motor operated at rated speed and load torque is high. However, at low load, operating the motor at rated flux causes the iron losses to increase excessively, so its efficiency drops dramatically. To improve this efficiency, it is essential to obtain the flux level that minimizes the total motor losses; this technique is known as efficiency or energy optimization control. In practice, a typical compressor load does not require a high dynamic response, so the efficiency optimization control proposed in this research is based on a scalar control model. This research develops a new neural network controller for efficiency optimization control. The controller is designed to generate both voltage and frequency reference signals simultaneously. To make the controller robust to variations in motor parameters, a real-time (online) learning algorithm based on second-order Levenberg-Marquardt optimization is employed. Simulations of the proposed controller for the variable speed compressor clearly show that the efficiency at low speed is significantly increased while the motor speed is maintained, and that the controller is robust to motor parameter variation. The simulation results are also verified by experiment.
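The core of a Levenberg-Marquardt update, as used for the online learning mentioned above, can be sketched in a few lines. This is a generic damped Gauss-Newton step, not the paper's controller: the weight vector, Jacobian, and damping factor `mu` are assumptions for illustration.

```python
import numpy as np

def lm_update(w, jacobian, error, mu=0.01):
    """One Levenberg-Marquardt step: dw = (J^T J + mu*I)^-1 J^T e.
    Small mu approaches Gauss-Newton; large mu approaches gradient descent."""
    J = jacobian
    H = J.T @ J + mu * np.eye(w.size)     # damped approximate Hessian
    dw = np.linalg.solve(H, J.T @ error)  # solve rather than invert explicitly
    return w + dw
```

The second-order curvature information in `J.T @ J` is what lets such an update converge in far fewer iterations than plain gradient descent, which matters when the learning must run in real time on the drive.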
A modular T-mode design approach for analog neural network hardware implementations
A modular transconductance-mode (T-mode) design approach is presented for analog hardware implementations of neural networks. This design approach is used to build a modular bidirectional associative memory network. The authors show that the size of the whole system can be increased by interconnecting more modular chips. It is also shown that by changing the interconnection strategy, different neural network systems can be implemented, such as a Hopfield network, a winner-take-all network, a simplified ART1 network, or a constrained optimization network. Experimentally measured results from CMOS 2-μm double-metal, double-polysilicon prototypes (MOSIS) are presented.
Analog Neural Programmable Optimizers in CMOS VLSI Technologies
A 3-μm CMOS IC is presented demonstrating the concept of an analog neural system for constrained optimization. A serial, time-multiplexed, general-purpose architecture is introduced for the real-time solution of this kind of problem in MOS VLSI. This architecture is fully programmable and reconfigurable, exploiting switched-capacitor (SC) techniques for the analog part and making extensive use of digital techniques for programmability.
Versatile stochastic dot product circuits based on nonvolatile memories for high performance neurocomputing and neurooptimization.
The key operation in stochastic neural networks, which have become the state-of-the-art approach for solving problems in machine learning, information theory, and statistics, is a stochastic dot product. While there have been many demonstrations of dot-product circuits and, separately, of stochastic neurons, an efficient hardware implementation combining both functionalities is still missing. Here we report compact, fast, energy-efficient, and scalable stochastic dot-product circuits based on either passively integrated metal-oxide memristors or embedded floating-gate memories. The circuit's high performance is due to its mixed-signal implementation, while efficient stochastic operation is achieved by utilizing the circuit's noise, intrinsic and/or extrinsic to the memory cell array. The dynamic scaling of weights, enabled by analog memory devices, allows for efficient realization of different annealing approaches to improve functionality. The proposed approach is experimentally verified for two representative applications: a neural network solving a four-node graph-partitioning problem, and a Boltzmann machine with 10 input and 8 hidden neurons.
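The idea of turning circuit noise into a stochastic neuron can be sketched in software: a deterministic dot product (the crossbar's weighted sum) plus additive Gaussian noise before a threshold yields a firing probability that follows a sigmoid-like curve in the summed input. This is a behavioral model under assumed Gaussian noise, not the paper's circuit.

```python
import numpy as np

def stochastic_neuron(x, weights, noise_sigma=0.5, rng=None):
    """Stochastic dot product: crossbar-style weighted sum, then a noisy
    threshold. Firing probability rises sigmoidally with the sum z,
    with slope set by noise_sigma (the assumed noise amplitude)."""
    rng = rng or np.random.default_rng(0)
    z = weights @ x                                   # deterministic dot product
    noisy = z + rng.normal(0.0, noise_sigma, size=z.shape)
    return (noisy > 0).astype(int)                    # binary stochastic output
```

Scaling `noise_sigma` (or, equivalently, scaling the weights) changes the effective temperature of the neuron, which is how analog weight scaling can implement the annealing schedules mentioned in the abstract.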