7 research outputs found
Neural Network Reduction for Efficient Execution on Edge Devices
As the size of neural networks increases, the resources needed to support their execution also increase. This presents a barrier to creating neural networks that can be trained and executed within resource-limited embedded systems. To reduce the resources needed to execute neural networks, weight reduction is often the first target. A network that has been significantly pruned can be executed on-chip, that is, in low-SWaP hardware. However, pruning alone does not enable training or pruning within embedded hardware, because the full-sized network must first fit within the restricted resources. We introduce two methods of network reduction that allow neural networks to be grown and trained within edge devices: Artificial Neurogenesis and Synaptic Input Consolidation.
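The abstract does not describe Artificial Neurogenesis or Synaptic Input Consolidation in detail, so no attempt is made to reproduce them here. As background only, the sketch below shows the conventional post-training magnitude pruning that the abstract contrasts against; the function name, sparsity target, and layer size are illustrative assumptions, not the paper's method.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.9):
    """Zero out the smallest-magnitude weights until the target sparsity is reached.

    This is the conventional post-training pruning the abstract contrasts with:
    it assumes a full-sized, already-trained weight matrix is available, which
    is exactly what resource-limited edge devices may not be able to hold.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

# Example: prune a small dense layer to roughly 90% sparsity.
rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))
w_pruned = magnitude_prune(w, sparsity=0.9)
print(f"non-zero weights: {np.count_nonzero(w_pruned)} / {w.size}")
```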
Balancing the learning ability and memory demand of a perceptron-based dynamically trainable neural network
Artificial neural networks (ANNs) have become a popular means of solving complex problems in prediction-based applications such as image and natural language processing. Two challenges prominent in the neural network domain are the practicality of hardware implementation and dynamically training the network. In this study, we address these challenges with a development methodology that balances the hardware footprint and the quality of the ANN. We use the well-known perceptron-based branch prediction problem as a case study for demonstrating this methodology. This problem is well suited to analyzing dynamic hardware implementations of ANNs because it is implemented in hardware and trained dynamically. Using our hierarchical configuration search space exploration, we show that we can decrease the memory footprint of a standard perceptron-based branch predictor by a factor of 2.3 with only a 0.6% decrease in prediction accuracy.
Funding: Raytheon Missile Systems [2017-UNI-0008]. 12 month embargo; published online: 16 April 2018.
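For context, the baseline the abstract refers to is the standard perceptron branch predictor of Jiménez and Lin. The sketch below is a minimal software model of that baseline, not the paper's reduced design or its hierarchical configuration search; the table size, history length, and weight width chosen here are illustrative assumptions and are exactly the kind of parameters such a search would explore.

```python
import numpy as np

class PerceptronBranchPredictor:
    """Minimal sketch of the standard perceptron branch predictor (Jimenez & Lin).

    A table of small signed-integer weight vectors is indexed by the branch
    address; the dot product with the global history register decides
    taken/not-taken. Table size and weight width set the memory footprint.
    """

    def __init__(self, num_perceptrons=256, history_length=16, weight_bits=8):
        self.weights = np.zeros((num_perceptrons, history_length + 1), dtype=np.int32)
        self.history = np.ones(history_length, dtype=np.int32)  # +1 taken, -1 not taken
        self.w_max = 2 ** (weight_bits - 1) - 1                  # saturation bound
        self.theta = int(1.93 * history_length + 14)              # training threshold

    def predict(self, pc):
        idx = pc % len(self.weights)
        w = self.weights[idx]
        y = int(w[0]) + int(np.dot(w[1:], self.history))  # w[0] is the bias weight
        return y, idx

    def update(self, pc, taken):
        y, idx = self.predict(pc)
        t = 1 if taken else -1
        # Train only on a misprediction or when the output is not yet confident.
        if (y >= 0) != taken or abs(y) <= self.theta:
            w = self.weights[idx]
            w[0] = np.clip(w[0] + t, -self.w_max - 1, self.w_max)
            w[1:] = np.clip(w[1:] + t * self.history, -self.w_max - 1, self.w_max)
        # Shift the branch outcome into the global history register.
        self.history = np.roll(self.history, 1)
        self.history[0] = t
        return y >= 0  # the prediction made before this update

# Example: train dynamically on a simple repeating branch pattern.
bp = PerceptronBranchPredictor()
pattern = [True, True, False, True]
correct = sum(bp.update(0x400, pattern[i % 4]) == pattern[i % 4] for i in range(1000))
print(f"accuracy on repeating pattern: {correct / 1000:.2%}")
```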
Observing Shocks
Macroeconomists have observed business cycle fluctuations over time by constructing and manipulating models in which shocks have played an increasingly greater role. Shock is a term of art that pervades modern economics, appearing in nearly one-quarter of all journal articles in economics and in nearly half of those in macroeconomics. Surprisingly, its rise as an essential element in the vocabulary of economists can be dated only to the early 1970s. We trace the history of shocks in macroeconomics from Ragnar Frisch and Eugen Slutsky in the 1920s and 1930s through real business cycle and DSGE models, and to the use of shocks as generators of impulse-response functions, which are in turn used as data in matching estimators. The history is organized around the observability of shocks. As well as documenting a critical conceptual development in economics, the history of shocks shows that James Bogen and James Woodward's distinction between data and phenomena must be substantially relativized if it is to be at all plausible.