Investigation of LSTM Based Prediction for Dynamic Energy Management in Chip Multiprocessors
In this paper, we investigate the effectiveness of using long short-term memory (LSTM) networks instead of Kalman filtering for prediction in dynamic energy management (DEM) algorithms for chip multiprocessors (CMPs). Either prediction method is employed to estimate the workload of each processor core in the next control period. These estimates are then used to select a voltage-frequency (VF) pair for each core of the CMP during the next control period as part of a dynamic voltage and frequency scaling (DVFS) technique. The objective of the DVFS technique is to reduce energy consumption under performance constraints set by the user. We conduct our investigation using a custom Sniper system simulation framework. Simulation results for 16- and 64-core network-on-chip based CMP architectures across several benchmarks demonstrate that LSTM prediction is slightly better than Kalman filtering.
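The control loop described in this abstract can be sketched as follows. The VF table, workload model, mean-based stand-in predictor, and selection rule are illustrative assumptions, not the paper's actual parameters or algorithm:

```python
# Illustrative sketch of a DVFS control loop driven by workload prediction.
# The VF table and selection rule are assumptions for illustration only.

# Candidate voltage-frequency pairs: (voltage in V, frequency in GHz),
# sorted by increasing frequency.
VF_TABLE = [(0.8, 1.0), (0.9, 1.5), (1.0, 2.0), (1.1, 2.5)]

def select_vf(predicted_load, perf_constraint=0.9):
    """Pick the lowest-frequency (lowest-energy) VF pair able to serve the
    predicted workload within the performance constraint.
    predicted_load is a normalized utilization in [0, 1]."""
    max_ghz = max(f for _, f in VF_TABLE)
    required_ghz = predicted_load * max_ghz / perf_constraint
    for v, f in VF_TABLE:
        if f >= required_ghz:
            return (v, f)
    return VF_TABLE[-1]  # saturate at the highest setting

def control_step(history, predictor):
    """One control period: predict next workload per core, pick VF pairs."""
    return [select_vf(predictor(core_hist)) for core_hist in history]

# A trivial stand-in predictor (the paper uses an LSTM or a Kalman filter):
# predict the next workload as the mean of the recent history.
mean_predictor = lambda hist: sum(hist) / len(hist)

loads = [[0.2, 0.3, 0.25], [0.8, 0.9, 0.85]]  # two cores' recent utilization
print(control_step(loads, mean_predictor))    # [(0.8, 1.0), (1.1, 2.5)]
```

A lightly loaded core is assigned the lowest VF pair, while a busy core is pushed to the top of the table; the LSTM or Kalman predictor would replace `mean_predictor` in this loop.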
Automatic Environmental Sound Recognition: Performance versus Computational Cost
In the context of the Internet of Things (IoT), sound sensing applications are required to run on embedded platforms where notions of product pricing and form factor impose hard constraints on the available computing power. Whereas Automatic Environmental Sound Recognition (AESR) algorithms are most often developed with limited consideration for computational cost, this article investigates which AESR algorithm can make the most of a limited amount of computing power by comparing sound classification performance as a function of computational cost. Results suggest that Deep Neural Networks yield the best ratio of sound classification accuracy across a range of computational costs, while Gaussian Mixture Models offer a reasonable accuracy at a consistently small cost, and Support Vector Machines stand between both in terms of the compromise between accuracy and computational cost.
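The accuracy-versus-cost comparison described above amounts to picking the most accurate classifier that fits a compute budget. The sketch below uses hypothetical accuracy and cost figures (not taken from the article) purely to illustrate that selection:

```python
# Hypothetical accuracy/cost figures (NOT from the article), shown only
# to illustrate an accuracy-under-budget model selection.
models = {
    "GMM": {"accuracy": 0.78, "ops_per_frame": 2e4},
    "SVM": {"accuracy": 0.83, "ops_per_frame": 2e5},
    "DNN": {"accuracy": 0.88, "ops_per_frame": 1e6},
}

def best_under_budget(models, budget_ops):
    """Return the most accurate model whose per-frame cost fits the budget,
    or None if nothing fits."""
    feasible = {k: v for k, v in models.items()
                if v["ops_per_frame"] <= budget_ops}
    if not feasible:
        return None
    return max(feasible, key=lambda k: feasible[k]["accuracy"])

print(best_under_budget(models, 5e4))  # tight budget: only the cheap model fits
print(best_under_budget(models, 1e7))  # generous budget: pick the most accurate
```

With a tight budget only the GMM-like cheap model is feasible; with a generous one the DNN-like model wins, mirroring the article's qualitative conclusion.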
Deep Learning for Multiscale Damage Analysis via Physics-Informed Recurrent Neural Network
Direct numerical simulation of hierarchical materials via homogenization-based concurrent multiscale models poses critical challenges for 3D large-scale engineering applications, as computing highly nonlinear and path-dependent material constitutive responses at the lower scale incurs prohibitively high computational costs. In this work, we propose a physics-informed, data-driven deep learning model as an efficient surrogate to emulate the effective responses of heterogeneous microstructures under irreversible elasto-plastic hardening and softening deformation. Our contribution contains several major innovations. First, we propose a novel training scheme to generate arbitrary loading sequences in the sampling space confined by deformation constraints, where the cost of homogenizing microstructural responses per sequence is dramatically reduced via mechanistic reduced-order models. Second, we develop a new sequential learner that incorporates thermodynamically consistent physics constraints by customizing the training loss function and data flow architecture. We additionally demonstrate the integration of the trained surrogate within the framework of a classic multiscale finite element solver. Our numerical experiments indicate that our model achieves a significant accuracy improvement over a purely data-driven emulator and a dramatic efficiency boost over reduced-order models. We believe our data-driven model provides a computationally efficient and mechanics-consistent alternative to classic constitutive laws, beneficial for potential high-throughput simulations that need material homogenization of irreversible behaviors.
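One way to read the "thermodynamically consistent physics constraints" above is as an extra loss term that penalizes predictions violating non-negative dissipation. The sketch below is a hedged illustration of that idea; the variable names, penalty form, and weighting are assumptions, not the paper's actual loss:

```python
import numpy as np

def physics_informed_loss(pred_stress, true_stress, dplastic_strain, lam=1.0):
    """Data mismatch plus a penalty on negative incremental dissipation.

    Illustrative sketch only: the incremental dissipation is approximated
    as stress times the plastic strain increment, and any negative value
    (a second-law violation) is penalized with weight lam.
    """
    mse = np.mean((pred_stress - true_stress) ** 2)
    dissipation = pred_stress * dplastic_strain       # per-step increment
    violation = np.clip(-dissipation, 0.0, None)      # keep only violations
    return mse + lam * np.mean(violation)

# A physically admissible prediction incurs no penalty...
print(physics_informed_loss(np.array([1.0]), np.array([1.0]), np.array([0.1])))
# ...while a prediction implying negative dissipation is penalized.
print(physics_informed_loss(np.array([1.0]), np.array([1.0]), np.array([-0.1])))
```

In a training loop, such a term steers the sequential learner toward stress paths that a purely data-driven emulator has no incentive to respect.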
VGM-RNN: Recurrent Neural Networks for Video Game Music Generation
The recent explosion of interest in deep neural networks has affected, and in some cases reinvigorated, work in fields as diverse as natural language processing, image recognition, speech recognition and many more. For sequence learning tasks, recurrent neural networks, and in particular LSTM-based networks, have shown promising results. Recently there has been interest, for example in the research by Google's Magenta team, in applying so-called "language modeling" recurrent neural networks to musical tasks, including the automatic generation of original music. In this work we demonstrate our own LSTM-based music language modeling recurrent network. We show that it is able to learn musical features from a MIDI dataset and generate output that is musically interesting while demonstrating features of melody, harmony and rhythm. We source our dataset from VGMusic.com, a collection of user-submitted MIDI transcriptions of video game songs, and attempt to generate output which emulates this kind of music.
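The "language modeling" framing above treats a melody as a token sequence and trains the network to predict each next token from its preceding context. The sketch below shows that data preparation step; the token encoding and window size are illustrative assumptions, and the actual VGM-RNN representation of MIDI events may differ:

```python
# Sketch of next-token training data for a music "language model".
# Tokens here are raw MIDI note numbers; a real system would likely
# also encode duration, velocity, and rests.

def make_training_pairs(tokens, window=3):
    """Slide a fixed-size context window over the token sequence to build
    (context, next-token) examples for next-step prediction."""
    pairs = []
    for i in range(len(tokens) - window):
        pairs.append((tokens[i:i + window], tokens[i + window]))
    return pairs

# A toy melody as MIDI note numbers (C4, E4, G4, C5, then back down).
melody = [60, 64, 67, 72, 67, 64, 60]
for context, target in make_training_pairs(melody):
    print(context, "->", target)
```

An LSTM trained on such pairs can then generate music autoregressively: sample a next token, append it to the context, and repeat.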