Fast Neural Network Predictions from Constrained Aerodynamics Datasets
Incorporating computational fluid dynamics in the design process of jets,
spacecraft, or gas turbine engines is often challenged by the required
computational resources and simulation time, which depend on the chosen
physics-based computational models and grid resolutions. An ongoing problem in
the field is how to simulate these systems faster but with sufficient accuracy.
While many approaches involve simplified models of the underlying physics,
others are model-free and make predictions based only on existing simulation
data. We present a novel model-free approach in which we reformulate the
simulation problem to effectively increase the size of constrained pre-computed
datasets and introduce a novel neural network architecture (called a cluster
network) with an inductive bias well-suited to highly nonlinear computational
fluid dynamics solutions. Compared to the state-of-the-art in model-based
approximations, we show that our approach is nearly as accurate, an order of
magnitude faster, and easier to apply. Furthermore, we show that our method
outperforms other model-free approaches.
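The cluster-network architecture itself is not specified in this abstract, but the core idea of a model-free approach, predicting purely from existing simulation data with no physics model in the loop, can be illustrated with a minimal sketch. The dataset, parameter names, and the inverse-distance-weighted nearest-neighbor predictor below are stand-ins for illustration, not the paper's method.

```python
import numpy as np

# Hypothetical pre-computed dataset: design parameters -> a scalar flow
# quantity (e.g. a drag coefficient). Stand-in values, not from the paper.
params = np.array([[0.1, 0.2], [0.4, 0.1], [0.8, 0.9], [0.3, 0.7]])
outputs = np.array([1.0, 1.5, 3.2, 2.1])

def predict(query, k=2):
    """Model-free prediction: inverse-distance-weighted average of the
    k nearest pre-computed simulations (no physics solver involved)."""
    d = np.linalg.norm(params - query, axis=1)   # distance to each sample
    idx = np.argsort(d)[:k]                      # k nearest neighbors
    w = 1.0 / (d[idx] + 1e-9)                    # closer samples weigh more
    return float(np.sum(w * outputs[idx]) / np.sum(w))

print(predict(np.array([0.35, 0.4])))
```

The trade-off the abstract highlights appears even here: prediction is orders of magnitude cheaper than running a solver, but accuracy is bounded by how well the pre-computed dataset covers the query region, which is why the paper reformulates the problem to effectively enlarge constrained datasets.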
On the Universal Approximation Property and Equivalence of Stochastic Computing-based Neural Networks and Binary Neural Networks
Large-scale deep neural networks are both memory intensive and
computation-intensive, thereby posing stringent requirements on the computing
platforms. Hardware accelerations of deep neural networks have been extensively
investigated in both industry and academia. Specific forms of binary neural
networks (BNNs) and stochastic computing based neural networks (SCNNs) are
particularly appealing to hardware implementations since they can be
implemented almost entirely with binary operations. Despite the obvious
advantages in hardware implementation, these approximate computing techniques
are questioned by researchers in terms of accuracy and universal applicability.
It is also important to understand the relative pros and cons of SCNNs and BNNs
in theory and in actual hardware implementations. To address these
concerns, in this paper we prove that the "ideal" SCNNs and BNNs satisfy the
universal approximation property with probability 1 (due to the stochastic
behavior). The proof is conducted by first proving the property for SCNNs from
the strong law of large numbers, and then using SCNNs as a "bridge" to prove
for BNNs. Based on the universal approximation property, we further prove that
SCNNs and BNNs exhibit the same energy complexity; in other words, they have
the same asymptotic energy consumption as network size grows. We
also provide a detailed analysis of the pros and cons of SCNNs and BNNs for
hardware implementations and conclude that SCNNs are more suitable for
hardware.
Comment: 9 pages, 3 figures
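The connection between stochastic computing and the strong law of large numbers can be made concrete with a standard stochastic-computing primitive (not code from this paper): a value in [0, 1] is encoded as the fraction of 1s in a random bitstream, and multiplication of two independent streams reduces to a bitwise AND, since P(a AND b) = P(a) * P(b). Accuracy improves as the stream length grows, which is exactly the probability-1 convergence the proof relies on.

```python
import random

def to_bitstream(p, n, rng):
    """Encode a probability p in [0, 1] as a length-n random bitstream."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

def from_bitstream(bits):
    """Decode: the represented value is the fraction of 1s."""
    return sum(bits) / len(bits)

def sc_multiply(a_bits, b_bits):
    """Stochastic multiplication: a single AND gate per bit pair."""
    return [a & b for a, b in zip(a_bits, b_bits)]

rng = random.Random(0)
n = 100_000
a = to_bitstream(0.6, n, rng)
b = to_bitstream(0.5, n, rng)
prod = from_bitstream(sc_multiply(a, b))
print(prod)  # converges to 0.6 * 0.5 = 0.3 as n grows
```

This is why such networks map so cheaply to hardware: a multiplier becomes one AND gate, at the cost of long bitstreams, which is the accuracy-versus-latency trade-off the abstract says researchers question.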