Adaptive Discontinuous Galerkin Finite Element Methods
The Discontinuous Galerkin Method is a variant of the Finite Element Method for solving partial differential equations, first introduced by Reed and Hill in the 1970s [27]. The Discontinuous Galerkin Method (DGFEM) differs from the standard Galerkin FEM in that continuity constraints are not imposed on inter-element boundaries, so the resulting solution is composed of entirely piecewise discontinuous functions. The absence of continuity constraints on the inter-element boundaries gives the DG method a great deal of flexibility, at the cost of an increased number of degrees of freedom. This flexibility is the source of many, but not all, of the advantages of DGFEM over the Continuous Galerkin (CGFEM) method, which uses spaces of continuous piecewise polynomial functions, and over other "less standard" approaches such as nonconforming methods. Since DGFEM leads to a larger system to solve, theoretical and practical approaches to speeding it up are the main focus of this dissertation. This research aims at designing and building an adaptive discontinuous Galerkin finite element method that solves partial differential equations quickly at the desired accuracy on modern architectures.
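The cost in degrees of freedom mentioned above can be made concrete with a small counting sketch (an illustration, not part of the abstract): on a 1D mesh, CG shares one node per inter-element boundary, while DG gives every element its own full set of polynomial coefficients.

```python
# Illustrative DOF counts for continuous (CG) vs. discontinuous (DG)
# piecewise polynomials of a given degree on a 1D mesh of n elements.
def cg_dofs(n_elements: int, degree: int) -> int:
    # CG shares the node at each inter-element boundary:
    # degree unknowns per element plus one shared endpoint.
    return degree * n_elements + 1

def dg_dofs(n_elements: int, degree: int) -> int:
    # DG imposes no inter-element continuity, so each element carries
    # its own (degree + 1) coefficients: more unknowns, but the mass
    # matrix becomes block-diagonal per element.
    return (degree + 1) * n_elements

for p in (1, 2, 3):
    print(f"degree {p}: CG {cg_dofs(100, p)} DOFs vs DG {dg_dofs(100, p)} DOFs")
```

For linear elements on 100 cells, DG nearly doubles the unknowns relative to CG, which is why the dissertation's focus on speeding up the resulting larger systems matters.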
Criticality-Guided Efficient Pruning in Spiking Neural Networks Inspired by Critical Brain Hypothesis
Spiking Neural Networks (SNNs) have gained considerable attention due to their
energy-efficient and multiplication-free characteristics. The continuous growth
in scale of deep SNNs poses challenges for model deployment. Network pruning
reduces hardware resource requirements of model deployment by compressing the
network scale. However, existing SNN pruning methods incur high pruning costs
and performance loss because the pruning iterations amplify the training
difficulty of SNNs. In this paper, inspired by the critical brain hypothesis in
neuroscience, we propose a regeneration mechanism based on the neuron
criticality for SNN pruning to enhance feature extraction and accelerate the
pruning process. First, we propose a low-cost metric for criticality in
SNNs. We then re-rank the structures removed by pruning and regenerate those
with higher criticality to obtain the critical network. Our method achieves
higher performance than the current state-of-the-art (SOTA) method with up to
95.26% reduction of pruning cost. Moreover, we investigate the underlying
mechanism of our method and find that it efficiently selects potential
structures and learns consistent feature representations.
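The prune-then-regenerate idea described in the abstract can be sketched as follows. This is a hypothetical illustration, not the paper's actual metric or pipeline: random values stand in for structure importances, and a random score stands in for the proposed criticality measure.

```python
import numpy as np

# Sketch of criticality-guided regeneration: prune the lowest-magnitude
# structures, then regenerate the pruned ones whose (stand-in)
# criticality score is highest. All names and scores are illustrative.
rng = np.random.default_rng(0)
n_structures = 16
importance = rng.normal(size=n_structures)   # stand-in importance per structure
criticality = rng.random(n_structures)       # stand-in criticality metric

prune_n, regen_n = 8, 3
pruned = np.argsort(np.abs(importance))[:prune_n]        # prune the smallest
regenerated = pruned[np.argsort(criticality[pruned])[-regen_n:]]  # re-rank pruned

mask = np.ones(n_structures, dtype=bool)
mask[pruned] = False          # remove pruned structures
mask[regenerated] = True      # regenerate the most critical ones

print(f"kept {mask.sum()} of {n_structures} structures")
```

The regeneration step only ever revives structures that were pruned, so the final network is never larger than the original; how the real criticality metric is computed cheaply is the paper's contribution and is not reproduced here.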