
    Spiking PointNet: Spiking Neural Networks for Point Clouds

    Recently, Spiking Neural Networks (SNNs), prized for their extreme energy efficiency, have drawn much research attention in 2D visual recognition and shown steadily growing application potential. However, it remains underexplored whether SNNs can be generalized to 3D recognition. To this end, we present Spiking PointNet, the first spiking neural model for efficient deep learning on point clouds. We identify two major obstacles limiting the application of SNNs to point clouds: the intrinsic optimization difficulty of SNNs, which impedes training a large spiking model with many time steps, and the expensive memory and computation cost of PointNet, which makes training a large spiking point model impractical. To solve both problems simultaneously, we present a trained-less but learning-more paradigm for Spiking PointNet, with theoretical justification and in-depth experimental analysis. Specifically, our Spiking PointNet is trained with only a single time step yet obtains better performance under multi-time-step inference than a model trained directly with multiple time steps. We conduct various experiments on ModelNet10 and ModelNet40 to demonstrate the effectiveness of Spiking PointNet. Notably, Spiking PointNet can even outperform its ANN counterpart, which is rare in the SNN field and suggests a promising research direction for future work. Moreover, Spiking PointNet shows impressive speedup and storage savings in the training phase. Comment: Accepted by NeurIPS.
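    A minimal sketch of the single-time-step training / multi-time-step inference idea described above, assuming a toy per-point MLP with LIF neurons and a rectangular surrogate gradient. The layer sizes, surrogate shape, and the ToySpikingPointNet module are illustrative assumptions, not the authors' implementation.

```python
# Sketch: train a spiking point model with T=1, then run inference with T>1.
import torch
import torch.nn as nn

class SpikeFn(torch.autograd.Function):
    """Heaviside spike with a rectangular surrogate gradient (an assumption)."""
    @staticmethod
    def forward(ctx, v, threshold=1.0):
        ctx.save_for_backward(v)
        ctx.threshold = threshold
        return (v >= threshold).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        # Pass gradients only near the threshold (window width is arbitrary).
        surrogate = (torch.abs(v - ctx.threshold) < 0.5).float()
        return grad_out * surrogate, None

class LIFNeuron(nn.Module):
    """Single-step LIF: the membrane potential is passed in explicitly."""
    def __init__(self, tau=2.0, threshold=1.0):
        super().__init__()
        self.tau, self.threshold = tau, threshold

    def forward(self, x, v):
        v = v + (x - v) / self.tau          # leaky integration
        spike = SpikeFn.apply(v, self.threshold)
        v = v * (1.0 - spike)               # reset after firing
        return spike, v

class ToySpikingPointNet(nn.Module):
    """A toy per-point MLP + max pooling, standing in for PointNet (assumption)."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.fc1 = nn.Linear(3, 64)
        self.lif1 = LIFNeuron()
        self.fc2 = nn.Linear(64, num_classes)

    def forward(self, points, T=1):
        # points: (batch, num_points, 3). Run T time steps and average logits.
        B, N, _ = points.shape
        v1 = torch.zeros(B, N, 64, device=points.device)
        logits_sum = 0.0
        for _ in range(T):
            s1, v1 = self.lif1(self.fc1(points), v1)
            feat = s1.max(dim=1).values      # symmetric pooling over points
            logits_sum = logits_sum + self.fc2(feat)
        return logits_sum / T

model = ToySpikingPointNet()
x = torch.randn(4, 128, 3)
loss = nn.functional.cross_entropy(model(x, T=1), torch.randint(0, 10, (4,)))
loss.backward()                              # train with a single time step
with torch.no_grad():
    preds = model(x, T=4).argmax(dim=1)      # infer with multiple time steps
```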

    RMP-Loss: Regularizing Membrane Potential Distribution for Spiking Neural Networks

    Spiking Neural Networks (SNNs), as one class of biologically inspired models, have received much attention recently. They can significantly reduce energy consumption because they quantize real-valued membrane potentials to 0/1 spikes to transmit information, so the multiplications of activations and weights can be replaced by additions when implemented on hardware. However, this quantization mechanism inevitably introduces quantization error, causing catastrophic information loss. To address the quantization-error problem, we propose a regularizing membrane potential loss (RMP-Loss) that adjusts the membrane potential distribution, which is directly related to the quantization error, to a range close to the spikes. Our method is extremely simple to implement and straightforward for training an SNN. Furthermore, it consistently outperforms previous state-of-the-art methods across different network architectures and datasets. Comment: Accepted by ICCV 2023.
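    A minimal sketch of a membrane-potential regularizer in the spirit of RMP-Loss, assuming the penalty is simply the mean distance of each recorded membrane potential to the nearer of the two spike levels. The exact form and weighting used in the paper may differ; this is an illustrative assumption.

```python
import torch

def rmp_style_loss(membrane_potentials, threshold=1.0):
    """membrane_potentials: list of tensors collected from spiking layers."""
    penalty = 0.0
    for v in membrane_potentials:
        # Distance to the nearest of the two spike levels (0 or threshold).
        dist = torch.minimum(v.abs(), (v - threshold).abs())
        penalty = penalty + dist.mean()
    return penalty / max(len(membrane_potentials), 1)

# Typical usage (lambda_rmp is a hypothetical weighting coefficient):
#   total_loss = task_loss + lambda_rmp * rmp_style_loss(potentials)
```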

    Membrane Potential Batch Normalization for Spiking Neural Networks

    As one of the energy-efficient alternatives to conventional neural networks (CNNs), spiking neural networks (SNNs) have gained increasing interest recently. To train deep models, several effective batch normalization (BN) techniques have been proposed for SNNs. All of these BNs are placed after the convolution layer, as is usually done in CNNs. However, the spiking neuron is much more complex, with spatio-temporal dynamics: the data flow regulated by the BN layer is disturbed again by the membrane potential updating operation before the firing function, i.e., the nonlinear activation. Therefore, we advocate adding another BN layer before the firing function to normalize the membrane potential again, which we call MPBN. To eliminate the time cost induced by MPBN, we also propose a training-inference-decoupled re-parameterization technique that folds the trained MPBN into the firing threshold, so MPBN introduces no extra time burden at inference. Furthermore, MPBN can also adopt an element-wise form, whereas the BNs after the convolution layer can only use the channel-wise form. Experimental results show that the proposed MPBN performs well on both popular non-spiking static datasets and neuromorphic datasets. Our code is open-sourced at \href{https://github.com/yfguo91/MPBN}{MPBN}. Comment: Accepted by ICCV 2023.
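    A minimal sketch of the MPBN idea and the threshold-folding re-parameterization, assuming a channel-wise BatchNorm1d over the membrane potential and positive BN scale parameters. Shapes and names here are illustrative, not the released MPBN code.

```python
import torch
import torch.nn as nn

class MPBNNeuron(nn.Module):
    def __init__(self, channels, threshold=1.0):
        super().__init__()
        self.bn = nn.BatchNorm1d(channels)   # extra BN over the membrane potential
        self.threshold = threshold

    def forward(self, u):
        # Un-folded path: spike = H(BN(u) - threshold)
        return (self.bn(u) >= self.threshold).float()

    def folded_thresholds(self):
        # Solve BN(u) >= theta for u, assuming all BN scales (gamma) are positive:
        #   gamma * (u - mu) / sqrt(var + eps) + beta >= theta
        #   u >= mu + (theta - beta) * sqrt(var + eps) / gamma
        bn = self.bn
        std = torch.sqrt(bn.running_var + bn.eps)
        return bn.running_mean + (self.threshold - bn.bias) * std / bn.weight

neuron = MPBNNeuron(channels=8).eval()
u = torch.randn(4, 8)
with torch.no_grad():
    spikes_bn = neuron(u)                                      # with the MPBN layer
    spikes_folded = (u >= neuron.folded_thresholds()).float()  # BN folded into thresholds
    agreement = (spikes_bn == spikes_folded).float().mean()    # expected ~1.0
```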

    Preparation of PbCO₃


    Source Analysis and Contamination Assessment of Potentially Toxic Element in Soil of Small Watershed in Mountainous Area of Southern Henan, China

    In this study, the concentrations of potentially toxic elements in 283 topsoil samples were determined. A matter-element extension model modified with the Håkanson toxicity response coefficient was introduced to evaluate soil element contamination, and the results were compared with the pollution index method. The sources and spatial distribution of soil elements were analyzed by combining the PMF model with IDW interpolation. The results are as follows. (1) The spatial distribution of potentially toxic element concentrations is uneven; higher concentrations were found near the mining area and farmland. (2) The weights of all elements changed significantly. The matter-element extension model indicates that 68.55% of the topsoil in the study area is clean soil, with Hg as the main contaminating element. This result is roughly consistent with that of the pollution index method, indicating that the evaluation given by the modified matter-element extension model is accurate and reasonable. (3) Potentially toxic elements mainly come from mixed sources of atmospheric deposition and agricultural activities (22.59%), mixed sources of agricultural activities and mining (20.26%), mixed sources of traffic activities, natural sources, and mining (36.30%), and mixed sources of pesticide use and soil parent material (20.85%).
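    For the spatial-mapping step, a minimal sketch of inverse distance weighting (IDW) interpolation on synthetic data; the power parameter, grid, and generated concentrations are illustrative assumptions, not values from the study.

```python
import numpy as np

def idw_interpolate(sample_xy, sample_values, query_xy, p=2, eps=1e-12):
    """Estimate values at query points as distance-weighted means of the samples."""
    # Pairwise distances between query points and sampled topsoil locations.
    d = np.linalg.norm(query_xy[:, None, :] - sample_xy[None, :, :], axis=-1)
    w = 1.0 / (d ** p + eps)                 # closer samples weigh more
    return (w @ sample_values) / w.sum(axis=1)

rng = np.random.default_rng(0)
xy = rng.uniform(0, 10, size=(283, 2))              # 283 sampling sites, as in the study
hg = rng.lognormal(mean=-2.0, sigma=0.5, size=283)  # synthetic Hg concentrations
grid = np.stack(np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 10, 50)), -1)
hg_map = idw_interpolate(xy, hg, grid.reshape(-1, 2)).reshape(50, 50)
```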

    Reducing Information Loss for Spiking Neural Networks

    The Spiking Neural Network (SNN) has attracted increasing attention recently. It adopts binary spike signals to transmit information. Benefiting from this information-passing paradigm, the multiplications of activations and weights can be replaced by additions, which are more energy-efficient. However, the ``Hard Reset" mechanism for the firing activity ignores the differences among membrane potentials above the firing threshold, causing information loss. Meanwhile, quantizing the membrane potential to 0/1 spikes at the firing instants inevitably introduces quantization error, bringing about information loss as well. To address these problems, we propose the ``Soft Reset" mechanism for supervised-training-based SNNs, which drives the membrane potential to a dynamic reset potential according to its magnitude, and a Membrane Potential Rectifier (MPR) that reduces the quantization error by redistributing the membrane potential to a range close to the spikes. Results show that SNNs with the ``Soft Reset" mechanism and MPR outperform their vanilla counterparts on both static and dynamic datasets. Comment: Accepted by ECCV 2022.
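    A minimal sketch contrasting the ``Hard Reset" and ``Soft Reset" update rules described above. The leak factor, threshold, and the rectify() placeholder standing in for MPR are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def lif_step(v, x, threshold=1.0, leak=0.5, soft_reset=True):
    """One discrete LIF update returning (spike, new membrane potential)."""
    v = leak * v + x
    spike = (v >= threshold).float()
    if soft_reset:
        # Soft reset: subtract the threshold, keeping the above-threshold surplus.
        v = v - spike * threshold
    else:
        # Hard reset: discard everything above the threshold, losing that information.
        v = v * (1.0 - spike)
    return spike, v

def rectify(v, threshold=1.0):
    # Placeholder for an MPR-style mapping that pushes v toward {0, threshold}
    # before firing, reducing quantization error (illustrative assumption).
    return threshold * torch.sigmoid(4.0 * (v - threshold / 2))

v = torch.zeros(3)
x = torch.tensor([0.6, 1.4, 2.3])
spike_soft, v_soft = lif_step(v, x, soft_reset=True)
spike_hard, v_hard = lif_step(v, x, soft_reset=False)
# v_soft keeps the surplus above threshold (e.g. 1.3 for the last neuron),
# while v_hard resets it to zero, which is the information loss discussed above.
```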

    Preparation and Self-Healing Application of Isocyanate Prepolymer Microcapsules

    In this study, we successfully manufactured polyurethane microcapsules containing an isocyanate prepolymer as the core material for self-healing protective coatings via interfacial polymerization of a commercial polyurethane curing agent (Bayer L-75) with 1,4-butanediol (BDO) as a chain extender in an emulsion. Under an optical microscope (OM) and a scanning electron microscope (SEM), the resulting microcapsules showed a spherical shape and an ideal structure with a smooth surface. Fourier transform infrared (FTIR) spectra showed that the core material was successfully encapsulated. Thermogravimetric analysis (TGA) showed that the initial evaporation temperature of the microcapsules was 270 °C. In addition, we examined the influence of the emulsifier and chain extender concentrations on the structure and morphology of the microcapsules. The results indicate that the optimal parameters are an emulsifier concentration of 7.5% and a chain extender concentration of 15.38%. Microcapsules were added to an epoxy resin coating, and a surface scratch test verified the coating's self-healing performance: cracks could heal within 24 h. Furthermore, the self-healing coating exhibited excellent corrosion resistance.