
    On the Witten Rigidity Theorem for String^c Manifolds

    We establish family rigidity and vanishing theorems at the equivariant K-theory level for the Witten-type operators on String^c manifolds introduced by Chen-Han-Zhang. Comment: arXiv admin note: substantial text overlap with arXiv:1104.3972, and with arXiv:math/0001014, arXiv:math/9912108 by other authors

    Massive Domain Wall Fermions on Four-dimensional Anisotropic Lattices

    We formulate massive domain wall fermions on anisotropic lattices. For the massive domain wall fermion, we find that the dispersion relation assumes the usual form in the low-momentum region when the bare parameters are properly tuned. The quark self-energy and the quark field renormalization constants are calculated to one loop in bare lattice perturbation theory. For light domain wall fermions, we verify that the chiral mode is stable against quantum fluctuations on anisotropic lattices. This calculation serves as guidance for tuning the parameters of the quark action in future numerical simulations. Comment: 36 pages, 14 figures, references added
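    The "usual form" of the dispersion relation referred to above can be sketched as follows (a schematic, not the paper's exact expressions; the symbols ξ for the anisotropy and c for the effective speed of light are assumptions for illustration): on an anisotropic lattice with spacings a_s and a_t = a_s/ξ, the low-momentum expansion of the fermion energy reads

    ```latex
    % Schematic low-momentum dispersion for a massive lattice fermion
    % on an anisotropic lattice (illustrative; not the paper's notation):
    E^2(\mathbf{p}) = m^2 + c^2\,\mathbf{p}^2 + \mathcal{O}(\mathbf{p}^4),
    \qquad \xi = a_s / a_t ,
    ```

    where tuning the bare parameters amounts to driving the effective velocity c to 1, so that the continuum relativistic relation E^2 = m^2 + p^2 is recovered at low momenta.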

    Towards Accurate and High-Speed Spiking Neuromorphic Systems with Data Quantization-Aware Deep Networks

    Deep Neural Networks (DNNs) have achieved immense success in cognitive applications and have greatly pushed today's artificial intelligence forward. The biggest challenge in executing DNNs is their extremely data-intensive computation. The computing efficiency in speed and energy is constrained when traditional computing platforms are employed for such computation-hungry executions. Spiking neuromorphic computing (SNC) has been widely investigated for deep network implementation owing to its high efficiency in computation and communication. However, weights and signals of DNNs must be quantized when deploying DNNs on SNC, which can result in unacceptable accuracy loss. Previous works mainly focus on weight discretization, while inter-layer signals are largely neglected. In this work, we propose to represent DNNs with fixed-integer inter-layer signals and fixed-point weights while maintaining good accuracy. We implement the proposed DNNs on a memristor-based SNC system as a deployment example. With 4-bit data representation, our results show that the accuracy loss can be controlled within 0.02% (2.3%) on MNIST (CIFAR-10). Compared with 8-bit dynamic fixed-point DNNs, our system achieves more than 9.8x speedup, 89.1% energy saving, and 30% area saving. Comment: 6 pages, 4 figures
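    The 4-bit fixed-point representation described above can be sketched with a minimal uniform quantizer. This is an illustrative sketch, not the paper's method: the helper `quantize_fixed_point`, its scaling-by-max scheme, and the example values are all assumptions introduced here for clarity.

    ```python
    import numpy as np

    def quantize_fixed_point(x, bits=4):
        """Hypothetical uniform fixed-point quantizer (illustrative sketch).

        Scales values by the maximum magnitude so they fit a signed grid of
        `bits` total bits, then rounds to the nearest integer code.
        """
        levels = 2 ** (bits - 1) - 1            # e.g. codes up to +7 for 4 bits
        scale = np.max(np.abs(x)) / levels      # step size of the grid
        codes = np.clip(np.round(x / scale), -levels - 1, levels)
        return codes * scale, codes.astype(int)  # dequantized values, int codes

    # Toy weight vector quantized to 4 bits:
    w = np.array([0.12, -0.5, 0.33, 0.07])
    w_q, codes = quantize_fixed_point(w, bits=4)
    ```

    Each quantized value then differs from the original by at most half a grid step, which is the kind of bounded error the work above trades against speed, energy, and area.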