
    Assigning conservation value and identifying hotspots of endemic rattan diversity in the Western Ghats, India

    Rattans, or canes, are among the most important non-timber forest products supporting the livelihoods of many forest-dwelling communities in South and North-eastern India. Owing to increased demand for rattan products, rattans have been extracted indiscriminately from the Western Ghats, a 1600-km mountain chain running parallel to the west coast of India. Extensive harvesting, habitat loss and poor regeneration have resulted in dwindling rattan populations, necessitating an urgent effort to conserve existing rattan resources. In this study, using niche-modelling tools, we identify areas of high rattan species richness in the Western Ghats, one of the mega-diversity regions of the world. We also develop conservation values for 21 economically important and endemic rattans of the Western Ghats. We identified at least two to three sites of extremely high species richness outside the existing protected-area network that should be prioritized for in situ conservation. This study emphasizes the need to develop strategies for the long-term conservation of rattans in the Western Ghats, India.
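    To make the richness-mapping step concrete, below is a minimal sketch (Python/NumPy) of how per-species habitat-suitability maps from niche models could be stacked into a species-richness surface and thresholded into candidate hotspots. The grid size, suitability threshold and percentile cutoff are illustrative assumptions, not parameters from the study.

```python
import numpy as np

# Hypothetical stack of habitat-suitability maps (one per rattan species),
# e.g. outputs of per-species niche models scaled to 0..1 on a grid covering
# the Western Ghats. Random values stand in for real model predictions.
n_species, n_rows, n_cols = 21, 400, 120
rng = np.random.default_rng(0)
suitability = rng.random((n_species, n_rows, n_cols))

# Threshold each species map to presence/absence (threshold is an assumption),
# then sum across species to get an estimated richness per grid cell.
threshold = 0.5
presence = suitability >= threshold
richness = presence.sum(axis=0)

# Flag, say, the top 5% of cells by richness as candidate hotspots.
hotspot_cutoff = np.percentile(richness, 95)
hotspots = richness >= hotspot_cutoff
print(f"max richness: {richness.max()} species; candidate hotspot cells: {hotspots.sum()}")
```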

    ESSOP: Efficient and Scalable Stochastic Outer Product Architecture for Deep Learning

    Deep neural networks (DNNs) have surpassed human-level accuracy in a variety of cognitive tasks, but at the cost of significant memory and time requirements for DNN training. This limits their deployment in energy- and memory-limited applications that require real-time learning. Matrix-vector multiplication (MVM) and the vector-vector outer product (VVOP) are the two most expensive operations associated with training DNNs. Strategies to improve the efficiency of MVM computation in hardware have been demonstrated with minimal impact on training accuracy. However, VVOP computation remains a comparatively unexplored bottleneck even with these strategies. Stochastic computing (SC) has been proposed to improve the efficiency of VVOP computation, but only on relatively shallow networks with bounded activation functions and floating-point (FP) scaling of activation gradients. In this paper, we propose ESSOP, an efficient and scalable stochastic outer product architecture based on the SC paradigm. We introduce efficient techniques to generalize SC for weight-update computation in DNNs with unbounded activation functions (e.g., ReLU), as required by many state-of-the-art networks. Our architecture reduces the computational cost by re-using random numbers and replacing certain FP multiplication operations with bit-shift scaling. We show that the ResNet-32 network, with 33 convolution layers and a fully connected layer, can be trained with ESSOP on the CIFAR-10 dataset to achieve accuracy comparable to the baseline. A hardware design of ESSOP at the 14-nm technology node shows that, compared with a highly pipelined FP16 multiplier design, ESSOP is 82.2% and 93.7% better in energy and area efficiency, respectively, for outer product computation. Comment: 5 pages, 5 figures; accepted for publication at ISCAS 2020.
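    As an illustration of the stochastic-computing idea described above, the following sketch estimates the weight-update outer product with Bernoulli bit streams, re-uses one random stream per vector element across the whole product, and rescales by power-of-two factors (the bit-shiftable counterpart of FP scaling). The stream length and encoding details are assumptions for illustration; this is not the ESSOP hardware datapath.

```python
import numpy as np

rng = np.random.default_rng(42)

def stochastic_outer_product(x, delta, stream_len=64):
    """Toy SC-style estimate of the weight-update outer product delta x^T.
    Magnitudes are normalized by power-of-two scales (so the final rescaling
    corresponds to a bit shift rather than an FP multiply), encoded as
    Bernoulli bit streams, and multiplied by AND-ing the streams. The same
    per-element random streams are re-used across the whole outer product."""
    # Power-of-two scales: smallest 2^k that covers the largest magnitude.
    sx = 2.0 ** np.ceil(np.log2(max(np.abs(x).max(), 1e-12)))
    sd = 2.0 ** np.ceil(np.log2(max(np.abs(delta).max(), 1e-12)))

    px, pd = np.abs(x) / sx, np.abs(delta) / sd                # probabilities in [0, 1]
    bx = rng.random((x.size, stream_len)) < px[:, None]        # bit streams for x
    bd = rng.random((delta.size, stream_len)) < pd[:, None]    # bit streams for delta

    # AND the streams pairwise and average: E[bd & bx] approximates pd * px.
    prod = (bd[:, None, :] & bx[None, :, :]).mean(axis=2)
    sign = np.sign(delta)[:, None] * np.sign(x)[None, :]
    return sign * prod * (sd * sx)                             # power-of-two rescaling

x = rng.standard_normal(8)       # toy forward activations
delta = rng.standard_normal(4)   # toy backpropagated errors
approx = stochastic_outer_product(x, delta)
print("mean abs error:", np.abs(approx - np.outer(delta, x)).mean())
```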

    Accurate deep neural network inference using computational phase-change memory

    In-memory computing is a promising non-von Neumann approach for building energy-efficient deep learning inference hardware. Crossbar arrays of resistive memory devices can be used to encode the network weights and perform efficient analog matrix-vector multiplications without intermediate movement of data. However, due to device variability and noise, the network needs to be trained in a specific way so that transferring the digitally trained weights to the analog resistive memory devices does not result in a significant loss of accuracy. Here, we introduce a methodology to train ResNet-type convolutional neural networks that results in no appreciable accuracy loss when transferring weights to in-memory computing hardware based on phase-change memory (PCM). We also propose a compensation technique that exploits the batch normalization parameters to improve the accuracy retention over time. We achieve a classification accuracy of 93.7% on the CIFAR-10 dataset and a top-1 accuracy of 71.6% on the ImageNet benchmark after mapping the trained weights to PCM. Our hardware results on CIFAR-10 with ResNet-32 demonstrate an accuracy above 93.5% retained over a one-day period, where each of the 361,722 synaptic weights of the network is programmed on just two PCM devices organized in a differential configuration. Comment: This is a pre-print of an article accepted for publication in Nature Communications.
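    The differential-pair mapping mentioned at the end of the abstract can be sketched as follows: each weight is represented as the difference of two conductances, and a noisy read is simulated before reconstructing the effective weight. The conductance range and noise level are placeholder assumptions, not measured PCM characteristics.

```python
import numpy as np

rng = np.random.default_rng(7)

def map_weights_differential(w, g_max=25.0):
    """Map trained weights onto conductance pairs (G_plus, G_minus) in a
    differential configuration: w is proportional to G_plus - G_minus.
    Positive weights are programmed on G_plus, negative on G_minus.
    g_max and the linear scaling are illustrative assumptions."""
    scale = g_max / np.abs(w).max()
    g_plus = np.where(w > 0, w * scale, 0.0)
    g_minus = np.where(w < 0, -w * scale, 0.0)
    return g_plus, g_minus, scale

def read_weights(g_plus, g_minus, scale, sigma=0.8):
    """Simulate a noisy analog read: additive Gaussian conductance noise,
    then reconstruct the effective weight from the differential pair."""
    noisy_plus = g_plus + rng.normal(0.0, sigma, g_plus.shape)
    noisy_minus = g_minus + rng.normal(0.0, sigma, g_minus.shape)
    return (noisy_plus - noisy_minus) / scale

w = rng.standard_normal((16, 16)) * 0.1          # toy layer weights
gp, gm, s = map_weights_differential(w)
w_hw = read_weights(gp, gm, s)
print("relative weight error:", np.abs(w_hw - w).mean() / np.abs(w).mean())
```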

    Mixed-precision deep learning based on computational memory

    Deep neural networks (DNNs) have revolutionized the field of artificial intelligence and have achieved unprecedented success in cognitive tasks such as image and speech recognition. Training large DNNs, however, is computationally intensive, and this has motivated the search for novel computing architectures targeting this application. A computational memory unit with nanoscale resistive memory devices organized in crossbar arrays can store the synaptic weights in their conductance states and perform the expensive weighted summations in place, in a non-von Neumann manner. However, updating the conductance states in a reliable manner during the weight-update process is a fundamental challenge that limits the training accuracy of such an implementation. Here, we propose a mixed-precision architecture that combines a computational memory unit, which performs the weighted summations and imprecise conductance updates, with a digital processing unit that accumulates the weight updates in high precision. A combined hardware/software training experiment of a multilayer perceptron based on the proposed architecture, using a phase-change memory (PCM) array, achieves 97.73% test accuracy on the task of classifying handwritten digits (based on the MNIST dataset), within 0.6% of the software baseline. The architecture is further evaluated using accurate behavioral models of PCM on a wide class of networks, namely convolutional neural networks, long short-term memory networks, and generative adversarial networks. Accuracies comparable to those of floating-point implementations are achieved without being constrained by the non-idealities associated with the PCM devices. A system-level study demonstrates a 173x improvement in the energy efficiency of the architecture when used for training a multilayer perceptron, compared with a dedicated, fully digital 32-bit implementation.
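    A minimal sketch of the mixed-precision update rule described above: weight updates are accumulated digitally in high precision, and only when the accumulator crosses the device-update granularity are coarse, imprecise conductance pulses applied to the analog array. The granularity, learning rate and pulse-noise model are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def mixed_precision_update(chi, grad, lr, epsilon, apply_pulses):
    """One step of a mixed-precision weight update: accumulate the update
    digitally in high precision (chi); whenever the accumulated value exceeds
    the device-update granularity epsilon, apply that many coarse conductance
    pulses to the analog array and remove the applied amount from chi."""
    chi += -lr * grad                        # high-precision digital accumulation
    n_pulses = np.trunc(chi / epsilon)       # whole device updates to apply
    apply_pulses(n_pulses)                   # imprecise analog conductance updates
    chi -= n_pulses * epsilon                # keep the residual in the accumulator
    return chi

# Toy usage: the "device" adds epsilon-sized steps with programming noise.
epsilon = 0.01
weights = np.zeros(5)                        # stands in for analog conductances

def apply_pulses(n_pulses, noise=0.2):
    global weights
    actual = n_pulses * epsilon * (1 + noise * rng.standard_normal(n_pulses.shape))
    weights += actual                        # imprecise update, as on real devices

chi = np.zeros(5)
for _ in range(100):
    grad = rng.standard_normal(5) * 0.05     # placeholder gradients
    chi = mixed_precision_update(chi, grad, lr=0.1, epsilon=epsilon, apply_pulses=apply_pulses)
print("weights after 100 steps:", np.round(weights, 3))
```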

    Theoretical insights into kesterite and stannite phases of Cu2(Sn1-xGex)ZnSe4 based alloys: A prospective photovoltaic material

    A comparative study of the kesterite (KS) and stannite (ST) phases of Cu2(Sn1-xGex)ZnSe4 (CTGZSe) alloys has been carried out using a hybrid functional within the framework of density functional theory (DFT). Our calculations suggest that the KS phase is energetically more stable. We find that the total energy of the KS phase decreases with increasing Ge concentration (x). The calculated positive binding energies suggest that the alloy systems are stable. The formation enthalpy clearly indicates that CTGZSe alloys are thermodynamically stable and that their growth can be achieved via an exothermic reaction route. The calculated energy band gaps of the alloys agree well with the experimental data for the KS phase. The band offsets of the KS and ST phases as a function of Ge concentration (x) can be explained on the basis of the calculated energy band gaps. We find a slight upshift in the conduction band edges, while the valence band edges remain almost unchanged as the Ge concentration (x) is varied. Our results could be useful for the development of CTGZSe alloy-based solar cells.
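    As a small worked example of the stability bookkeeping referred to above, the sketch below evaluates a mixing enthalpy for the alloy relative to its two end members, Cu2SnZnSe4 and Cu2GeZnSe4. All total energies are placeholder numbers chosen for illustration, not values computed in the study.

```python
import numpy as np

def mixing_enthalpy(e_alloy, e_ctzse, e_cgzse, x):
    """Mixing enthalpy of a Cu2(Sn1-xGex)ZnSe4 alloy relative to the two
    end members Cu2SnZnSe4 (x = 0) and Cu2GeZnSe4 (x = 1):
        dH(x) = E_alloy(x) - (1 - x) * E(Cu2SnZnSe4) - x * E(Cu2GeZnSe4)
    A negative dH indicates the mixed phase is energetically favourable."""
    return e_alloy - (1.0 - x) * e_ctzse - x * e_cgzse

# Placeholder total energies per formula unit in eV (illustrative only,
# not values reported in the study).
e_ctzse, e_cgzse = -34.20, -34.65
compositions = np.array([0.25, 0.50, 0.75])
e_alloys = np.array([-34.33, -34.45, -34.56])

for x, e in zip(compositions, e_alloys):
    print(f"x = {x:.2f}: dH = {mixing_enthalpy(e, e_ctzse, e_cgzse, x):+.3f} eV/f.u.")
```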