
    Memristive crossbars as hardware accelerators: modelling, design and new uses

    Digital electronics has given rise to reliable, affordable, and scalable computing devices. However, new computing paradigms present challenges. For example, machine learning requires repeatedly processing large amounts of data; this creates a bottleneck in conventional computers, where computing and memory are separated. Moreover, Moore's "law" is plateauing and is thus unlikely to address the increasing demand for computational power. In-memory computing, and specifically hardware accelerators for linear algebra, may address both of these issues. Memristive crossbar arrays are a promising candidate for such hardware accelerators. Memristive devices are fast, energy-efficient, and, when arranged in a crossbar structure, can compute vector-matrix products. Unfortunately, they come with their own set of limitations. The analogue nature of these devices makes them stochastic and thus less reliable than digital devices. It does not, however, necessarily make them unsuitable for computing. Nevertheless, successful deployment of analogue hardware accelerators requires a proper understanding of their drawbacks, ways of mitigating the effects of undesired physical behaviour, and applications where some degree of stochasticity is tolerable. In this thesis, I investigate the effects of nonidealities in memristive crossbar arrays, introduce techniques for minimising those negative effects, and present novel crossbar circuit designs for new applications. I mostly focus on physical implementations of neural networks and investigate the influence of device nonidealities on classification accuracy. To make memristive neural networks more reliable, I explore committee machines, rearrangement of crossbar lines, nonideality-aware training, and other techniques. I find that all of these techniques can raise the accuracy of physically implemented neural networks, often to a level comparable to that of their digital counterparts. Finally, I introduce circuits that extend dot product computations to higher-rank arrays, different linear algebra operations, and quaternion vectors and matrices. These present opportunities for using crossbar arrays in new ways, including the processing of coloured images.
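
As a rough illustration of the dot-product computation described in this abstract, the Python sketch below simulates a memristive crossbar performing a vector-matrix product with stochastic device conductances. The function name, the lognormal noise model, and all parameter values are illustrative assumptions, not code from the thesis.

```python
import numpy as np

def crossbar_vmm(voltages, conductances, rng, noise_sigma=0.05):
    """Simulate a vector-matrix product y = V @ G on a memristive crossbar.

    voltages:     (n,) input voltages applied to the word lines (volts)
    conductances: (n, m) programmed device conductances (siemens)
    noise_sigma:  lognormal spread modelling device-to-device variability
                  (an illustrative assumption, not a measured figure)
    Returns the (m,) vector of bit-line output currents (amperes).
    """
    # Each device deviates stochastically from its target conductance.
    actual = conductances * rng.lognormal(mean=0.0, sigma=noise_sigma,
                                          size=conductances.shape)
    # Ohm's law gives per-device currents; Kirchhoff's current law sums
    # them along each bit line, i.e. computes one dot product per column.
    return voltages @ actual

rng = np.random.default_rng(0)
V = rng.uniform(0.0, 0.2, size=4)          # input voltages
G = rng.uniform(1e-6, 1e-4, size=(4, 3))   # device conductances
print(crossbar_vmm(V, G, rng))             # noisy output currents
```

Because the summation happens physically on the bit lines, the whole multiply-accumulate is effectively a single analogue step, which is the source of the speed and energy advantages the abstract describes.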

    IR-QNN Framework: An IR Drop-Aware Offline Training Of Quantized Crossbar Arrays

    Resistive Crossbar Arrays (RCAs) present an elegant implementation solution for Deep Neural Network (DNN) acceleration. The matrix-vector multiplication, which is the cornerstone of DNNs, is carried out in O(1) steps, compared to O(N²) steps for digital realizations or O(log₂(N)) steps for in-memory associative processors. However, the IR-drop problem, caused by the inevitable interconnect wire resistance in RCAs, remains a daunting challenge. In this article, we propose a fast and efficient training and validation framework that incorporates the wire resistance in quantized DNNs, without the need for computationally expensive SPICE simulations during the training process. A fabricated four-bit Au/Al2O3/HfO2/TiN device is modelled and used within the framework with two mapping schemes to realize the quantized weights. Efficient system-level IR-drop estimation methods are used to accelerate training. SPICE validation results show the effectiveness of the proposed method at capturing the IR-drop problem, achieving the baseline accuracy with worst-case drops of 2% and 4% for the MNIST dataset on a multilayer perceptron network and the CIFAR-10 dataset on modified VGG and AlexNet networks, respectively. Other nonidealities, such as stuck-at-fault defects, variability, and aging, are studied. Finally, the design considerations of the neuronal and driver circuits are discussed.
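
To make the idea of folding IR drop into offline training concrete, here is a minimal Python sketch that attenuates effective conductances with a first-order, position-dependent voltage-divider estimate before the forward pass. This crude model and every name in it are assumptions for illustration only; the abstract does not specify the paper's actual system-level estimation method.

```python
import numpy as np

def ir_drop_attenuation(shape, r_wire=1.0, g_max=1e-4):
    """Crude position-dependent attenuation approximating IR drop.

    Devices farther from the line drivers see a lower effective voltage
    because more wire resistance accumulates along the word and bit
    lines. This closed-form first-order estimate is an illustrative
    stand-in for the paper's system-level method (hypothetical model).
    """
    n, m = shape
    rows = np.arange(1, n + 1)[:, None]   # segments traversed on word line
    cols = np.arange(1, m + 1)[None, :]   # segments traversed on bit line
    # Simple voltage-divider factor per device: more wire in the path
    # relative to the device conductance means a smaller factor.
    return 1.0 / (1.0 + r_wire * g_max * (rows + cols))

def forward(voltages, conductances, atten):
    """Forward pass with IR drop folded into effective conductances,
    so training sees the degraded array rather than the ideal one."""
    return voltages @ (conductances * atten)

atten = ir_drop_attenuation((4, 3))
V = np.full(4, 0.2)          # input voltages (volts)
G = np.full((4, 3), 5e-5)    # quantized weights mapped to conductances
print(forward(V, G, atten))  # IR-drop-degraded output currents
```

The design point is the same as in the article: estimating the degradation cheaply inside the training loop avoids running a full SPICE simulation at every weight update, while still letting the quantized weights adapt to the wire resistance.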