    Mathematical simulation of memristive for classification in machine learning

    Over the last few years, neuromorphic computation has been a widely researched topic. One of its building blocks is the memristor, a nanoscale device that offers high density, analogue memory storage, and compliance with Ohm's law for small potential changes. Memristive behaviour imitates synaptic behaviour, so the technology can reduce power consumption, improve synaptic modelling, and cut down on data-transmission overhead. The purpose of this paper is to investigate a customized mathematical model for machine learning algorithms. The model uses a computing paradigm that differs from standard von Neumann architectures and has the potential to reduce power consumption and increase performance on specialized tasks compared with conventional computers. Classification, one of the most active fields in machine learning, assigns feature patterns to classes using a specific algorithm. In this study, a memristive-based classifier is used together with an adaptive spike encoder for the input data, and the algorithm is trained with Hebbian and Anti-Hebbian learning rules. The investigation employed two datasets: the breast cancer Wisconsin dataset and a Gaussian mixture model dataset. The results indicate that the performance of the memristive-based algorithm is reasonably close to the optimal solution.
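    The abstract does not include code, so the following is a minimal behavioural sketch of the training scheme it describes, under stated assumptions: every name, the conductance bounds, the learning rate, the Bernoulli rate encoder (a stand-in for the paper's adaptive spike encoder), and the toy Gaussian-mixture data are illustrative, not the authors' model. A crossbar of bounded conductances is read out via Ohm's law and updated with Hebbian potentiation toward the target class and Anti-Hebbian depression of the others.

```python
# Illustrative sketch only; not the paper's published model.
import numpy as np

rng = np.random.default_rng(0)

def encode_spikes(x, n_steps=20):
    """Rate-encode features in [0, 1] as Bernoulli spike trains
    (a simple stand-in for the paper's adaptive spike encoder)."""
    return (rng.random((n_steps, x.size)) < x).astype(float)

class MemristiveClassifier:
    def __init__(self, n_in, n_out, g_min=1e-4, g_max=1e-3, lr=1e-5):
        # Conductances bounded like a physical memristor (values assumed).
        self.G = rng.uniform(g_min, g_max, (n_in, n_out))
        self.g_min, self.g_max, self.lr = g_min, g_max, lr

    def forward(self, spikes):
        # Ohm's-law read-out: column currents = accumulated spikes @ conductances.
        return spikes.sum(axis=0) @ self.G

    def update(self, spikes, target):
        pre = spikes.mean(axis=0)            # presynaptic firing rates
        post = np.zeros(self.G.shape[1])
        post[target] = 1.0
        # Hebbian potentiation toward the target class,
        # Anti-Hebbian depression of all other classes.
        dG = self.lr * np.outer(pre, 2.0 * post - 1.0)
        self.G = np.clip(self.G + dG, self.g_min, self.g_max)

# Toy two-class data loosely echoing the paper's Gaussian-mixture dataset.
X = np.vstack([
    rng.normal([0.8, 0.8, 0.2, 0.2], 0.1, (50, 4)),   # class 0
    rng.normal([0.2, 0.2, 0.8, 0.8], 0.1, (50, 4)),   # class 1
]).clip(0.0, 1.0)
y = np.array([0] * 50 + [1] * 50)

clf = MemristiveClassifier(n_in=4, n_out=2)
for _ in range(20):                                    # training epochs
    for xi, yi in zip(X, y):
        clf.update(encode_spikes(xi), yi)
preds = np.array([clf.forward(encode_spikes(xi)).argmax() for xi in X])
print("training accuracy:", (preds == y).mean())
```

    Because the conductances saturate at their bounds, repeated Hebbian/Anti-Hebbian updates drive each input row toward high conductance on its associated class column and low conductance elsewhere, which is the association the read-out current then reflects.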

    CMOS-based Single-Cycle In-Memory XOR/XNOR

    Big data applications are on the rise, and so is the number of data centers. The ever-growing data pool must be periodically backed up in a secure environment, and massive amounts of securely backed-up data are also required for training binary convolutional neural networks for image classification. XOR and XNOR operations are essential for large-scale data-copy verification, encryption, and classification algorithms. The disproportionate speeds of existing compute and memory units make the von Neumann architecture inefficient at performing these Boolean operations. Compute-in-memory (CiM) has proved to be an optimum approach for such bulk computations, but existing CiM-based XOR/XNOR techniques either require multiple cycles per operation or complicate the fabrication process. Here, we propose a CMOS-based hardware topology for single-cycle in-memory XOR/XNOR operations. Our design provides at least a 2x latency improvement over other existing CMOS-compatible solutions. We verify the proposed system through circuit- and system-level simulations and evaluate its robustness using a 5000-point Monte Carlo variation analysis. This all-CMOS design paves the way for practical implementation of CiM XOR/XNOR at scaled technology nodes.
    Comment: 12 pages, 6 figures, 1 table
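    To make the data-copy verification use case concrete, here is a minimal behavioural model of column-parallel XOR/XNOR. This is a software analogy only, not the proposed CMOS circuit: the word sizes, function names, and the single-read-per-word idea are assumptions used for illustration.

```python
# Behavioural model of in-memory XOR/XNOR; not the paper's circuit.
import numpy as np

def cim_xor(word_a: np.ndarray, word_b: np.ndarray) -> np.ndarray:
    """Column-parallel XOR: 1 wherever the stored bits differ.
    In the hardware analogy, one read cycle yields this for a whole word."""
    return word_a ^ word_b

def cim_xnor(word_a: np.ndarray, word_b: np.ndarray) -> np.ndarray:
    """Column-parallel XNOR: 1 wherever the stored bits match."""
    return 1 - (word_a ^ word_b)

# Bulk copy verification: the XOR of source and backup must be all zeros.
rng = np.random.default_rng(1)
source = rng.integers(0, 2, size=(4, 8), dtype=np.uint8)  # 4 words x 8 bits
backup = source.copy()
backup[2, 5] ^= 1                                         # inject a bit flip
mismatch = cim_xor(source, backup)
print("corrupted words:", np.flatnonzero(mismatch.any(axis=1)))  # -> [2]
```

    The same per-column comparison is what makes XNOR useful in binarized convolutional networks, where an XNOR followed by a popcount replaces a multiply-accumulate.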