MADX: Memristors-As-Drivers for Crossbar logic
Memristors have the potential not only to replace conventional memory but also to open up new design possibilities, because they store 1s and 0s as resistances rather than voltages. A memristor architecture that has attracted interest for its versatility and ease of integration with existing CMOS technologies is the crossbar array. In this paper, I modify the MAD scheme to create the MADX scheme for performing basic logic operations within a crossbar array. I then compare this scheme against two of the best-known schemes, MAGIC and IMPLY. In a case study of a full adder, in both one-bit and 8-bit versions, the MADX scheme achieves lower latency and substantially lower area requirements than both MAGIC and IMPLY. This is because it is more flexible about where it stores output values, does not destroy input values as IMPLY does, and supports more basic operations. In particular, it has XOR, which neither IMPLY nor MAGIC provides and which is useful for addition.
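To see why a native XOR primitive matters for the full-adder case study, here is a standard gate-level full adder sketched in Python. This is conventional Boolean logic, not the MADX memristor circuit itself: the sum bit is two XORs, so a logic scheme without XOR must synthesize each one from several slower primitive steps.

```python
# A full adder from XOR/AND/OR gates: the sum bit is two XOR operations,
# and the carry-out is a majority function.
def full_adder(a: int, b: int, cin: int) -> tuple[int, int]:
    s = a ^ b ^ cin                   # sum bit: two XORs
    cout = (a & b) | (cin & (a ^ b))  # carry-out: majority of the inputs
    return s, cout

def ripple_adder(x: int, y: int, bits: int = 8) -> int:
    # Chain `bits` full adders, as in the abstract's 8-bit case study.
    carry, result = 0, 0
    for i in range(bits):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result & ((1 << bits) - 1)
```

Counting gate evaluations in this sketch makes the abstract's latency argument concrete: each bit position needs two XORs on the critical path, which a scheme with a native XOR executes directly.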
Mathematical simulation of memristive for classification in machine learning
Over the last few years, neuromorphic computation has been a widely researched topic. One neuromorphic computing element is the memristor: a high-density, analogue memory device that complies with Ohm's law for small potential changes. Memristive behaviour imitates synaptic behaviour. It is a nanotechnology that can reduce power consumption, improve synaptic modelling, and reduce data-transmission overhead. The purpose of this paper is to investigate a customized mathematical model for machine learning algorithms. This model uses a computing paradigm that differs from standard von Neumann architectures, and it has the potential to reduce power consumption and increase performance on specialized jobs compared with regular computers. Classification is one of the most interesting fields in machine learning, classifying feature patterns by means of a specific algorithm. In this study, a memristive-based classifier is used with an adaptive spike encoder for the input data. We run this algorithm with both anti-Hebbian and Hebbian learning rules. These investigations employed two datasets: the breast cancer Wisconsin dataset and a Gaussian mixture model dataset. The results indicate that the performance of our memristive-based algorithm is reasonably close to the optimal solution.
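The Hebbian and anti-Hebbian rules named in the abstract can be sketched as simple weight updates. This is an illustrative toy, not the paper's memristive model: the outer-product form and the learning rate `eta` are common textbook assumptions.

```python
import numpy as np

def hebbian_update(w, pre, post, eta=0.01):
    # Hebbian: strengthen weights where pre- and post-synaptic
    # activity coincide ("cells that fire together wire together").
    return w + eta * np.outer(post, pre)

def anti_hebbian_update(w, pre, post, eta=0.01):
    # Anti-Hebbian: weaken weights for coincident activity,
    # which tends to decorrelate the outputs.
    return w - eta * np.outer(post, pre)
```

In a memristive implementation, the weight matrix `w` would be held as crossbar conductances, with each update applied as a programming pulse rather than an in-software addition.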
CMOS-based Single-Cycle In-Memory XOR/XNOR
Big data applications are on the rise, and so is the number of data centers.
The ever-increasing massive data pool needs to be periodically backed up in a
secure environment. Moreover, a massive amount of securely backed-up data is
required for training binary convolutional neural networks for image
classification. XOR and XNOR operations are essential for large-scale data copy
verification, encryption, and classification algorithms. The disproportionate
speed of existing compute and memory units makes the von Neumann architecture
inefficient to perform these Boolean operations. Compute-in-memory (CiM) has
proved to be an optimum approach for such bulk computations. The existing
CiM-based XOR/XNOR techniques either require multiple cycles for computing or
add to the complexity of the fabrication process. Here, we propose a CMOS-based
hardware topology for single-cycle in-memory XOR/XNOR operations. Our design
provides at least a 2x improvement in latency compared with other
existing CMOS-compatible solutions. We verify the proposed system through
circuit/system-level simulations and evaluate its robustness using a 5000-point
Monte Carlo variation analysis. This all-CMOS design paves the way for
practical implementation of CiM XOR/XNOR at scaled technology nodes.

Comment: 12 pages, 6 figures, 1 table
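The data-copy-verification use case mentioned in the abstract reduces to a bulk XOR: XORing an original block with a faithful copy yields all zeros, and any set bit flags corruption. A minimal software sketch of that check (the in-memory design would compute the XOR inside the array rather than in a CPU loop):

```python
def verify_copy(original: bytes, copy: bytes) -> bool:
    # A copy is valid iff every bytewise XOR with the original is zero.
    if len(original) != len(copy):
        return False
    return all((a ^ b) == 0 for a, b in zip(original, copy))
```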