1,726 research outputs found
Are spin junction transistors suitable for signal processing?
A number of spintronic junction transistors, which exploit the spin degree of
freedom of an electron in addition to the charge degree of freedom, have been
proposed to provide simultaneous non-volatile storage and signal processing
functionality. Here, we show that some of these transistors unfortunately may
not have sufficient voltage and current gains for signal processing. This is
primarily because of a large output ac conductance and poor isolation between
input and output. The latter also hinders unidirectional propagation of logic
signals from the input of a logic gate to the output. Other versions of these
transistors appear to have better gain and isolation, but not better than those
of a conventional transistor. Therefore, these devices may not improve
state-of-the-art signal processing capability, although they may provide
additional functionality by offering non-volatile storage. They may also have
niche applications in non-linear circuits
Phase change materials in non-volatile storage
After revolutionizing the technology of optical data storage, phase change materials are being adopted in non-volatile semiconductor memories. Their success in electronic storage is largely due to the unique properties of the amorphous state, where carrier transport phenomena and thermally-induced phase change cooperate to make high-speed, low-voltage operation and stable data retention possible within the same material. This paper reviews the key physical properties that make this phase so special, the quantitative framework of cell performance, and the future prospects of phase-change memory devices at the deep nanoscale
Design and Robustness Analysis on Non-volatile Storage and Logic Circuit
By combining the flexibility of MOS logic and the non-volatility of spintronic devices, spin-MOS logic and storage circuitry offer a promising approach to implementing highly integrated, power-efficient, and nonvolatile computing and storage systems. Besides the persistent errors due to process variations, however, the functional correctness of spin-MOS circuitry suffers from additional non-persistent errors incurred by the randomness of spintronic device operations, i.e., thermal fluctuations. This work quantitatively investigates the impact of thermal fluctuations on the operation of two typical spin-MOS circuits: a one-transistor, one-magnetic-tunnel-junction (1T1J) spin-transfer torque random access memory (STT-RAM) cell and a nonvolatile latch. A new nonvolatile latch design is proposed based on magnetic tunnel junction (MTJ) devices. In standby mode, the latched data can be retained in the MTJs without consuming any power. Two types of operation errors can occur, namely persistent and non-persistent errors. These are quantitatively analyzed with models for process variations and thermal fluctuations during read and write operations. A mixture importance sampling methodology is applied to enable yield-driven design and to extend its application beyond memories to peripheral circuits and logic blocks. Several possible design techniques to reduce the thermally induced non-persistent error rate are also discussed
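The mixture importance sampling idea mentioned in this abstract can be sketched in a few lines. The distributions and threshold below are illustrative assumptions, not the paper's actual device models: a rare "error" event (a standard normal variable exceeding a high threshold) is estimated by drawing from a mixture of the nominal distribution and one shifted toward the failure region, then reweighting each hit by the likelihood ratio.

```python
import math
import random

def mis_estimate(threshold=4.0, n=100_000, shift=4.0, mix=0.5, seed=1):
    """Estimate P(X > threshold) for X ~ N(0, 1) by mixture importance
    sampling: draw from q = mix*N(0,1) + (1-mix)*N(shift,1) and weight
    each sample beyond the threshold by the likelihood ratio p(x)/q(x)."""
    rng = random.Random(seed)

    def p(x):  # nominal density N(0, 1)
        return math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)

    def q(x):  # mixture proposal density
        shifted = math.exp(-0.5 * (x - shift) ** 2) / math.sqrt(2 * math.pi)
        return mix * p(x) + (1 - mix) * shifted

    total = 0.0
    for _ in range(n):
        if rng.random() < mix:
            x = rng.gauss(0.0, 1.0)    # nominal component
        else:
            x = rng.gauss(shift, 1.0)  # component shifted toward the rare region
        if x > threshold:
            total += p(x) / q(x)       # unbiased reweighting
    return total / n
```

Plain Monte Carlo would need on the order of tens of millions of nominal samples to observe even one event at P ≈ 3e-5; the shifted component concentrates samples in the failure region, while keeping the nominal component in the mixture prevents the likelihood-ratio weights from blowing up in the bulk.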
Android Memory Capture and Applications for Security and Privacy
The Android operating system is quickly becoming the most popular platform for mobile devices. As Android’s use increases, so does the need for both forensic and privacy tools designed for the platform. This thesis presents the first methodology and toolset for acquiring full physical memory images from Android devices, a proposed methodology for forensically securing both volatile and non-volatile storage, and details of a vulnerability discovered by the author that allows the bypass of the Android security model and enables applications to acquire arbitrary permissions
MEMTI: optimizing on-chip non-volatile storage for visual multi-task inference at the edge
The combination of specialized hardware and embedded non-volatile memories (eNVM) holds promise for energy-efficient
DNN inference at the edge. However, integrating DNN hardware accelerators with eNVMs still presents several challenges. Multi-level
programming is desirable for achieving maximal storage density on chip, but the stochastic nature of eNVM writes makes them prone
to errors and further increases the write energy and latency. We present MEMTI, a memory architecture that leverages a multi-task
learning technique for maximal reuse of DNN parameters across multiple visual tasks. We show that by retraining and updating only
10% of all DNN parameters, we can achieve efficient model adaptation across a variety of visual inference tasks. The system
performance is evaluated by integrating the memory with the open-source NVIDIA Deep Learning Architecture (NVDLA)
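The "retrain only 10% of parameters" strategy in this abstract can be illustrated with a back-of-the-envelope parameter count. The layer sizes below are made up for illustration and are not MEMTI's actual network: if a shared backbone is frozen across tasks and only a task-specific head is retrained, the adapted fraction is simply the head's parameters over the total.

```python
def layer_params(n_in, n_out):
    """Weights plus biases of one dense layer."""
    return n_in * n_out + n_out

def retrained_fraction(backbone, head):
    """Fraction of all parameters touched when only the head is retrained;
    each network is a list of (n_in, n_out) layer shapes."""
    b = sum(layer_params(i, o) for i, o in backbone)
    h = sum(layer_params(i, o) for i, o in head)
    return h / (b + h)

# Hypothetical network: nine frozen 256-wide backbone layers shared across
# tasks, one 256-wide head retrained per task -> exactly 10% of parameters.
backbone = [(256, 256)] * 9
head = [(256, 256)]
print(f"retrained fraction: {retrained_fraction(backbone, head):.2%}")
```

Because only the head's weights change per task, the frozen backbone can live in dense, error-tolerant multi-level eNVM cells, while the small per-task delta is cheap to reprogram.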