Design and Implementation of Area and Power Efficient Low Power VLSI Circuits through Simple Byte Compression with Encoding Technique
Transition activity is one of the major factors in power dissipation in low-power VLSI circuits, owing to the charging and discharging of internal node capacitances. Power dissipation can be reduced by minimizing transition activity through proper coding techniques. In this paper, a multi-coding technique is implemented that reduces transition activity by up to 58.26%. The speed of data transmission depends largely on the number of bits transmitted over the bus, and large applications require substantial storage space for processing, storing, and transferring information. Data compression reduces the number of bits required to represent information in a compact form. Here, a simple byte compression technique is implemented to achieve lossless data compression. This compression algorithm also reduces the encoder's computational complexity when handling large volumes of data, and improves the compression ratio by up to 62.5%. Together, simple byte compression and multi-coding minimize area and power dissipation in low-power VLSI circuits.
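The abstract does not detail the paper's multi-coding scheme. As an illustration of how coding reduces transition activity, the sketch below implements classic bus-invert coding (a different, well-known technique, not the paper's method): if sending the next word would toggle more than half of the bus lines, the complement is sent instead, signaled by one extra invert line.

```python
def hamming(a, b, width=8):
    """Number of bus lines that toggle between two words."""
    return bin((a ^ b) & ((1 << width) - 1)).count("1")

def transitions(words, width=8):
    """Total transition activity on a bus that starts at all zeros."""
    total, prev = 0, 0
    for w in words:
        total += hamming(prev, w, width)
        prev = w
    return total

def bus_invert_encode(words, width=8):
    """Bus-invert coding: emit (word, invert_flag) pairs, complementing
    any word that would otherwise toggle more than width/2 lines."""
    encoded, prev = [], 0
    for w in words:
        if hamming(prev, w, width) > width // 2:
            w = ~w & ((1 << width) - 1)
            invert = 1
        else:
            invert = 0
        encoded.append((w, invert))
        prev = w
    return encoded

# Worst-case pattern: alternating 0x00/0xFF toggles all 8 lines each cycle.
raw = [0xFF, 0x00, 0xFF, 0x00]
print(transitions(raw))          # 32 transitions uncoded
print(bus_invert_encode(raw))    # encoded bus stays at 0x00; only the
                                 # invert line toggles
```

The encoded bus never toggles its data lines for this pattern; the cost is one extra bus line, which is why such codes trade a small area overhead for lower dynamic power.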
Continued study of NAVSTAR/GPS for general aviation
A conceptual approach for examining the full potential of the Global Positioning System (GPS) for the general aviation community is presented, and aspects of an experimental program to demonstrate these concepts are discussed. The report concludes with the observation that the true potential of GPS can only be exploited when it is used in concert with a data link. The capability afforded by combining position location and reporting suggests that GPS could provide the auxiliary functions of collision avoidance and approach and landing guidance. A series of general recommendations for future NASA and civil-community efforts to continue supporting GPS for general aviation is included.
RePAST: A ReRAM-based PIM Accelerator for Second-order Training of DNN
Second-order training methods can converge much faster than first-order optimizers in DNN training, because second-order training uses the inversion of the second-order information (SOI) matrix to find a more accurate descent direction and step size. However, the huge SOI matrices bring significant computational and memory overheads on traditional architectures such as GPUs and CPUs. On the other hand, ReRAM-based processing-in-memory (PIM) technology is well suited to second-order training for three reasons: first, PIM's computation happens in memory, which reduces data-movement overheads; second, ReRAM crossbars can compute the SOI's inversion in O(1) time; third, if architected properly, ReRAM crossbars can perform both the matrix inversions and the vector-matrix multiplications that are central to second-order training algorithms.
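As background (this is generic textbook material, not code from the paper), the role of the SOI inverse can be seen in a plain Newton step, where the inverse curvature matrix supplies both the descent direction and the step size:

```python
import numpy as np

# Newton-type update on a quadratic loss f(w) = 0.5 w^T A w - b^T w,
# whose second-order information (SOI) matrix is the Hessian A.
# Illustrative sketch only; A and b are made-up toy values.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])        # Hessian (SOI matrix), symmetric positive definite
b = np.array([1.0, 1.0])

w = np.zeros(2)
grad = A @ w - b                  # first-order information (gradient)
w = w - np.linalg.inv(A) @ grad   # SOI inverse gives direction *and* step size

# On a quadratic loss, a single Newton step lands on the exact minimizer,
# i.e. the point where A w = b.
print(np.allclose(A @ w, b))      # True
```

First-order methods would need many small steps to reach the same point; the price of the faster convergence is forming and inverting the SOI matrix, which is exactly the overhead the paper targets.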
Nevertheless, current ReRAM-based PIM techniques still face a key challenge in accelerating second-order training: the existing ReRAM-based matrix-inversion circuitry supports only 8-bit matrix inversion, and this precision is insufficient for second-order training, which needs at least 16-bit-accurate matrix inversion. In this work, we propose a method to achieve high-precision matrix inversion built on proven 8-bit matrix inversion (INV) circuitry and vector-matrix multiplication (VMM) circuitry. We design RePAST, a ReRAM-based PIM accelerator architecture for second-order training. Moreover, we propose a software mapping scheme for RePAST that further optimizes performance by fusing the VMM and INV crossbars. Experiments show that RePAST achieves an average of 115.8×/11.4× speedup and 41.9×/12.8× energy saving compared to a GPU counterpart and PipeLayer, respectively, on large-scale DNNs.
Comment: 13 pages, 13 figures