2 research outputs found

    Design and Simulation of Two-Stage Wideband CMOS Amplifier in 90 nm Technology

    The design and simulation of a 7 GHz CMOS wideband amplifier (CMOSWA) using a modified cascode circuit realized in 90-nm CMOS technology is presented here. The proposed system consists of two stages: a modified folded cascode and an inductively degenerated common-source amplifier. The circuit is evaluated both with and without a feedback network, and the work discusses the performance variation as a function of the reactive components. The initial stage achieves 22 dB gain, 2.6 GHz bandwidth, and 40 GHz unity-gain bandwidth. Without the feedback network, the circuit exhibits 30.7 dB gain, 4.8 GHz bandwidth (BW), and 10 GHz unity-gain bandwidth (UGB). Including the reactive feedback network raises the performance to 38.7 dB gain, 6.95 GHz BW, 30 GHz UGB, and a 55° phase margin. The circuit consumes 1.4 mW from a 1.8 V supply. Simulation results of the proposed circuit compare favorably with the wideband designs reported in the literature, and a realization of the circuit would add value to the area of wideband amplifier design.
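    As a rough sanity check on the figures above, a single-pole amplifier's unity-gain bandwidth is approximately its low-frequency gain (as a linear ratio) times its -3 dB bandwidth. The short Python sketch below applies this first-order approximation to the quoted numbers; it is an illustrative estimate only, not the authors' methodology, and the reported UGB values need not match it exactly since a real multi-stage circuit has several poles.

        # First-order gain-bandwidth estimate (illustrative sketch):
        # UGB ~ linear voltage gain x -3 dB bandwidth for a single-pole response.
        def unity_gain_bw(gain_db: float, bw_hz: float) -> float:
            linear_gain = 10 ** (gain_db / 20)  # convert voltage gain from dB
            return linear_gain * bw_hz

        # Initial stage: 22 dB gain, 2.6 GHz bandwidth -> ~32.7 GHz estimate,
        # in the same ballpark as the reported 40 GHz UGB.
        print(unity_gain_bw(22, 2.6e9))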

    Overcoming Noise and Variations in Low-Precision Neural Networks

    This work explores the impact of various design and training choices on the resilience of a neural network subjected to noise and/or device variations. Simulations were performed under the expectation that the neural network would be implemented on analog hardware; in this context there will be random noise within the circuit as well as variations in device characteristics between fabricated devices. The results show how noise can be added during the training process to reduce the impact of post-training noise, as the sketch below illustrates. Architectural choices for the neural network also directly impact the performance variation between devices. The simulated neural networks were more robust to noise with a minimal architecture having fewer layers; if more neurons are needed for better fitting, networks with more neurons in shallow layers and fewer in deeper layers closer to the output tend to perform better. The work also demonstrates that activation functions with lower slopes do a better job of suppressing noise in the neural network, and it shows that accuracy can be made more consistent by introducing sparsity into the network. To that end, an evaluation is included of different methods for generating sparse architectures for smaller neural networks, and a new method is proposed that consistently outperforms the most common methods used in larger, deeper networks.
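    The noise-injection idea described above is commonly realized by perturbing a layer's weights with fresh random noise on every training forward pass, so the learned parameters tolerate similar perturbations after deployment. The PyTorch-style sketch below is a minimal illustration under assumed names and an assumed noise level (noise_std); it is not the authors' actual implementation.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class NoisyLinear(nn.Module):
            # Linear layer that adds Gaussian noise to its weights on each
            # training forward pass, emulating analog-circuit noise.
            def __init__(self, in_features, out_features, noise_std=0.05):
                super().__init__()
                self.linear = nn.Linear(in_features, out_features)
                self.noise_std = noise_std  # assumed level, not from the paper

            def forward(self, x):
                if self.training and self.noise_std > 0:
                    # Fresh noise each batch approximates device-to-device
                    # and temporal variation seen on analog hardware.
                    noisy_w = self.linear.weight + self.noise_std * torch.randn_like(self.linear.weight)
                    return F.linear(x, noisy_w, self.linear.bias)
                return self.linear(x)  # clean weights at evaluation time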