8 research outputs found

    Fatigue life estimation of surface mount type capacitors under random vibrations

    No full text
    Fatigue failures are known to occur in the leads of surface mount electronic components when electronic printed circuit assemblies are subjected to severe vibrational loads, and as a result the reliability of surface mount interconnections is reduced. Exposure to random vibration is therefore a test criterion used to screen, compare, and validate components for automotive and aerospace applications. The fatigue life of leads in such packages is a complex function of the constitutive properties, architecture, and geometry of the package and of the vibrational loads applied to it, so a logical and consistent method for calculating fatigue life under random vibration is needed. Finite element analysis is sometimes employed to help understand or correct random-vibration-induced failures. The main objective of the present study is to formulate a model, based on finite element analysis, for analyzing the fatigue endurance of leads in surface mount packages; the goal is a tool that can be used to examine the influence of several parameters on the fatigue life of the leads.
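    The abstract names the ingredients of such a model (a stress response to random vibration, an S-N fatigue relation, cumulative damage) without reproducing it. As a hedged illustration only, the sketch below implements one standard approach to this kind of estimate, Steinberg's three-band method combined with Miner's rule; the function names and the material constants C and b in the example call are hypothetical, not values from the thesis.

```python
def basquin_cycles_to_failure(stress, C, b):
    """Cycles to failure N from the Basquin S-N relation N = (C / S)^b.

    C and b are hypothetical material constants for the lead alloy;
    in practice they come from test data or a materials handbook.
    """
    return (C / stress) ** b

def steinberg_three_band_life(sigma_rms, natural_freq_hz, C, b):
    """Estimated time to failure (s) under Gaussian random vibration,
    using Steinberg's three-band approximation and Miner's rule.

    Stress cycles are assumed to occur at the 1-sigma, 2-sigma, and
    3-sigma levels for 68.3%, 27.1%, and 4.33% of the cycles.
    """
    fractions = (0.683, 0.271, 0.0433)       # share of cycles per band
    damage_rate = 0.0                        # Miner damage per second
    for k, frac in enumerate(fractions, start=1):
        stress = k * sigma_rms               # band stress amplitude
        cycles_per_s = frac * natural_freq_hz
        damage_rate += cycles_per_s / basquin_cycles_to_failure(stress, C, b)
    return 1.0 / damage_rate                 # life when damage reaches 1

# Illustrative (not measured) numbers: 12 MPa RMS lead stress from a
# finite element random-vibration solution, 180 Hz fundamental mode.
life_s = steinberg_three_band_life(12.0, 180.0, C=250.0, b=6.4)
print(f"estimated fatigue life: {life_s / 3600.0:.1f} h")
```

    In the study's setting, sigma_rms and natural_freq_hz would come from the finite element model of the package described in the abstract.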

    Deep in-memory architectures for machine learning

    No full text

    A Variation-Tolerant In-Memory Machine Learning Classifier via On-Chip Training

    No full text

    Memory-Efficient Deep Learning on a SpiNNaker 2 Prototype

    Get PDF
    The memory requirement of deep learning algorithms is considered incompatible with the memory restrictions of energy-efficient hardware. A low memory footprint can be achieved by pruning obsolete connections or reducing the precision of connection strengths after the network has been trained. Yet these techniques are not applicable when neural networks have to be trained directly on hardware with hard memory constraints. Deep Rewiring (DEEP R) is a training algorithm which continuously rewires the network while preserving very sparse connectivity throughout the training procedure. We apply DEEP R to a deep neural network implementation on a prototype chip of the 2nd generation SpiNNaker system. The local memory of a single core on this chip is limited to 64 KB, and a deep network architecture is trained entirely within this constraint without the use of external memory. Throughout training, the proportion of active connections is limited to 1.3%. On the handwritten digits dataset MNIST, this extremely sparse network achieves 96.6% classification accuracy at convergence. Utilizing the multi-processor feature of the SpiNNaker system, we found very good scaling in terms of computation time, per-core memory consumption, and energy consumption. Compared to an x86 CPU implementation, neural network training on the SpiNNaker 2 prototype improves power and energy consumption by two orders of magnitude.
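    The abstract describes DEEP R only at a high level: connections are continuously rewired while the number of active connections stays fixed. The NumPy sketch below shows one rewiring step in that spirit (noisy gradient descent on sign-constrained parameters, pruning on sign change, random reactivation of dormant connections). It is a minimal sketch of the published DEEP R update, not the SpiNNaker 2 implementation; the function name, hyperparameter values, and layer shape are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def deep_r_step(theta, sign, grad_w, lr=0.05, l1=1e-4, temperature=1e-5):
    """One DEEP R rewiring step on a layer's weight matrix (a sketch).

    theta  : connection parameters; a connection is active iff theta > 0
    sign   : fixed +1/-1 per connection; effective weight w = sign * theta
    grad_w : loss gradient with respect to the effective weights w

    Active parameters take a noisy SGD step with L1 shrinkage; parameters
    driven below zero become dormant (weight exactly 0). The number of
    active connections is then restored by waking randomly chosen dormant
    connections, so sparsity stays constant during training.
    """
    active = theta > 0
    n_active = active.sum()

    # Noisy, L1-regularized update applied only to active connections.
    noise = np.sqrt(2.0 * lr * temperature) * rng.standard_normal(theta.shape)
    theta = np.where(active, theta - lr * (sign * grad_w + l1) + noise, theta)

    # Parameters that crossed zero are pruned; wake dormant ones to
    # keep the total count of active connections fixed.
    active = theta > 0
    deficit = int(n_active - active.sum())
    if deficit > 0:
        dormant_idx = np.flatnonzero(~active)
        wake = rng.choice(dormant_idx, size=deficit, replace=False)
        theta.flat[wake] = 1e-12  # reactivate with a near-zero weight
    return theta

# Toy usage: a 100x100 layer held at ~1.3% connectivity, as in the paper.
theta = np.full((100, 100), -1.0)
theta.flat[rng.choice(theta.size, 130, replace=False)] = 0.01
sign = rng.choice([-1.0, 1.0], size=theta.shape)
theta = deep_r_step(theta, sign, grad_w=rng.standard_normal(theta.shape))
print("active connections:", (theta > 0).sum())
```

    Because pruned weights are exactly zero and the active count is constant, only the fixed budget of active connections ever needs to be stored, which is what lets training fit within the 64 KB per-core memory described above.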

    The capabilities of nanoelectronic 2-D materials for bio-inspired computing and drug delivery indicate their significance in modern drug design

    No full text