8 research outputs found

    A Unified Learning Platform for Dynamic Frequency Scaling in Pipelined Processors

    A machine learning (ML) design framework is proposed for dynamically adjusting the clock frequency based on the propagation delay of individual instructions. A random forest model is trained to classify propagation delays in real time, utilizing the current operation type, current operands, and computation history as ML features. The trained model is implemented in Verilog as an additional pipeline stage within a baseline processor. The modified system is simulated at the gate level in 45 nm CMOS technology, exhibiting a speed-up of 68% and an energy reduction of 37% with coarse-grained ML classification. A speed-up of 95% is demonstrated with finer granularities at additional energy cost.
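
    As a rough illustration of the training side of such a framework (not the authors' code), the sketch below fits a small random forest to classify instructions into coarse delay bins from the kinds of features named in the abstract. The feature encoding, synthetic data, delay model, and scikit-learn usage are all assumptions for illustration; in the paper the trained model is subsequently realized in Verilog as a pipeline stage.

```python
# Illustrative sketch, not the paper's implementation: offline training of a
# small random-forest delay classifier on assumed instruction features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 10_000

# Assumed ML features: current operation type, current operands,
# and a one-step computation history (previous operation type).
op_type   = rng.integers(0, 8, size=n)          # e.g. ADD, SUB, MUL, SHIFT, ...
operand_a = rng.integers(0, 2**16, size=n)
operand_b = rng.integers(0, 2**16, size=n)
prev_op   = rng.integers(0, 8, size=n)

X = np.column_stack([op_type, operand_a, operand_b, prev_op])

# Hypothetical labels: coarse delay classes standing in for measured
# gate-level propagation delays (0 = short path, 1 = long path).
delay = 0.2 * op_type + 1e-5 * (operand_a | operand_b) + rng.normal(0.0, 0.1, n)
y = (delay > np.median(delay)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=20, max_depth=5, random_state=0)
clf.fit(X_tr, y_tr)
print("held-out delay-class accuracy:", clf.score(X_te, y_te))
```

    Keeping the forest small and shallow, as in this sketch, is one plausible way to leave the eventual hardware realization of the classifier tractable.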

    Exploiting Dual-Gate Ambipolar CNFETs for Scalable Machine Learning Classification

    Ambipolar carbon-nanotube-based field-effect transistors (AP-CNFETs) exhibit unique electrical characteristics, such as tri-state operation and bi-directionality, enabling complex and reconfigurable computing systems. In this paper, AP-CNFETs are used to design a mixed-signal machine learning (ML) classifier. The classifier is designed in SPICE with a feature size of 15 nm and operates at 250 MHz. The system is demonstrated on the MNIST digit dataset, yielding 90% accuracy and no accuracy degradation as compared with the classification of this dataset in Python. The system also exhibits lower power consumption and a smaller physical size than state-of-the-art CMOS and memristor-based mixed-signal classifiers.
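
    The abstract compares the mixed-signal classifier against a software classification of the same dataset in Python. A minimal software baseline of that kind might look like the sketch below; scikit-learn's bundled 8x8 digits set is used here as a lightweight stand-in for full MNIST, and the choice of a logistic-regression model is an assumption, not the paper's reference setup.

```python
# Illustrative software baseline (not the authors' code): the kind of Python
# classification run a mixed-signal classifier's accuracy is compared against.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)   # 8x8 digits as a stand-in for MNIST
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# A plain linear classifier is enough to establish a software reference point.
clf = LogisticRegression(max_iter=2000)
clf.fit(X_tr, y_tr)
print("software baseline accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```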

    Leveraging Independent Double-Gate FinFET Devices for Machine Learning Classification

    No full text

    Exploiting Machine Learning Against On-Chip Power Analysis Attacks: Tradeoffs and Design Considerations

    No full text
    Modern power analysis attacks (PAAs) and existing countermeasures pose unique challenges to the design of simultaneously secure, power-efficient, and high-performance ICs. In a typical PAA, power information is collected with a monitoring circuit connected to the compromised device. The non-typical voltage variations induced on a power distribution network (PDN) by such malicious probing are sensed with on-chip sensors and exploited in this paper for detecting PAAs in real time using statistical analysis. A closed-form expression for the voltage variations caused by malicious probing is provided. Guidelines with respect to the PDN characteristics and the number of sensors are proposed for securing power delivery. The PAA detection system is designed in a 45-nm standard CMOS process. Based on the simulation results, a PAA on an IBM benchmark microprocessor is detected with an accuracy of 88% using 30 on-chip sensors. Power overheads of 0.34% and 14.3% are demonstrated in, respectively, the IBM microprocessor and a typical advanced encryption standard (AES) system. In a practical cryptographic device, security-sensitive PDN regions can be identified, significantly reducing the number of on-chip sensors.
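
    The detection idea, as described, is to sense non-typical PDN voltage variations with on-chip sensors and flag probing through statistical analysis. The sketch below is a loose, software-only illustration of that idea, not the paper's detector: the 30-sensor count follows the abstract, while the noise levels, the probing-induced voltage offset, and the z-score test are assumptions.

```python
# Illustrative sketch (assumed numbers throughout): flagging malicious probing
# by comparing windowed on-chip sensor readings against an attack-free baseline.
import numpy as np

rng = np.random.default_rng(1)
N_SENSORS, N_CAL = 30, 2000

# Attack-free calibration of per-sensor supply-noise statistics (volts).
calibration = 1.0 + 0.005 * rng.standard_normal((N_SENSORS, N_CAL))
mu, sigma = calibration.mean(axis=1), calibration.std(axis=1)

def paa_score(window):
    """Mean per-sensor z-score of a window of readings (N_SENSORS x W samples)."""
    se = sigma / np.sqrt(window.shape[1])
    return float(np.mean(np.abs(window.mean(axis=1) - mu) / se))

# A probing circuit loading the PDN shifts the sensed voltages slightly.
clean  = 1.0 + 0.005 * rng.standard_normal((N_SENSORS, 200))
probed = clean - 0.002          # hypothetical IR drop caused by the probe

THRESHOLD = 3.0                 # assumed decision threshold
for name, window in (("clean", clean), ("probed", probed)):
    score = paa_score(window)
    print(f"{name}: score = {score:.1f} ->", "attack" if score > THRESHOLD else "normal")
```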

    A Machine Learning Pipeline Stage for Adaptive Frequency Adjustment

    No full text
    A machine learning (ML) design framework is proposed for adaptively adjusting the clock frequency based on the propagation delay of individual instructions. A random forest model is trained to classify propagation delays in real time, utilizing the current operation type, current operands, and computation history as ML features. The trained model is implemented in Verilog as an additional pipeline stage within a baseline processor. The modified system is experimentally tested at the gate level in 45 nm CMOS technology, exhibiting a speedup of 70% and an energy reduction of 30% with coarse-grained ML classification. A speedup of 89% is demonstrated with finer granularities, together with a 15.5% reduction in energy consumption.
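
    Complementing the training sketch above, the snippet below illustrates the other half of such a scheme: mapping a per-instruction delay class, as would be predicted by the ML pipeline stage, to a clock period and estimating the speedup over a fixed worst-case clock. The delay bins, class mix, and periods are hypothetical and are not taken from the paper.

```python
# Illustrative back-of-the-envelope model of adaptive frequency adjustment;
# all delay values and the class distribution below are assumptions.
import numpy as np

rng = np.random.default_rng(2)

# Assumed critical-path delay (ns) per coarse delay class and the fixed
# worst-case clock period used by the unmodified baseline processor.
period_for_class = {0: 0.6, 1: 1.1}    # short-path vs. long-path instructions
worst_case_period = 1.1

# Hypothetical stream of per-instruction class predictions from the ML stage.
predicted = rng.choice([0, 1], size=1_000_000, p=[0.7, 0.3])

adaptive_time = np.where(predicted == 0,
                         period_for_class[0], period_for_class[1]).sum()
baseline_time = worst_case_period * predicted.size
print(f"estimated speedup over the fixed clock: {baseline_time / adaptive_time - 1:.0%}")
```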