    An overview of artificial intelligence applications for power electronics

    Classification using Dopant Network Processing Units

    AI/ML Algorithms and Applications in VLSI Design and Technology

    An evident challenge ahead for the integrated-circuit (IC) industry in the nanometer regime is the development of methods that can reduce the design complexity arising from growing process variations and curtail the turnaround time of chip manufacturing. Conventional methodologies for such tasks are largely manual and therefore time-consuming and resource-intensive. In contrast, the unique learning strategies of artificial intelligence (AI) provide numerous automated approaches for handling complex and data-intensive tasks in very-large-scale integration (VLSI) design and testing. Employing AI and machine learning (ML) algorithms in VLSI design and manufacturing reduces the time and effort needed to understand and process data within and across abstraction levels via automated learning algorithms. This, in turn, improves IC yield and reduces manufacturing turnaround time. This paper thoroughly reviews the AI/ML approaches previously introduced for VLSI design and manufacturing. Moreover, we discuss the scope of future AI/ML applications at various abstraction levels to revolutionize the field of VLSI design, aiming for high-speed, highly intelligent, and efficient implementations.

    Multi-objective Optimization of the Fast Neutron Source by Machine Learning

    The design and optimization of nuclear systems can be a difficult task, often involving prohibitively large design spaces as well as competing and complex objectives and constraints. When faced with such an optimization, the task of designing an algorithm falls to engineers, who must apply engineering knowledge and experience to reduce the scope of the optimization to a manageable size. When sufficient computational resources are available, unsupervised optimization can be used instead. The optimization of the Fast Neutron Source (FNS) at the University of Tennessee is presented as an example of the methodologies developed in this work. The FNS will be a platform for subcritical nuclear experiments that will reduce specific nuclear data uncertainties of next-generation reactor designs. It features a coupled fast-thermal design with interchangeable components around an experimental volume in which a neutron spectrum derived from a next-generation reactor design will be produced. Two complete genetic-algorithm optimizations of an FNS experiment targeting a sodium fast reactor neutron spectrum are presented. The first is a standard implementation of a genetic algorithm. The second uses neural-network-based surrogate models to produce better FNS designs; these surrogate models are trained during the execution of the algorithm and gradually learn to replace the expensive objective functions. The second optimization outperformed the first: it increased the total neutron flux by 24%, raised the maximum similarity of the neutron flux spectrum, as measured by representativity, from 0.978 to 0.995, and produced configurations that were more sensitive to material insertions by +124 pcm and -217 pcm. In addition to the genetic-algorithm optimizations, a second optimization methodology using directly calculated derivatives is presented. The methods explored in this work show how complex nuclear systems can be optimized using both gradient-informed and gradient-uninformed methods, augmented by neural-network surrogate models and directly calculated derivatives, which allow for better optimization outcomes. These methods are applied to several variations of FNS experiments and are shown to produce a more robust suite of potential designs given similar computational resources.
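    The surrogate-assisted loop this abstract describes can be illustrated with a short sketch. The Python below is not the FNS code: the toy objective stands in for an expensive neutron-transport calculation, and the population size, mutation scheme, screening threshold, and scikit-learn MLPRegressor surrogate are all illustrative assumptions. It shows only the core pattern the abstract names: expensive evaluations accumulate in an archive, a neural network is retrained on that archive during the run, and once enough data exist the surrogate screens candidates so the expensive objective is evaluated less often.

```python
# Minimal sketch of a surrogate-assisted genetic algorithm (not the FNS code).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
DIM, POP, GENS = 8, 40, 30          # illustrative sizes, not from the paper

def expensive_objective(x):
    # Stand-in for an expensive physics calculation (e.g. a transport run).
    return -np.sum((x - 0.5) ** 2)

def mutate(x):
    # Gaussian perturbation, clipped to the unit-box design space.
    return np.clip(x + rng.normal(0.0, 0.1, size=x.shape), 0.0, 1.0)

pop = rng.random((POP, DIM))
archive_x, archive_y = [], []        # every expensive evaluation seen so far
surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000)

for gen in range(GENS):
    # Generate offspring by mutating the current population.
    children = np.array([mutate(p) for p in pop])
    candidates = np.vstack([pop, children])

    if len(archive_y) >= 100:
        # Surrogate screening: rank candidates cheaply, then spend the
        # expensive evaluations only on the predicted-best half.
        pred = surrogate.predict(candidates)
        candidates = candidates[np.argsort(pred)[::-1][:POP]]

    scores = np.array([expensive_objective(c) for c in candidates])
    archive_x.extend(candidates)
    archive_y.extend(scores)

    # Retrain the surrogate during execution, as the abstract describes.
    surrogate.fit(np.array(archive_x), np.array(archive_y))

    # Truncation selection keeps the best POP individuals.
    pop = candidates[np.argsort(scores)[::-1][:POP]]

print("best objective found:", max(archive_y))
```

    In this pattern the surrogate never fully replaces the true objective; it only decides which candidates merit an expensive evaluation, which is one common way a surrogate can "gradually learn to replace" the objective during a run.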