
    An Efficient Integrated Circuit Simulator And Time Domain Adjoint Sensitivity Analysis

    In this paper, we revisit time-domain adjoint sensitivity analysis from a circuit-theoretic perspective and state an efficient solution clearly at the device level. The key is the linearization of the energy-storage elements (e.g., capacitances and inductances) and of the nonlinear memoryless elements (e.g., MOS and BJT DC characteristics) at each time step. Due to the finite precision of computation, numerical errors that accumulate across time steps can arise in the nonlinear elements.
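    As a hedged illustration of the per-step linearization described above, the sketch below shows the standard backward-Euler companion model of a capacitor: i = C·dv/dt is replaced at each time step by an equivalent conductance plus a history source, so each step reduces to solving a linear resistive network. The function name and values are illustrative, not taken from the paper.

        # Sketch: backward-Euler companion model of a capacitor (illustrative,
        # not the paper's code). i = C*dv/dt is discretized as
        # i_{n+1} = (C/h)*v_{n+1} - (C/h)*v_n: a conductance plus a history source.

        def capacitor_companion(C, h, v_prev):
            """Return (G_eq, I_eq) so that i = G_eq*v - I_eq at the new time point."""
            G_eq = C / h              # equivalent conductance stamped into the MNA matrix
            I_eq = (C / h) * v_prev   # history current stamped into the right-hand side
            return G_eq, I_eq

        # Example: 1 uF capacitor, 1 us step, previous node voltage 2.5 V
        G, I = capacitor_companion(1e-6, 1e-6, 2.5)   # G = 1.0 S, I = 2.5 A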

    Numerical Derivative-based Flexible Integration Algorithm for Power Electronic Systems Simulation Considering Nonlinear Components

    Simulation is an efficient tool in the design and control of power electronic systems. However, quick and accurate simulation of such systems is still challenging, especially when they contain a large number of switches and state variables. Conventional general-purpose integration algorithms can handle nonlinearity within systems but are inefficient at handling the piecewise characteristics of power electronic switches. While some specialized algorithms can adapt to the piecewise characteristics, most of them require the system to be piecewise linear. In this article, a numerical derivative-based flexible integration algorithm is proposed. The algorithm adapts to the piecewise characteristics caused by switches and has no difficulty when nonlinear non-switching components are present in the circuit. It consists of a recursive numerical scheme that obtains high-order time derivatives of nonlinear components and a decoupling strategy that further increases computational efficiency. The proposed method is applied to a motor drive system and a large-scale power conversion system (PCS), and its accuracy and efficiency are verified against experimental waveforms and simulated results from commercial software. It demonstrates several-fold acceleration compared to multiple commonly used algorithms in Simulink.
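    The recursive scheme for high-order time derivatives of nonlinear components can be illustrated with the standard Taylor-coefficient recurrence below (a generic sketch assuming an exponential, diode-style nonlinearity; the article's exact scheme may differ): since e(t) = exp(u(t)) satisfies e' = u'·e, its coefficients follow from k·e_k = sum_{j=1..k} j·u_j·e_{k-j}.

        import math

        # Generic sketch of recursive derivative propagation through a nonlinear
        # component (illustrative; the article's exact scheme may differ).
        # If u(t) = sum_j u_j*t^j and e(t) = exp(u(t)) = sum_k e_k*t^k, then
        # e' = u'*e yields the recurrence k*e_k = sum_{j=1..k} j*u_j*e_{k-j}.

        def exp_taylor(u):
            """Taylor coefficients of exp(u(t)) from the coefficients of u(t)."""
            e = [math.exp(u[0])]
            for k in range(1, len(u)):
                e.append(sum(j * u[j] * e[k - j] for j in range(1, k + 1)) / k)
            return e

        # Example: u(t) = 0.6 + 40*t, e.g. a ramping diode voltage scaled by 1/Vt;
        # exp_taylor yields the current's Taylor coefficients to arbitrary order.
        print(exp_taylor([0.6, 40.0, 0.0, 0.0])[:3])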

    Circuit simulation via matrix exponential method for stiffness handling and parallel processing

    We propose an advanced matrix exponential method (MEXP) to handle the transient simulation of stiff circuits and to enable parallel simulation. We analyze the rapid decay of fast-transition elements in the Krylov subspace approximation of the matrix exponential and leverage this scaling effect to take larger steps in the later stages of time marching. Moreover, the matrix-vector multiplication and restarting scheme in our method provide better scalability and parallelizability than implicit methods. The performance of ordinary MEXP is improved by up to 4.8 times for stiff cases, and the parallel implementation yields another 11 times speedup. Our approach is demonstrated to be a viable tool for ultra-large circuit simulations (with 1.6M to 12M nodes) that are not feasible with existing implicit methods. © 2012 ACM.
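    The Krylov-subspace kernel of MEXP can be sketched as below (a textbook Arnoldi projection using a dense NumPy test matrix for brevity; the paper's restarted, parallel variant is more elaborate): exp(hA)v is approximated through a small upper-Hessenberg matrix built from m matrix-vector products.

        import numpy as np
        from scipy.linalg import expm

        # Sketch of the Krylov approximation exp(h*A) @ v ~ beta * V_m @ expm(h*H_m) @ e1
        # (plain Arnoldi; the paper's restarted, parallel implementation is more involved).

        def krylov_expm(A, v, h, m=30):
            beta = np.linalg.norm(v)
            V = np.zeros((len(v), m + 1))
            H = np.zeros((m + 1, m))
            V[:, 0] = v / beta
            for j in range(m):
                w = A @ V[:, j]                  # one matrix-vector product per step
                for i in range(j + 1):           # modified Gram-Schmidt orthogonalization
                    H[i, j] = V[:, i] @ w
                    w = w - H[i, j] * V[:, i]
                H[j + 1, j] = np.linalg.norm(w)
                if H[j + 1, j] < 1e-12:          # happy breakdown: subspace is exact
                    m = j + 1
                    break
                V[:, j + 1] = w / H[j + 1, j]
            return beta * V[:, :m] @ expm(h * H[:m, :m])[:, 0]

        # Example: a small stiff test system dx/dt = A x
        A = np.diag([-1.0, -100.0, -10000.0])
        print(krylov_expm(A, np.ones(3), h=0.01, m=3))   # ~ expm(0.01*A) @ ones(3)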

    A neural network model of normal and abnormal learning and memory consolidation

    The amygdala and hippocampus interact with thalamocortical systems to regulate cognitive-emotional learning, and lesions of amygdala, hippocampus, thalamus, and cortex have different effects depending on the phase of learning when they occur. In examining eyeblink conditioning data, several questions arise: Why is the hippocampus needed for trace conditioning, where there is a temporal gap between the conditioned stimulus offset and the onset of the unconditioned stimulus, but not for delay conditioning, where stimuli temporally overlap and co-terminate? Why do amygdala lesions made before or immediately after training decelerate conditioning while those made later have no impact on conditioned behavior? Why do thalamic lesions degrade trace conditioning more than delay conditioning? Why do hippocampal lesions degrade recent learning but not temporally remote learning? Why do cortical lesions degrade temporally remote learning, and cause amnesia, but not recent or post-lesion learning? How is temporally graded amnesia caused by ablation of medial prefrontal cortex? How are mechanisms of motivated attention and the emergent state of consciousness linked during conditioning? How do neurotrophins, notably Brain-Derived Neurotrophic Factor (BDNF), influence memory formation and consolidation? A neural model, called neurotrophic START, or nSTART, proposes answers to these questions. The nSTART model synthesizes and extends key principles, mechanisms, and properties of three previously published brain models of normal behavior. These three models describe aspects of how the brain can learn to categorize objects and events in the world; how the brain can learn the emotional meanings of such events, notably rewarding and punishing events, through cognitive-emotional interactions; and how the brain can learn to adaptively time attention paid to motivationally important events, and when to respond to these events, in a context-appropriate manner. The model clarifies how hippocampal adaptive timing mechanisms and BDNF may bridge the gap between stimuli during trace conditioning and thereby allow thalamocortical and corticocortical learning to take place and be consolidated. The simulated data arise as emergent properties of several brain regions interacting together. The model overcomes problems of alternative memory models, notably models wherein memories that are initially stored in hippocampus move to the neocortex during consolidation.

    Time-domain analysis of large-scale circuits by matrix exponential method with adaptive control

    We propose an explicit numerical integration method based on the matrix exponential operator for transient analysis of large-scale circuits. Because the differential equation is solved analytically, the factor limiting the maximum time step shifts largely from stability and Taylor truncation error to the error in computing the matrix exponential operator. We utilize Krylov subspace projection to reduce the computational complexity of the matrix exponential operator. We also devise a prediction-correction scheme tailored to the matrix exponential approach to dynamically adjust the step size and the order of the Krylov subspace approximation. Numerical experiments show the advantages of the proposed method compared with the implicit trapezoidal method. © 1982-2012 IEEE.
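    The step-size half of the prediction-correction scheme can be illustrated with a standard error-based controller (a generic sketch; the paper additionally adapts the order of the Krylov subspace approximation, which is omitted here).

        # Generic sketch of error-based step-size control (illustrative; the
        # paper adapts both the step h and the Krylov order m, omitted here).

        def next_step(h, err, tol, order, safety=0.9, grow=5.0, shrink=0.2):
            """Scale the time step from a local error estimate (standard controller)."""
            if err == 0.0:
                return h * grow
            factor = safety * (tol / err) ** (1.0 / (order + 1))
            return h * min(grow, max(shrink, factor))

        # Example: error 10x above tolerance with a 2nd-order estimate -> step shrinks
        print(next_step(h=1e-9, err=1e-5, tol=1e-6, order=2))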

    Neural Models of Temporally Organized Behaviors: Handwriting Production and Working Memory

    Advanced Research Projects Agency (ONR N00014-92-J-4015); Office of Naval Research (N00014-91-J-4100, N00014-92-J-1309)

    A practical regularization technique for modified nodal analysis in large-scale time-domain circuit simulation

    Fast full-chip time-domain simulation calls for advanced numerical integration techniques capable of handling systems with (tens of) millions of variables resulting from modified nodal analysis (MNA). The general MNA formulation, however, leads to a differential algebraic equation (DAE) system with a singular coefficient matrix, for which most explicit methods, which usually offer better scalability than implicit ones, are not readily applicable. In this paper, we develop a practical two-stage strategy to remove the singularity in the MNA equations of large-scale circuit networks. A topological index reduction is first applied to reduce the DAE index of the MNA equation to one. The index-1 system is then fed into a systematic process that eliminates the excess variables in one run, leading to a nonsingular system. The whole regularization process is devised with emphasis on exact equivalence, low complexity, and sparsity preservation, and is thus well suited to handling extremely large circuits. © 2012 IEEE.
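    The variable-elimination stage can be sketched in dense linear-algebra form (an illustrative Schur-complement reduction, assuming the index-1 system has already been permuted so the singular part of the coefficient matrix is isolated; the paper's topological, sparsity-preserving procedure is more refined).

        import numpy as np

        # Illustrative elimination of algebraic variables from an index-1 system
        # C x' + G x = b (a dense stand-in for the paper's topological,
        # sparsity-preserving procedure). Assumes variables are permuted so that
        # C = [[C11, 0], [0, 0]] with C11 and G22 nonsingular.

        def regularize(G11, G12, G21, G22, b1, b2):
            """Eliminate x2 to obtain the nonsingular ODE C11 x1' + Gr x1 = br."""
            G21s = np.linalg.solve(G22, G21)    # G22^{-1} @ G21
            b2s = np.linalg.solve(G22, b2)      # G22^{-1} @ b2
            Gr = G11 - G12 @ G21s               # Schur complement keeps the dynamics
            br = b1 - G12 @ b2s
            return Gr, br                       # recover x2 = G22^{-1} (b2 - G21 @ x1)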

    Software-controlled processor speed setting for low-power streaming multimedia


    CPU-less robotics: distributed control of biomorphs

    Traditional robotics revolves around the microprocessor. All well-known demonstrations of sensory-guided motor control, such as jugglers and mobile robots, require at least one CPU. Recently, the availability of fast CPUs has made real-time sensory-motor control possible; however, problems with high power consumption and lack of autonomy remain. In fact, the best examples of real-time robotics are usually tethered or require large batteries. We present a new paradigm for robotics control that uses no explicit CPU. We use computational sensors that are directly interfaced with adaptive actuation units. The units perform motor control and have learning capabilities. This architecture distributes computation over the entire body of the robot, in every sensor and actuator. This is clearly similar to biological sensory-motor systems, which some researchers have tried to model in software, again using CPUs. We demonstrate this idea with an adaptive locomotion controller chip. The locomotory controller of walking, running, swimming, and flying animals is based on a Central Pattern Generator (CPG). CPGs are modeled as systems of coupled nonlinear oscillators that control the muscles responsible for movement. Here we describe an adaptive CPG model, implemented in a custom VLSI chip, which is used to control an under-actuated and asymmetric robotic leg.
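    A minimal software sketch of the coupled-oscillator CPG idea (two Kuramoto-style phase oscillators locked in antiphase; the chip described above implements an adaptive analog variant, so this is illustrative only):

        import numpy as np

        # Minimal two-unit CPG as coupled phase oscillators; the coupling term
        # pulls the phase difference toward `target` (antiphase here), producing
        # the alternating rhythm that would drive a leg's flexor and extensor.

        def simulate_cpg(omega=2 * np.pi, K=5.0, target=np.pi, dt=1e-3, steps=5000):
            theta = np.array([0.0, 0.3])          # initial phases of the two units
            trace = []
            for _ in range(steps):
                d = theta[1] - theta[0]
                dtheta = np.array([omega + K * np.sin(d - target),
                                   omega - K * np.sin(d - target)])
                theta = theta + dt * dtheta       # forward-Euler phase update
                trace.append(np.sin(theta))       # rhythmic motor drive signals
            return np.array(trace)

        out = simulate_cpg()                      # outputs settle into antiphase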