
    Surrogate based Optimization and Verification of Analog and Mixed Signal Circuits

    Nonlinear Analog and Mixed Signal (AMS) circuits are complex and expensive to design and verify. Deeper technology scaling has made these designs susceptible to noise and process variations, a growing concern because of degraded circuit performance and the risk of design failures. In fact, due to process parameters, AMS circuits such as phase-locked loops may exhibit chaotic behavior that can be confused with noisy behavior. To design and verify circuits, current industrial flows rely heavily on simulation-based verification and knowledge-based optimization techniques. However, such techniques lack the mathematical rigor needed to keep up with growing design constraints and are computationally intractable. Given these barriers, new techniques are needed to ensure that circuits are robust and optimized despite process variations and possible chaotic behavior. In this thesis, we develop a methodology for the optimization and verification of AMS circuits that advances three frontiers in the variability-aware design flow. The first frontier is a robust circuit sizing methodology in which a multi-level circuit optimization approach is proposed. The optimization is conducted in two phases. First, a global sizing phase, powered by a regional sensitivity analysis, quickly scouts the feasible design space and narrows the optimization search. Second, a nominal sizing step based on space mapping between two AMS circuit models at different levels of abstraction is developed to break the re-design loop without performance penalties. The second frontier concerns a dynamics verification scheme for the circuit behavior (i.e., distinguishing chaotic from stochastic behavior). It is based on a surrogate generation approach and a statistical proof-by-contradiction technique using a Gaussian kernel measure in the state space domain. The last frontier focuses on quantitative verification approaches to predict parametric yield for both single and multiple circuit performance constraints. The single-performance approach is based on a combination of geometrical intertwined reachability analysis and a non-parametric statistical verification scheme. The multiple-performance approach involves process parameter reduction, state space based pattern matching, and multiple hypothesis testing procedures. The performance of the proposed methodology is demonstrated on several benchmark analog and mixed signal circuits. The optimization approach greatly improves computational efficiency while locating a design point comparable to or better than those found by other approaches, and our verification methods achieve speedups of many orders of magnitude over existing techniques.
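
    To make the surrogate-based optimization idea concrete, here is a minimal Python sketch of a surrogate-assisted sizing loop under toy assumptions: expensive_sim is a hypothetical stand-in for a SPICE-level performance evaluation, and the quadratic surrogate and two sizing parameters are illustrative choices, not the thesis's multi-level space-mapping flow.

```python
# Minimal sketch of a surrogate-assisted sizing loop (illustrative only).
# expensive_sim stands in for a SPICE-level performance evaluation; it is a
# hypothetical analytic placeholder, not a real circuit model.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def expensive_sim(w):
    # Hypothetical performance metric vs. two sizing parameters.
    return (w[0] - 1.2) ** 2 + 2.0 * (w[1] - 0.8) ** 2 + 0.05 * np.sin(5 * w[0])

# 1) Sample the feasible design space (stand-in for the global scouting phase).
X = rng.uniform(0.2, 2.0, size=(40, 2))
y = np.array([expensive_sim(x) for x in X])

# 2) Fit a cheap quadratic surrogate by least squares.
def features(x):
    return np.array([1.0, x[0], x[1], x[0] ** 2, x[1] ** 2, x[0] * x[1]])

A = np.array([features(x) for x in X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# 3) Optimize the cheap surrogate instead of the expensive simulator.
res = minimize(lambda x: features(x) @ coef, x0=[1.0, 1.0],
               bounds=[(0.2, 2.0), (0.2, 2.0)])
print("surrogate optimum:", res.x, "verified cost:", expensive_sim(res.x))
```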

    Tensor Computation: A New Framework for High-Dimensional Problems in EDA

    Many critical EDA problems suffer from the curse of dimensionality, i.e., the rapidly scaling computational burden caused by a large number of parameters and/or unknown variables. This phenomenon may be caused by multiple spatial or temporal factors (e.g., 3-D field-solver discretizations and multi-rate circuit simulation), nonlinearity of devices and circuits, a large number of design or optimization parameters (e.g., full-chip routing/placement and circuit sizing), or extensive process variations (e.g., variability/reliability analysis and design for manufacturability). The computational challenges generated by such high-dimensional problems are generally hard to handle efficiently with traditional EDA core algorithms based on matrix and vector computation. This paper presents "tensor computation" as an alternative general framework for the development of efficient EDA algorithms and tools. A tensor is a high-dimensional generalization of a matrix and a vector, and is a natural choice for both storing and efficiently solving high-dimensional EDA problems. This paper gives a basic tutorial on tensors, demonstrates some recent examples of EDA applications (e.g., nonlinear circuit modeling and high-dimensional uncertainty quantification), and suggests further open EDA problems where the use of tensor computation could be advantageous. Comment: 14 figures. Accepted by IEEE Trans. CAD of Integrated Circuits and Systems.
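
    As a concrete illustration of why tensors help with storage, the following sketch (a toy 3-way example using numpy, not taken from the paper) builds a rank-R CP representation and compares the number of stored values against the dense tensor.

```python
# Minimal sketch: a rank-R CP (canonical polyadic) representation of a
# 3-way tensor, illustrating the storage savings the paper refers to.
import numpy as np

n, R = 50, 3                       # mode size and CP rank (toy values)
A = np.random.rand(n, R)           # factor matrices, one per mode
B = np.random.rand(n, R)
C = np.random.rand(n, R)

# Full tensor T[i, j, k] = sum_r A[i, r] * B[j, r] * C[k, r]
T = np.einsum('ir,jr,kr->ijk', A, B, C)

dense_storage = T.size                      # n**3 entries
cp_storage = A.size + B.size + C.size       # 3 * n * R entries
print(f"dense: {dense_storage} values, CP factors: {cp_storage} values")
```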

    Stochastic Testing Method for Transistor-Level Uncertainty Quantification Based on Generalized Polynomial Chaos

    Uncertainties have become a major concern in integrated circuit design. In order to avoid the huge number of repeated simulations in conventional Monte Carlo flows, this paper presents an intrusive spectral simulator for statistical circuit analysis. Our simulator employs the recently developed generalized polynomial chaos expansion to perform uncertainty quantification of nonlinear transistor circuits with both Gaussian and non-Gaussian random parameters. We modify the nonintrusive stochastic collocation (SC) method and develop an intrusive variant called the stochastic testing (ST) method. Compared with the popular intrusive stochastic Galerkin (SG) method, the coupled deterministic equations resulting from our proposed ST method can be solved in a decoupled manner at each time point. At the same time, ST requires fewer samples and allows more flexible time step size control than directly using a nonintrusive SC solver. These two properties make ST more efficient than SG and existing SC methods, and more suitable for time-domain circuit simulation. Simulation results for several digital, analog and RF circuits are reported. Since our algorithm is based on generic mathematical models, the proposed ST algorithm can be applied to many other engineering problems.
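
    The gPC idea behind the paper can be illustrated with a small projection example. Note that this sketch uses a simple nonintrusive quadrature-based projection rather than the paper's intrusive ST solver, and circuit_output is a hypothetical placeholder response with one standard-Gaussian parameter.

```python
# Minimal sketch of a generalized polynomial chaos (gPC) expansion for a
# scalar output with one standard-Gaussian parameter, via nonintrusive
# quadrature (illustrative; the paper's ST method is an intrusive variant).
import numpy as np
from numpy.polynomial import hermite_e as H
from math import factorial, sqrt, pi

def circuit_output(xi):
    # Hypothetical stand-in for a transistor-level response vs. parameter xi.
    return np.exp(0.3 * xi) + 0.1 * xi ** 2

order = 4
nodes, weights = H.hermegauss(20)          # Gauss-Hermite(E) quadrature rule

# Projection onto probabilists' Hermite polynomials: c_n = E[f(xi) He_n(xi)] / n!
coeffs = []
for n in range(order + 1):
    basis = H.hermeval(nodes, [0] * n + [1])          # He_n at the nodes
    c_n = np.sum(weights * circuit_output(nodes) * basis) / (sqrt(2 * pi) * factorial(n))
    coeffs.append(c_n)

# Mean and variance follow directly from the gPC coefficients.
mean = coeffs[0]
var = sum(factorial(n) * coeffs[n] ** 2 for n in range(1, order + 1))
print("gPC mean:", mean, "variance:", var)
```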

    A 1Gsample/s 6-bit flash A/D converter with a combined chopping and averaging technique for reduced distortion in 0.18 µm CMOS

    Hard disk drive applications require a high Spurious-Free Dynamic Range (SFDR), 6-bit Analog-to-Digital Converter (ADC) at conversion rates of 1 GHz and beyond. This work proposes a robust, fault-tolerant scheme to achieve high SFDR in an averaging flash A/D converter using comparator chopping. Chopping of comparators in a flash A/D converter had not previously been implemented because of the difficulty of building multiple, uncorrelated, high-speed random number generators. This work proposes a novel array of uncorrelated, truly binary random number generators operating at 1 GHz to chop all comparators. Chopping randomizes the residual offset left after averaging, further extending the dynamic range of the converter. This enables higher accuracy and a lower bit-error rate for high-speed disk-drive read channels. Power consumption and area are reduced because of the relaxed design requirements for the same linearity. The technique has been verified in Matlab simulations for a 6-bit 1 Gsample/s flash ADC in the presence of process gradients with non-zero mean offsets as high as 60 mV and potentially serious spot offset errors as high as 1 V for a 2 V peak-to-peak input signal. The proposed technique exhibits an improvement of over 15 dB compared to purely averaging flash converters in all cases. The circuit-level simulation results, for a 1 V peak-to-peak input signal, demonstrate superior performance. The reported ADC was fabricated in a TSMC 0.18 µm CMOS process. It occupies 8.79 mm² and consumes about 400 mW from a 1.8 V power supply at 1 GHz. The targeted SFDR performance for the fabricated chip is at least 45 dB for a 256 MHz input sine wave sampled at 1 GHz, about a 10 dB improvement over the 6-bit flash ADCs in the literature.
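
    A toy numerical illustration of the chopping argument (not the fabricated design) is sketched below: a static comparator offset multiplied by an uncorrelated ±1 chopping sequence averages toward zero, whereas an un-chopped offset remains a fixed error.

```python
# Toy illustration of why chopping helps: a comparator's static offset is
# multiplied by a random +/-1 each sample, so its time-averaged contribution
# shrinks, whereas without chopping the offset stays as a fixed error.
import numpy as np

rng = np.random.default_rng(1)
n_samples = 10_000
offset = 0.060                      # 60 mV static offset, as in the abstract

chop = rng.choice([-1.0, 1.0], size=n_samples)   # uncorrelated chopping sequence
residual_chopped = np.mean(offset * chop)        # averages toward zero
residual_static = offset                         # unchanged without chopping

print(f"static residual: {residual_static*1e3:.1f} mV, "
      f"chopped residual: {abs(residual_chopped)*1e3:.2f} mV")
```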

    On the Application of PSpice for Localised Cloud Security

    The work reported in this thesis commenced with a review of methods for creating random binary sequences for encoding data locally by the client before storing it in the Cloud. The first method reviewed investigated evolutionary computing software that generated noise-producing functions from natural noise, a highly speculative novel idea since noise is stochastic. Nevertheless, a function was created that generated noise to seed chaos oscillators producing random binary sequences, and this research led to a circuit-based one-time pad key chaos encoder for encrypting data. Circuit-based delay chaos oscillators, initialised with sampled electronic noise, were simulated in a linear circuit simulator called PSpice. Many simulation problems were encountered because of the nonlinear nature of chaos, but they were solved by creating new simulation parts, tools and simulation paradigms. Simulation data from a range of chaos sources was exported and analysed using Lyapunov analysis, which identified two sources that produced one-time pad sequences with maximum entropy. This led to an encoding system that generated unlimited, infinitely-long-period, unique random one-time pad encryption keys matched to the plaintext data length. The keys were studied for maximum entropy and passed a suite of stringent, internationally accepted statistical tests for randomness. A prototype containing two delay chaos sources initialised by electronic noise was produced on a double-sided printed circuit board and generated more than 200 Mbits of one-time pads (OTPs). According to Vladimir Kotelnikov in 1941 and Claude Shannon in 1945, one-time pad sequences are theoretically perfect and unbreakable, provided specific rules are adhered to. Two other techniques for generating random binary sequences were researched: a new circuit element, memristance, incorporated in a Chua chaos oscillator, and a fractional-order Lorenz chaos system with order less than three. Quantum computing will present many problems for cryptographic system security when existing systems are upgraded in the near future; the only existing encoding system that will resist cryptanalysis by quantum computers is the unconditionally secure one-time pad.
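
    For reference, the one-time pad operation itself reduces to an XOR with a key stream as long as the message. The sketch below uses the operating system's CSPRNG purely as a stand-in for the thesis's chaos-based key source; OTP security additionally requires that the key be truly random and never reused.

```python
# Minimal sketch of one-time pad encryption by XOR with a random key stream.
# The chaos-oscillator key source in the thesis is replaced here by the OS
# CSPRNG purely for illustration.
import secrets

plaintext = b"client data to be stored in the cloud"
key = secrets.token_bytes(len(plaintext))             # key matches message length

ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
recovered = bytes(c ^ k for c, k in zip(ciphertext, key))

assert recovered == plaintext
print(ciphertext.hex())
```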

    Reservoir Computing with Boolean Logic Network Circuits

    To push the frontiers of machine learning, completely new computing architectures must be explored that use hardware resources efficiently. We test an unconventional use of digital logic gate circuits for reservoir computing, a machine learning algorithm used for rapid time series processing. In our approach, logic gates are configured into networks that can exhibit complex dynamics. Rather than the gates explicitly computing pre-programmed instructions, they are used collectively as a dynamical system that transforms input data into a higher-dimensional representation. We probe the dynamics of such circuits using discrete components on a circuit board as well as an FPGA implementation. We show favorable machine learning performance, including radio-frequency classification accuracy comparable to a state-of-the-art convolutional neural network with a fraction of the trainable parameters. Finally, we discuss the design and fabrication of a reservoir computing ASIC for high-speed time series processing.
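
    The reservoir computing recipe described here (fixed random dynamics plus a trained linear readout) can be sketched with a conventional continuous-valued echo state network; this is an illustrative stand-in, not the paper's Boolean logic-gate reservoir, and the one-step-ahead sine prediction task is an assumed toy example.

```python
# Minimal echo state network sketch (continuous-valued, not the Boolean
# logic-gate reservoir of the paper) showing the reservoir computing recipe:
# fixed random dynamics + a trained linear readout.
import numpy as np

rng = np.random.default_rng(2)
N, T = 200, 2000                                   # reservoir size, time steps
u = np.sin(0.1 * np.arange(T + 1))                 # input signal
target = u[1:]                                     # predict the next sample

W_in = rng.uniform(-0.5, 0.5, size=N)
W = rng.normal(size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))    # scale spectral radius < 1

# Drive the reservoir and collect its states (the high-dimensional mapping).
x = np.zeros(N)
states = np.zeros((T, N))
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])
    states[t] = x

# Train only the linear readout, with ridge regression.
reg = 1e-6
W_out = np.linalg.solve(states.T @ states + reg * np.eye(N), states.T @ target)
pred = states @ W_out
print("train MSE:", np.mean((pred - target) ** 2))
```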

    Emerging opportunities and challenges for the future of reservoir computing

    Reservoir computing originated in the early 2000s, the core idea being to use dynamical systems as reservoirs (nonlinear generalizations of standard bases) to adaptively learn spatiotemporal features and hidden patterns in complex time series. Having shown the potential to achieve higher-precision prediction in chaotic systems, those pioneering works attracted a great amount of interest and follow-up work in the nonlinear dynamics and complex systems community. To unlock the full capabilities of reservoir computing as a fast, lightweight, and significantly more interpretable learning framework for temporal dynamical systems, substantially more research is needed. This Perspective aims to elucidate the parallel progress of mathematical theory, algorithm design and experimental realizations of reservoir computing, and to identify emerging opportunities as well as existing challenges for large-scale industrial adoption, together with a few ideas and viewpoints on how some of those challenges might be resolved through joint efforts by academic and industrial researchers across multiple disciplines.
