Yield Model Characterization For Analog Integrated Circuit Using Pareto-Optimal Surface
A novel technique is proposed in this paper that achieves a yield-optimized design from a set of optimal performance points on the Pareto front. Trade-offs among performance functions are explored through multi-objective optimization, and Monte Carlo simulation is used to find the design point producing the best overall yield. One advantage of the presented approach is a reduction in the computational cost normally associated with Monte Carlo simulation. The technique offers a yield-optimized, robust circuit design solution with transistor-level accuracy. An example using an OTA is presented to demonstrate the effectiveness of the work.
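The core loop described above, picking the Pareto-front point with the best Monte Carlo yield, can be sketched as follows. The performance model, spec limits, and candidate points below are illustrative placeholders, not the paper's transistor-level OTA model.

```python
import numpy as np

rng = np.random.default_rng(0)

def yield_estimate(design, n_samples=10_000, sigma=0.05,
                   gain_min=40.0, bw_min=1.0):
    """Monte Carlo yield: the fraction of process-variation samples for
    which the design still meets its specs (toy linear variation model)."""
    nominal_gain, nominal_bw = design
    var = rng.normal(0.0, sigma, size=(n_samples, 2))
    gain = nominal_gain * (1.0 + var[:, 0])
    bw = nominal_bw * (1.0 + var[:, 1])
    return np.mean((gain >= gain_min) & (bw >= bw_min))

# Hypothetical Pareto-front candidates (nominal gain, nominal bandwidth),
# assumed to come from a prior multi-objective optimization step.
pareto_points = [(42.0, 1.2), (45.0, 1.05), (50.0, 1.01)]

# Yield-optimized design: the Pareto point with the best estimated yield.
best = max(pareto_points, key=yield_estimate)
```

Restricting the Monte Carlo runs to the (small) set of Pareto-optimal points, rather than the whole design space, is what gives the cost reduction the abstract mentions.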
Physics-based large-signal sensitivity analysis of microwave circuits using technological parametric sensitivity from multidimensional semiconductor device models
The authors present an efficient approach to evaluating the large-signal (LS) parametric sensitivity of active semiconductor devices under quasi-periodic operation through accurate, multidimensional physics-based models. The proposed technique exploits efficient intermediate mathematical models to link physics-based analysis with circuit-oriented simulations, and only requires the evaluation of dc and ac small-signal (dc charge) sensitivities under general quasi-static conditions. To illustrate the technique, the authors discuss examples of sensitivity evaluation, statistical analysis, and doping profile optimization of an implanted MESFET to minimize intermodulation, which makes use of LS parametric sensitivities under two-tone excitation.
Out-of-plane focusing grating couplers for silicon photonics integration with optical MRAM technology
We present the design methodology and experimental characterization of compact out-of-plane focusing grating couplers for integration with magnetoresistive random access memory technology. Focusing grating couplers have recently attracted attention as layer couplers for photonic-electronic integration. The components we demonstrate are designed for a wavelength of 1550 nm, fabricated in a standard 220 nm SOI photonic platform, and optimized for the fabrication restrictions of standard 193 nm UV lithography. For the first time, we extend the design based on the phase-matching condition to a two-dimensional (2-D) grating design with two optical input ports. We further present the experimental characterization of the focusing behaviour by spatially probing the emitted beam with a tapered-and-lensed fiber, and demonstrate the polarization-controlling capabilities of the 2-D FGCs.
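The phase-matching condition the abstract refers to fixes the local grating period for a chosen emission angle. A minimal sketch, with illustrative index and angle values rather than the fabricated device's parameters:

```python
import math

# First-order phase-matching condition for a grating coupler emitting at
# angle theta (from vertical) into a cladding of index n_c:
#     Lambda = lambda0 / (n_eff - n_c * sin(theta))
lambda0 = 1.55e-6            # free-space wavelength, m (from the abstract)
n_eff = 2.85                 # effective index of the grating region (assumed)
n_c = 1.0                    # air cladding (assumed)
theta = math.radians(10.0)   # emission angle (assumed)

period = lambda0 / (n_eff - n_c * math.sin(theta))  # local grating period, m
```

In a focusing design the lines of constant phase are curved and the period varies locally, but each local period still follows this same condition.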
Performance robustness analysis in machine-assisted design of photonic devices
Machine-assisted design of integrated photonic devices (e.g. through optimization and inverse design methods) is opening the possibility of exploring very large design spaces, novel functionalities, and non-intuitive geometries. These methods are generally used to optimize performance figures of merit. On the other hand, the effect of manufacturing variability remains a fundamental challenge, since small fabrication errors can have a significant impact on light propagation, especially in high-index-contrast platforms. Brute-force analysis of these variabilities during the main optimization process can become prohibitive, since a large number of simulations would be required. To this end, efficient stochastic techniques integrated into the design cycle make it possible to quickly assess the performance robustness and the expected fabrication yield of each tentative device generated by the optimization. In this invited talk we present an overview of recent advances in the implementation of stochastic techniques in photonics, focusing in particular on stochastic spectral methods, which have been regarded as a promising alternative to the classical Monte Carlo method. Polynomial chaos expansion techniques generate so-called surrogate models by means of an orthogonal set of polynomials to efficiently represent the dependence of a function on statistical variabilities. They achieve a considerable reduction of the simulation time compared to Monte Carlo, at least for mid-scale problems, making the incorporation of tolerance analysis and yield optimization within the photonic design flow feasible.
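A one-parameter polynomial chaos expansion of the kind described above can be sketched in a few lines: fit an expansion in probabilists' Hermite polynomials (orthogonal under a standard-normal weight) to a toy "simulator", then read the mean and variance directly off the coefficients. The response function is a stand-in for an expensive electromagnetic solve, not any device from the talk.

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

rng = np.random.default_rng(1)

# Toy "simulator": device response as a function of a single standard-normal
# fabrication parameter xi (placeholder for an expensive simulation).
def simulate(xi):
    return np.exp(0.3 * xi)

# Build the PCE surrogate: degree-4 least-squares fit in the HermiteE
# basis on a small set of training evaluations.
order = 4
xi_train = rng.standard_normal(200)
coeffs = He.hermefit(xi_train, simulate(xi_train), order)

# Statistics come directly from the coefficients: with E[He_k] = 0 and
# E[He_k^2] = k! under the standard-normal weight,
#   mean = c_0,  variance = sum_{k>=1} k! * c_k^2.
mean_pce = coeffs[0]
var_pce = sum(math.factorial(k) * coeffs[k] ** 2
              for k in range(1, order + 1))
```

This is what makes PCE cheap compared to Monte Carlo: once the few coefficients are fitted, moments (and fast yield sampling via the surrogate) require no further simulator calls.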
Robust and Efficient Uncertainty Quantification and Validation of RFIC Isolation
Modern communication and identification products impose demanding constraints on the reliability of components. As a result, statistical constraints increasingly enter the optimization formulations of electronic products. Yield constraints often require efficient sampling techniques to obtain uncertainty quantification also at the tails of the distributions. These sampling techniques should outperform standard Monte Carlo techniques, since the latter are normally not efficient enough to deal with tail probabilities. One such technique, importance sampling, has successfully been applied to optimize Static Random Access Memories (SRAMs) while guaranteeing very small failure probabilities, even beyond 6-sigma variations of the parameters involved. In addition, emerging uncertainty quantification techniques offer expansions of the solution that serve as a response surface for statistics and optimization. To efficiently derive the coefficients of the expansions, one either has to solve a large number of problems or one huge combined problem. Here, parameterized Model Order Reduction (MOR) techniques can be used to reduce the workload. To also reduce the number of parameters, we identify those that affect the variance only in a minor way. These parameters can simply be set to a fixed value; the remaining parameters can be viewed as dominant. Preservation of the variation also allows statements to be made about the approximation accuracy obtained by the parameter-reduced problem. This is illustrated on an RLC circuit. Additionally, the MOR technique used should not affect the variance significantly. Finally, we consider a methodology for reliable RFIC isolation using floor-plan modeling and isolation grounding. Simulations show good agreement with measurements.
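Why importance sampling beats plain Monte Carlo at the tails can be shown with a minimal example: estimating a roughly 3-in-10-million Gaussian tail probability, where plain Monte Carlo would need billions of samples. The setup is generic, not the SRAM application from the abstract.

```python
import math
import numpy as np

rng = np.random.default_rng(2)

# Estimate the tail probability P(X > t) for X ~ N(0, 1) with t = 5
# (about 2.9e-7, i.e. beyond 5-sigma), where plain Monte Carlo with a
# feasible sample count would typically see zero failures.
t = 5.0
n = 100_000

# Proposal shifted into the failure region: N(t, 1).
x = rng.normal(t, 1.0, size=n)

# Likelihood ratio p(x)/q(x) between the two Gaussians, in closed form:
# exp(-x^2/2) / exp(-(x-t)^2/2) = exp(t^2/2 - t*x).
w = np.exp(t ** 2 / 2 - t * x)

p_is = np.mean((x > t) * w)                  # importance sampling estimate
p_exact = 0.5 * math.erfc(t / math.sqrt(2))  # exact Gaussian tail, for reference
```

With 10^5 samples this estimator lands within a few percent of the exact value; plain Monte Carlo at the same budget almost surely returns 0.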
Stochastic Yield Analysis of Rare Failure Events in High-Dimensional Variation Space
As the semiconductor industry keeps shrinking feature sizes to the nanometer scale, circuit reliability has become an area of growing concern due to the uncertainty introduced by process variations. For highly replicated standard cells, the failure event for each individual component must be extremely rare in order to maintain a sufficiently high yield rate. Existing yield analysis approaches work well in low dimensions, but become less effective either when there is a large number of circuit parameters or when the failure samples are distributed across multiple regions. In this thesis, four novel high-sigma analysis approaches are proposed. First, we propose an adaptive importance sampling (AIS) algorithm. AIS performs several iterations of sampling-region adjustment, whereas existing methods pre-decide a static sampling distribution. At each iteration, AIS generates samples from the current proposal distribution. Next, AIS carefully assigns a weight to each sample based on its tilted occurrence probability between the failure region and the current sampling distribution. We then design two adaptive frameworks, based on resampling and population Metropolis-Hastings (MH), to iteratively search for failure regions. Second, we develop an Adaptive Clustering and Sampling (ACS) method to estimate the failure rate of high-dimensional, multi-failure-region circuit cases. The basic idea of the algorithm is to cluster failure samples and build a global sampling distribution at each iteration. Specifically, in the clustering step, we propose a multi-cone clustering method, which partitions the parametric space and clusters failure samples. A global sampling distribution is then constructed from a set of weighted Gaussian distributions. Next, we calculate an importance weight for each sample based on the discrepancy between the sampling distribution and the target distribution. The failure probability is updated at the end of each iteration.
This clustering and sampling procedure proceeds iteratively until all the failure regions are covered. Moreover, two meta-model-based approaches are proposed for high-sigma analysis. Low-Rank Tensor Approximation (LRTA) formulates the meta-model in tensor space by representing a multi-way tensor as a finite sum of rank-one tensors. The polynomial degree of our LRTA model grows linearly with the circuit dimension, which makes it especially promising for high-dimensional circuit problems. We then solve our LRTA model efficiently with a robust greedy algorithm and calibrate it iteratively with an adaptive sampling method. The meta-model-based importance sampling (MIS) method utilizes a Gaussian-process meta-model to construct a quasi-optimal importance sampling distribution, and performs Markov chain Monte Carlo (MCMC) simulation to generate new samples from the proposed distribution. By updating our global importance sampling estimator in an iterative framework, MIS achieves better efficiency and higher accuracy than traditional importance sampling methods. Experimental results validate that the proposed approaches are three orders of magnitude faster than Monte Carlo, and more accurate than both academic solutions, such as importance sampling and classification-based methods, and industrial solutions, such as the mixture IS used by Intel.
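The multi-failure-region problem the thesis targets, and the mixture-of-Gaussians proposal that ACS builds, can be illustrated with a deliberately simple 1-D case: two disjoint failure regions, where a single shifted Gaussian proposal would cover only one of them. Here the region centers are hard-coded rather than discovered adaptively, which is the part the thesis automates.

```python
import math
import numpy as np

rng = np.random.default_rng(3)

def phi(x, mu=0.0):
    """Standard-width Gaussian density centered at mu."""
    return np.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

# Two disjoint failure regions |x| > 4 for x ~ N(0, 1). The proposal is a
# two-component Gaussian mixture sitting on the (here, known) regions.
centers = [-4.0, 4.0]
n = 100_000

comp = rng.integers(0, 2, size=n)            # pick a mixture component
x = rng.normal(np.choose(comp, centers), 1.0)  # sample from it

# Importance weight = target density / mixture proposal density.
q = 0.5 * phi(x, centers[0]) + 0.5 * phi(x, centers[1])
w = phi(x) / q

p_is = np.mean((np.abs(x) > 4.0) * w)      # estimated failure probability
p_exact = math.erfc(4.0 / math.sqrt(2))    # exact 2 * P(X > 4), for reference
```

Weighting each sample by the ratio of target to mixture density is exactly the "discrepancy between sampling distribution and target distribution" step in the ACS description; missing one mode in the proposal would silently bias the estimate low.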