Technology Independent Synthesis of CMOS Operational Amplifiers
Analog circuit design does not enjoy as much automation as its digital counterpart. Analog sizing is inherently knowledge-intensive and requires accurate modeling of the various parametric effects of the devices. Moreover, the set of constraints in a typical analog design problem is large and involves complex trade-offs. For these reasons, modeling an analog design problem in a form amenable to automation is far more tedious than for digital design. Consequently, analog blocks are still handcrafted intuitively and often become a bottleneck in integrated circuit design, thereby increasing the time to market.
In this work, we address the problem of automatically solving an analog circuit design problem. Specifically, we propose methods to automate the transistor-level sizing of OpAmps. Given the specifications and the netlist of the OpAmp, our methodology produces a design that has the accuracy of the BSIM models used for simulation and the advantage of a quick design time. The approach is based on generating an initial first-order design and then refining it. The refinement is a simulated-annealing scheme that uses (i) localized simulations and (ii) a convex optimization scheme (COS). The optimal set of input variables for the localized simulations is selected using techniques from Design of Experiments (DOE). To formulate the design problem as a COS problem, we use monomial circuit models fitted from simulation data. These models accurately predict the performance of the circuit in the proximity of the initial guess. The models can also be used to gain valuable insight into the behavior of the circuit and to understand the interrelations between the different performance constraints.
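The simulated-annealing refinement described above can be sketched in a few lines. This is a minimal illustration of the annealing idea only: the toy quadratic cost stands in for the localized circuit simulations, and all function and variable names are hypothetical, not part of the original framework.

```python
import math
import random

def anneal(cost, x0, step=0.05, t0=1.0, cooling=0.95, iters=200, seed=1):
    """Simulated-annealing refinement of an initial design point.

    `cost` stands in for a localized-simulation evaluation; in the real
    flow each call would invoke the circuit simulator near the current point.
    """
    rng = random.Random(seed)
    x, best = list(x0), list(x0)
    fx = fbest = cost(x)
    t = t0
    for _ in range(iters):
        # Perturb one design variable (e.g. a transistor width) locally.
        cand = list(x)
        i = rng.randrange(len(cand))
        cand[i] *= 1.0 + rng.uniform(-step, step)
        fc = cost(cand)
        # Always accept improvements; accept regressions with Boltzmann probability.
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        t *= cooling  # geometric cooling schedule
    return best, fbest

# Toy quadratic cost with optimum at (2, 3); a placeholder, not a real OpAmp metric.
cost = lambda v: (v[0] - 2.0) ** 2 + (v[1] - 3.0) ** 2
sol, val = anneal(cost, [1.0, 1.0])
```

The multiplicative perturbation keeps each step local to the current design, mirroring the use of localized simulations around the initial first-order guess.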
A software framework that implements this methodology has been coded in SKILL language of Cadence. The methodology can be applied to design different OpAmp topologies across different technologies. In other words, the framework is both technology independent and topology independent.
In addition, we develop a scheme to empirically model small-signal parameters such as 'gm' and 'gds' of CMOS transistors. The monomial device models are reusable for a given technology and can be used to formulate the OpAmp design problem as a COS problem.
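Because a monomial model is linear in log space, fitting one to simulation data reduces to ordinary least squares. A minimal sketch, using synthetic data in place of real BSIM simulation sweeps; the exponents and scale factor below are illustrative assumptions only.

```python
import numpy as np

def fit_monomial(X, y):
    """Fit y ≈ c * prod(x_j ** a_j) by least squares in log space.

    X: (n, k) positive design-variable samples (e.g. W, L, Id from simulation).
    y: (n,) positive measured responses (e.g. gm).
    Returns (c, a): scale factor and exponent vector.
    """
    # Taking logs turns the monomial into a linear model: log y = log c + a·log x.
    A = np.hstack([np.ones((X.shape[0], 1)), np.log(X)])
    coef, *_ = np.linalg.lstsq(A, np.log(y), rcond=None)
    return np.exp(coef[0]), coef[1:]

# Synthetic data that is exactly monomial: gm = 3 * x1^0.5 * x2^0.4 (illustrative only).
rng = np.random.default_rng(0)
X = rng.uniform(0.5, 5.0, size=(50, 2))
y = 3.0 * X[:, 0] ** 0.5 * X[:, 1] ** 0.4
c, a = fit_monomial(X, y)
```

Monomial models fitted this way are what make the COS formulation possible, since monomial constraints are compatible with geometric-programming-style convex optimization.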
The efficacy of the framework has been demonstrated by automatically designing different OpAmp topologies across different technologies. We designed a two-stage OpAmp and a telescopic OpAmp in the TSMC025 and AMI016 technologies. Our results show a significant (10–15%) improvement in the performance of both OpAmps in both technologies. While the methodology has shown encouraging results in the sub-micrometer regime, the effectiveness of the tool has yet to be investigated in deep-sub-micron technologies.
The robust design of complex systems
Robust Engineering Design has evolved as an important methodology for the integration of quality with the process of design. The methodology encompasses the disciplines of experimental design, model building, and optimization. First, an experiment is conducted on a system (or a simulation of the system); second, a model is built to emulate the system; and finally, the emulation model is used to optimize the system design. Applying these methods to large problems can be difficult and time-consuming because of the complexity of most design problems. It is the goal of this thesis to introduce methods that reduce problem complexity and so make the application of Robust Engineering Design (RED) methodology easier for large design problems.
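The experiment–model–optimize cycle can be illustrated end to end on a toy system. Here a cheap quadratic function stands in for an expensive simulator, and a quadratic response surface serves as the emulator; all functions and values are illustrative assumptions, not from the thesis.

```python
import numpy as np

# Stand-in "system": a function we treat as an expensive simulator (illustrative only).
def simulate(x1, x2):
    return (x1 - 1.0) ** 2 + (x2 + 0.5) ** 2 + 2.0

# 1. Experiment: evaluate the system on a small full-factorial design.
levels = np.linspace(-2.0, 2.0, 5)
X = np.array([(a, b) for a in levels for b in levels])
y = np.array([simulate(a, b) for a, b in X])

# 2. Model: fit a quadratic response-surface emulator by least squares.
def features(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

beta, *_ = np.linalg.lstsq(features(X), y, rcond=None)

# 3. Optimize: minimize the cheap emulator on a dense grid instead of the system.
g = np.linspace(-2.0, 2.0, 201)
G = np.array([(a, b) for a in g for b in g])
pred = features(G) @ beta
best = G[np.argmin(pred)]  # expected near the true optimum (1, -0.5)
```

The point is that only 25 expensive runs are spent on the system itself; the 40,000-point optimization search hits only the fitted emulator.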
By drawing from methods used in systems theory and circuit optimization several techniques are presented with the aim of reducing the complexity of performing experiments for Robust Engineering Design. A common framework for experimentation is created by combining a commercial circuit simulator with established methods for experimental design and model building. This provides the basis for experimentation in subsequent chapters. A method of design optimization with respect to quality is presented to complete the model-based Robust Engineering Design cycle.
Three approaches to reducing problem complexity are adopted. First, a method of system decomposition is applied directly to an electronic circuit to reduce the size of the experiment required for RED. Second, a method of modelling system response functions is described which integrates the action of the circuit simulator with the model-building process. Third, information about system topology is used in the design of experiments to enhance the model-building process. Conclusions are drawn about the effectiveness of the approaches described with respect to their impact on problem complexity.
AI/ML Algorithms and Applications in VLSI Design and Technology
An evident challenge ahead for the integrated circuit (IC) industry in the nanometer regime is the investigation and development of methods that can reduce the design complexity ensuing from growing process variations and curtail the turnaround time of chip manufacturing. Conventional methodologies employed for such tasks are largely manual and thus time-consuming and resource-intensive. In contrast, the unique learning strategies of artificial intelligence (AI) provide numerous exciting automated approaches for handling complex and data-intensive tasks in very-large-scale integration (VLSI) design and testing. Employing AI and machine learning (ML) algorithms in VLSI design and manufacturing reduces the time and effort required to understand and process the data within and across different abstraction levels via automated learning algorithms. This, in turn, improves the IC yield and reduces the manufacturing turnaround time. This paper thoroughly reviews the AI/ML automated approaches introduced in the past towards VLSI design and manufacturing. Moreover, we discuss the scope of future AI/ML applications at various abstraction levels to revolutionize the field of VLSI design, aiming for high-speed, highly intelligent, and efficient implementations.
MakerFluidics: low cost microfluidics for synthetic biology
Recent advancements in multilayer, multicellular, genetic logic circuits often rely on manual intervention throughout the computation cycle and orthogonal signals for each chemical "wire". These constraints can prevent genetic circuits from scaling. Microfluidic devices can be used to mitigate these constraints. However, continuous-flow microfluidics are largely designed through artisanal processes involving hand-drawing features and accomplishing design rule checks visually: processes that are also inextensible. Additionally, continuous-flow microfluidic routing is only a consideration during chip design and, once built, the routing structure becomes "frozen in silicon," or, for many microfluidic chips, "frozen in polydimethylsiloxane (PDMS)"; any changes to fluid routing often require an entirely new device and control infrastructure. The cost of fabricating and controlling a new device is high in terms of time and money; attempts to reduce one cost measure are generally paid through increases in the other.
This work has three main thrusts: to create a microfluidic fabrication framework, called MakerFluidics, that lowers the barrier to entry for designing and fabricating microfluidics in a manner amenable to automation; to prove this methodology can design, fabricate, and control complex and novel microfluidic devices; and to demonstrate the methodology can be used to solve biologically-relevant problems.
Utilizing accessible technologies, rapid prototyping, and scalable design practices, the MakerFluidics framework has demonstrated its ability to design, fabricate, and control novel, complex, and scalable microfluidic devices. This was proven through the development of a reconfigurable, continuous-flow routing fabric driven by a modular, scalable primitive called a transposer. In addition to creating complex microfluidic networks, MakerFluidics was deployed in support of cutting-edge, application-focused research at the Charles Stark Draper Laboratory. Informed by a design-of-experiments approach using the parametric rapid prototyping capabilities made possible by MakerFluidics, a plastic blood–bacteria separation device was optimized, demonstrating that the new device geometry can separate bacteria from blood while operating at a 275% greater flow rate and can reduce the power requirement by 82% for equivalent separation performance when compared to the state of the art.
Ultimately, MakerFluidics demonstrated the ability to design, fabricate, and control complex and practical microfluidic devices while lowering the barrier to entry to continuous-flow microfluidics, thus democratizing cutting-edge technology beyond a handful of well-resourced and specialized labs.
The design and analysis of computer experiments
Computer simulation modeling is an important part of engineering design. The process of creating quality products under variable conditions is called robust engineering design. Frequently these simulation models are expensive to run and it can take many runs to find the appropriate parameter settings of the engineering design. To reduce the cost of robust engineering design, it has been proposed that statistical models be used to predict the results of the simulator at unobserved values of the inputs. This involves running experiments on the computer simulation models, or computer experiments. Computer experiments have no random error and frequently involve a large number of experimental factors; because of this, there are many reasons why standard prediction and design methods may not work well.
Methods from spatial statistics, frequently referred to as kriging, are used to predict new observations of the simulation model. A generalized linear model with unknown covariance parameters is applied to several high-dimensional examples. The model requires estimation of the covariance function parameters, and methods are described for parameter estimation and model building.
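A kriging-style predictor can be sketched compactly; the Gaussian correlation function below is one common choice, and the length-scale, jitter term, and test function are illustrative assumptions, not the thesis's actual model.

```python
import numpy as np

def kriging_predict(X, y, Xnew, theta=10.0):
    """Simple kriging-style predictor with a Gaussian correlation function.

    Interpolates the observed runs exactly, which is why kriging suits
    computer experiments: the simulator output has no random error.
    """
    def corr(A, B):
        # Squared distances between all pairs of points, then Gaussian decay.
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-theta * d2)

    R = corr(X, X) + 1e-10 * np.eye(len(X))  # tiny jitter for numerical stability
    r = corr(Xnew, X)
    mu = y.mean()  # constant trend; a regression trend is also common
    return mu + r @ np.linalg.solve(R, y - mu)

# Illustrative 1-D "simulator"; the predictor reproduces training points exactly.
X = np.linspace(0.0, 1.0, 6)[:, None]
y = np.sin(2 * np.pi * X[:, 0])
pred = kriging_predict(X, y, X)
```

The interpolation property distinguishes this from ordinary regression smoothing: at an already-simulated input the prediction returns the simulator's own output.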
Latin hypercube sampling is used for the experimental design. Latin hypercube sampling is as easy to use as Monte Carlo sampling and has been shown to have better estimation properties. The space-filling properties of Latin hypercube sampling are investigated here and shown to fill the design space more uniformly than Monte Carlo sampling.
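The stratification property that distinguishes Latin hypercube sampling from plain Monte Carlo can be shown in a few lines; this is a generic sketch, not the thesis's implementation.

```python
import random

def latin_hypercube(n, k, seed=0):
    """n samples in k dimensions on [0, 1): each dimension is split into n
    equal strata and every stratum is sampled exactly once, whereas plain
    Monte Carlo sampling can leave some strata empty."""
    rng = random.Random(seed)
    cols = []
    for _ in range(k):
        perm = list(range(n))
        rng.shuffle(perm)  # random stratum order per dimension
        # One uniform draw inside each stratum (p/n, (p+1)/n).
        cols.append([(p + rng.random()) / n for p in perm])
    return list(zip(*cols))

pts = latin_hypercube(10, 2)
```

Projecting the points onto any single axis shows exactly one point per stratum, which is the one-dimensional uniformity guarantee Monte Carlo lacks.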
These statistical methods are applied to two circuit simulation models. The results show that these methods work well on computer experiments and can form the basis for a methodology in robust engineering design. Although these methods have been applied to circuit design problems, the methods are applicable to a wide variety of computer simulation models.
Robust engineering design received a great deal of attention when Taguchi's ideas were introduced. An investigation into the methods that Taguchi introduced revealed some shortcomings from a statistical point of view. General conclusions are drawn about the efficacy of traditional Taguchi methods compared with the preferred model-based approach of the thesis.
Decision-maker Trade-offs In Multiple Response Surface Optimization
The focus of this dissertation is on improving decision-maker trade-offs and the development of a new constrained methodology for multiple response surface optimization. There are three key components of the research: development of the necessary conditions and assumptions associated with constrained multiple response surface optimization methodologies; development of a new constrained multiple response surface methodology; and demonstration of the new method. The necessary conditions for and assumptions associated with constrained multiple response surface optimization methods were identified and found to be less restrictive than requirements previously described in the literature. The conditions and assumptions required for a constrained method to find the most preferred non-dominated solution are to generate non-dominated solutions and to generate solutions consistent with decision-maker preferences among the response objectives. Additionally, if a Lagrangian constrained method is used, the preservation of convexity is required in order to be able to generate all non-dominated solutions. The conditions required for constrained methods are significantly fewer than those required for combined methods. Most of the existing constrained methodologies do not incorporate any provision for a decision-maker to explicitly determine the relative importance of the multiple objectives. Research into the larger area of multi-criteria decision-making identified the interactive surrogate worth trade-off (ISWT) algorithm as a potential methodology that would provide that capability in multiple response surface optimization problems. The ISWT algorithm uses an ε-constraint formulation to guarantee a non-dominated solution, and then interacts with the decision-maker after each iteration to determine the decision-maker's preference in trading off the value of the primary response for an increase in the value of a secondary response.
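The ε-constraint idea at the core of the ISWT can be illustrated on two toy quadratic responses: one response is minimized while the other is held below a bound ε, and sweeping ε traces out the trade-off. The grid search and response functions below are illustrative assumptions, not the dissertation's formulation.

```python
def eps_constraint(primary, secondary, eps, grid):
    """Minimize the primary response subject to secondary(x) <= eps.

    Tightening or relaxing eps sweeps out non-dominated trade-off points,
    which is what the ISWT iterates on with the decision-maker.
    """
    feasible = [x for x in grid if secondary(x) <= eps]
    if not feasible:
        return None  # bound too tight: no feasible design
    return min(feasible, key=primary)

# Two conflicting toy responses with optima at x=0 and x=2 (illustrative only).
f1 = lambda x: x ** 2
f2 = lambda x: (x - 2.0) ** 2
grid = [i / 100.0 for i in range(301)]  # x in [0, 3]
sol = eps_constraint(f1, f2, eps=1.0, grid=grid)
```

With eps=1.0 the feasible set is x in [1, 3], so the constrained minimum of f1 sits on the boundary at x = 1, a non-dominated compromise between the two conflicting optima.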
The current research modified the ISWT algorithm to develop a new constrained multiple response surface methodology that explicitly accounts for decision-maker preferences. The new Modified ISWT (MISWT) method maintains the essence of the original method while taking advantage of the specific properties of multiple response surface problems to simplify the application of the method. The MISWT is an accessible computer-based implementation of the ISWT. Five test problems from the multiple response surface optimization literature were used to demonstrate the new methodology. It was shown that this methodology can handle a variety of types and numbers of responses and independent variables. Furthermore, it was demonstrated that the methodology can be successful using a priori information from the decision-maker about bounds or targets, or can use the extreme values obtained from the region of operability. In all cases, the methodology explicitly considered decision-maker preferences and provided non-dominated solutions. The contribution of this method is the removal of implicit assumptions and the inclusion of the decision-maker in explicit trade-offs among multiple objectives or responses.
An investigation into the application of machine learning in information retrieval
There is an increasing variety of online databases available, which are also ever-growing in size. In retrieving information from these sources, it is important not only to have effective and efficient retrieval techniques but also to enable some form of adaptation to users' specific needs. Frequent users, in particular, should be able to benefit from their high use of the information retrieval system. A machine learning approach can be applied to help the system adapt to users' specific needs.
It is argued that users have a particular context within which their queries are formed. It is likely that consecutive queries for a particular user will be related in that they will be part of the same context. Thus, a context learner is proposed.
In this investigation, the context learner is used to enhance document ordering in partial-match systems.