
    Geometrically-constrained, parasitic-aware synthesis of analog ICs

    In order to speed up the design process of analog ICs, iterations between different design stages should be avoided as much as possible. More specifically, spins between electrical and physical synthesis should be reduced, as these are very time-consuming: if circuit performance including layout-induced degradations proves unacceptable, a re-design cycle must be entered, and the electrical synthesis, the physical synthesis, or both have to be repeated. It is also worth noting that if geometric optimization (e.g., area minimization) is undertaken after electrical synthesis, it may become another source of unexpected circuit performance degradation due to the impact of the geometric variables (e.g., transistor folds) on the device and routing parasitic values. This awkward scenario is caused by the complete separation of electrical and physical synthesis, a design practice commonly followed so far. Parasitic-aware synthesis, which adds parasitic estimates to the circuit netlist directly during electrical synthesis, has been proposed as a solution. While most reported contributions either tackle parasitic-aware synthesis without paying special attention to geometric optimization or approach both issues only partially, this paper addresses the problem in a unified way. In what has been called layout-aware electrical synthesis, a simulation-based optimization algorithm explores the design space with the geometric variables constrained to meet certain user-defined goals, which provides reliable estimates of layout-induced parasitics at each iteration and, thereby, accurate evaluation of the circuit's ultimate performance. This technique, demonstrated here through several design examples, requires knowing layout details beforehand; to facilitate this, procedural layout generation is used as the physical synthesis approach due to its rapidness and its ability to capture analog layout know-how. Funding: Ministerio de Educación y Ciencia TEC2004-0175.
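    The loop below is a minimal sketch of the layout-aware idea: geometric variables are checked against a user-defined constraint before evaluation, and parasitics estimated from the candidate geometry feed the performance figure at every iteration. All models, names, and numbers are toy stand-ins, not the paper's actual synthesis tool.

```python
# Layout-aware sizing loop (toy illustration): each candidate's geometric
# variables are screened against a constraint, parasitics are estimated
# from the geometry, and performance is evaluated with them included.
import random

def estimate_parasitics(width_um, folds):
    """Toy parasitic model: capacitance grows with finger width and count."""
    finger_width = width_um / folds
    return 0.35 * folds * finger_width + 0.2 * folds  # junction + routing (fF)

def performance(width_um, folds):
    """Toy figure of merit: bandwidth-like metric degraded by parasitics."""
    gm = 0.8 * width_um                      # transconductance ~ width
    return gm / estimate_parasitics(width_um, folds)

def layout_aware_search(max_finger_um=40.0, iterations=2000, seed=1):
    rng = random.Random(seed)
    best = None
    for _ in range(iterations):
        width = rng.uniform(5.0, 100.0)
        folds = rng.randint(1, 8)
        if width / folds > max_finger_um:    # geometric constraint check
            continue                         # reject: violates layout goal
        fom = performance(width, folds)
        if best is None or fom > best[0]:
            best = (fom, width, folds)
    return best

print(layout_aware_search())
```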

    Very Large Scale Integration Cell Based Path Extractor For Physical To Layout Mapping In Fault Isolation Work

    Post-silicon debug and diagnosis challenges the technological advancement in physical-to-layout mapping capabilities. One area that requires such innovation is fault isolation work in the failure analysis of semiconductor devices at the post-silicon stage. Since fault isolation work begins at the Register Transfer Level (RTL) by forming a suspected boundary consisting of multiple logic elements from one end to the other, a layout-to-schematic mapping automation tool helps to identify faults in the design within the given boundary. The development of a path extractor program capable of extracting all possible paths between these start and end signals can therefore save engineers time in tracing the components involved along a fault line. This capability is significant in Electronic Design Automation (EDA), as the resulting net-name sequences can be stored in a database of mapper files; these mapper files can then be used in layout design debug, since each net sequence represents schematic signals. Retrieving all possible signals involved within a suspected boundary is a well-known computational search problem, so the proposed path extractor incorporates a depth-first search algorithm adapted to the characteristics of a cell-based design. The objectives achieved in this research are shown to be reliable, with path extraction results remaining consistent even under search depth manipulation: performance differs by an average of 12.6% in iteration count when the maximum allowable search depth is held constant, and the extracted net sequences were consistent throughout the verification of the path extractor program. This development and study of the path extraction method is significant for EDA and debug/diagnosis work.
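    A minimal sketch of the all-paths idea follows: a depth-limited depth-first search that enumerates every net-name sequence between a start and an end signal. The connectivity graph and net names are hypothetical; the paper's tool additionally accounts for cell-based design specifics.

```python
# Depth-limited DFS that enumerates all simple paths between two signals
# in a (hypothetical) cell-based connectivity graph.
def extract_paths(netlist, start, end, max_depth):
    """Return all simple paths from `start` to `end` up to `max_depth` nets."""
    paths, stack = [], [(start, [start])]
    while stack:
        net, path = stack.pop()
        if net == end:
            paths.append(path)
            continue
        if len(path) >= max_depth:
            continue                      # bound the search depth
        for nxt in netlist.get(net, ()):
            if nxt not in path:           # avoid revisiting nets (no loops)
                stack.append((nxt, path + [nxt]))
    return paths

# Hypothetical suspected boundary between signals "A" and "Z":
netlist = {"A": ["n1", "n2"], "n1": ["n3"], "n2": ["n3"], "n3": ["Z"]}
for p in extract_paths(netlist, "A", "Z", max_depth=6):
    print(" -> ".join(p))
```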

    Time-domain optimization of amplifiers based on distributed genetic algorithms

    Thesis presented in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the subject of Electrical and Computer Engineering. The work presented in this thesis addresses the task of circuit optimization, helping the designer face the high-performance and high-efficiency circuit demands of the market and of technology evolution. A novel framework is introduced, based on time-domain analysis, genetic-algorithm optimization, and distributed processing. The time-domain optimization methodology is based on the step response of the amplifier. The main advantage of this new time-domain methodology is that, when a given settling error is reached within the desired settling time, it is automatically guaranteed that the amplifier has enough open-loop gain (AOL), output swing (OS), slew rate (SR), closed-loop bandwidth, and closed-loop stability. This simplification of the circuit's evaluation thus helps the optimization process converge faster. The method used to calculate the step-response expression of the circuit is based on the inverse Laplace transform applied symbolically to the transfer function multiplied by 1/s (which represents the unity input step). Furthermore, it may be applied to transfer functions of circuits with an unlimited number of zeros/poles, without approximation, in order to preserve accuracy. Thus, complex circuits with several design/optimization degrees of freedom can also be considered. The step-response expression from the proposed methodology is based on the DC bias operating point of the circuit's devices; for this, complex and accurate device models (e.g., BSIM3v3) are integrated. During the optimization process, the time-domain evaluation of the amplifier is used by the genetic algorithm to classify the genetic individuals. The time-domain evaluator is integrated into the developed optimization platform as an independent library, coded in the C programming language. Genetic algorithms have been demonstrated to be a good approach for optimization since they are flexible and independent of the optimization objective. Different levels of abstraction can be optimized, either system level or circuit level; optimizing any new block is basically carried out by simply providing additional configuration files (e.g., the chromosome format, in text format) and the circuit library where the fitness value of each individual of the genetic algorithm is computed. Distributed processing is also employed to address the increasing processing time demanded by the complex circuit analysis and the accurate models of the circuit devices. The communication between remote processing nodes is based on the Message Passing Interface (MPI). It is demonstrated that the distributed processing reduced the optimization run-time by more than one order of magnitude. Platform assessment is carried out through several examples of two-stage amplifiers, which have been optimized and successfully used, embedded, in larger systems such as data converters. A dedicated example of an inverter-based self-biased two-stage amplifier has been designed, laid out, and fabricated as a stand-alone circuit and experimentally evaluated. The measured results are a direct demonstration of the effectiveness of the proposed time-domain optimization methodology. Funding: Portuguese Foundation for Science and Technology (FCT).
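    A minimal sketch of the core step-response computation, using SymPy's symbolic inverse Laplace transform (assuming SymPy is available); the two-pole transfer function here is an illustrative stand-in, not the thesis's amplifier model.

```python
# Symbolic step response: multiply H(s) by 1/s (unit input step),
# expand in partial fractions, and invert term by term.
import sympy as sp

s, t = sp.symbols("s t", positive=True)
A0, p1, p2 = sp.symbols("A0 p1 p2", positive=True)

# Two-pole amplifier with DC gain A0 and real poles at -p1 and -p2.
H = A0 * p1 * p2 / ((s + p1) * (s + p2))

step = sp.inverse_laplace_transform(sp.apart(H / s, s), s, t)
print(sp.simplify(step))
```

    For a concrete amplifier, substituting values derived from the extracted DC operating point into A0, p1, and p2 would yield the closed-form settling behavior that the genetic algorithm then scores.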

    A Combined Gate Replacement and Input Vector Control Approach

    Due to the increasing role of leakage power in the total power dissipation of CMOS circuits, leakage reduction has recently attracted a lot of attention. Input vector control (IVC) takes advantage of the transistor stack effect by applying the minimum leakage vector (MLV) to the primary inputs of the circuit during standby mode. However, IVC techniques become less effective for circuits of large logic depth because the MLV at the primary inputs has little impact on internal gates at high logic levels. In this paper, we propose a technique to overcome this limitation by directly controlling the inputs to the internal gates that are in their worst leakage states. Specifically, we propose a gate replacement technique that replaces such gates with other library gates while maintaining the circuit's correct functionality in the active mode. This modification of the circuit does not require changes to the design flow, but it opens the door for further leakage reduction when the MLV is not effective. We then describe a divide-and-conquer approach that combines the gate replacement and input vector control techniques. It integrates an algorithm that finds the optimal MLV for tree circuits, a fast gate replacement heuristic, and a genetic algorithm that connects the tree circuits. We have conducted experiments on all the MCNC91 benchmark circuits. The results reveal that 1) the gate replacement technique by itself can provide 10% more leakage current reduction over the best known IVC methods, with no delay penalty and little area increase; 2) the divide-and-conquer approach outperforms the best pure IVC method by 24% and the existing control point insertion method by 12%; and 3) when the optimal MLV for small circuits is obtained from exhaustive search, the proposed gate replacement alone can still reduce leakage current by 13%, while the divide-and-conquer approach reduces it by 17%.
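    For intuition, here is a minimal exhaustive MLV search over a two-gate toy circuit; the per-state NAND leakage values reflect the stack effect qualitatively but are placeholders, not numbers from the paper.

```python
# Exhaustive minimum-leakage-vector (MLV) search for a tiny circuit.
from itertools import product

# Hypothetical leakage (nA) of a 2-input NAND per input state; transistor
# stacking makes the (0, 0) state the lowest-leakage one.
NAND_LEAK = {(0, 0): 37, (0, 1): 100, (1, 0): 95, (1, 1): 454}

def nand(a, b):
    return 1 - (a & b)

def circuit_leakage(a, b, c):
    """Two cascaded NANDs: n1 = NAND(a, b); out = NAND(n1, c)."""
    n1 = nand(a, b)
    return NAND_LEAK[(a, b)] + NAND_LEAK[(n1, c)]

best = min(product((0, 1), repeat=3), key=lambda v: circuit_leakage(*v))
print("MLV:", best, "leakage:", circuit_leakage(*best), "nA")
```

    Exhaustive search like this is only feasible for small circuits, which is exactly why the paper resorts to tree-circuit decomposition and a genetic algorithm for larger ones.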

    Reconfigurable Architectures and Systems for IoT Applications

    The Internet of Things (IoT) has become a popular topic in industry over recent years; it describes an ecosystem of internet-connected devices or things that enrich everyday life by improving our productivity and efficiency. The primary components of the IoT ecosystem are hardware, software, and services. While the software and services of an IoT system focus on data collection and processing to make decisions, the underlying hardware is responsible for sensing the information, preprocessing it, and transmitting it to the servers. Since the IoT ecosystem is still in its infancy, there is a great need for rapid prototyping platforms that would help accelerate the hardware design process. However, depending on the target IoT application, different sensors are required to sense signals such as heart rate, temperature, pressure, acceleration, etc., so there is a great need for reconfigurable platforms that can prototype different sensor interfacing circuits. This thesis primarily focuses on two important hardware aspects of an IoT system: (a) an FPAA-based reconfigurable sensing front-end system and (b) an FPGA-based reconfigurable processing system. To enable reconfiguration capability for any sensor type, the Programmable ANalog Device Array (PANDA), a transistor-level analog reconfigurable platform, is proposed, and the CAD tools required for implementing front-end circuits on the platform are also developed. To demonstrate the capability of the platform on silicon, a small-scale array of 24×25 PANDA cells is fabricated in 65 nm technology. Several analog circuit building blocks, including amplifiers, bias circuits, and filters, are prototyped on the platform, which demonstrates its effectiveness for rapid prototyping of IoT sensor interfaces. IoT systems typically use machine learning algorithms that run on servers to process the data in order to make decisions. Recently, embedded processors have been used to preprocess the data at the energy-constrained sensor node or at the IoT gateway, which saves considerable transmission energy and bandwidth. Using conventional CPU-based systems to implement the machine learning algorithms is not energy-efficient. Hence, an FPGA-based hardware accelerator is proposed, and an optimization methodology is developed to maximize the throughput of any convolutional neural network (CNN) based machine learning algorithm on a resource-constrained FPGA.
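    As a rough illustration of the kind of throughput reasoning such an optimization methodology performs, the sketch below bounds a convolution layer's latency by the MAC capacity of the allocated DSPs; the layer shape and board figures are hypothetical, not from the thesis.

```python
# Back-of-envelope latency bound for a conv layer on a resource-
# constrained FPGA: cycles = MACs / (DSPs * MACs-per-DSP-per-cycle).
def layer_macs(out_ch, in_ch, k, out_h, out_w):
    return out_ch * in_ch * k * k * out_h * out_w

def conv_latency_ms(macs, dsp_count, clock_mhz, macs_per_dsp_cycle=1):
    cycles = macs / (dsp_count * macs_per_dsp_cycle)
    return cycles / (clock_mhz * 1e3)   # MHz -> cycles per millisecond

# Hypothetical 3x3 conv, 64 -> 128 channels on a 56x56 feature map:
macs = layer_macs(128, 64, 3, 56, 56)
print(f"{macs / 1e6:.1f} MMACs,",
      f"{conv_latency_ms(macs, dsp_count=220, clock_mhz=150):.2f} ms")
```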

    Optimal allocation of distributed generation for power loss reduction and voltage profile improvement

    Distributed generation (DG) integration in distribution systems has increased to high penetration levels, and there is a need to improve the technical benefits of DG integration, namely reduction of electrical power losses and improvement of the voltage profile, through optimal allocation in the power system network. Optimal DG location and sizing in a power system distribution network with the aim of reducing system power losses and improving the voltage profile still remains a major problem. Although much research has been done on this problem using techniques such as numerical computation, artificial intelligence, and analytical approaches, the existing works still suffer from several drawbacks, so much can still be done in developing new algorithms and improving existing ones to address this important issue more efficiently and effectively. The majority of the proposed algorithms emphasize real power losses only in their formulations and ignore the reactive power losses, which are key to the operation of power systems. Hence, there is an urgent need for an approach that incorporates reactive power and the voltage profile in the optimization process, such that the effects of high power losses and a poor voltage profile can be mitigated. This research used a combined Genetic Algorithm and Improved Particle Swarm Optimization (GA-IPSO) for optimal placement and sizing of DG for power loss reduction and voltage profile improvement. GA-IPSO optimizes DG location and size while considering both real and reactive power losses. The real and reactive power loss sensitivity factors are utilized to identify the candidate buses for DG allocation. The GA-IPSO algorithm was programmed in MATLAB. The algorithm reduces the search space for the search process, increases its rate of convergence, and eliminates the possibility of being trapped in local minima; the new approach thereby helps to reduce power losses and improve the voltage profile via optimal DG placement and sizing.
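    A minimal sketch of the loss-sensitivity-factor screening step: for the branch feeding bus i, dP_loss/dP_i ≈ 2·P_i·R/V_i², and the highest-sensitivity buses become DG candidates. The branch data below are illustrative placeholders, not the study's test system.

```python
# Rank buses by real-power loss sensitivity to shortlist DG candidates.
# Branch data (R, P, V in per-unit) are illustrative.
buses = {
    2: {"R": 0.0922, "P": 0.60, "V": 0.993},
    3: {"R": 0.4930, "P": 0.55, "V": 0.983},
    4: {"R": 0.3660, "P": 0.30, "V": 0.975},
}

def loss_sensitivity(R, P, V):
    """Approximate dP_loss/dP for the branch feeding a bus."""
    return 2.0 * P * R / V**2

ranked = sorted(buses, key=lambda b: loss_sensitivity(**buses[b]), reverse=True)
print("candidate buses by sensitivity:", ranked)
```

    Restricting the GA-IPSO search to the top-ranked buses is what shrinks the search space and speeds up convergence.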

    Circuit Optimisation using Device Layout Motifs

    Circuit designers face great challenges as CMOS devices continue to scale to nano dimensions, in particular the stochastic variability caused by the physical properties of transistors. Stochastic variability is an undesired and uncertain component caused by fundamental phenomena associated with device structure evolution, which cannot be avoided during the manufacturing process. In order to examine the problem of variability at the atomic level, the 'motif' concept, defined as a set of repeating patterns of fundamental geometrical forms used as design units, is proposed to capture the presence of statistical variability and improve device/circuit layout regularity. A set of 3D motifs with stochastic variability is investigated through technology computer-aided design (TCAD) simulations. A statistical motif compact model is used as the bridge between device technology and circuit design: the statistical variability information is transferred into the motifs' compact models in order to facilitate variation-aware circuit design. The uniform motif compact model extraction is performed by a novel two-step evolutionary algorithm. The proposed extraction method overcomes the drawbacks of conventional extraction methods, namely poor convergence in the absence of good initial conditions and difficulty in handling multi-objective optimisations. After the uniform motif compact models are obtained, the statistical variability information is injected into these compact models to generate the final motif statistical variability models. The thesis also considers the influence of different motif choices for each device on circuit performance and its statistical variability characteristics. A set of basic logic gates is constructed using different motif choices; the results show that circuit performance and variability mitigation can benefit from specific motif permutations. Finally, a multi-stage optimisation methodology is introduced, in which the optimisation process is divided into several stages. Benchmark circuits show the efficacy of the proposed methods. The results presented in this thesis indicate that the proposed methods provide circuit performance improvements and create circuits that are more robust against variability.
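    A minimal skeleton of a two-step evolutionary extraction flow in the spirit described above: a coarse global stage seeds a fine local stage, sidestepping the dependence on a good initial guess. The fitted "device" is a toy quadratic I-V model, not an actual motif compact model.

```python
# Two-step evolutionary parameter extraction: coarse global search,
# then fine local refinement seeded by the coarse result.
import random

TARGET = [(v, 0.5 * (v - 0.4) ** 2) for v in (0.6, 0.8, 1.0, 1.2)]  # (Vgs, Id)

def rms_error(k, vth):
    """Fit error of the toy model Id = k * max(Vgs - Vth, 0)^2."""
    return sum((k * max(v - vth, 0) ** 2 - i) ** 2 for v, i in TARGET) ** 0.5

def evolve(pop, spread, generations, rng):
    for _ in range(generations):
        pop.sort(key=lambda p: rms_error(*p))
        parents = pop[: len(pop) // 4]                    # truncation selection
        pop = [(k + rng.gauss(0, spread), vth + rng.gauss(0, spread))
               for k, vth in parents for _ in range(4)]   # mutated offspring
    return min(pop, key=lambda p: rms_error(*p))

rng = random.Random(7)
coarse = [(rng.uniform(0, 2), rng.uniform(0, 1)) for _ in range(40)]
k1, vth1 = evolve(coarse, spread=0.1, generations=30, rng=rng)    # step 1
k2, vth2 = evolve([(k1, vth1)] * 40, spread=0.01,
                  generations=30, rng=rng)                        # step 2
print(f"extracted k={k2:.3f}, Vth={vth2:.3f} (target k=0.5, Vth=0.4)")
```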

    Analog layout design automation: ILP-based analog routers

    The shrinking design window and high parasitic sensitivity in advanced technologies have imposed special challenges on analog and radio frequency (RF) integrated circuit design. In this thesis, we propose a new methodology to address this deficiency, based on integer linear programming (ILP) but without compromising the capability of handling any special constraints of analog routing problems. Distinct from conventional methods, our algorithm utilizes adaptive resolutions for the various routing regions: a more congested region is given a routing grid with higher resolution, whereas a less crowded routing region is assigned a lower-resolution grid. Moreover, we strengthen its capability of interconnect width control so as to route the electrical nets based on analog constraints while choosing proper interconnect widths, addressing the acute interconnect parasitics, mismatch minimization, and electromigration effects simultaneously. In addition, to tackle the performance degradation due to layout-dependent effects (LDEs) and take advantage of optical proximity correction (OPC) for resolution enhancement in subwavelength lithography, we have also proposed an innovative LDE-aware analog layout migration scheme equipped with our special routing methodology. The LDE constraints are first identified with the aid of a special sensitivity analysis and then satisfied during the layout migration process; afterwards, the electrical nets are routed by an extended OPC-inclusive ILP-based analog router to improve the final layout image fidelity while routability and analog constraints are respected. The experimental results demonstrate the effectiveness and efficiency of our proposed methods in terms of both circuit performance and image quality compared to previous works.
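    A minimal sketch of ILP-based route selection in this spirit, written with the PuLP library (assumed installed): each net picks exactly one candidate path via binary variables, shared grid edges are capacity-constrained, and total cost is minimized. The nets, paths, and costs are hypothetical; a real analog router also encodes width, symmetry, and matching constraints.

```python
# ILP route selection: binary path-choice variables, one path per net,
# at most one net per grid edge, minimum total wirelength cost.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum

# net -> list of (path_cost, set_of_grid_edges_used); all hypothetical.
candidates = {
    "netA": [(4, {"e1", "e2"}), (6, {"e3", "e4"})],
    "netB": [(3, {"e2", "e5"}), (5, {"e6"})],
}

prob = LpProblem("analog_routing", LpMinimize)
x = {(n, i): LpVariable(f"x_{n}_{i}", cat="Binary")
     for n, paths in candidates.items() for i in range(len(paths))}

# Objective: total wirelength / parasitic cost.
prob += lpSum(candidates[n][i][0] * x[n, i] for n, i in x)

# Each net takes exactly one candidate path.
for n, paths in candidates.items():
    prob += lpSum(x[n, i] for i in range(len(paths))) == 1

# Each grid edge carries at most one net (no shorts).
edges = {e for paths in candidates.values() for _, es in paths for e in es}
for e in edges:
    prob += lpSum(x[n, i] for n, i in x if e in candidates[n][i][1]) <= 1

prob.solve()
for n, i in sorted(x):
    if x[n, i].varValue == 1:
        print(n, "-> path", i)
```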

    AI/ML Algorithms and Applications in VLSI Design and Technology

    An evident challenge ahead for the integrated circuit (IC) industry in the nanometer regime is the investigation and development of methods that can reduce the design complexity ensuing from growing process variations and curtail the turnaround time of chip manufacturing. Conventional methodologies employed for such tasks are largely manual and thus time-consuming and resource-intensive. In contrast, the unique learning strategies of artificial intelligence (AI) provide numerous exciting automated approaches for handling complex and data-intensive tasks in very-large-scale integration (VLSI) design and testing. Employing AI and machine learning (ML) algorithms in VLSI design and manufacturing reduces the time and effort needed to understand and process the data within and across different abstraction levels via automated learning algorithms; this, in turn, improves IC yield and reduces the manufacturing turnaround time. This paper thoroughly reviews the AI/ML automated approaches introduced to date for VLSI design and manufacturing. Moreover, we discuss the scope of future AI/ML applications at various abstraction levels to revolutionize the field of VLSI design, aiming for high-speed, highly intelligent, and efficient implementations.