Lagrangian Time Series Models for Ocean Surface Drifter Trajectories
This paper proposes stochastic models for the analysis of ocean surface
trajectories obtained from freely-drifting satellite-tracked instruments. The
proposed time series models are used to summarise large multivariate datasets
and infer important physical parameters of inertial oscillations and other
ocean processes. Nonstationary time series methods are employed to account for
the spatiotemporal variability of each trajectory. Because the datasets are
large, we construct computationally efficient methods through the use of
frequency-domain modelling and estimation, with the data expressed as
complex-valued time series. We detail how practical issues related to sampling
and model misspecification may be addressed using semi-parametric techniques
for time series, and we demonstrate the effectiveness of our stochastic models
through application to both real-world data and numerical model output.
Comment: 21 pages, 10 figures
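As a rough, hypothetical illustration of the frequency-domain approach the abstract describes, the sketch below builds a complex-valued velocity series z = u + iv from synthetic drifter data and locates an assumed inertial-oscillation peak in its periodogram. The sampling interval, inertial frequency, and noise level are all invented for illustration, not the paper's data or model.

```python
import numpy as np

# Hypothetical illustration: drifter velocities (u east, v north) form a
# complex-valued series z = u + i*v, whose periodogram separates clockwise
# (negative-frequency) from counter-clockwise rotation, so an inertial
# oscillation appears as a peak on one side of zero frequency only.
rng = np.random.default_rng(0)
dt = 1.0                       # sampling interval in hours (assumed)
f_inertial = -0.06             # assumed clockwise inertial frequency (cycles/hour)
n = 2048
t = np.arange(n) * dt
z = np.exp(2j * np.pi * f_inertial * t)                             # oscillation
z += 0.5 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))   # noise

freqs = np.fft.fftshift(np.fft.fftfreq(n, d=dt))
periodogram = np.fft.fftshift(np.abs(np.fft.fft(z)) ** 2) / n

peak_freq = freqs[np.argmax(periodogram)]
print(f"periodogram peak at {peak_freq:.4f} cycles/hour")
```

With the signal dominating the noise, the peak lands at the nearest DFT bin to the assumed inertial frequency.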
Design and modelling of variability tolerant on-chip communication structures for future high performance system on chip designs
Incessant technology scaling has enabled the integration of functionally complex System-on-Chip (SoC) designs with a large number of heterogeneous systems on a single chip. The processing elements on these chips are integrated through on-chip communication structures, which provide the infrastructure necessary for the exchange of data and control signals while meeting stringent physical and design constraints. Extensive use of on-chip communication will be central to future designs, in which variability is an inherent characteristic. For this reason, this thesis investigates the performance and variability tolerance of typical on-chip communication structures. Understanding the relationship between variability and communication is paramount for designers seeking to devise new methods and techniques for designing performance- and power-efficient communication circuits in the face of the challenges presented by deep sub-micron (DSM) technologies.
The initial part of this work investigates the impact of device variability due to Random Dopant Fluctuations (RDF) on the timing characteristics of basic communication elements. The characterization data so obtained can be used to estimate the performance and failure probability of simple links through the methodology proposed in this work. For the Statistical Static Timing Analysis (SSTA) of larger circuits, a method for accurate estimation of the probability density functions of different circuit parameters is proposed, and its significance for pipelined circuits is highlighted. Power and area are among the most important design metrics for any integrated circuit (IC) design. This thesis emphasises the consideration of communication reliability while optimizing for power and area. A methodology is proposed for the simultaneous optimization of performance, area, power, and delay variability for a repeater-inserted interconnect. Similarly, bandwidth-driven optimizations have been performed for multi-bit parallel links. Power- and area-efficient semi-serial links, less vulnerable to delay variations than the corresponding fully parallel links, are introduced. Furthermore, owing to technology scaling, coupling noise between the link lines has become an important issue. Ever-decreasing supply voltages, and the corresponding reduction in noise margins, introduce severe challenges for timing verification in the presence of variability. For this reason, an accurate model of crosstalk noise in an interconnect as a function of time and skew is introduced in this work. This model can be used to identify the skew condition that gives the maximum delay noise, and also for efficient design verification.
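The idea of estimating link performance and failure probability under RDF can be sketched with a toy Monte Carlo model. The alpha-power delay law and every number below (stage count, voltages, nominal delay, timing budget) are illustrative assumptions, not the thesis's actual characterization methodology.

```python
import numpy as np

# Toy Monte Carlo sketch of link-delay spread under RDF-induced threshold
# variation; all device numbers are assumptions for illustration.
rng = np.random.default_rng(1)
n_stages = 10                              # repeaters along the link (assumed)
vdd, vt_nom, sigma_vt = 1.0, 0.30, 0.03    # volts (assumed)
t_nom = 20e-12                             # nominal per-stage delay in s (assumed)
alpha = 1.3                                # alpha-power-law exponent (assumed)

n_mc = 100_000
vt = rng.normal(vt_nom, sigma_vt, size=(n_mc, n_stages))
# per-stage delay scales with gate overdrive as (vdd - vt)^-alpha
stage_delay = t_nom * ((vdd - vt_nom) / (vdd - vt)) ** alpha
link_delay = stage_delay.sum(axis=1)

mean_ps = link_delay.mean() * 1e12
sigma_ps = link_delay.std() * 1e12
budget = 205e-12                           # assumed timing budget
p_fail = (link_delay > budget).mean()
print(f"mean {mean_ps:.1f} ps, sigma {sigma_ps:.2f} ps, P(fail) ~= {p_fail:.3f}")
```

Summing independent per-stage delays also shows why the relative spread of the whole link grows more slowly than that of a single stage.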
Dynamic fine-grain body biasing of caches with latency and leakage 3T1D-based monitors
In this paper, we propose a dynamically tunable fine-grain body biasing mechanism to reduce active and standby leakage power in caches under process variations.
AI/ML Algorithms and Applications in VLSI Design and Technology
An evident challenge ahead for the integrated circuit (IC) industry in the
nanometer regime is the investigation and development of methods that can
reduce the design complexity ensuing from growing process variations and
curtail the turnaround time of chip manufacturing. Conventional methodologies
employed for such tasks are largely manual; thus, time-consuming and
resource-intensive. In contrast, the unique learning strategies of artificial
intelligence (AI) provide numerous exciting automated approaches for handling
complex and data-intensive tasks in very-large-scale integration (VLSI) design
and testing. Employing AI and machine learning (ML) algorithms in VLSI design
and manufacturing reduces the time and effort for understanding and processing
the data within and across different abstraction levels via automated learning
algorithms. It, in turn, improves the IC yield and reduces the manufacturing
turnaround time. This paper thoroughly reviews the AI/ML automated approaches
introduced in the past towards VLSI design and manufacturing. Moreover, we
discuss the scope of AI/ML applications in the future at various abstraction
levels to revolutionize the field of VLSI design, aiming for high-speed, highly
intelligent, and efficient implementations.
Public expenditure and growth
Given that public spending will have a positive impact on GDP if the benefits exceed the marginal cost of public funds, the present paper deals with measuring the costs and benefits of public spending. The paper discusses one cost seldom considered in the literature and in policy debates, namely the volatility derived from additional public spending. The paper identifies a relationship between public spending volatility and consumption volatility, which implies a direct welfare loss to society. This loss is substantial in developing countries, estimated at 8 percent of consumption. If welfare losses due to volatility are this sizeable, then measuring the benefits of public spending is critical. Gauging benefits based on macro aggregate data requires three caveats: a) considering the impact of the funding (taxation) required for the additional public spending; b) differentiating between investment and capital formation; and c) allowing for heterogeneous responses of output to different types of capital and differences in network development. It is essential to go beyond country specificity to project-level evaluation of the benefits and costs of public projects. From the micro viewpoint, the rate of return of a project must exceed the marginal cost of public funds, which is determined by tax levels and structure. Credible evaluations require microeconomic evidence and careful specification of counterfactuals; here, the impact evaluation literature and its methods play a critical role. From individual project evaluation, the analyst must then contemplate the general equilibrium impacts. In general, the paper advocates for project evaluation as a central piece of any development platform. By increasing the efficiency of public spending, the government can permanently increase the rate of productivity growth and, hence, affect the growth rate of GDP.
Keywords: Public Sector Economics & Finance; Economic Theory & Research; Debt Markets; Public Sector Expenditure Analysis & Management
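The micro-level criterion above (a project's rate of return must exceed the marginal cost of public funds) can be made concrete with a deliberately simplified one-period check. The MCF value, the discount rate, and the break-even rule itself are assumptions for illustration, not the paper's model.

```python
# Simplified one-period illustration of the rate-of-return vs. MCF test.
# All numbers and the break-even rule are illustrative assumptions.
def passes_mcf_test(rate_of_return: float, mcf: float, discount: float = 0.03) -> bool:
    """A tax dollar costs society `mcf` dollars today; the project must
    return at least that cost grown at the discount rate one period later."""
    return (1 + rate_of_return) >= mcf * (1 + discount)

print(passes_mcf_test(0.12, mcf=1.25))   # modest return, rejected
print(passes_mcf_test(0.35, mcf=1.25))   # high return, accepted
```

The point of the sketch is that with an MCF well above 1, a project can have a comfortably positive return and still fail the social break-even test.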
Parametric Yield of VLSI Systems under Variability: Analysis and Design Solutions
Variability has become one of the vital challenges that the
designers of integrated circuits encounter. Imperfections in the
manufacturing process manifest themselves as variations in the
design parameters. These variations
and those in the operating environment of VLSI circuits result in
unexpected changes in the timing, power, and reliability of the
circuits. With scaling transistor dimensions, process and
environmental variations become significantly important in the
modern VLSI design. A smaller feature size means that the physical
characteristics of a device are more prone to these
unaccounted-for changes. To achieve a robust design, the random
and systematic fluctuations in the manufacturing process and the
variations in the environmental parameters should be analyzed and
the impact on the parametric yield should be addressed.
This thesis studies the challenges of, and proposes solutions
for, designing robust VLSI systems in the presence of variations.
Initially, to get some insight into the system design under
variability, the parametric yield is examined for a small circuit.
Understanding the impact of variations on the yield at the circuit
level is vital to accurately estimate and optimize the yield at
the system granularity. Motivated by the observations and results
found at the circuit level, statistical analyses are performed,
and solutions are proposed, at the system level of abstraction, to
reduce the impact of the variations and increase the parametric
yield.
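The circuit-level yield examination described above can be illustrated with a toy Monte Carlo sketch: draw supply and threshold voltage samples, evaluate simple performance models, and count the fraction of samples meeting the specs. The Gaussian variation model, the first-order delay and leakage expressions, and the specs are all assumptions made up for illustration, not the thesis's actual models.

```python
import numpy as np

# Hedged sketch: parametric yield estimated as the fraction of Monte Carlo
# samples of a small circuit meeting both a delay and a power spec.
rng = np.random.default_rng(2)
n_mc = 200_000

vdd = rng.normal(1.00, 0.03, n_mc)    # supply voltage samples (V, assumed)
vt = rng.normal(0.30, 0.02, n_mc)     # threshold voltage samples (V, assumed)

# toy first-order behaviour: delay falls with gate overdrive, leakage
# power grows exponentially as the threshold voltage drops
delay = 1.0 / (vdd - vt)                               # arbitrary units
power = vdd**2 + 5.0 * np.exp(-(vt - 0.30) / 0.026)    # arbitrary units

delay_spec, power_spec = 1.55, 9.0                     # assumed specs
yield_frac = np.mean((delay <= delay_spec) & (power <= power_spec))
print(f"estimated parametric yield ~= {yield_frac:.3f}")
```

The same structure exposes the power-performance trade-off central to design centering: tightening one spec loosens the feasible region for the other, since low thresholds help delay but hurt leakage.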
At the circuit level, the impact of the supply and threshold
voltage variations on the parametric yield is discussed. Here, a
design centering methodology is proposed to maximize the
parametric yield and optimize the power-performance trade-off
under variations. In addition, the scaling trend in the yield loss
is studied. Also, some considerations for design centering in the
current and future CMOS technologies are explored.
The investigation, at the circuit level, suggests that the
operating temperature significantly affects the parametric yield.
In addition, the yield is very sensitive to the magnitude of the
variations in supply and threshold voltage. Therefore, the spatial
nature of process and environmental variations makes it necessary
to analyze the yield at a higher granularity. Here,
temperature and voltage variations are mapped across the chip to
accurately estimate the yield loss at the system level.
At the system level, initially the impact of process-induced
temperature variations on the power grid design is analyzed. Also,
an efficient verification method is provided that ensures the
robustness of the power grid in the presence of variations. Then,
a statistical analysis of the timing yield is conducted, by taking
into account both the process and environmental variations. By
considering the statistical profile of the temperature and supply
voltage, the process variations are mapped to the delay variations
across a die. This ensures an accurate estimation of the timing
yield. In addition, a method is proposed to accurately estimate
the power yield considering process-induced temperature and supply
voltage variations. This helps check the robustness of the
circuits early in the design process.
Lastly, design solutions are presented to reduce the power
consumption and increase the timing yield under the variations. In
the first solution, a guideline for floorplanning optimization in
the presence of temperature variations is offered. Non-uniformity
in the thermal profiles of integrated circuits is an issue that
impacts the parametric yield and threatens chip reliability.
Therefore, the correlation between the total power consumption and
the temperature variations across a chip is examined. As a result,
floorplanning guidelines are proposed that use this correlation to
efficiently optimize the chip's total power while taking thermal
uniformity into account.
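The floorplanning intuition can be sketched with a toy 1-D thermal model in which each block slot heats its immediate neighbours; the coupling coefficient and block power values are invented for illustration and stand in for a real thermal simulation.

```python
import numpy as np

# Toy 1-D thermal model: interleaving hot and cool blocks gives a lower,
# more uniform peak temperature than clustering the hot ones together.
# Coupling coefficient and power values are illustrative assumptions.
def thermal_profile(power, coupling=0.3):
    temp = power.astype(float).copy()
    temp[:-1] += coupling * power[1:]    # heat received from right neighbour
    temp[1:] += coupling * power[:-1]    # heat received from left neighbour
    return temp

clustered = np.array([5.0, 5.0, 5.0, 1.0, 1.0, 1.0])   # hot blocks adjacent
spread = np.array([5.0, 1.0, 5.0, 1.0, 5.0, 1.0])      # hot blocks interleaved

print("clustered peak:", thermal_profile(clustered).max())   # hottest spot
print("spread peak:   ", thermal_profile(spread).max())
```

Both floorplans dissipate the same total power; only the spatial arrangement changes the peak, which is the non-uniformity the guideline targets.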
The second design solution provides an optimization methodology
for assigning the power supply pads across the chip for maximizing
the timing yield. A mixed-integer nonlinear programming (MINLP)
optimization problem, subject to voltage-drop and current
constraints, is efficiently solved to find the optimum number and
locations of the pads.
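As a scaled-down stand-in for the MINLP formulation, the sketch below exhaustively searches a handful of candidate pad sites on a toy die and picks the pair minimising the worst IR drop under a crude distance-times-current model. The coordinates, load currents, and resistance figure are all assumptions, not the thesis's formulation or data.

```python
import itertools
import numpy as np

# Exhaustive toy version of pad assignment: choose 2 supply-pad sites out
# of a few candidates so that the worst IR drop, modelled crudely as
# resistance x distance-to-nearest-pad x local current, is minimised.
def worst_drop(pads, loads, r_per_mm=0.05):
    drops = []
    for x, y, current in loads:
        d = min(np.hypot(x - px, y - py) for px, py in pads)  # nearest pad
        drops.append(r_per_mm * d * current)
    return max(drops)

candidates = [(0, 0), (0, 10), (10, 0), (10, 10), (5, 5)]  # candidate sites (mm)
loads = [(2, 3, 1.0), (8, 8, 2.0), (5, 1, 1.5)]            # (x, y, current in A)

best = min(itertools.combinations(candidates, 2),
           key=lambda pads: worst_drop(pads, loads))
print("best pad pair:", best, "-> worst drop:", round(worst_drop(best, loads), 3))
```

Brute force only works at this toy scale; the discrete site choice combined with the nonlinear drop model is exactly what makes the full-chip problem an MINLP.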
Regular cell design approach considering lithography-induced process variations
The repeated deployment delays of EUVL force IC design to continue using 193nm-wavelength lithography, with innovative and costly techniques, in order to faithfully print sub-wavelength features and combat lithography-induced process variations. The effect of the lithography gap in current and upcoming technologies is to cause severe distortions in the printed patterns due to optical diffraction, thus degrading manufacturing yield. Therefore, a paradigm shift in layout design towards more regular, litho-friendly cell designs is mandatory in order to improve line-pattern resolution. However, it is still unclear how much layout regularity can be introduced, and how to measure the benefits and weaknesses of regular layouts.
This dissertation is focused on determining the degree of layout regularity necessary to combat lithography variability and improve the layout quality of a design. The main contributions addressing this objective are: (1) the definition of several layout design guidelines to mitigate lithography variability; (2) the proposal of a parametric yield estimation model to evaluate the impact of lithography on layout design; (3) the development of a global Layout Quality Metric (LQM), including a Regularity Metric (RM) to capture the degree of regularity of a layout implementation; and (4) the creation of different layout architectures exploiting the benefits of layout regularity to improve line-pattern resolution, referred to as Adaptive Lithography Aware Regular Cell Designs (ALARCs).
The first part of this thesis provides several regular layout design guidelines, derived from lithography simulations, that minimize several important lithography-related variation sources. Moreover, a design-level methodology, referred to as gate biasing, is proposed to overcome systematic layout-dependent variations, across-field variations, and the non-rectilinear gate (NRG) effect in regular fabrics by properly configuring the drawn transistor channel length.
The second part of this dissertation proposes a lithography yield estimation model to predict, with a reduced set of lithography simulations, the amount of lithography distortion expected in a printed layout due to lithography hotspots. An efficient lithography-hotspot framework is presented that identifies the different layout pattern configurations, simplifies them to ease pattern analysis, and classifies them according to the lithography degradation predicted by simulation. The yield model is calibrated with delay measurements of a reduced set of identical test circuits implemented in a 40nm CMOS technology, so actual silicon data is utilized to obtain a more realistic yield estimation.
The third part of this thesis presents a configurable Layout Quality Metric (LQM) that considers several layout aspects and provides a global evaluation of a layout design with a single score. The LQM can be tuned by assigning different weights to each evaluation metric or by modifying the parameters under analysis; here, it is configured with two different sets of partial metrics. Note that the LQM includes a regularity metric (RM) in order to capture the degree of layout regularity applied in a layout design.
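In the spirit of the configurable metric described above, a weighted-sum score can be sketched in a few lines; the partial-metric names, scores, and weights are illustrative assumptions, not the thesis's calibrated LQM.

```python
# Weighted-sum sketch of a configurable layout-quality score; all names,
# scores, and weights below are illustrative assumptions.
def layout_quality(metrics: dict, weights: dict) -> float:
    """Combine per-aspect scores in [0, 1] into one global score."""
    total_weight = sum(weights.values())
    return sum(weights[k] * metrics[k] for k in weights) / total_weight

metrics = {"regularity": 0.9, "area": 0.7, "litho_yield": 0.8}   # assumed scores
weights = {"regularity": 2.0, "area": 1.0, "litho_yield": 3.0}   # assumed weights
print(f"layout quality score = {layout_quality(metrics, weights):.3f}")
```

Reweighting is what makes the metric configurable: the same partial scores can favour regularity for one design flow and lithographic yield for another.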
Lastly, this thesis presents different ALARC designs for a 40nm technology using different degrees of layout regularity and different area overheads. The quality of the gridded regular templates is demonstrated by automatically creating a library of 266 combinational and sequential cells and synthesizing several ITC'99 benchmark circuits. The regular cell libraries present only a 9% area penalty compared with the 2D standard cell designs used for comparison, thus providing area-competitive designs. The layout evaluation of the benchmark circuits using the LQM shows that regular layouts can outperform other 2D standard cell designs, depending on the layout implementation.