55 research outputs found
System level performance and yield optimisation for analogue integrated circuits
Advances in silicon technology over the last decade have led to increased integration of analogue and digital functional blocks onto the same chip. In such a mixed-signal environment, the analogue circuits must use the same process technology as their digital neighbours. With shrinking transistor sizes, the impact of process variations on analogue design has become prominent and can cause circuit performance to fall below specification, reducing the yield.

This thesis explores the methodology and algorithms for an analogue integrated circuit automation tool that optimises performance and yield. The trade-offs between performance and yield are analysed using a combination of an evolutionary algorithm and Monte Carlo simulation. By integrating yield as a parameter into the optimisation process, the trade-offs between the performance functions can be handled more effectively, producing a higher yield. The results obtained from the performance and variation exploration are modelled behaviourally in the Verilog-A language. The model has been verified against transistor-level simulation and a silicon prototype.

For a large analogue system, the circuit is commonly broken down into its constituent sub-blocks, a process known as hierarchical design. Hierarchical design and optimisation simplify the design task and accelerate the design flow by encouraging design reuse.

A new approach for system-level yield optimisation using hierarchical design is proposed and developed. The approach combines the Multi-Objective Bottom-Up (MUBU) modelling technique, which models circuit performance and variation, with the Top-Down Constraint Design (TDCD) technique for complete system-level design. The proposed method has been used to design a 7th-order low-pass filter and a charge-pump phase-locked loop system.
The results have been verified with transistor-level simulations and suggest that accurate system-level performance and yield prediction can be achieved with the proposed methodology.
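The trade-off analysis described above combines an evolutionary search with Monte Carlo yield estimation. A minimal sketch of that loop follows; the surrogate performance function, specification limit, and mutation scheme are invented for illustration and stand in for the circuit-level simulations the thesis actually uses.

```python
import random

# Sketch of Monte Carlo yield estimation inside a (1+1) evolutionary loop.
# The surrogate "performance" model, spec limit, and variation magnitudes
# are illustrative stand-ins, not the thesis's circuit models.

def performance(w, l, dvt):
    # Toy surrogate: gain depends on the W/L ratio and a threshold-voltage
    # deviation dvt drawn from the process-variation distribution.
    return 100.0 * (w / l) - 50.0 * dvt

def estimate_yield(w, l, spec=95.0, n=2000, sigma_vt=0.05, seed=1):
    # Yield = fraction of Monte Carlo samples meeting the specification.
    rng = random.Random(seed)
    passed = sum(performance(w, l, rng.gauss(0.0, sigma_vt)) >= spec
                 for _ in range(n))
    return passed / n

def evolve(w, l, steps=50, seed=2):
    # Crude (1+1) evolutionary step: mutate the sizing, keep improvements.
    rng = random.Random(seed)
    best = (w, l, estimate_yield(w, l))
    for _ in range(steps):
        w2 = best[0] * rng.uniform(0.95, 1.05)
        l2 = best[1] * rng.uniform(0.95, 1.05)
        y2 = estimate_yield(w2, l2)
        if y2 > best[2]:
            best = (w2, l2, y2)
    return best
```

Treating the Monte Carlo yield itself as the fitness value is what lets the search trade nominal performance against robustness to variation.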
Adjoint-based geometry optimisation with applications to automotive fuel injector nozzles
Methods of Computational Fluid Dynamics (CFD) have matured, over the last 30 years, to a stage where it is possible to gain substantial insight into fluid flow processes of engineering relevance. However, the motives of fluid dynamic engineers typically go well beyond the level of improved understanding, to the pragmatic aim of improving the performance of the engineering systems under consideration. It is in recognition of these circumstances that the present thesis investigates the use of automated design optimisation methodologies in order to extend the power of CFD as an engineering design tool. Optimum design problems require the merit or performance of designs to be measured explicitly in terms of an objective function. At the same time, it may be required that one or more constraints should be satisfied. To describe allowable variations in design, shape parameterisation using basic geometric entities such as straight lines and arcs is employed. Taking advantage of previous experience in the research group concerning cavitating flows, a fully automated method for nozzle design/optimisation was developed. The optimisation is performed by means of discharge coefficient (Cd) maximisation. The objective is to design nozzle hole shapes that maximise the nozzle Cd for a given basic nozzle geometry (i.e. needle and sac profile) and reduce or even eliminate the negative pressure region formed at the entry of the injection hole. The deterministic optimisation model was developed and implemented in the in-house RANS CFD code to provide nozzle shapes with pre-defined flow/performance characteristics. The required gradients are calculated using the continuous adjoint technique. A parameterisation scheme, suitable for nozzle design, was developed. The localised region around the hole inlet, where cavitation inception appears, is parameterised and modified during the optimisation procedure, while the rest of the nozzle remains unaffected.
The parameters modifying the geometry are the radius of curvature and the diameter of the hole inlet or exit, as well as the relative needle seat angle. The steepest descent method has been used to drive the calculated gradients to zero and update the design parameters. For the validation of the model, two representative inverse design cases have been selected. Studies showing the behaviour of the model under different numerical and optimisation parameters are also presented. For the purpose of optimising the geometries, a cost function intended to maximise the discharge coefficient was defined. At the same time it serves the purpose of restructuring geometries such that cavitation inception in the hole entrance is controlled or eliminated. This is identified in the steady-state mode by a reduction of the volume of negative relative pressure appearing in the hole entrance. Results of cavitation control on some representative nozzle geometries show significant benefits gained by the use of the developed method. This is mainly because the developed model performs optimisation on numerous parametric combinations automatically. Results showed that, by using the proposed method, geometries with larger Cd values can be achieved and cavitation inception can, in some cases, be completely eliminated. Cases where all the parameters were combined to redesign the geometry required less modification to achieve larger Cd values than cases where each parameter was modified individually. This is an important result since manufacturers seek improved product performance from the fewest possible geometry modifications
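The gradient-driven update loop described above can be sketched as follows. The quadratic cost is a toy stand-in for the adjoint-based discharge-coefficient objective; the parameter names mirror the abstract (hole-inlet radius of curvature, hole diameter, needle seat angle) but their interaction and optimum are invented for illustration.

```python
# Illustrative steepest-descent driver. The real method computes gradients
# with the continuous adjoint technique; here central finite differences on
# a toy quadratic cost stand in for it.

def cost(p):
    # Toy objective: minimised at radius=0.3, diameter=0.2, angle=60 (assumed).
    r, d, a = p
    return (r - 0.3) ** 2 + (d - 0.2) ** 2 + ((a - 60.0) / 100.0) ** 2

def gradient(p, h=1e-6):
    # Central finite differences, one design parameter at a time.
    g = []
    for i in range(len(p)):
        up = list(p); up[i] += h
        dn = list(p); dn[i] -= h
        g.append((cost(up) - cost(dn)) / (2 * h))
    return g

def steepest_descent(p, step=0.5, iters=200):
    # Step against the gradient until it is driven towards zero.
    for _ in range(iters):
        g = gradient(p)
        p = [pi - step * gi for pi, gi in zip(p, g)]
    return p
```

In the thesis each evaluation of the cost and its gradient is a full CFD/adjoint solve, which is why automating the parameter updates pays off.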
Numerical and Evolutionary Optimization 2020
This book grew out of the 8th International Workshop on Numerical and Evolutionary Optimization (NEO) and collects papers at the intersection of the two research areas covered by the workshop: numerical optimization and evolutionary search techniques. While focusing on the design of fast and reliable methods lying across these two paradigms, the resulting techniques are applicable to a broad class of real-world problems, such as pattern recognition, routing, energy, production lines, prediction, and modeling, among others. This volume is intended to serve as a useful reference for mathematicians, engineers, and computer scientists exploring current issues and solutions emerging from these mathematical and computational methods and their applications
Engineering Education and Research Using MATLAB
MATLAB is a software package used primarily in the field of engineering for signal processing, numerical data analysis, modeling, programming, simulation, and computer graphic visualization. In the last few years, it has become widely accepted as an efficient tool, and, therefore, its use has significantly increased in scientific communities and academic institutions. This book consists of 20 chapters presenting research works using MATLAB tools. Chapters include techniques for programming and developing Graphical User Interfaces (GUIs), dynamic systems, electric machines, signal and image processing, power electronics, mixed signal circuits, genetic programming, digital watermarking, control systems, time-series regression modeling, and artificial neural networks
Inter-chip communications in an analogue neural network utilising frequency division multiplexing
As advances have been made in semiconductor processing technology, the number of transistors on a chip has increased out of step with the number of input/output pins, which has introduced a communications 'bottleneck' in the design of computer architectures. This is a major issue in the hardware design of parallel structures implemented in either digital or analogue VLSI, and is particularly relevant to the design of neural networks, which need to be highly interconnected.
This work reviews hardware implementations of neural networks, with an emphasis on analogue implementations, and proposes a new method for overcoming connectivity constraints, by the use of Frequency Division Multiplexing (FDM) for the inter-chip communications. In this FDM scheme, multiple analogue signals are transmitted between chips on a single wire by modulating them at different frequencies.
The main theoretical work examines the number of signals which can be packed into an FDM channel, depending on the quality factors of the filters used for the demultiplexing, and a fractional overlap parameter which was defined to take into account the inevitable overlapping of filter frequency responses. It is seen that by increasing the amount of permissible overlap, it is possible to communicate a larger number of signals in a given bandwidth.
Alternatively, the quality factors of the filters can be reduced, which is advantageous for hardware implementation. Therefore, it was found necessary to determine the amount of overlap which might be permissible in a neural network implementation utilising FDM communications.
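Under a simple spacing model, the interplay between filter quality factor, fractional overlap, and channel count can be sketched as follows. The model (constant-Q bandpass filters of width f/Q, with adjacent centre frequencies spaced (1 - overlap) filter bandwidths apart) is an illustrative assumption, not the exact analysis carried out in the thesis.

```python
# Hypothetical sketch of overlap vs. channel count in an FDM link.
# Assumption: each demultiplexing filter centred at f has -3 dB bandwidth
# f / q, and successive centres are spaced (1 - overlap) times the current
# filter's bandwidth apart.

def channels_in_band(f_low, f_high, q, overlap):
    """Count how many filter channels fit between f_low and f_high."""
    f = f_low
    count = 0
    while f <= f_high:
        count += 1
        f += (1.0 - overlap) * (f / q)
    return count
```

With this model, raising the permissible overlap at fixed Q packs noticeably more channels into the same band, which is the trade-off the theoretical work quantifies.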
A software simulator is described, which was designed to test the effects of overlap on Multilayer Perceptron neural networks. Results are presented for networks trained with the backpropagation algorithm, and with the alternative weight perturbation algorithm. These were carried out using both floating point and quantised weights to examine the combined effects of overlap and weight quantisation. It is shown, using examples of classification problems, that the neural network learning is indeed highly tolerant to overlap, such that the effect on performance (i.e. on convergence or generalisation) is negligible for fractional overlaps of up to 30%, and some tolerance is achieved for higher overlaps, before failure eventually occurs. The results of the simulations are followed up by a closer examination of the mechanism of network failure.
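The quantised-weight experiments can be illustrated with a uniform quantiser of the kind such simulators typically apply to the weights; the 8-bit width and symmetric [-w_max, w_max] range below are assumptions for illustration, not figures taken from this abstract.

```python
# Uniform symmetric weight quantiser, illustrating quantised-weight
# training experiments. The 8-bit width and [-w_max, w_max] range are
# assumptions made for this sketch.

def quantise(w, bits=8, w_max=1.0):
    levels = 2 ** (bits - 1) - 1          # 127 positive levels for 8 bits
    step = w_max / levels                 # quantisation step size
    clipped = max(-w_max, min(w_max, w))  # clip to the representable range
    return round(clipped / step) * step
```

Applying such a quantiser to every weight after each update lets the simulator study how quantisation noise interacts with the FDM overlap.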
The last section of the thesis investigates the VLSI implementation of the FDM scheme, and proposes the use of the operational transconductance amplifier (OTA) as a building block for implementation of the FDM circuitry in analogue VLSI.
A full custom VLSI design of an OTA is presented, which was designed and fabricated through Eurochip, using HSPICE/Mentor Graphics CAD tools and the Mietec 2.4 µm CMOS process. A VLSI architecture for inter-chip FDM is also proposed, using adaptive tuning of the OTA-C filters and oscillators. This forms the basis for a program of further work towards the VLSI realisation of inter-chip FDM, which is outlined in the conclusions chapter
Feedforward artificial neural network design utilising subthreshold mode CMOS devices
This thesis reviews various previously reported techniques for simulating artificial
neural networks and investigates the design of fully-connected feedforward networks
based on MOS transistors operating in the subthreshold mode of conduction as they are
suitable for performing compact, low power, implantable pattern recognition systems.
The principal objective is to demonstrate that the transfer characteristic of the devices
can be fully exploited to design basic processing modules which overcome the problems
of linearity range, weight resolution, processing speed, noise, and component mismatch
associated with weak-inversion conduction, and so can be used to implement
networks which can be trained to perform practical tasks.
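The weak-inversion transfer characteristic that these modules exploit is exponential in the gate drive. A minimal sketch of the standard law follows; the prefactor and slope factor are illustrative values, not parameters of the thesis's process.

```python
import math

# Standard weak-inversion drain-current law in saturation, neglecting the
# Early effect: I_D = I_0 * exp(V_GS / (n * V_T)). Parameter values below
# are illustrative assumptions, not device data from the thesis.

I0 = 1e-12      # leakage-scale prefactor, amps (assumed)
N_SLOPE = 1.5   # subthreshold slope factor n (assumed)
VT = 0.0259     # thermal voltage kT/q at ~300 K, volts

def subthreshold_current(vgs):
    # Drain current as a function of gate-source voltage, in amps.
    return I0 * math.exp(vgs / (N_SLOPE * VT))
```

With these values the current rises roughly tenfold for every n·V_T·ln(10), about 90 mV, of gate drive; that exponential characteristic at nanoampere currents is what makes compact, low-power synapse and thresholding cells possible.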
A new four-quadrant analogue multiplier, one of the most important cells in the
design of artificial neural networks, is developed. Analytical as well as simulation
results suggest that the new scheme can efficiently be used to emulate both the synaptic
and thresholding functions. To complement this thresholding-synapse, a novel
current-to-voltage converter is also introduced. The characteristics of the well known
sample-and-hold circuit as a weight memory scheme are analytically derived and
simulation results suggest that a dummy compensated technique is required to obtain the
required minimum of 8 bits weight resolution. Performance of the combined load and
thresholding-synapse arrangement as well as an on-chip update/refresh mechanism are
analytically evaluated and simulation studies on the Exclusive OR network as a
benchmark problem are provided and indicate a useful level of functionality.
Experimental results on the Exclusive OR network and a 'QRS' complex detector
based on a 10:6:3 multilayer perceptron are also presented and demonstrate the potential
of the proposed design techniques in emulating feedforward neural networks
System identification for crash victim simulation
The work presented in this thesis concerns the identification of vehicle occupant
models. Mathematical models of the vehicle occupant are used in the preliminary
design and development phase of vehicle design. In the design phase, the model is
used to guide the decision on restraint system feasibility. In the development phase
the model is used to suggest solutions to problems associated with the dummy
trajectory or restraint system performance.
Current methods used to determine such models involve independent component
testing. The conditions under which the components are tested are often not typical
of a crash test; hence, iterations of the computer model are needed to successively
improve model and test correlation.
To address these problems, which cause inaccurate specification of the
mathematical models, an alternative method of data set assembly for crash victim
models is suggested. This alternative method is based on the techniques of system
identification which allow unknown system parameters to be determined from
experimental input/output data.
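The core idea, determining unknown system parameters from experimental input/output data, can be sketched with a one-degree-of-freedom occupant surrogate. The spring-damper model, parameter names, and all values below are invented for illustration and are not the occupant models used in the thesis.

```python
# System identification by linear least squares: estimate unknown restraint
# stiffness k and damping c in a toy model m*a = -k*x - c*v + u from sampled
# displacement x, velocity v, acceleration a, and external force u.
# The model and numbers are illustrative assumptions only.

def identify_kc(xs, vs, accs, us, m=75.0):
    # Rearranged regression: m*a - u = -k*x - c*v. Solve the 2x2 normal
    # equations for the coefficients of x and v, then flip the signs.
    sxx = sum(x * x for x in xs)
    svv = sum(v * v for v in vs)
    sxv = sum(x * v for x, v in zip(xs, vs))
    ys = [m * a - u for a, u in zip(accs, us)]
    sxy = sum(x * y for x, y in zip(xs, ys))
    svy = sum(v * y for v, y in zip(vs, ys))
    det = sxx * svv - sxv * sxv
    k = -(svv * sxy - sxv * svy) / det
    c = -(sxx * svy - sxv * sxy) / det
    return k, c
```

Fitting the parameters directly to crash-test input/output records in this way is what removes the mismatch introduced by testing components under non-crash conditions.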
Initially the viability of using system identification techniques to develop a valid
mathematical model of the vehicle occupant and restraint system was investigated.
This initial study used input and output measurements from computer simulations of
the occupant in frontal impact, as source data for the identification. Effects of
simulated disturbances (noise corrupted output signals) and the effects of simplified
model structure on the identification are also investigated. Several methods for
analysing the likely errors in the identified parameters are defined and discussed in
this simulation study.
Results relating to the identification of seat contact and seat belt characteristics from
physical tests are also presented and these are interpreted in light of the simulation
results
Variation-aware behavioural modelling using support vector machines and affine arithmetic
The Generalised Interval Arithmetic Simulator (AGIAS) is a specialised simulator which
uses affine arithmetic to model parameter variations. It uses a specialised root-finding
algorithm to simulate analogue circuits with parameter variations in one single simulation
run. This is a significant speed-up compared to the multiple runs needed by industry-standard
solutions such as Monte Carlo (MC) analysis or Worst-Case Analysis (WCA). Currently, AGIAS
can simulate analogue circuits only under very specific conditions. In many cases, circuits
can only be simulated for certain operating points. If a circuit is to be evaluated at other
operating points, the solver becomes numerically unstable and the simulation fails. In these
cases, interval widths approach infinity.
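The affine-arithmetic representation underlying AGIAS can be sketched minimally: each quantity is a nominal value plus deviation terms on shared noise symbols, so correlated variations cancel where plain interval arithmetic would not. The class below, which implements only addition and scalar multiplication, illustrates the representation, not AGIAS's root-finding solver.

```python
# Minimal affine form x = x0 + sum_i x_i * eps_i with each eps_i in [-1, 1].
# Shared noise symbols track correlation between quantities. Illustration
# only; this is not AGIAS's solver.

class Affine:
    _next_eps = 0

    def __init__(self, x0, terms=None):
        self.x0 = x0                      # nominal (centre) value
        self.terms = dict(terms or {})    # eps index -> partial deviation

    @classmethod
    def variable(cls, nominal, deviation):
        # Allocate a fresh independent noise symbol for a new quantity.
        cls._next_eps += 1
        return cls(nominal, {cls._next_eps: deviation})

    def radius(self):
        # Worst-case total deviation from the nominal value.
        return sum(abs(t) for t in self.terms.values())

    def interval(self):
        r = self.radius()
        return (self.x0 - r, self.x0 + r)

    def __add__(self, other):
        # Exact: matching noise terms combine, so correlation is preserved.
        terms = dict(self.terms)
        for i, t in other.terms.items():
            terms[i] = terms.get(i, 0.0) + t
        return Affine(self.x0 + other.x0, terms)

    def scale(self, a):
        # Exact scalar multiplication.
        return Affine(a * self.x0, {i: a * t for i, t in self.terms.items()})
```

Because x and x.scale(-1) share the same noise symbol, x + x.scale(-1) has zero radius, whereas naive interval arithmetic would double the width; this cancellation is the key advantage of the affine representation.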
Behavioural modelling of analogue circuits was introduced by researchers working around
the limitations of simulators. Most early approaches require expert knowledge of, and insight
into, the circuit being modelled. In recent years, Machine Learning techniques for the automatic
generation of behavioural models have made their way into the field. This thesis combines
Machine Learning techniques with affine arithmetic to include the effects of parameter
variations in the models.
Support Vector Machines (SVMs) train two sets of parameters: slope parameters
and an offset parameter. These parameters are replaced by affine forms. Using these two
sets of parameters allows affine SVMs to model the effects of parameter variations with varying
interval widths. Training requires information about maximum and minimum values in addition to
the nominal values in the data set. Based on these changes, affine ε Support Vector Machine
(ε̂SVR) and ν Support Vector Machine (ν̂SVR) algorithms for regression are presented. To
train the affine parameters directly and benefit from the selectivity of the Sequential
Minimal Optimisation (SMO) algorithm, SMO is extended to handle the new, larger optimisation
problems.
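The idea of replacing the trained slope and offset parameters by affine forms can be shown for a bare linear predictor: a single evaluation then yields an output band rather than a point. This is a deliberate simplification of the affine SVR formulation, with all names and numbers invented for illustration.

```python
# Sketch of the core idea: a linear predictor whose slope and offset carry
# affine deviation terms, so one evaluation produces output bounds. This is
# a simplification of the affine SVR formulation, for illustration only.

def affine_predict(x, w0, w_dev, b0, b_dev):
    """Return (lower, upper) output bounds for input x.

    Assumes w = w0 + w_dev * eps1 and b = b0 + b_dev * eps2,
    with each eps in [-1, 1].
    """
    nominal = w0 * x + b0
    radius = abs(w_dev * x) + abs(b_dev)
    return nominal - radius, nominal + radius
```

The band width varies with the input (the slope deviation is scaled by x), which is what lets the affine SVMs model variation effects with varying widths rather than a single worst-case margin.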
The new affine SVMs are tested on analogue circuits that were chosen based on
whether they could be simulated with AGIAS and on how strongly non-linear their
characteristic functions are