
    System level performance and yield optimisation for analogue integrated circuits

    Advances in silicon technology over the last decade have led to increased integration of analogue and digital functional blocks onto the same chip. In such a mixed-signal environment, the analogue circuits must use the same process technology as their digital neighbours. With shrinking transistor sizes, the impact of process variations on analogue design has become prominent and can cause circuit performance to fall below specification, reducing the yield. This thesis explores the methodology and algorithms for an analogue integrated circuit automation tool that optimises performance and yield. The trade-offs between performance and yield are analysed using a combination of an evolutionary algorithm and Monte Carlo simulation. By integrating yield as a parameter in the optimisation process, the trade-off between the performance functions can be treated so as to produce a higher yield. The results obtained from the performance and variation exploration are modelled behaviourally using the Verilog-A language. The model has been verified against transistor-level simulation and a silicon prototype. For a large analogue system, the circuit is commonly broken down into its constituent sub-blocks, a process known as hierarchical design. Hierarchical design and optimisation simplify the design task and accelerate the design flow by encouraging design reuse. A new approach for system-level yield optimisation using hierarchical design is proposed and developed. The approach combines a Multi-Objective Bottom Up (MUBU) modelling technique, which models circuit performance and variation, with a Top Down Constraint Design (TDCD) technique for complete system-level design. The proposed method has been used to design a 7th-order low-pass filter and a charge pump phase-locked loop system. The results have been verified with transistor-level simulations and suggest that accurate system-level performance and yield prediction can be achieved with the proposed methodology.
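
    As a purely illustrative sketch of the combined evolutionary/Monte Carlo idea described above (not the thesis tool), the following Python snippet estimates yield by Monte Carlo sampling of a hypothetical process-variation model and folds that estimate into a simple evolutionary loop; the performance model, specification value and variation statistics are all made-up assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def performance(design, process_shift):
        # Hypothetical gain model: nominal gain set by two design parameters,
        # degraded by a random process shift (purely illustrative).
        w, l = design
        return 20.0 * w / l - 5.0 * process_shift

    def mc_yield(design, spec=40.0, n_samples=500, sigma=0.3):
        # Yield estimate: fraction of Monte Carlo process samples meeting the spec.
        shifts = rng.normal(0.0, sigma, n_samples)
        gains = np.array([performance(design, s) for s in shifts])
        return float(np.mean(gains >= spec))

    def evolve(pop_size=20, generations=30):
        # Simple evolutionary loop whose fitness blends nominal performance and yield,
        # so the performance/yield trade-off is optimised jointly.
        pop = rng.uniform(1.0, 10.0, size=(pop_size, 2))
        for _ in range(generations):
            fitness = np.array([mc_yield(d) + 0.1 * performance(d, 0.0) / 40.0
                                for d in pop])
            parents = pop[np.argsort(fitness)[-pop_size // 2:]]
            children = parents + rng.normal(0.0, 0.2, parents.shape)
            pop = np.vstack([parents, np.clip(children, 1.0, 10.0)])
        best = pop[np.argmax([mc_yield(d) for d in pop])]
        return best, mc_yield(best)

    if __name__ == "__main__":
        best_design, est_yield = evolve()
        print("best design:", best_design, "estimated yield:", est_yield)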

    Invited Review: Recent developments in vibration control of building and bridge structures

    This paper presents a state-of-the-art review of recent articles on active, passive, semi-active and hybrid vibration control systems for structures under dynamic loading, published primarily since 2013. Active control systems include active mass dampers, active tuned mass dampers, distributed mass dampers, and active tendon control. Passive systems include tuned mass dampers (TMD), particle TMD, tuned liquid particle damper, tuned liquid column damper (TLCD), eddy-current TMD, tuned mass generator, tuned-inerter dampers, magnetic negative stiffness device, resetting passive stiffness damper, re-entering shape memory alloy damper, viscous wall dampers, viscoelastic dampers, and friction dampers. Semi-active systems include tuned liquid damper with floating roof, resettable variable stiffness TMD, variable friction dampers, semi-active TMD, magnetorheological dampers, leverage-type stiffness controllable mass damper, and semi-active friction tendon. Hybrid systems include shape memory alloy-liquid column damper, shape memory alloy-based damper, and TMD-high damping rubber.

    Risk Management using Model Predictive Control

    Forward planning and risk management are crucial for the success of any system or business dealing with the uncertainties of the real world. Previous approaches have largely assumed that the future will resemble the past, or have used simple forecasting techniques based on ad hoc models. Improving on these solutions requires better projection of future events, and necessitates robust forward planning techniques that account for forecasting inaccuracies. This work advocates risk management through optimal control theory, and proposes several techniques to combine it with time-series forecasting. Focusing on applications in foreign exchange (FX) and battery energy storage systems (BESS), the contributions of this thesis are threefold. First, a short-term risk management system for FX dealers is formulated as a stochastic model predictive control (SMPC) problem in which optimal risk-cost profiles are obtained through dynamic control of the dealers' positions on the spot market. Second, grammatical evolution (GE) is used to automate non-linear time-series model selection, validation, and forecasting. Third, a novel measure for evaluating forecasting models, as part of the predictive model in finite-horizon optimal control applications, is proposed. Using both synthetic and historical data, the proposed techniques were validated and benchmarked. The stochastic FX risk management system was shown to achieve better risk management on a risk-cost Pareto frontier than rule-based hedging strategies, with up to 44.7% lower cost for the same level of risk. Similarly, for a real-world BESS application, the GE-optimised forecasting models outperformed other prediction models by at least 9%, improving the overall peak shaving capacity of the system to 57.6%.
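
    The FX hedging formulation above is a full stochastic MPC problem; the Python sketch below only illustrates the underlying idea for a single step: choose a hedge fraction that trades expected transaction cost against residual risk over a set of sampled return scenarios. The cost model, parameter values and scenario generator are hypothetical assumptions, not the thesis formulation.

    import numpy as np

    rng = np.random.default_rng(1)

    def hedge_decision(position, scenarios, transaction_cost=0.0005, risk_weight=2.0):
        # Choose the fraction of an open FX position to hedge on the spot market,
        # trading expected hedging cost against the spread of the residual P&L
        # across sampled return scenarios (the stochastic forecast).
        candidates = np.linspace(0.0, 1.0, 101)
        best_h, best_obj = 0.0, np.inf
        for h in candidates:
            residual = position * (1.0 - h)          # exposure left after hedging
            pnl = residual * scenarios               # P&L across scenarios
            cost = abs(position * h) * transaction_cost - pnl.mean()
            obj = cost + risk_weight * pnl.std()
            if obj < best_obj:
                best_h, best_obj = h, obj
        return best_h

    if __name__ == "__main__":
        returns = rng.normal(0.0, 0.01, 1000)        # stand-in forecast distribution
        print("hedge fraction:", hedge_decision(position=1_000_000, scenarios=returns))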

    Modelling discrepancy in Bayesian calibration of reservoir models

    Simulation models of physical systems such as oil field reservoirs are subject to numerous uncertainties, including observation errors and inaccurate initial and boundary conditions. However, even after accounting for these uncertainties, a mismatch between the simulator output and the observations usually remains and the model is still inadequate. This inability of computer models to reproduce real-life processes is referred to as model inadequacy. This thesis presents a comprehensive framework for modelling discrepancy in the Bayesian calibration and probabilistic forecasting of reservoir models. The framework efficiently implements data-driven approaches to handle the uncertainty caused by ignoring modelling discrepancy in reservoir predictions, using two major hierarchical strategies: parametric and non-parametric hierarchical models. The central focus of this thesis is on an appropriate way of modelling discrepancy and the importance of model selection in controlling overfitting, rather than on different solutions for different noise models. The thesis employs a model selection code to obtain the best candidate forms of the non-parametric error models. This enables us, first, to interpolate the error over the history period and, second, to propagate it to unseen data (i.e. error generalisation). The error models, constructed by inferring the parameters of the selected models, can predict the response variable (e.g. oil rate) at any point in the input space (e.g. time) with a corresponding generalisation uncertainty. In real field applications, the error models reliably track the uncertainty regardless of the sampling method and achieve a better model prediction score than models that ignore discrepancy. All the case studies confirm the improvement in field-variable prediction when the discrepancy is modelled. As for the model parameters, hierarchical error models exhibit less global bias with respect to the reference case. However, in the considered case studies, the evidence for better prediction of each individual model parameter through error modelling is inconclusive.
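
    To illustrate the idea of a non-parametric discrepancy (error) model, the following sketch fits a simple RBF-kernel Gaussian process to the residuals between observations and a simulator, then predicts the discrepancy with uncertainty beyond the history period. The simulator, kernel settings and data are hypothetical stand-ins; the thesis framework is considerably richer and also selects the form of the error model.

    import numpy as np

    def simulator(t, theta):
        # Hypothetical reservoir response (e.g. oil rate vs. time) for parameter theta.
        return theta * np.exp(-0.1 * t)

    def rbf(a, b, length=5.0, var=1.0):
        # Squared-exponential covariance between two sets of time points.
        d = a[:, None] - b[None, :]
        return var * np.exp(-0.5 * (d / length) ** 2)

    def fit_discrepancy(t_obs, residuals, noise=0.05):
        # GP posterior over the discrepancy, conditioned on the observed residuals.
        K = rbf(t_obs, t_obs) + noise ** 2 * np.eye(len(t_obs))
        alpha = np.linalg.solve(K, residuals)

        def predict(t_new):
            Ks = rbf(t_new, t_obs)
            mean = Ks @ alpha
            cov = rbf(t_new, t_new) - Ks @ np.linalg.solve(K, Ks.T)
            return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))

        return predict

    if __name__ == "__main__":
        t_obs = np.linspace(0.0, 30.0, 16)
        observed = simulator(t_obs, theta=10.0) + 0.5 * np.sin(0.3 * t_obs)  # stand-in "real" data
        residuals = observed - simulator(t_obs, theta=10.0)
        predict = fit_discrepancy(t_obs, residuals)
        mean, std = predict(np.linspace(0.0, 60.0, 5))  # extrapolate into the forecast period
        print("discrepancy mean:", mean, "uncertainty:", std)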

    Improving the convergence rate of seismic history matching with a proxy derived method to aid stochastic sampling

    History matching is a very important activity during the continued development and management of petroleum reservoirs. Time-lapse (4D) seismic data provide information on the dynamics of fluids in reservoirs, relating variations in the seismic signal to saturation and pressure changes. This information can be integrated with history matching to improve convergence towards a simulation model that predicts the available data. The main aim of this thesis is to develop a method to speed up the convergence rate of assisted seismic history matching using a proxy-derived gradient method. Stochastic inversion algorithms often rely on simple assumptions for selecting new models by random processes. In this work, we improve the way such approaches learn about the system they are searching and thus operate more efficiently. To this end, a new method has been developed, called NA with Proxy derived Gradients (NAPG). To improve convergence, we use a proxy model to understand how parameters control the misfit and then use a global stochastic method with these sensitivities to optimise the search of the parameter space. This leads to an improved set of final reservoir models, which in turn can be used more effectively in reservoir management decisions. To validate the proposed approach, we applied it to a number of analytical functions and synthetic cases, and we also demonstrate the method by applying it to the UKCS Schiehallion field. The results show that the new method generally speeds up convergence by a factor of two to three. The performance of NAPG is much improved by updating the regression equation coefficients rather than keeping them fixed. In addition, we found that the initial number of models needed to start NAPG or NA could be reduced by using experimental design instead of random initialisation. Ultimately, with all of these approaches combined, the number of models required to find a good match was reduced by an order of magnitude. We have also investigated criteria for stopping the SHM loop, particularly the use of a proxy model to assist; more research is needed to complete this work, but the approach is promising. Quantifying parameter uncertainty with NA and NAPG was studied using the NA-Bayes approach (NAB). We found that NAB is very sensitive to the misfit magnitude, but otherwise NA and NAPG produce similar uncertainty measures.
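
    A rough sketch of the proxy-derived-gradient idea (not the actual NAPG implementation): fit a linear regression proxy to the sampled misfits, use its coefficients as an approximate gradient, and bias the next batch of stochastic samples downhill. The misfit function and all parameter values below are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(2)

    def misfit(models):
        # Stand-in for the seismic/production history-match misfit of each model.
        return np.sum((models - 0.3) ** 2, axis=-1)

    def proxy_gradient(samples, misfits):
        # Least-squares fit of misfit ~ a + b.x; the coefficients b act as an
        # approximate (proxy-derived) gradient of the misfit surface.
        X = np.hstack([np.ones((len(samples), 1)), samples])
        coeffs, *_ = np.linalg.lstsq(X, misfits, rcond=None)
        return coeffs[1:]

    def napg_like_search(dim=4, batch=30, iterations=20, step=0.1):
        samples = rng.uniform(-1.0, 1.0, size=(batch, dim))
        for _ in range(iterations):
            m = misfit(samples)
            grad = proxy_gradient(samples, m)
            best = samples[np.argmin(m)]
            # New models: move the current best downhill along the proxy gradient,
            # then perturb stochastically so the search stays global.
            samples = best - step * grad + rng.normal(0.0, 0.1, size=(batch, dim))
        return samples[np.argmin(misfit(samples))]

    if __name__ == "__main__":
        print("best model:", napg_like_search())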

    A Genetic Algorithm for Structure Prediction of Magnetic Materials

    When considering global optimisation of magnetic crystal structures, it is important to treat both the atomic and spin degrees of freedom. This thesis presents a novel genetic algorithm for simultaneously optimising the magnetic and crystal structures of materials. The algorithm was first tested on a new magnetic interatomic potential presented in the thesis, and was shown to be capable of finding the correct atomic and magnetic structure. The algorithm was then used to study mixing at the NiO(111)/MgO(111) interface, where the process behind the mixing was unknown. The results suggest that mixing is driven by the energetics of the system rather than by kinetic processes. Finally, the interface between the Heusler alloy CFAS and n-doped Ge was studied, where experimental observations had suggested an unknown interface phase. This work proposed the half-Heusler structure for this phase and predicted it to have unfavourable electronic properties.
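
    The toy Python sketch below illustrates the general idea of a genetic algorithm whose individuals carry both atomic coordinates and spins, evaluated against a made-up energy combining a pair potential with an Ising-like spin coupling; it is not the thesis algorithm or interatomic potential.

    import numpy as np

    rng = np.random.default_rng(3)

    def energy(positions, spins):
        # Made-up energy: Lennard-Jones-like pair term plus an Ising-like spin coupling.
        diff = positions[:, None, :] - positions[None, :, :]
        r = np.linalg.norm(diff, axis=-1) + np.eye(len(positions))  # avoid divide-by-zero
        pair = np.sum(np.triu(1.0 / r ** 12 - 1.0 / r ** 6, k=1))
        spin = -np.sum(np.triu(np.outer(spins, spins) / r, k=1))
        return pair + spin

    def ga(n_atoms=6, pop_size=24, generations=60):
        # Each individual carries both coordinates and spins, so the search optimises
        # the atomic and magnetic structure simultaneously.
        pos = rng.uniform(0.0, 3.0, size=(pop_size, n_atoms, 3))
        spins = rng.choice([-1, 1], size=(pop_size, n_atoms))
        for _ in range(generations):
            e = np.array([energy(p, s) for p, s in zip(pos, spins)])
            keep = np.argsort(e)[: pop_size // 2]
            pos, spins = pos[keep], spins[keep]
            # Offspring: mutate parent coordinates and flip one random spin each.
            child_pos = pos + rng.normal(0.0, 0.05, pos.shape)
            child_spins = spins.copy()
            flip = rng.integers(0, n_atoms, size=len(child_spins))
            child_spins[np.arange(len(child_spins)), flip] *= -1
            pos = np.vstack([pos, child_pos])
            spins = np.vstack([spins, child_spins])
        best = int(np.argmin([energy(p, s) for p, s in zip(pos, spins)]))
        return pos[best], spins[best]

    if __name__ == "__main__":
        best_pos, best_spins = ga()
        print("best spins:", best_spins)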

    Optimising algorithm and hardware for deep neural networks on FPGAs

    This thesis proposes novel algorithmic and hardware optimisation approaches to accelerate Deep Neural Networks (DNNs), including both Convolutional Neural Networks (CNNs) and Bayesian Neural Networks (BayesNNs). The first contribution is an adaptable and reconfigurable hardware design to accelerate CNNs. By analysing the computational patterns of different CNNs, a unified hardware architecture is proposed for both 2-dimensional and 3-dimensional CNNs. The accelerator is also designed with runtime adaptability, adopting different parallelism strategies for different convolutional layers at runtime. The second contribution is a novel neural network architecture and hardware design co-optimisation approach, which improves the performance of CNNs at both the algorithm and hardware levels. The proposed three-phase co-design framework decouples network training from design space exploration, which significantly reduces the time cost of the co-optimisation process. The third contribution is an algorithmic and hardware co-optimisation framework for accelerating BayesNNs. At the algorithmic level, three categories of structured sparsity are explored to reduce the computational complexity of BayesNNs. At the hardware level, a novel hardware architecture is proposed to exploit the structured sparsity of BayesNNs. Both algorithmic and hardware optimisations are jointly applied to push the performance limit.
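
    As a simplified software-level illustration of structured sparsity for BayesNNs (not the thesis accelerator), the sketch below prunes whole output channels of a Monte-Carlo-sampled Bayesian linear layer, the kind of regular structure a hardware design can exploit by skipping entire channels; the layer sizes and pruning threshold are arbitrary assumptions.

    import numpy as np

    rng = np.random.default_rng(4)

    def bayes_linear(x, w_mean, w_std, channel_mask, n_samples=8):
        # Monte Carlo over weight samples; pruned output channels are never computed,
        # which is the regular structure a structured-sparse accelerator can exploit.
        active = np.flatnonzero(channel_mask)
        outputs = []
        for _ in range(n_samples):
            w = w_mean[active] + w_std[active] * rng.standard_normal((len(active), x.shape[-1]))
            outputs.append(x @ w.T)
        return np.mean(outputs, axis=0), active

    if __name__ == "__main__":
        in_dim, out_dim = 16, 8
        w_mean = rng.standard_normal((out_dim, in_dim))
        w_std = 0.1 * np.ones((out_dim, in_dim))
        # Structured sparsity: drop whole output channels whose mean-weight norm is small.
        mask = np.linalg.norm(w_mean, axis=1) > 3.5
        y, active = bayes_linear(rng.standard_normal((1, in_dim)), w_mean, w_std, mask)
        print("active channels:", active, "output shape:", y.shape)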

    Predictive Modelling Approach to Data-driven Computational Psychiatry

    This dissertation contributes novel predictive modelling approaches to data-driven computational psychiatry and offers alternative analysis frameworks to the standard statistical analyses used in psychiatric research. In particular, it advances research in medical data mining, especially in psychiatry, in two phases. In the first phase, it proposes synergistic machine learning and statistical approaches for detecting patterns and developing predictive models in clinical psychiatry data to classify diseases, predict treatment outcomes or improve treatment selection. These data-driven approaches are built upon several machine learning techniques whose predictive models have been pre-processed, trained, optimised, post-processed and tested in novel, computationally intensive frameworks. In the second phase, it advances research in medical data mining by proposing several novel extensions in the area of data classification: a new decision tree algorithm, which we call PIDT, based on parameterised impurities and statistical pruning approaches for building more accurate decision tree classifiers, and new ensemble-based classification methods. The experimental results show that predictive models built with the novel PIDT algorithm generally achieve better accuracy and smaller tree size than those built with traditional decision trees. The contributions of this dissertation can be summarised as follows. Firstly, several statistical and machine learning algorithms, together with techniques to improve them, are explored. Secondly, prediction modelling and pattern detection approaches for first-episode psychosis associated with cannabis use are developed. Thirdly, a new computationally intensive machine learning framework for understanding the link between cannabis use and first-episode psychosis is introduced. Then, complementary and equally sophisticated prediction models for first-episode psychosis associated with cannabis use are developed using artificial neural networks and deep learning within the proposed framework. Lastly, an efficient novel decision tree algorithm (PIDT), based on novel parameterised impurities and statistical pruning approaches, is proposed and tested on several medical datasets. These contributions can guide future theory, experiment, and treatment development in medical data mining, especially in psychiatry.
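
    As an illustration of the kind of parameterised impurity PIDT builds on (the exact PIDT impurity and pruning rules are not reproduced here), the sketch below scores a candidate split with a Tsallis-style impurity whose parameter q interpolates between entropy-like (q -> 1) and Gini-like (q = 2) behaviour; the data and threshold are made up.

    import numpy as np

    def parameterised_impurity(labels, q=1.5):
        # Tsallis-style impurity: q -> 1 recovers Shannon entropy, q = 2 matches Gini.
        if len(labels) == 0:
            return 0.0
        _, counts = np.unique(labels, return_counts=True)
        p = counts / counts.sum()
        if np.isclose(q, 1.0):
            return float(-np.sum(p * np.log(p)))
        return float((1.0 - np.sum(p ** q)) / (q - 1.0))

    def split_gain(feature, labels, threshold, q=1.5):
        # Impurity reduction achieved by splitting on feature <= threshold.
        left, right = labels[feature <= threshold], labels[feature > threshold]
        weighted = (len(left) * parameterised_impurity(left, q) +
                    len(right) * parameterised_impurity(right, q)) / len(labels)
        return parameterised_impurity(labels, q) - weighted

    if __name__ == "__main__":
        feature = np.array([1.0, 2.0, 2.5, 3.0, 4.5, 5.0])
        labels = np.array([0, 0, 0, 1, 1, 1])
        print("gain at threshold 2.75:", split_gain(feature, labels, 2.75))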