19 research outputs found
Accumulative Approach in Multistep Diagonal Gradient-Type Method for Large-Scale Unconstrained Optimization
This paper focuses on developing diagonal gradient-type methods that employ an accumulative approach in multistep diagonal updating to determine a better Hessian approximation at each step. An interpolating curve is used to derive a generalization of the weak secant equation, which carries information about the local Hessian. The new parameterization of the interpolating curve in variable space is obtained via an accumulative approach using a norm weighting defined by two positive definite weighting matrices. The storage needed for all computations of the proposed method is only O(n). Numerical results show that the proposed algorithm is efficient and superior in comparison with some other gradient-type methods.
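As an illustration of the kind of update such methods build on, the sketch below implements the classical least-change diagonal update under the weak secant equation s^T B s = s^T y, with O(n) storage. This is not the paper's accumulative multistep scheme; the function names and the fixed step length are illustrative assumptions.

```python
import numpy as np

def weak_secant_diag_update(d, s, y, eps=1e-8):
    # Least-change update of the diagonal Hessian approximation D = diag(d):
    # minimise ||D_new - D||_F subject to the weak secant equation
    #   s^T D_new s = s^T y,
    # whose solution is d_new_i = d_i + lam * s_i^2 with
    #   lam = (s^T y - s^T D s) / sum_i s_i^4.
    s2 = s * s
    denom = float(np.dot(s2, s2))
    if denom < eps:
        return d
    lam = (np.dot(s, y) - np.dot(d, s2)) / denom
    return np.maximum(d + lam * s2, eps)  # keep the approximation positive definite

def diag_gradient_method(grad, x0, iters=200, step=1.0):
    # Gradient-type iteration with search direction -D^{-1} g; storage stays O(n).
    x = x0.copy()
    d = np.ones_like(x)          # D_0 = I
    g = grad(x)
    for _ in range(iters):
        x_new = x - step * g / d
        g_new = grad(x_new)
        d = weak_secant_diag_update(d, x_new - x, g_new - g)
        x, g = x_new, g_new
    return x
```

On a small convex quadratic the iteration drives the gradient norm down while only a diagonal matrix is stored.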
Development of Adaptive and Factorized Neural Models for MPC of Industrial Systems
Many industrial processes have non-linear and time-varying dynamics whose control and optimization require further investigation. Adaptive modelling techniques using radial basis function (RBF) networks often provide competitive modelling performance but recover slowly when the process operating region shifts widely. In addition, model predictive control based on RBF networks results in a non-linear programming problem, which restricts its application to fast dynamic systems. To address these issues, the thesis presents the development of adaptive and factorized RBF network models. Model predictive control (MPC) based on the factorized RBF model is applied to a non-linear proton exchange membrane fuel cell (PEMFC) stack system. The main contents comprise three parts: RBF model adaptation; model factorization and fast long-range prediction; and MPC for the PEMFC stack system. The adaptive RBF model employs the recursive orthogonal least squares (ROLS) algorithm for both structure and parameter adaptation. Decomposing the regression matrix of the RBF model yields the R matrix, and principles for adding and pruning centres are developed based on manipulation of this R matrix. While modelling accuracy is retained, the developed structure adaptation algorithm keeps the model size to a minimum. At the same time, the RBF model parameters are optimized in terms of the minimum Frobenius norm of the model prediction error. A simulation example is used to evaluate the developed adaptive RBF model, whose performance in output prediction is superior to that of existing methods. Considering that a model with fast long-range prediction is needed for the MPC of fast dynamic systems, an f-step factorization algorithm is developed for the RBF model. The model structure is re-arranged so that the unknown future process outputs are not required for output prediction.
The accumulated error caused by recursive calculation in a conventional neural network model is therefore avoided. Furthermore, as the information for output prediction is explicitly divided into past information and future information, the optimization of the control variable in MPC based on this factorized model can be solved much faster than with the usual NARX-RBF model. The developed model adaptation algorithm can be applied to this f-step factorized model to achieve fast and adaptive model prediction. Finally, the developed factorized RBF model is applied to the MPC of a PEMFC stack system using a popular industrial benchmark model in Simulink developed at the University of Michigan. The optimization algorithms for quadratic and non-linear systems, with and without constraints, are presented and discussed for application in the NMPC. Simulation results confirm the effectiveness of the developed model in both smooth tracking performance and reduced optimization time. Conclusions and further work are given at the end of the thesis. Major contributions of the research are outlined and achievements are checked against the assigned objectives. Further work is also suggested to extend the developed methods to industrial applications in real-time simulation, to further examine the effectiveness of the developed models. Extensive investigations are also recommended on the optimization problems to improve the existing algorithms.
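The split into past and known future information can be illustrated with a minimal direct f-step-ahead RBF predictor: the regressor holds past outputs, past inputs and the planned future inputs, so no unknown future outputs are fed back recursively. This is a simplified stand-in for the thesis's factorized model; all names, sizes and widths below are illustrative assumptions.

```python
import numpy as np

def rbf_features(X, centres, width):
    # Gaussian radial basis activations, one column per centre.
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def fit_direct_fstep(u, y, f, n_lags=3, n_centres=40, width=3.0, seed=0):
    # Direct f-step-ahead RBF model: regressors hold past outputs y_{t-n..t-1},
    # past inputs and the *known* future inputs u_t..u_{t+f-1}, so prediction
    # needs no recursive feedback of unknown future outputs.
    rng = np.random.default_rng(seed)
    rows = range(n_lags, len(y) - f)
    X = np.array([np.r_[y[t - n_lags:t], u[t - n_lags:t + f]] for t in rows])
    T = np.array([y[t + f] for t in rows])
    centres = X[rng.choice(len(X), n_centres, replace=False)]
    Phi = rbf_features(X, centres, width)
    w, *_ = np.linalg.lstsq(Phi, T, rcond=None)
    return centres, w, width

def predict_fstep(u, y, t, f, model, n_lags=3):
    # One f-step-ahead prediction from information available at time t.
    centres, w, width = model
    x = np.r_[y[t - n_lags:t], u[t - n_lags:t + f]][None, :]
    return float(rbf_features(x, centres, width) @ w)
```

Because the regressor is fixed at prediction time, an MPC optimizer can evaluate candidate future input sequences without recursive model rollouts.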
Aerodynamic parameter identification for an unmanned aerial vehicle
A dissertation submitted to the Faculty of Engineering and the Built Environment, School of Mechanical, Industrial and Aeronautical Engineering, University of the Witwatersrand, in fulfilment of the requirements for the degree of Master of Science in Engineering.
Johannesburg, May 2016. The present work describes the practical implementation of system identification techniques in the development of a linear aerodynamic model for a small low-cost UAV equipped with basic navigation and inertial measurement systems. The assessment of the applicability of the techniques was based on determining whether adequate aerodynamic models could be developed to aid in the reduction of wind tunnel testing when characterising new UAVs. The identification process consisted of postulating a model structure, flight test manoeuvre design, data reconstruction, aerodynamic parameter estimation, and model validation. The estimators used for the post-flight identification were the output-error maximum likelihood method and an iterated extended Kalman filter with a global smoother. The SIDPAC and FVSysID system identification toolboxes were utilised and modified where appropriate. The instrumentation system on board the UAV consisted of three-axis accelerometers and gyroscopes, a three-axis vector magnetometer and GPS tracking, with data logged at 25 Hz. The angle of attack and angle of sideslip were not measured directly and were estimated using tailored data reconstruction methods. Adequate time-domain lateral model correlation with flight data was achieved for the cruise flight condition. Adequacy was assessed against Theil's inequality coefficients and Theil's covariance. It was found that the simplified estimation algorithms based on the linearized equations of motion yielded the most promising model matches. Due to the high correlation between the pitch damping derivatives, the longitudinal analysis did not yield valid model parameter estimates. Even though the accuracy of the resulting models was below initial expectations, the detailed data compatibility analysis provided valuable insight into estimator limitations, instrumentation requirements and test procedures for system identification on low-cost UAVs.
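A toy version of output-error estimation may clarify the idea: simulate the model for candidate parameters and keep the set whose simulated response best matches the measured output. The coarse grid search below stands in for the maximum-likelihood (Gauss-Newton) iterations actually used; the one-state model and all names are illustrative assumptions.

```python
import numpy as np

def simulate(theta, u, x0=0.0):
    # Linear one-state model: x_{k+1} = a*x_k + b*u_k, y_k = x_k.
    a, b = theta
    x, ys = x0, []
    for uk in u:
        ys.append(x)
        x = a * x + b * uk
    return np.array(ys)

def output_error_fit(u, y_meas, grid_a, grid_b):
    # Output-error estimation: choose the (a, b) whose *simulated* response
    # best matches the measured output in the least-squares sense.
    best, best_cost = None, np.inf
    for a in grid_a:
        for b in grid_b:
            r = y_meas - simulate((a, b), u)
            cost = float(r @ r)
            if cost < best_cost:
                best, best_cost = (a, b), cost
    return best
```

With low measurement noise the search recovers the generating parameters from the candidate grid.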
Modelling and quantification of structural uncertainties in petroleum reservoirs assisted by a hybrid cartesian cut cell/enriched multipoint flux approximation approach
Efficient and profitable oil production depends on making reliable predictions about reservoir performance. However, limited knowledge of the reservoir's distributed properties and structure calls for History Matching, in which the reservoir model is calibrated to emulate the field's observed history. Such an inverse problem yields multiple history-matched models, which may produce different predictions of reservoir performance. Uncertainty Quantification constrains these model uncertainties and increases the model's reliability for forecasts of future reservoir behaviour. Conventional approaches to Uncertainty Quantification ignore large-scale uncertainties related to reservoir structure, even though structural uncertainties can influence reservoir forecasts more strongly than petrophysical uncertainty. What makes the quantification of structural uncertainty impracticable is the need for global regridding at each step of the History Matching process. To resolve this obstacle, we develop an efficient methodology based on the Cartesian Cut Cell Method, which decouples the model from its representation on the grid and allows uncertain structures to be varied as part of the History Matching process. Reduced numerical accuracy due to cell degeneracies in the vicinity of geological structures is adequately compensated by an enhanced scheme of the Locally Conservative Flux Continuous class of methods (the Extended Enriched Multipoint Flux Approximation Method, abbreviated to extended EMPFA). The robustness and consistency of the proposed hybrid Cartesian Cut Cell/extended EMPFA approach are demonstrated by its faithful representation of the influence of geological structures on flow behaviour. In this research, the general framework of Uncertainty Quantification is extended and equipped by the proposed approach to tackle uncertainties in different structures such as reservoir horizons, bedding layers, faults and pinchouts. Significant improvements in the quality of reservoir recovery forecasts and reservoir volume estimation are presented for synthetic models of uncertain structures. This thesis also provides a comparative study, across various geological structures, of the influence of structural uncertainty on reservoir forecasts.
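The core idea of decoupling the model from the grid can be sketched for a single 2-D horizon: a Cartesian background grid is built once, and moving the horizon only changes each cell's cut volume fraction, with no regridding. This toy subsampling version is an illustrative assumption, not the thesis's EMPFA-coupled scheme.

```python
import numpy as np

def cut_cell_fractions(nx, nz, horizon, xmax=1.0, zmax=1.0, nsub=20):
    # Fraction of each Cartesian cell lying below a horizon z = horizon(x),
    # estimated by column subsampling.  Perturbing the horizon during
    # History Matching only changes these fractions; the background grid
    # itself is never rebuilt.
    dx, dz = xmax / nx, zmax / nz
    frac = np.zeros((nx, nz))
    offsets = (np.arange(nsub) + 0.5) / nsub
    for i in range(nx):
        h = horizon((i + offsets) * dx)   # horizon depth sampled across column i
        for j in range(nz):
            below = np.clip((h - j * dz) / dz, 0.0, 1.0)
            frac[i, j] = below.mean()
    return frac
```

A flat horizon at mid-depth, for example, yields fully-below cells in the lower rows, empty cells above, and a total pore-volume fraction of one half.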
Optimal measurement locations for parameter estimation of distributed parameter systems
Identifying the parameters with the largest influence on the predicted outputs of a model reveals which parameters need to be known more precisely to reduce the overall uncertainty in the model output. A large improvement in such models results when uncertainties in the key model parameters are reduced. To achieve this, new experiments can be very helpful, especially if the measurements are taken at the spatio-temporal locations that allow the parameters to be estimated in an optimal way. After evaluating the methodologies available for optimal sensor location, a few observations were drawn. The method based on the evolution of the Gram determinant can report results that deviate from what should be expected; it is strongly dependent on the behaviour of the sensitivity coefficients. The approach based on the maximum angle between subspaces in some cases produced more than one optimal solution; it was observed that this method depends on the magnitude of the output values and reports the measurement positions where the outputs reach their extreme values. The D-optimal design method produces the number and locations of the optimal measurements, and it too depends strongly on the sensitivity coefficients, particularly on their behaviour. In general, it was observed that the measurements should be taken at the locations where the extreme values (of sensitivity coefficients, POD modes and/or outputs) are reached. Further improvements can be obtained when a reduced model of the system is employed: this is computationally less expensive, and the best estimate of the parameters is obtained even with experimental data contaminated with noise. A new approach is introduced to calculate the time coefficients belonging to an empirical approximator based on POD modes derived from experimental data. Additionally, an artificial neural network can be used to calculate the derivatives, but only for systems without complex nonlinear behaviour.
The latter two approximations are very valuable and useful, especially if the model of the system is unknown.
EThOS - Electronic Theses Online Service. Universidad del Zulia, Maracaibo, Venezuela. United Kingdom.
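For the D-optimal design method mentioned above, a minimal sketch selects the rows of a sensitivity matrix (candidate measurement locations) that maximise the determinant of the Fisher information matrix. The exhaustive search and all names are illustrative; practical designs use exchange algorithms on larger candidate sets.

```python
import numpy as np
from itertools import combinations

def d_optimal_locations(S, k):
    # Exhaustive D-optimal selection: choose k rows of the sensitivity
    # matrix S maximising det(S_sel^T S_sel), i.e. the determinant of the
    # Fisher information matrix for the unknown parameters.
    best, best_det = None, -np.inf
    for rows in combinations(range(S.shape[0]), k):
        M = S[list(rows)]
        det = float(np.linalg.det(M.T @ M))
        if det > best_det:
            best, best_det = rows, det
    return best, best_det
```

As expected from the text, the selection lands on the locations where the sensitivity coefficients are largest.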
Numerical and experimental studies of asymmetrical Single Point Incremental Forming process
The present work addresses the numerical analysis of the Single Point Incremental Forming (SPIF) process using a numerical tool based on an adaptive remeshing procedure within the FEM. The analysis mainly concerns the reduction of computation time under the implicit scheme and the adaptation of the chosen solid-shell finite element, the Reduced Enhanced Solid Shell (RESS). This element was chosen for its formulation, whose distinct features are an arbitrary number of integration points through the thickness direction and the use of only one Enhanced Assumed Strain (EAS) mode. Additional advantages include the use of full three-dimensional constitutive laws and the automatic treatment of double-sided contact, since the element contains eight physical nodes.
Initially, a comprehensive literature review of Incremental Sheet Forming (ISF) processes was performed, focusing on original contributions regarding recent developments, explanations for the increased formability, and the state of the art in finite element simulation of SPIF. Next, a description of the numerical formulation behind the numerical tools used throughout this research is presented, summarizing non-linear mechanics topics related to the finite element in-house code LAGAMINE, the element formulations and the constitutive laws. The main purpose of the present work is the application of an adaptive remeshing method combined with a solid-shell finite element in order to improve computational efficiency under the implicit scheme. The adaptive remeshing strategy dynamically refines the mesh locally in the vicinity of the tool and follows its motion. This is needed because very refined meshes are necessary to simulate SPIF accurately: an initially refined mesh requires a huge computation time, while a coarse mesh leads to inconsistent results due to contact issues. Adaptive remeshing avoids the initial refinement, and the CPU time is consequently reduced.
The numerical tests carried out are based on benchmark proposals and on experiments purposely performed at the University of Aveiro, Department of Mechanical Engineering, using an innovative prototype SPIF machine. All simulations were validated against experimental measurements in order to assess the level of accuracy of the numerical predictions. In general, the intended accuracy and computational efficiency of the results are achieved.
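The refinement trigger of such an adaptive strategy can be sketched as a distance test against the current tool position, re-evaluated as the tool moves. This is a deliberately simplified stand-in for the remeshing logic in LAGAMINE; the names and the unit-square mesh are illustrative assumptions.

```python
import numpy as np

def refinement_flags(cell_centres, tool_position, radius):
    # Flag cells inside the tool's contact neighbourhood for refinement;
    # cells outside may stay coarse (or be re-coarsened), so the fine mesh
    # remains local and follows the tool along its path.
    dist = np.linalg.norm(cell_centres - tool_position, axis=1)
    return dist <= radius
```

Only a small patch of the mesh is ever flagged, which is what keeps the implicit solve affordable compared with a globally refined mesh.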
Metodologia avançada para simulação de processos de estampagem incremental
Doctorate in Mechanical Engineering.
Design and debugging of multi-step analog to digital converters
With the fast advancement of CMOS fabrication technology, more and more signal-processing functions are implemented in the digital domain for lower cost, lower power consumption, higher yield, and higher re-configurability. The trend of increasing integration in integrated circuits has forced the A/D converter interface to reside on the same silicon in complex mixed-signal ICs containing mostly digital blocks for DSP and control. However, converter specifications in various applications emphasize high dynamic range and low spurious spectral performance. It is nontrivial to achieve this level of linearity in a monolithic environment where post-fabrication component trimming or calibration is cumbersome to implement, whether for certain applications or for cost and manufacturability reasons. Additionally, as CMOS integrated circuits reach unprecedented integration levels, potential problems associated with device scaling (the short-channel effects) loom large as technology strides into the deep-submicron regime. The A/D conversion process involves sampling the applied analog input signal and quantizing it to its digital representation by comparing it to reference voltages before further signal processing in subsequent digital systems. Depending on how these functions are combined, different A/D converter architectures can be implemented, with different requirements on each function. Practical realizations show that, to first order, converter power is directly proportional to sampling rate. However, the required power dissipation grows nonlinearly as the speed capabilities of a process technology are pushed to the limit. Pipeline and two-step/multi-step converters tend to be the most efficient at achieving a given resolution and sampling rate specification.
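An idealised two-step conversion can be sketched in a few lines: a coarse stage resolves the MSBs, the residue is amplified and re-quantised for the LSBs, and with ideal stages the concatenated code equals a direct 12-bit conversion. The parameter names and the unipolar 0..vref input range are illustrative assumptions.

```python
def two_step_adc(v, vref=1.0, coarse_bits=6, fine_bits=6):
    # Idealised two-step A/D conversion: a coarse flash decides the MSBs,
    # the residue is amplified by 2**coarse_bits and re-quantised to give
    # the LSBs.  Real converters need inter-stage gain/offset calibration.
    n_coarse = 2 ** coarse_bits
    step = vref / n_coarse
    coarse = min(int(v / step), n_coarse - 1)        # MSB decision
    residue = v - coarse * step                      # held residue
    fine = min(int(residue * n_coarse * (2 ** fine_bits) / vref),
               2 ** fine_bits - 1)                   # amplified residue quantised
    return (coarse << fine_bits) | fine
```

For inputs away from code boundaries, the two-step result matches an ideal single-stage 12-bit quantiser, which is what calibration aims to preserve in the presence of stage errors.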
This thesis is, in a sense, unique in that it covers the whole spectrum of design, test, debugging and calibration of multi-step A/D converters; it incorporates the development of circuit techniques and algorithms to enhance the resolution and attainable sample rate of an A/D converter, and to enhance testing and debugging by detecting errors dynamically, isolating and confining faults, and recovering from and compensating for errors continuously. The power efficiency attainable at high resolution in a multi-step converter, achieved by combining parallelism and calibration and exploiting low-voltage circuit techniques, is demonstrated with a 1.8 V, 12-bit, 80 MS/s, 100 mW analog-to-digital converter fabricated in a five-metal-layer 0.18 µm CMOS process. Lower power supply voltages significantly reduce noise margins and increase variations in process, device and design parameters. Consequently, it is steadily more difficult to control the fabrication process precisely enough to maintain uniformity. Microscopic particles present in the manufacturing environment and slight variations in the parameters of manufacturing steps can all cause the geometrical and electrical properties of an IC to deviate from those generated at the end of the design process. These defects can cause various types of malfunctioning, depending on the IC topology and the nature of the defect. To relieve the burden placed on IC design and manufacturing by the ever-increasing costs of testing and debugging complex mixed-signal electronic systems, several circuit techniques and algorithms are developed and incorporated into the proposed ATPG, DfT and BIST methodologies. Process variation cannot be solved by improving manufacturing tolerances; variability must be reduced by new device technology or managed by design in order for scaling to continue. Similarly, within-die performance variation also imposes new challenges for test methods.
With the use of dedicated sensors, which exploit knowledge of the circuit structure and the specific defect mechanisms, the method described in this thesis facilitates early and fast identification of excessive process parameter variation effects. The expectation-maximization algorithm makes the estimation problem more tractable and also yields good parameter estimates for small sample sizes. To guide testing with the information obtained by monitoring process variations, an adjusted support vector machine classifier is implemented that simultaneously minimizes the empirical classification error and maximizes the geometric margin. On a positive note, the use of digital enhancement and calibration techniques reduces the need for expensive technologies with special fabrication steps. Indeed, the extra cost of digital processing is normally affordable, as the use of submicron mixed-signal technologies allows efficient usage of silicon area even for relatively complex algorithms. The adaptive filtering algorithm employed for error estimation requires a small number of operations per iteration and needs neither correlation function calculation nor matrix inversions. The presented foreground calibration algorithm needs no dedicated test signal and does not consume part of the conversion time; it works continuously with every signal applied to the A/D converter. The feasibility of the method for on-line and off-line debugging and calibration has been verified by experimental measurements from a silicon prototype fabricated in a standard single-poly, six-metal 0.09 µm CMOS process.
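The per-iteration economy attributed to the adaptive filtering algorithm (few operations per sample, no correlation estimates, no matrix inversions) is characteristic of LMS-type filters; the generic LMS identification sketch below illustrates those properties and is not the thesis's exact algorithm.

```python
import numpy as np

def lms_identify(x, d, n_taps=4, mu=0.05):
    # Least-mean-squares adaptive filter: O(n_taps) operations per sample,
    # needing neither correlation-function estimates nor matrix inversions.
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]   # newest sample first
        e = d[n] - w @ u                    # a-priori estimation error
        w = w + 2.0 * mu * e * u            # stochastic-gradient update
    return w
```

Driven by a white input, the tap weights converge to the unknown FIR response being identified.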
A data driven approach for diagnosis and management of yield variability attributed to soil constraints
Australian agriculture does not value data to the level required for true precision management. Consequently, agronomic recommendations are frequently based on limited soil information and do not adequately address the spatial variance of the constraints presented. This leads to lost productivity. Due to the costs of soil analysis, land owners and practitioners are often reluctant to invest in soil sampling exercises as the likely economic gain from this investment has not been adequately investigated. A value proposition is therefore required to realise the agronomic and economic benefits of increased site-specific data collection with the aim of ameliorating soil constraints. This study is principally concerned with identifying this value proposition by investigating the spatially variable nature of soil constraints and their interactions with crop yield at the sub-field scale. Agronomic and economic benefits are quantified against simulated ameliorant recommendations made on the basis of varied sampling approaches.
In order to assess the effects of sampling density on agronomic recommendations, a 108 ha site was investigated, where 1200 direct soil measurements were obtained (300 sample locations at 4 depth increments) to form the benchmark dataset for the analyses in this study. Random transect sampling (for field-average estimates), zone management, regression kriging (SSPFe) and ordinary kriging approaches were first investigated at various sampling densities (N = 10, 20, 50, 100, 150, 200, 250 and 300) to observe the effects on lime and gypsum ameliorant recommendation advice. The ordinary kriging method provided the most accurate spatial recommendation advice for gypsum and lime at all depth increments investigated (i.e. 0–10 cm, 10–20 cm, 20–40 cm and 40–60 cm), with the majority of the accuracy improvement achieved by 50 samples (≈0.5 samples/ha). The lack of correlation between the environmental covariates and the target soil variables inhibited the ability of regression kriging to outperform ordinary kriging.
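For reference, a bare-bones ordinary kriging predictor looks as follows; the exponential variogram and the point layout in the test are illustrative assumptions, not values fitted from the study's data.

```python
import numpy as np

def ordinary_kriging(xy, z, xy_new, gamma):
    # Ordinary kriging predictor: solve the kriging system assembled from a
    # fitted variogram gamma(h); the final row/column (Lagrange multiplier)
    # forces the weights to sum to one, giving an unbiased estimator.
    n = len(xy)
    A = np.ones((n + 1, n + 1))
    A[-1, -1] = 0.0
    for i in range(n):
        for j in range(n):
            A[i, j] = gamma(np.linalg.norm(xy[i] - xy[j]))
    b = np.ones(n + 1)
    b[:n] = [gamma(np.linalg.norm(p - xy_new)) for p in xy]
    w = np.linalg.solve(A, b)[:n]
    return float(w @ z)
```

Ordinary kriging interpolates exactly at the sampled locations and returns a weighted average of neighbours elsewhere, which is why prediction accuracy saturates once the sample spacing is small relative to the variogram range.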
To extend these findings and identify the economically optimal sampling density for the investigation site, a yield prediction model was required to estimate the spatial yield response due to amelioration. Given the complex nonlinear relationships between soil properties and yield, this was achieved by applying four machine learning models (both linear and nonlinear): a mixed-linear regression, a regression tree (Cubist), an artificial neural network and a support vector machine. These were trained using the 1200 directly measured soil samples, each with 9 soil measurements describing structural features (i.e. soil pH, exchangeable sodium percentage, electrical conductivity, clay, silt, sand, bulk density, potassium, cation exchange capacity), to predict the spatial yield variability at the investigation site with four years of yield data. The Cubist regression tree model produced superior results in terms of generalization, whilst achieving an acceptable R² for training and validation (up to R² = 0.80 for training and R² = 0.78 for validation). The lack of temporal yield information constrained the development of a temporally stable yield prediction model that accounts for the uncertainties of climate interactions associated with the spatial variability of yield; accurate predictive performance was achieved for single-season models.
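The train/validation R² scores reported above can be computed generically as follows; the ordinary least-squares fit and the synthetic covariates in the test are illustrative stand-ins for the Cubist model and the soil data.

```python
import numpy as np

def r_squared(y_true, y_pred):
    # Coefficient of determination, the score reported for the yield models.
    ss_res = float(np.sum((y_true - y_pred) ** 2))
    ss_tot = float(np.sum((y_true - np.mean(y_true)) ** 2))
    return 1.0 - ss_res / ss_tot

def train_val_r2(X, y, frac=0.7, seed=0):
    # Random train/validation split, with an ordinary least-squares fit
    # standing in for the Cubist regression tree used in the study.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    cut = int(frac * len(y))
    tr, va = idx[:cut], idx[cut:]
    Xb = np.c_[X, np.ones(len(y))]           # add an intercept column
    beta, *_ = np.linalg.lstsq(Xb[tr], y[tr], rcond=None)
    return r_squared(y[tr], Xb[tr] @ beta), r_squared(y[va], Xb[va] @ beta)
```

Comparing the two scores is the simplest check for the over-fitting that the generalization claim refers to: a large train/validation gap signals a model that memorises rather than generalises.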
Of the spatial prediction methods investigated, random transect sampling and ordinary kriging were adopted to simulate 'blanket-rate' (BR) and 'variable-rate' (VR) gypsum applications, respectively, for the amelioration of sodicity at the investigated site. For each sampling density, the spatial yield response to a BR and a VR application of gypsum was estimated with the developed Cubist yield prediction model, calibrated for the investigation site. Accounting for the cost of sampling and the financial gains due to a yield response, the economically optimal sampling density for the investigation site was 0.2 cores/ha for 0–20 cm treatment and 0.5 cores/ha for 0–60 cm treatment under a VR approach. Whilst this entailed an increased soil data investment of $136/ha for the 0–20 cm and 0–60 cm treatments in comparison to a BR approach, the yield gains due to an improved spatial gypsum application were in excess of 6 t and 26 t per annum, respectively. Consequently, the net benefit of the increased data investment was estimated at up to $104,000 after 20 years for 0–60 cm profile treatment.
To identify the influence of qualitative data and management information on soil–yield interactions, a probabilistic approach was investigated, offering an alternative where empirical models fail. Using soil compaction as an example, a Bayesian Belief Network was developed to explore the interactions of machine loading, soil wetness and site characteristics with the potential yield declines due to compaction induced by agricultural traffic. The developed tool was subsequently able to broadly describe the agronomic impacts of decisions made in data-limited environments.
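The structure of such a Bayesian Belief Network can be sketched with a three-node toy (machine load and wetness → compaction → yield decline), marginalising over a wetness prior. Every probability below is a made-up placeholder, not a value from the thesis.

```python
# Illustrative Bayesian Belief Network for compaction-induced yield decline.
# All probabilities are hypothetical placeholders, not values from the thesis.
P_WET = {"wet": 0.3, "dry": 0.7}                 # prior on soil wetness
P_COMPACTION = {                                 # P(compaction | load, wetness)
    ("heavy", "wet"): 0.8, ("heavy", "dry"): 0.4,
    ("light", "wet"): 0.3, ("light", "dry"): 0.1,
}
P_DECLINE = {True: 0.6, False: 0.05}             # P(yield decline | compaction)

def p_yield_decline(load):
    # Marginalise over soil wetness and the intermediate compaction node.
    total = 0.0
    for wet, pw in P_WET.items():
        pc = P_COMPACTION[(load, wet)]
        total += pw * (pc * P_DECLINE[True] + (1 - pc) * P_DECLINE[False])
    return total
```

Even with coarse, qualitative conditional probability tables, the network ranks management decisions (here, heavy versus light machine loading) by their expected yield impact.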
This body of work presents a combined, data-driven approach to improving both the diagnosis and the management of soil constraints. A detailed discussion is provided on how to further this work and improve upon the results obtained. By continuing this work it is possible to change the industry's attitude to data collection and significantly improve the productivity, profitability and soil husbandry of agricultural systems.
Biopolymers in Drug Delivery and Regenerative Medicine
Biopolymers, including natural polymers (e.g., polysaccharides, proteins, gums, natural rubbers, bacterial polymers), synthetic polymers (e.g., aliphatic polyesters and polyphosphoesters), and biocomposites, are of paramount interest in regenerative medicine due to their availability, processability, and low toxicity. Moreover, the structuring of biopolymer-based materials at the nano- and microscale, along with their chemical properties, is crucial in the engineering of advanced carriers for drug products. Finally, combination products including or based on biopolymers for controlled drug release offer a powerful solution to improve the tissue integration and biological response of these materials. Understanding the drug delivery mechanisms, efficiency, and toxicity of such systems is useful for regenerative medicine and pharmaceutical technology. The main aim of the Special Issue on "Biopolymers in Drug Delivery and Regenerative Medicine" is to gather recent findings and current advances in biopolymer research for biomedical applications, particularly in regenerative medicine, wound healing, and drug delivery. Contributions to this issue may be original research or review articles and may cover all aspects of biopolymer research, ranging from the chemical synthesis and characterization of modified biopolymers to their processing in different morphologies and hierarchical structures, and their assessment for biomedical uses.