
    Noise modelling, vibro-acoustic analysis, artificial neural networks on offshore platform

    PhD Thesis
    Due to the limitations of the present noise prediction methods used in the offshore industry, this research aims to develop an efficient noise prediction technique that can analyze and predict noise levels in the offshore platform environment during the design stage, as practically as possible, so as to meet the criteria for crews’ comfort under high noise levels. Several studies have been carried out to improve the understanding of the acoustic environment onboard offshore platforms, as well as of the present prediction techniques. Noise prediction methods for the offshore platform were proposed from three directions: empirical acoustic modeling, analytical computation, and a neural network method. First, after evaluating five selected empirical acoustic models originating from other applications, together with statistical energy analysis with direct field (SEA-DF), the Heerema and Hodgson model was selected for calculating the sound level in the machinery room on the offshore platform. Second, the analytical model described three-dimensional fully coupled structural and acoustic systems by considering the structural coupling forces and moments at the edges, and the structural-acoustic interaction on the interface. An artificial spring technique was implemented to represent general coupling and boundary conditions. The use of Chebyshev expansion solutions ensured the accuracy and rapid convergence of the three-dimensional single-room and conjugate-room problems. The proposed model was validated by checking natural frequencies and responses against results obtained from finite element software. Third, a modified multiple generalised regression neural network (GRNN) was proposed to predict the noise level of various compartments onboard the offshore platform with the limited samples available.
By preprocessing the samples with fuzzy c-means (FCM) and principal component analysis (PCA), dominant input features can be identified before commencing the GRNN’s training process. With optimal spread variables, the newly developed tool showed performance comparable to SEA-DF and the empirical formula, while requiring less time and fewer resources during the early stage of offshore platform design. The Singapore Economic Development Board (EDB) provided funding for the research under the EDB-Industrial Postgraduate Programme (IPP) with SembCorp Marine in Singapore.
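At its core, a GRNN is a Gaussian-kernel regression whose single tuning knob is the spread parameter mentioned above. A minimal numpy sketch of that core (the FCM/PCA preprocessing stage and the thesis's specific multiple-GRNN architecture are omitted; function and parameter names are illustrative, not the author's):

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, spread=0.5):
    """Generalised regression neural network prediction: a Gaussian-kernel
    weighted average of the training targets (Nadaraya-Watson form).
    `spread` is the smoothing parameter tuned for optimal performance."""
    preds = []
    for x in np.atleast_2d(X_query):
        d2 = np.sum((X_train - x) ** 2, axis=1)       # squared distances
        w = np.exp(-d2 / (2.0 * spread ** 2))         # Gaussian kernel weights
        preds.append(np.dot(w, y_train) / np.sum(w))  # weighted average
    return np.array(preds)
```

Because every training sample acts as a pattern unit, no iterative training is needed, which is consistent with the short training times these abstracts report; the cost is that prediction time grows with the number of stored samples.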

    User-friendly optimization approach of fed-batch fermentation conditions for the production of iturin A using artificial neural networks and support vector machine

    Background: In the field of microbial fermentation technology, how to optimize the fermentation conditions is crucial for practical applications. Here, we use artificial neural networks (ANNs) and a support vector machine (SVM) to offer a series of effective optimization methods for the production of iturin A. The concentration levels of asparagine (Asn), glutamic acid (Glu) and proline (Pro) (mg/L) were set as independent variables, while the iturin A titer (U/mL) was set as the dependent variable. A general regression neural network (GRNN), multilayer feed-forward neural networks (MLFNs) and an SVM were developed, and comparisons were made among them. Results: The GRNN has the lowest RMS error (457.88) and the shortest training time (1 s), with steady behaviour across repeated experiments, whereas the MLFNs have comparatively higher RMS errors and longer training times, and fluctuate significantly as the number of nodes changes. The SVM also has a relatively low RMS error (466.13) and a short training time (1 s). Conclusion: According to the modeling results, the GRNN is considered the most suitable ANN model for the design of the fed-batch fermentation conditions for the production of iturin A because of its high robustness and precision, and the SVM is a very suitable alternative model. Under a tolerance of 30%, the prediction accuracies of the GRNN and SVM are both 100% in repeated experiments.
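The two headline metrics in this abstract, RMS error and prediction accuracy under a 30% tolerance, are easy to state precisely. A small sketch (function names are ours, chosen for illustration, not the paper's):

```python
import numpy as np

def rms_error(y_true, y_pred):
    """Root-mean-square error between measured and predicted values."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_pred - y_true) ** 2)))

def tolerance_accuracy(y_true, y_pred, tol=0.30):
    """Fraction of predictions whose relative error is within `tol`
    (the paper reports this at a 30% tolerance)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    rel_err = np.abs(y_pred - y_true) / np.abs(y_true)
    return float(np.mean(rel_err <= tol))
```

For example, predictions of 120 and 150 U/mL against measured titers of 100 and 200 U/mL have relative errors of 20% and 25%, so the tolerance accuracy at 30% is 100%.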

    Modelling the structure of Australian Wool Auction prices

    The largest wool exporter in the world is Australia, where wool, a major export, is worth over AUD $2 billion per year and constitutes about 17 per cent of all agricultural exports. Most Australian wool is sold by auction in three regional centres. The prices paid in these auction markets are used by the Australian production and service sectors to identify the quality preferences of the international retail markets and the intermediate processors. One ongoing problem faced by wool growers has been the lack of clear market signals on the relative importance of wool attributes with respect to the price they receive at auction. The goal of our research is to model the structure of Australian wool auction prices. We aim to optimise the information that can be extracted and used by the production and service sectors in producing and distributing the raw wool clip. Most of the previous methods of modelling and predicting wool auction prices employed by the industry have involved multiple linear regressions. These methods have proven inadequate because they rest on too many assumptions and suffer from deficiencies. This has prompted alternative approaches such as neural networks and tree-based regression methods. In this thesis we discuss these alternative approaches. We observe that neural network methods offer good prediction accuracy of price but give minimal understanding of the price-driving variables. On the other hand, tree-based regression methods offer good interpretability of the price-driving characteristics but do not give good prediction accuracy. This motivates a hybrid approach that combines the best of the tree-based methods and neural networks, offering both prediction accuracy and interpretability. Additionally, there also exists a wool specifications problem. Industrial sorting of wool during harvest, and at the start of processing, assembles wool in bins according to the required wool specifications.
At present this assembly is done by constraining the range of all specifications in each bin, and having either a very large number of bins or a large variance of characteristics within each bin. Multiple linear regression on price does not provide additional useful information that would streamline this process, nor does it assist in delineating the specifications of individual bins. In this thesis we present a hybrid modular approach combining the interpretability of a regression tree with the prediction accuracy of neural networks. Our procedure was inspired by Breiman and Shang’s idea of a “representer tree” (also known as a “born again tree”) but with two main modifications: 1) we use a much more accurate neural network in place of a multiple-tree method, and 2) we use our own modified smearing method, which involves adding Gaussian noise. Our methodology has not previously been used for wool auction data and the accompanying price prediction problem. The numeric predictions from our method are highly competitive with those of other methods. Our method also provides an unprecedented level of clarity and interpretability of the price-driving variables, in the form of tree diagrams and the tabular form of these trees developed in our research. These are extremely useful for wool growers and other casual observers who may not have an advanced understanding of modelling and mathematics. The method is also highly modular and can be continually extended and improved. We detail this approach and illustrate it with real data. The more accurate modelling and analysis helps wool growers to better understand market behaviour. If the important factors are identified, then effective strategies can be developed to maximise return to the growers. In Chapter 1 of this thesis, we present a brief overview of the Australian wool auction market.
We then discuss the problems faced by the wool growers and their significance, which motivate our research. In Chapter 2, we define the predictive aspect of the modelling problem and present the data available to us for our research. We introduce the assumptions that must be made in order to model the auction data and predict wool prices. Chapter 3 discusses neural networks and their potential for our wool auction problem. Neural networks are known to give good results in many modern applications resolving industrial problems. As a result of the popularity of such methods and their ongoing development, our research partner, the Department of Agriculture and Food, Government of Western Australia, performed a preliminary investigation into neural networks and found them to give satisfactory predictions of wool auction prices. In Chapter 3, we perform an analysis and assessment of neural networks, specifically generalised regression neural networks (GRNN). We look at the strengths and weaknesses of GRNN, apply them to the wool auction problem, and comment on their relevance and usability. We detail the problems we face, and why neural networks alone may not be the best approach for the wool auction problem, thus laying the foundation for the development of our hybrid modular approach in Chapter 5. We also use the numerical prediction results from GRNN as the benchmark in our comparisons of different modelling methods in the rest of this thesis. Chapter 4 details the tree-based regression methods as an alternative approach to neural networks. In analysing the tree-based methods with our wool auction data, we illustrate the tree methods’ advantages over neural networks, as well as the trade-offs. We also demonstrate how powerful and useful a tree diagram can be for the wool auction problem.
In this chapter, we also improve the typical tree diagram by introducing our own tabular form of the tree, which can be of immense use to wool growers. In particular, we can use this tabular form to solve the wool specification problem mentioned earlier, and we incorporate it as part of a new hybrid methodology in Chapter 5. In Chapter 4 we also consider ensemble methods such as bootstrap aggregating (bagging) and random forests, and discuss their results. We demonstrate that the ensemble methods provide higher prediction accuracies than ordinary regression trees by introducing many trees into the model, but at the expense of losing the simplicity and clarity of a single tree. However, the study of ensemble methods does provide an excellent idea for our hybrid approach in Chapter 5. Chapter 5 details the new hybrid approach we developed as a result of our work in Chapters 3 and 4 using neural networks and tree-based regression methods. Our hybrid approach combines the two methods with their respective strengths. We apply our new approach to the data, compare the results with our earlier work in neural networks and tree-based regression methods, then discuss the results. Finally, we conclude the thesis with Chapter 6, discussing the potential of our new hybrid approach and directions for possible future work.
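The "born again tree" idea behind this hybrid can be sketched briefly: smear the training inputs with Gaussian noise, label the smeared points with the accurate black-box model, and fit an interpretable tree to the result. A minimal sketch assuming scikit-learn is available; parameter names are hypothetical, and the thesis's own modified smearing method differs in detail:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def born_again_tree(black_box_predict, X_train, noise_scale=0.1,
                    n_copies=5, max_depth=3, seed=0):
    """Fit an interpretable surrogate tree to a black-box model:
    replicate the training inputs with added Gaussian noise (smearing),
    label the smeared points with the black-box predictions, and grow
    a shallow regression tree on the enlarged labelled set."""
    rng = np.random.default_rng(seed)
    X_rep = np.vstack([X_train + rng.normal(0.0, noise_scale, X_train.shape)
                       for _ in range(n_copies)])
    y_rep = black_box_predict(X_rep)           # black box provides the labels
    tree = DecisionTreeRegressor(max_depth=max_depth)
    tree.fit(X_rep, y_rep)
    return tree
```

The surrogate inherits much of the black box's accuracy where the smeared data covers the input space, while its splits remain readable as a tree diagram, which is the interpretability the thesis is after.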

    Metamodeling Techniques to Aid in the Aggregation Process of Large Hierarchical Simulation Models

    This research investigates how aggregation is currently conducted for the simulation of large systems, with the purpose of examining how to achieve suitable aggregation. More specifically, it investigates how to accurately aggregate hierarchical lower-level (higher-resolution) models into the next higher level in order to reduce the complexity of the overall simulation model. We develop aggregation procedures between two simulation levels (e.g., aggregation of engagement-level models into a mission-level model) to address how much, and what, information needs to pass from the high-resolution to the low-resolution model in order to preserve statistical fidelity. We present a mathematical representation of the simulation model based on network theory, and procedures for simulation aggregation that are logical and executable. This research examines the effectiveness of several statistical techniques, including regression and three types of artificial neural networks, as aggregation techniques for predicting outputs of the lower-level model and evaluating their effects as inputs into the next higher-level model. The proposed process is a collection of conventional statistical and aggregation techniques, including one novel concept and extensions to the regression and neural network methods, which are compared to the truth simulation model, i.e., the model in which actual lower-level model outputs are used as direct inputs into the next higher-level model. The aggregation methodology developed in this research provides an analytic foundation that formally defines the steps essential to appropriately and effectively simulating large hierarchical systems.
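The metamodeling idea here can be illustrated in a few lines: an expensive high-resolution model is evaluated at a handful of design points, and a cheap regression surrogate fitted to those points stands in for it at the next level up. A toy sketch (the stand-in model and design points are invented for illustration; the research's actual models are far more complex):

```python
import numpy as np

# Stand-in for an expensive high-resolution (e.g. engagement-level)
# simulation; in practice each evaluation would be a full simulation run.
def high_res_model(x):
    return 2.0 * x ** 2 + 1.0

# Evaluate the expensive model at a small experimental design,
# then fit a quadratic regression metamodel to those runs.
x_design = np.linspace(0.0, 4.0, 9)
y_design = high_res_model(x_design)
coeffs = np.polyfit(x_design, y_design, deg=2)
metamodel = np.poly1d(coeffs)  # cheap surrogate fed to the higher-level model
```

The higher-level model then queries `metamodel` instead of re-running the high-resolution simulation; the research's central question is how much statistical fidelity such a replacement preserves.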

    Prediction Interval Estimation Techniques for Empirical Modeling Strategies and their Applications to Signal Validation Tasks

    The basis of this work was to evaluate both parametric and non-parametric empirical modeling strategies applied to signal validation, or on-line monitoring, tasks. On-line monitoring methods assess signal channel performance to aid in making instrument calibration decisions, enabling the use of condition-based calibration schedules. The three non-linear empirical modeling strategies studied were: artificial neural networks (ANN), neural network partial least squares (NNPLS), and local polynomial regression (LPR). These three types are the most common nonlinear models applied to signal validation tasks. Of the class of local polynomials (for LPR), two were studied in this work: zero-order (kernel regression) and first-order (local linear regression). The evaluation of the empirical modeling strategies includes the presentation and derivation of prediction intervals for each of the three model types studied, so that estimations could be made with an associated prediction interval. An estimate and its corresponding prediction interval contain the measurements with a specified certainty, usually 95%. The prediction interval estimates were compared to results obtained from bootstrapping via Monte Carlo resampling, to validate their expected accuracy. The estimation of prediction intervals applied to on-line monitoring systems is essential if widespread use of these empirically based systems is to be attained. In response to the topical report On-Line Monitoring of Instrument Channel Performance, published by the Electric Power Research Institute [Davis 1998], the NRC issued a safety evaluation report that identified the need to evaluate the associated uncertainty of empirical model estimations from all contributing sources. This need forms the basis for the research completed and reported in this dissertation.
The focus of this work, and the basis of its original contributions, was to provide an accurate prediction interval estimation method for each of the mentioned empirical modeling techniques, and to verify the results via bootstrap simulation studies. Properly determined prediction interval estimates were obtained that consistently captured the uncertainty of the given model, such that the stated level of certainty of the intervals closely matched the observed level of coverage of the prediction intervals over the measured values. In most cases the expected level of coverage of the measured values within the prediction intervals was 95%, meaning that the probability that an estimate and its associated prediction interval contain the corresponding measured observation was 95%. The results also indicate that instrument channel drifts are identifiable through the developed prediction intervals, by observing the drop in the level of coverage of the prediction intervals to relatively low values, e.g. 30%. While all empirical models exhibit optimal performance for a given set of specifications, identifying this optimal set may be difficult. The developed methods of prediction interval estimation were shown to perform as expected over a wide range of model specifications, including misspecification. Model misspecification occurs through different mechanisms depending on the type of empirical model: ANN, through architecture selection; NNPLS, through latent variable selection; LPR, through bandwidth selection. In addition, all of the above empirical models are susceptible to misspecification due to inadequate data and the presence of erroneous predictor variables in the set of predictors. A study was completed to verify that the presence of erroneous variables, i.e.
unrelated to the desired response or composed of random noise, resulted in increases in the prediction interval magnitudes while maintaining the appropriate level of coverage for the response measurements. In addition to considering the resultant prediction intervals and coverage values, a comparative evaluation of the different empirical models was performed. The evaluation considers the average estimation errors and the stability of the models under repeated Monte Carlo resampling. The results indicate the large uncertainty of ANN models applied to collinear data, and the utility of the NNPLS model for the same purpose. The results from the LPR models, in contrast, remained consistent for data with or without collinearity, provided proper regularization was applied. The quantification of the uncertainty of an empirical model's estimations is a necessary task for promoting the use of on-line monitoring systems in the nuclear power industry. All of the methods studied herein were applied to a simulated data set for an initial evaluation, and to data from two different U.S. nuclear power plants for the purposes of signal validation in on-line monitoring tasks.
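The bootstrap-via-Monte-Carlo-resampling check described above can be sketched for a simple linear empirical model: refit on resampled data, add a resampled residual to each refit's prediction, and take percentiles of the simulated values as the prediction interval. The `fit`/`predict` callables and parameter names below are illustrative, not the dissertation's:

```python
import numpy as np

def bootstrap_prediction_interval(x_train, y_train, x_query, fit, predict,
                                  n_boot=500, alpha=0.05, seed=0):
    """Bootstrap a (1 - alpha) prediction interval at x_query:
    resample the training pairs, refit the empirical model, add a
    resampled residual to the refit's prediction, and take the
    alpha/2 and 1 - alpha/2 percentiles of the simulated values."""
    rng = np.random.default_rng(seed)
    n = len(x_train)
    sims = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)                    # resample with replacement
        model = fit(x_train[idx], y_train[idx])
        resid = y_train[idx] - predict(model, x_train[idx])
        sims.append(predict(model, x_query) + rng.choice(resid))
    lo, hi = np.percentile(sims, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi
```

Checking what fraction of measured values fall inside such intervals over a validation set gives exactly the observed-coverage figure the dissertation compares against the nominal 95%.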

    Comparative Deterministic and Probabilistic Modeling in Geotechnics: Applications to Stabilization of Organic Soils, Determination of Unknown Foundations for Bridge Scour, and One-Dimensional Diffusion Processes

    This study presents different aspects of the use of deterministic methods, including Artificial Neural Networks (ANNs) and linear and nonlinear regression, as well as probabilistic methods, including Bayesian inference and Monte Carlo methods, to develop reliable solutions for challenging problems in geotechnics. This study addresses the theoretical and computational advantages and limitations of these methods in application to: 1) prediction of the stiffness and strength of stabilized organic soils, 2) determination of unknown foundations for bridges vulnerable to scour, and 3) uncertainty quantification for one-dimensional diffusion processes. ANNs were successfully implemented in this study to develop nonlinear models for the mechanical properties of stabilized organic soils. The ANN models were able to learn from the training examples and then generalize the trend to make predictions for the stiffness and strength of stabilized organic soils. A stepwise parameter selection and a sensitivity analysis method were implemented to identify the factors most relevant to the prediction of stiffness and strength. The variations of stiffness and strength with respect to each factor were also investigated. A deterministic and a probabilistic approach were proposed to evaluate the characteristics of unknown foundations of bridges subjected to scour. The proposed methods were successfully implemented and validated using data collected for bridges in the Bryan District. ANN models were developed and trained using the database of bridges to predict the foundation type and embedment depth. The probabilistic Bayesian approach generated probability distributions for the foundation and soil characteristics and was able to capture the uncertainty in the predictions. The parametric and numerical uncertainties in the one-dimensional diffusion process were evaluated under varying observation conditions.
The inverse problem was solved using Bayesian inference formulated with both the analytical and numerical solutions of the ordinary differential equation of diffusion. The numerical uncertainty was evaluated by comparing the mean and standard deviation of the posterior realizations of the process corresponding to the analytical and numerical solutions of the forward problem. It was shown that higher correlation in the structure of the observations increased both parametric and numerical uncertainties, whereas increasing the number of data points dramatically decreased the uncertainties in the diffusion process.
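The Bayesian inverse-problem setup can be illustrated with a minimal example: given noisy observations of a diffusion process, evaluate the posterior of the diffusion coefficient on a grid using an analytical forward solution. The sinusoidal forward model, flat prior, and all names below are our illustrative choices, not the study's actual formulation:

```python
import numpy as np

def forward(D, x, t, k=np.pi):
    """Analytical solution of 1-D diffusion u_t = D u_xx for a
    sinusoidal initial condition u(x, 0) = sin(k x) on [0, 1]:
    u(x, t) = exp(-D k^2 t) sin(k x)."""
    return np.exp(-D * k ** 2 * t) * np.sin(k * x)

def posterior_grid(D_grid, x_obs, t_obs, u_obs, sigma=0.01):
    """Normalised Bayesian posterior over the diffusion coefficient D
    on a grid, assuming a flat prior and i.i.d. Gaussian observation
    noise with standard deviation `sigma`."""
    logp = np.array([
        -0.5 * np.sum((u_obs - forward(D, x_obs, t_obs)) ** 2) / sigma ** 2
        for D in D_grid
    ])
    p = np.exp(logp - logp.max())   # subtract max for numerical stability
    return p / p.sum()
```

Swapping `forward` for a numerical ODE/PDE solver and comparing the two posteriors is, in spirit, how the study separates numerical from parametric uncertainty.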

    Experimental and Numerical Analysis of Ethanol Fueled HCCI Engine

    Presently, research on homogeneous charge compression ignition (HCCI) engines has gained importance in the field of automotive power applications due to their superior efficiency and low emissions compared to conventional internal combustion (IC) engines. In principle, HCCI uses a premixed lean homogeneous charge that auto-ignites volumetrically throughout the cylinder. Homogeneous mixture preparation is the key to achieving high fuel economy and low exhaust emissions from HCCI engines. In the recent past, different techniques to prepare the homogeneous mixture have been explored. The major problem associated with HCCI is controlling the auto-ignition over a wide range of engine operating conditions; control strategies for HCCI engines have also been explored. This dissertation investigates the utilization of ethanol, a potential major contributor to the fuel economy of the future. A port fuel injection (PFI) strategy was used to prepare the homogeneous mixture external to the engine cylinder in a constant-speed, single-cylinder, four-stroke, air-cooled engine operated in HCCI mode. Seven modules of work were proposed and carried out in this research to establish the results of using ethanol as a potential fuel in the HCCI engine. Ethanol has a low cetane number and thus cannot be auto-ignited easily; therefore, intake air preheating was used to achieve auto-ignition temperatures. In the first module of work, the ethanol-fueled HCCI engine was thermodynamically analysed to determine the operating domain. The minimum intake air temperature required to achieve auto-ignition and stable HCCI combustion was found to be 130 °C, whereas the knock limit of the engine capped the maximum intake air temperature at 170 °C. Therefore, the intake air temperature range was fixed at 130-170 °C for ethanol-fueled HCCI operation.
In the second module of work, experiments were conducted with the intake air temperature varied from 130-170 °C at regular intervals of 10 °C. It was found that increasing the intake air temperature advanced the combustion phase and decreased the exhaust gas temperature. At 170 °C, the maximum combustion efficiency and thermal efficiency were found to be 98.2% and 43% respectively. The NO emission and smoke emission were found to be below 11 ppm and 0.1% respectively throughout this study. Building on these results, the TOPSIS method was used to (i) choose the best operating condition and (ii) determine which input parameter has the greater influence on the HCCI output. In the third module of work, TOPSIS, a multi-criteria decision-making technique, was used to evaluate the optimum operating conditions. The optimal HCCI operating condition was found at 70% load and 170 °C charge temperature. The analysis of variance (ANOVA) test results revealed that charge temperature is the most significant parameter, followed by engine load; their percentage contributions were 63.04% and 27.89% respectively. In the fourth module of work, the GRNN algorithm was used to predict the output parameters of the HCCI engine. The network was trained, validated, and tested with the experimental data sets: it was trained on 60% of the experimental data, and validated and tested on 20% of the data each. The validation results showed that the predicted output parameters lie within 2% error. The results also showed that the GRNN models are advantageous for their network simplicity and ability to work with sparse data. The developed tool efficiently predicted the relation between the input and output parameters. In the fifth module of work, EGR was used to control the HCCI combustion.
An EGR rate of 5% was found to be optimum; further increases in EGR increased hydrocarbon (HC) emissions. The maximum brake thermal efficiency of 45% was found for a 170 °C charge temperature at 80% engine load. The NO emission and smoke emission were found to be below 10 ppm and 0.61% respectively. In the sixth module of work, a hybrid GRNN-PSO model was developed to optimize the ethanol-fueled HCCI engine based on the output performance and emission parameters. The GRNN provides a probability-based estimate, enabling it to predict the performance and emission parameters of the HCCI engine within the range of input parameters. Since the GRNN cannot itself optimize the solution, a swarm-based adaptive mechanism was hybridized with it. A new fitness function was developed by considering the six engine output parameters, and constrained optimization criteria were implemented in four cases. The optimum HCCI engine operating conditions for the general criteria were found to be 170 °C charge temperature, 72% engine load, and 4% EGR. This model consumed about 60-75 ms for the HCCI engine optimization. In the last module of work, an external fuel vaporizer was used to prepare ethanol fuel vapour, which was admitted into the HCCI engine. The maximum brake thermal efficiency of 46% was found for a 170 °C charge temperature at 80% engine load. The NO emission and smoke emission were found to be below 5 ppm and 0.45% respectively. Overall, it is concluded that HCCI combustion of neat ethanol is possible with charge heating alone, that the high-load limit of HCCI can be extended with ethanol fuel, and that high thermal efficiency and low emissions are achievable with ethanol-fueled HCCI to meet current demands.
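The TOPSIS ranking used in the third module above follows a standard recipe: vector-normalise the decision matrix, apply criteria weights, and score each alternative by its relative closeness to the ideal solution. A generic sketch (the engine's actual criteria, weights, and measured values are not reproduced here):

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """TOPSIS scores for a decision matrix (rows = alternatives,
    columns = criteria). `benefit[j]` is True when criterion j is to
    be maximised (e.g. efficiency) and False when it is to be
    minimised (e.g. emissions). Higher score = better alternative."""
    M = np.asarray(matrix, dtype=float)
    M = M / np.sqrt((M ** 2).sum(axis=0))           # vector normalisation
    M = M * np.asarray(weights, dtype=float)        # weighted normalised matrix
    ideal = np.where(benefit, M.max(axis=0), M.min(axis=0))
    worst = np.where(benefit, M.min(axis=0), M.max(axis=0))
    d_pos = np.sqrt(((M - ideal) ** 2).sum(axis=1))  # distance to ideal
    d_neg = np.sqrt(((M - worst) ** 2).sum(axis=1))  # distance to anti-ideal
    return d_neg / (d_pos + d_neg)                   # relative closeness
```

With rows for each (load, charge temperature) test condition and columns for the measured efficiency and emission criteria, the condition with the highest closeness score is the one such an analysis selects as optimal.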

    Flood Forecasting Using Machine Learning Methods

    This book is a printed edition of the Special Issue Flood Forecasting Using Machine Learning Methods that was published in Water.