
    Generative Adversarial Networks (GANs): Challenges, Solutions, and Future Directions

    Generative Adversarial Networks (GANs) are a novel class of deep generative models that has recently gained significant attention. GANs learn complex and high-dimensional distributions implicitly over images, audio, and other data. However, training GANs poses major challenges, namely mode collapse, non-convergence, and instability, arising from inappropriate network architecture design, choice of objective function, and selection of optimization algorithm. Recently, to address these challenges, several solutions for better design and optimization of GANs have been investigated, based on re-engineered network architectures, new objective functions, and alternative optimization algorithms. To the best of our knowledge, no existing survey has focused specifically on the broad and systematic development of these solutions. In this study, we perform a comprehensive survey of the advancements in GAN design and optimization proposed to handle these challenges. We first identify key research issues within each design and optimization technique and then propose a new taxonomy that structures solutions by key research issues. In accordance with the taxonomy, we provide a detailed discussion of the different GAN variants proposed within each solution and their relationships. Finally, based on the insights gained, we present promising research directions in this rapidly growing field.
    Comment: 42 pages, 13 figures, tables
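For reference, the standard minimax objective that most of the surveyed objective-function variants start from (generic notation from the original GAN formulation, with generator G, discriminator D, data distribution p_data, and noise prior p_z; not specific to this survey):

```latex
\min_{G}\,\max_{D}\; V(D,G) =
\mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\bigl[\log D(x)\bigr]
+ \mathbb{E}_{z \sim p_{z}(z)}\bigl[\log\bigl(1 - D(G(z))\bigr)\bigr]
```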

    Bayesian analysis of multifidelity computer models with local features and non-nested experimental designs: Application to the WRF model

    Motivated by a multi-fidelity Weather Research and Forecasting (WRF) climate model application in which the available simulations are not generated from a hierarchically nested experimental design, we develop a new co-kriging procedure called Augmented Bayesian Treed Co-Kriging. The proposed procedure extends the scope of co-kriging in two major ways. First, we introduce a binary treed partition latent process in the multifidelity setting to account for non-stationarity and potential discontinuities in the model outputs at different fidelity levels. Second, we introduce an efficient imputation mechanism that allows the practical implementation of co-kriging when the experimental design is non-hierarchically nested, by enabling the specification of semi-conjugate priors. Our imputation strategy allows the design of an efficient RJ-MCMC implementation that involves collapsed blocks and direct simulation from conditional distributions. We develop a Monte Carlo recursive emulator which provides a Monte Carlo proxy for the full predictive distribution of the model output at each fidelity level in a computationally feasible manner. The performance of our method is demonstrated on benchmark examples and used for the analysis of a large-scale climate modeling application involving the WRF model.
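For background, a minimal sketch of the autoregressive co-kriging structure that multifidelity treed co-kriging builds on (generic notation; the paper's treed partition latent process and imputation scheme are not shown):

```latex
f_{t}(x) = \rho_{t-1}\, f_{t-1}(x) + \delta_{t}(x), \qquad t = 2, \dots, T,
```

where $f_1(\cdot)$ and each discrepancy term $\delta_t(\cdot)$ are modelled as independent Gaussian processes and $\rho_{t-1}$ is a scale parameter linking consecutive fidelity levels.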

    Uncertainty Quantification for complex computer models with nonstationary output. Bayesian optimal design for iterative refocussing

    In this thesis, we provide Uncertainty Quantification (UQ) tools to assist automatic and robust calibration of complex computer models. Our tools allow users to construct a cheap statistical surrogate, a Gaussian process (GP) emulator, based on a small number of climate model runs. History matching (HM), the calibration process of removing the parameter space for which computer model outputs are inconsistent with the observations, is combined with an emulator. The remaining subset of parameter space is termed the Not Ruled Out Yet (NROY) space. A weakly stationary GP with a covariance function that depends on the distance between two input points is the principal tool in UQ. However, the stationarity assumption is inadequate when we operate with a heterogeneous model response. In this thesis, we develop diagnostic-led nonstationary GP emulators with a kernel mixture. We employ diagnostics from a stationary GP fit to identify input regions with distinct model behaviour and to obtain mixing functions for the kernel mixture. The result is an emulator that is continuous in parameter space and adapts to changes in model response behaviour. History matching has proven to be more effective when performed in waves: at each wave of HM, a new ensemble is obtained to update the emulator before finding a new NROY space. In this thesis, we propose a Bayesian experimental design with a loss function that compares the volume of the NROY space obtained with an updated emulator to the volume of the “true” NROY space obtained using a “perfect” emulator. We combine this Bayesian design criterion with our proposed nonstationary GP emulator to perform calibration of a climate model.
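A minimal sketch of the implausibility measure commonly used to define the NROY space in history matching (generic notation; the thesis's nonstationary emulator supplies the mean and variance, and the observation and discrepancy variances are application-specific):

```latex
I(x) = \frac{\bigl|\, z - \mathbb{E}[f(x)] \,\bigr|}
            {\sqrt{\mathrm{Var}[f(x)] + \sigma^{2}_{\mathrm{obs}} + \sigma^{2}_{\mathrm{disc}}}},
\qquad
\mathrm{NROY} = \{\, x : I(x) \le 3 \,\},
```

where $z$ is the observation and the cutoff of 3 follows the usual three-sigma rule.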

    The 60 GHz IMPATT diode development

    The objective is to develop 60 GHz IMPATT diodes suitable for communications applications. The performance goals for the 60 GHz IMPATT are 1 W CW output power, a conversion efficiency of 15 percent, and a 10-year lifetime. The final design of the 60 GHz IMPATT structure evolved from computer simulations performed at the University of Michigan. The initial doping profile, involving a hybrid double-drift (HDD) design, was derived from a drift-diffusion model that used the static velocity-field characteristics for GaAs. Unfortunately, the model did not consider the effects of velocity undershoot and the delay of the avalanche process due to energy relaxation. Consequently, the initial devices oscillated at a much lower frequency than anticipated. With a revised simulation program that included the two effects given above, a second HDD profile was generated and used as a basis for fabrication efforts. In the area of device fabrication, significant progress was made in epitaxial growth and characterization, wafer processing, and die assembly. Organo-metallic chemical vapor deposition (OMCVD) was used. Starting with a baseline X-Band IMPATT technology, the appropriate processing steps were modified to satisfy the device requirements at V-Band. In terms of efficiency and reliability, the device requirements dictate a reduction in series resistance and thermal resistance. Qualitatively, researchers were able to reduce the diodes' series resistance by reducing the thickness of the N+ GaAs substrate used in their fabrication.

    Bayesian Design in Clinical Trials

    In the last decade, the number of clinical trials using Bayesian methods has grown dramatically. Nowadays, regulatory authorities appear to be more receptive to Bayesian methods than ever. The Bayesian methodology is well suited to address the issues arising in the planning, analysis, and conduct of clinical trials. Due to their flexibility, Bayesian design methods based on the accrued data of ongoing trials have been recommended by both the US Food and Drug Administration and the European Medicines Agency for dose-response trials in early clinical development. A distinctive feature of the Bayesian approach is its ability to deal with external information, such as historical data, findings from previous studies, and expert opinions, through prior elicitation. In fact, it provides a framework for embedding auxiliary information, and handling its variability, within the planning and analysis of the study. A growing body of literature examines the use of historical data to augment newly collected data, especially in clinical trials where patients are difficult to recruit, as is the case for rare diseases, for example. Many works explore how this can be done properly, since using historical data has been recognized as less controversial than eliciting prior information from experts’ opinions. In this book, applications of Bayesian design in the planning and analysis of clinical trials are introduced, along with methodological contributions to specific topics of Bayesian statistics. Finally, two reviews of the state of the art of the Bayesian approach in the clinical trials field are presented.
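One widely used device for borrowing historical data of this kind is the power prior (shown as a generic illustration; it is not necessarily the construction used in the contributed chapters):

```latex
\pi(\theta \mid D_{0}, a_{0}) \;\propto\; L(\theta \mid D_{0})^{\,a_{0}}\, \pi_{0}(\theta),
\qquad a_{0} \in [0, 1],
```

where $D_0$ is the historical dataset, $L$ the likelihood, $\pi_0$ the initial prior, and $a_0$ the discounting parameter: $a_0 = 0$ ignores the historical data, while $a_0 = 1$ pools it fully with the new data.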

    Bayesian optimisation for likelihood-free cosmological inference

    Many cosmological models have only a finite number of parameters of interest, but a very expensive data-generating process and an intractable likelihood function. We address the problem of performing likelihood-free Bayesian inference from such black-box simulation-based models, under the constraint of a very limited simulation budget (typically a few thousand). To do so, we adopt an approach based on the likelihood of an alternative parametric model. Conventional approaches to approximate Bayesian computation such as likelihood-free rejection sampling are impractical for the considered problem, due to the lack of knowledge about how the parameters affect the discrepancy between observed and simulated data. As a response, we make use of a strategy previously developed in the machine learning literature (Bayesian optimisation for likelihood-free inference, BOLFI), which combines Gaussian process regression of the discrepancy to build a surrogate surface with Bayesian optimisation to actively acquire training data. We extend the method by deriving an acquisition function tailored for the purpose of minimising the expected uncertainty in the approximate posterior density, in the parametric approach. The resulting algorithm is applied to the problems of summarising Gaussian signals and inferring cosmological parameters from the Joint Lightcurve Analysis supernovae data. We show that the number of required simulations is reduced by several orders of magnitude, and that the proposed acquisition function produces more accurate posterior approximations, as compared to common strategies.
    Comment: 16+9 pages, 12 figures. Matches PRD published version after minor modification
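A minimal BOLFI-style loop illustrating the general strategy (a sketch under simplifying assumptions: a one-dimensional parameter, a uniform prior, a hypothetical scalar discrepancy function, and a plain lower-confidence-bound rule in place of the paper's tailored acquisition function):

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def simulate_discrepancy(theta):
    """Placeholder black-box simulator: returns a scalar discrepancy
    between simulated and observed summaries (hypothetical)."""
    return (theta - 1.3) ** 2 + 0.1 * np.random.randn()

rng = np.random.default_rng(0)
thetas = list(rng.uniform(-3.0, 3.0, size=5))        # small initial design
deltas = [simulate_discrepancy(t) for t in thetas]

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
grid = np.linspace(-3.0, 3.0, 400).reshape(-1, 1)

for _ in range(30):                                   # tiny simulation budget
    gp.fit(np.reshape(thetas, (-1, 1)), deltas)
    mu, sd = gp.predict(grid, return_std=True)
    lcb = mu - 2.0 * sd                               # favour low or uncertain regions
    theta_next = float(grid[np.argmin(lcb), 0])
    thetas.append(theta_next)
    deltas.append(simulate_discrepancy(theta_next))

# Approximate (unnormalised) posterior: prior times the GP-based probability
# that the discrepancy falls below a threshold epsilon (uniform prior assumed).
mu, sd = gp.predict(grid, return_std=True)
eps = np.quantile(deltas, 0.1)
post = norm.cdf((eps - mu) / sd)
post /= np.trapz(post, grid.ravel())
```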

    Adaptive swarm optimisation assisted surrogate model for pipeline leak detection and characterisation.

    Pipelines are often subject to leakage due to ageing, corrosion, and weld defects. It is difficult to avoid pipeline leakage as the sources of leaks are diverse. Various pipeline leakage detection methods, including fibre optic sensing, pressure point analysis, and numerical modelling, have been proposed during the last decades. One major issue with these methods is distinguishing the leak signal without giving false alarms. Considering that the data obtained by these traditional methods are digital in nature, machine learning models have been adopted to improve the accuracy of pipeline leakage detection. However, most of these methods rely on a large training dataset for accurate model training, and such experimental data are difficult to obtain. Some of the reasons include the huge cost of an experimental setup covering all possible scenarios, poor accessibility to remote pipelines, and labour-intensive experiments. Moreover, datasets constructed from data acquired in laboratory or field tests are usually imbalanced, as leakage data samples are generated from artificial leaks. Computational fluid dynamics (CFD) offers the benefit of detailed and accurate pipeline leakage modelling, which may be difficult to obtain experimentally or with analytical approaches. However, CFD simulation is typically time-consuming and computationally expensive, limiting its applicability to real-time applications. To alleviate the high computational cost of CFD modelling, this study proposes a novel data sampling optimisation algorithm, the Adaptive Particle Swarm Optimisation Assisted Surrogate Model (PSOASM), to select simulation scenarios in an adaptive and optimised manner. The algorithm is designed to place a new sample in poorly sampled regions of the parameter space of parametrised leakage scenarios, which uniform sampling methods may easily miss. This is achieved using two criteria: the population density of the training dataset and the model prediction fitness value. The model prediction fitness value enhances the global exploration capability of the surrogate model, while the population density of training data samples benefits its local accuracy. The proposed PSOASM was compared with four conventional sequential sampling approaches and tested on six commonly used benchmark functions from the literature. Different machine learning algorithms were explored with the developed model, and the effect of the initial sample size on surrogate model performance was evaluated. Next, pipeline leakage detection analysis, with emphasis on a multiphase flow system, was investigated to find the flow field parameters that provide pertinent indicators for pipeline leakage detection and characterisation. Plausible leak scenarios which may occur in the field were simulated for the gas-liquid pipeline using a three-dimensional RANS CFD model. The perturbation of the pertinent flow field indicators for different leak scenarios is reported, which is expected to help improve the understanding of multiphase flow behaviour induced by leaks. The results of the simulations were validated against the latest experimental and numerical data reported in the literature. The proposed surrogate model was later applied to pipeline leak detection and characterisation.
The CFD modelling results showed that fluid flow parameters are pertinent indicators in pipeline leak detection. It was observed that the upstream pipeline pressure can serve as a critical indicator for detecting leakage, even if the leak size is small. In contrast, the downstream flow rate is a dominant leakage indicator if flow rate monitoring is chosen for leak detection. The results also reveal that when two leaks of different sizes co-occur in a single pipe, detecting the smaller leak becomes difficult if its size is below 25% of the larger leak size. However, in the event of a double leak with equal dimensions, the leak closer to the pipe upstream is easier to detect. The results from all the analyses demonstrate the PSOASM algorithm's superiority over the well-known sequential sampling schemes used for evaluation. The test results show that the PSOASM algorithm can be applied to pipeline leak detection with limited training datasets and provides a general framework for improving computational efficiency through adaptive surrogate modelling in various real-life applications.
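An illustrative sketch of the two-criterion sample placement idea described above (hypothetical scoring, weights, and variable names; the actual PSOASM searches for the new point with particle swarm optimisation rather than scoring a fixed candidate pool, and uses its own fitness definition):

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.gaussian_process import GaussianProcessRegressor

def next_sample(X_train, y_train, candidates, w_density=0.5, w_fitness=0.5):
    """Pick the candidate that is both far from existing samples (low local
    population density) and poorly predicted by the surrogate (high predictive
    uncertainty, used here as a stand-in for the prediction-fitness value)."""
    gp = GaussianProcessRegressor(normalize_y=True).fit(X_train, y_train)
    _, std = gp.predict(candidates, return_std=True)

    # Density criterion: distance to the nearest already-simulated scenario.
    d_nearest = cdist(candidates, X_train).min(axis=1)

    # Normalise both criteria to [0, 1] before combining.
    def scale(v):
        return (v - v.min()) / (v.max() - v.min() + 1e-12)

    score = w_density * scale(d_nearest) + w_fitness * scale(std)
    return candidates[np.argmax(score)]

# Usage: X_train holds parametrised leak scenarios already run in CFD,
# y_train the corresponding flow-field indicator, and candidates a pool of
# untried scenarios; the returned point is the next CFD run to launch.
```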

    PREDICTIVE MATURITY OF INEXACT AND UNCERTAIN STRONGLY COUPLED NUMERICAL MODELS

    Computer simulations are commonly used to predict the response of complex systems in many branches of engineering and science. These simulations involve a theoretical foundation, numerical modeling, and supporting experimental data, all of which carry associated errors. Furthermore, real-world problems are generally complex in nature, with each phenomenon described by a respective constituent model representing different physics and/or scales. The interactions between such constituents are typically complex as well, such that the outputs of a particular constituent may be the inputs for one or more other constituents. A natural question then arises concerning the validity of these complex computer model predictions, especially when these models are executed in support of high-consequence decision making. The overall accuracy and precision of the coupled system is determined by the accuracy and precision of both the constituents and the coupling interface. Each constituent model has its own uncertainty and bias error. Furthermore, the coupling interface brings in a similar spectrum of uncertainties and bias errors due to unavoidably inexact and incomplete data transfer between the constituents. This dissertation contributes to the established knowledge of partitioned analysis by investigating the numerical uncertainties, validation, and uncertainty quantification of strongly coupled inexact and uncertain models. The importance of this study lies in the urgent need for a better understanding of simulations of coupled systems, such as those in multi-scale and multi-physics applications, and in identifying the limitations due to uncertainty and bias errors in these models.
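For concreteness, a generic sketch (not the dissertation's own implementation) of the partitioned, strongly coupled analysis pattern discussed above, in which two constituent solvers exchange interface variables until a fixed point is reached; every function and tolerance here is a hypothetical placeholder:

```python
import numpy as np

def constituent_a(u_interface):
    """Stand-in for constituent model A (one physics/scale)."""
    return 0.5 * np.cos(u_interface)

def constituent_b(v_interface):
    """Stand-in for constituent model B consuming A's output."""
    return 1.0 + 0.3 * v_interface

def coupled_solve(u0=0.0, tol=1e-10, max_iter=100):
    u = u0
    for k in range(max_iter):
        v = constituent_a(u)          # A's output becomes B's input
        u_new = constituent_b(v)      # B's output closes the loop
        if abs(u_new - u) < tol:      # interface residual check
            return u_new, k + 1
        u = u_new                     # Gauss-Seidel style update
    raise RuntimeError("coupling iteration did not converge")

u_star, iters = coupled_solve()
```

In practice, the inexact data transfer at the interface and each solver's bias and uncertainty propagate through exactly this loop, which is why the coupled prediction can be less trustworthy than either constituent in isolation.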

    HETEROGENEOUS UNCERTAINTY QUANTIFICATION FOR RELIABILITY-BASED DESIGN OPTIMIZATION

    Uncertainty is inherent to real-world engineering systems, and reliability analysis aims at quantitatively measuring the probability that engineering systems successfully perform their intended functionalities under various sources of uncertainty. In this dissertation, heterogeneous uncertainties, including input variation, data uncertainty, simulation model uncertainty, and time-dependent uncertainty, are taken into account in reliability analysis and reliability-based design optimization (RBDO). Input variation inherently exists in practical engineering systems and can be characterized by statistical modeling methods. Data uncertainty occurs when surrogate models are constructed to replace simulations or experiments based on a set of training data, while simulation model uncertainty is introduced when high-fidelity simulation models are built through idealizations and simplifications of real physical processes or systems. Time-dependent uncertainty is involved when considering system or component aging and deterioration. Ensuring a high level of system reliability is one of the critical targets of engineering design, and this dissertation studies effective reliability analysis and RBDO techniques to address the challenges of heterogeneous uncertainties. First, a novel reliability analysis method is proposed to deal with input randomness and time-dependent uncertainty. An ensemble learning framework is designed by integrating long short-term memory (LSTM) networks and feedforward neural networks. Time-series data are utilized to construct a surrogate model that captures the time-dependent responses with respect to input variables and stochastic processes. Moreover, an RBDO framework with a Kriging technique is presented to address time-dependent uncertainty in design optimization. Limit state functions are transformed into the time-independent domain by converting the stochastic processes and the time parameter into random variables, and Kriging surrogate models are then built and enhanced by a design-driven adaptive sampling scheme to accurately identify potential instantaneous failure events. Second, an equivalent reliability index (ERI) method is proposed for handling both input variations and surrogate model uncertainty in RBDO. To account for the surrogate model uncertainty, a Gaussian mixture model (GMM) is constructed from Gaussian process model predictions. To propagate both input variations and surrogate model uncertainty into the reliability analysis, the statistical moments of the GMM are utilized to calculate an equivalent reliability index. The sensitivity of the ERI with respect to the design variables is analytically derived to facilitate the surrogate model-based product design process, leading to reliable optimum solutions. Third, different effective methods are developed to handle simulation model uncertainty as well as surrogate model uncertainty. An active resource allocation framework is proposed for accurate reliability analysis using both simulation and experimental data, where a two-phase updating strategy is developed to reduce computational costs. The framework is further extended to RBDO problems, where a multi-fidelity design algorithm is presented to ensure accurate optimum designs while minimizing computational costs.
To account for both the bias terms and the unknown parameters in the simulation model, a Bayesian inference method is adopted to build a validated surrogate model, and a Bayesian-based mixture modeling method is developed to ensure reliable system designs under heterogeneous uncertainties.
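As a rough sketch of the moment-based step mentioned above (generic notation; the dissertation's equivalent reliability index may be defined differently), the first two moments of the limit-state prediction, here taken from a Gaussian mixture over GP predictions with weights, means, and variances $(w_k, \mu_k, \sigma_k^2)$, map to a reliability index and failure probability, assuming failure corresponds to $G \le 0$ and a normal approximation of $G$:

```latex
\mu_{G} = \sum_{k} w_{k}\,\mu_{k}, \qquad
\sigma_{G}^{2} = \sum_{k} w_{k}\bigl(\sigma_{k}^{2} + \mu_{k}^{2}\bigr) - \mu_{G}^{2}, \qquad
\beta = \frac{\mu_{G}}{\sigma_{G}}, \qquad
P_{f} \approx \Phi(-\beta),
```

where $\Phi$ is the standard normal CDF.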

    Tunable plasmonic resonances in highly porous nano-bamboo Si-Au superlattice-type thin films

    We report on the fabrication of spatially coherent columnar plasmonic nanostructure superlattice-type thin films with high porosity and strong optical anisotropy using glancing angle deposition. Subsequent and repeated depositions of silicon and gold lead to nanometer-dimension subcolumns with controlled lengths. The superlattice-type columns resemble bamboo structures in which smaller column sections of gold form junctions sandwiched between larger silicon column sections ("nano-bamboo"). We perform generalized spectroscopic ellipsometry measurements and finite element method computations to elucidate the strongly anisotropic optical properties of the highly porous nano-bamboo structures. A strongly localized plasmonic mode with a displacement pattern reminiscent of a dark quadrupole mode is observed in the vicinity of the gold subcolumns. We demonstrate tuning of this quadrupole-like mode frequency within the near-infrared spectral range by varying the geometry of the nano-bamboo structure. In addition, coupled-plasmon-like and inter-band transition-like modes occur in the visible and ultraviolet spectral regions, respectively. We present an example of the potential use of the nano-bamboo structures as a highly porous plasmonic sensor with optical read-out sensitivity to few-parts-per-million solvent levels in water.