
    An Efficient Monte Carlo-Based Solver for Thermal Radiation in Participating Media

    Monte Carlo-based solvers, while well-suited for accurate calculation of complex thermal radiation transport problems in participating media, are often deemed computationally unattractive for use in the solution of real-world problems. The main disadvantage of Monte Carlo (MC) solvers is their slow convergence rate and relatively high computational cost. This work presents a novel approach based on a low-discrepancy sequence (LDS) for reducing the error bound of a Monte Carlo-based radiation solver. The Sobol sequence, an LDS generated with a bit-by-bit exclusive-or operator, is used to develop a quasi-Monte Carlo (QMC) solver for thermal radiation in this work. Preliminary results for simple radiation problems in participating media show that the QMC-based solver has a lower error than the conventional MC-based solver. At the same time, QMC does not add any significant computational overhead. This essentially means that the QMC-based solver reaches similar error levels at a lower computational cost than the MC-based solver for thermal radiation.
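
    A minimal sketch of the core idea (not the paper's radiation solver): comparing a plain Monte Carlo estimate with a Sobol-sequence quasi-Monte Carlo estimate on a toy two-dimensional integral, using SciPy's qmc module. The integrand and sample sizes are illustrative assumptions.

```python
# A toy comparison of pseudo-random Monte Carlo vs. Sobol quasi-Monte Carlo integration.
import numpy as np
from scipy.stats import qmc  # SciPy's quasi-Monte Carlo module

rng = np.random.default_rng(0)
n = 2**12                             # number of samples (power of two suits Sobol)
exact = (1.0 - np.exp(-1.0))**2       # exact value of the toy integral below

def integrand(pts):
    # Toy stand-in for a radiative quantity: exp(-(x + y)) on the unit square.
    return np.exp(-(pts[:, 0] + pts[:, 1]))

mc_pts = rng.random((n, 2))                                  # pseudo-random points
qmc_pts = qmc.Sobol(d=2, scramble=True, seed=0).random(n)    # Sobol low-discrepancy points

print("MC  error:", abs(integrand(mc_pts).mean() - exact))
print("QMC error:", abs(integrand(qmc_pts).mean() - exact))
```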

    Fast, portable, and reliable algorithm for the calculation of Halton numbers

    We give a recursive algorithm closely based on the definition of Halton numbers (reflection in the radical point of the digits of an integer in an arbitrary-base positional notation) which, unlike Halton's short algorithm, is not at all affected by round-off error, and is much faster than the recent improvement of Halton's algorithm by Berblinger and Schlier. Some applications of Halton numbers are discussed.
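
    A minimal sketch of the radical-inverse definition the abstract refers to (digit reversal about the radix point), computed with exact integer arithmetic so that round-off plays no role. This illustrates the definition only; it is not the paper's recursive algorithm.

```python
# Radical inverse: reflect the base-b digits of n about the radix point, exactly.
from fractions import Fraction

def radical_inverse(n, base):
    """Return the digit-reversed value of the integer n in the given base."""
    num, den = 0, 1
    while n > 0:
        n, digit = divmod(n, base)
        num = num * base + digit
        den *= base
    return Fraction(num, den)

# A Halton point in d dimensions pairs radical inverses in pairwise coprime bases.
for i in range(1, 6):
    print(i, float(radical_inverse(i, 2)), float(radical_inverse(i, 3)))
```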

    Approximate flow friction factor: Estimation of the accuracy using Sobol’s Quasi-Random sampling

    The unknown friction factor from the implicit Colebrook equation cannot be expressed explicitly in an analytical way, and therefore, to simplify the calculation, many explicit approximations can be used instead. The accuracy of such approximations should be evaluated throughout the domain of interest in engineering practice, where the test points can be chosen in many different ways: using uniform, quasi-uniform, random, or quasi-random patterns. To avoid leaving large errors undetected, a sufficient minimal number of such points should be chosen and distributed using proper patterns. A properly chosen pattern can minimize the number of testing points required to detect the maximums of the error. The ability of a Sobol quasi-random versus a random distribution of testing points to capture the maximal relative error using a sufficiently small number of samples is evaluated. Quasi-randomly distributed Sobol testing points cover the domain of interest more evenly, avoiding large gaps. Moreover, Sobol sequences are deterministic and always the same, which allows exact repetition of scientific results.
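
    A minimal sketch of the testing idea, under assumptions not taken from the paper: the implicit Colebrook equation is solved numerically, the well-known Swamee-Jain explicit approximation stands in for an approximation under test, and its relative error is scanned at Sobol quasi-random points over an assumed (Re, relative roughness) domain.

```python
# Scan the relative error of an explicit approximation against the implicit
# Colebrook equation at Sobol-distributed test points (assumed domain).
import numpy as np
from scipy.optimize import brentq
from scipy.stats import qmc

def colebrook(Re, rr):
    # Solve 1/sqrt(f) = -2*log10(rr/3.7 + 2.51/(Re*sqrt(f))) for the friction factor f.
    g = lambda f: 1.0/np.sqrt(f) + 2.0*np.log10(rr/3.7 + 2.51/(Re*np.sqrt(f)))
    return brentq(g, 1e-4, 1.0)

def swamee_jain(Re, rr):
    # Explicit approximation used here as the formula under test.
    return 0.25 / np.log10(rr/3.7 + 5.74/Re**0.9)**2

# Sobol points over Re in [4e3, 1e8] and relative roughness in [1e-6, 5e-2] (log scale).
u = qmc.Sobol(d=2, scramble=True, seed=1).random(2**10)
Re = 10**(np.log10(4e3) + u[:, 0]*(np.log10(1e8) - np.log10(4e3)))
rr = 10**(-6.0 + u[:, 1]*(np.log10(5e-2) + 6.0))

err = [abs(swamee_jain(R, r) - colebrook(R, r)) / colebrook(R, r) for R, r in zip(Re, rr)]
print("max relative error over the Sobol sample: %.2f%%" % (100.0*max(err)))
```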

    Design Optimization of Submerged Jet Nozzles for Enhanced Mixing

    The purpose of this thesis was to identify the optimal design parameters for a jet nozzle that attains a local maximum shear stress while maximizing the average shear stress on the floor of a fluid-filled system. This research examined how geometric parameters of a jet nozzle, such as the nozzle's angle, height, and orifice, influence the shear stress created on the bottom surface of a tank. Simulations were run using a Computational Fluid Dynamics (CFD) software package to determine shear stress values for a parameterized geometric domain including the jet nozzle. A response surface was created based on the shear stress values obtained from 112 simulated designs. Multi-objective optimization software utilized the response surface to generate designs with the best combination of parameters to achieve maximum shear stress and maximum average shear stress. The optimal configuration of parameters achieved larger shear stress values than a commercially available design.
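
    A minimal sketch of the response-surface step, with synthetic stand-in data: a quadratic surface is fit by least squares to sampled (angle, height, orifice) points and then maximized. The parameter ranges and the "shear" values are placeholders, not the thesis's CFD results, and only a single objective is optimized here.

```python
# Fit a quadratic response surface to sampled designs, then maximize the prediction.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
# 112 synthetic design points: angle [deg], height, orifice diameter (illustrative ranges).
X = rng.uniform([15.0, 5.0, 2.0], [60.0, 30.0, 10.0], size=(112, 3))
# Placeholder "shear stress" response standing in for the CFD results.
shear = (5.0 - 0.002*(X[:, 0] - 40)**2 - 0.01*(X[:, 1] - 12)**2
         - 0.05*(X[:, 2] - 6)**2 + 0.05*rng.standard_normal(112))

def features(x):
    a, h, o = x.T
    return np.column_stack([np.ones_like(a), a, h, o, a*a, h*h, o*o, a*h, a*o, h*o])

coeff, *_ = np.linalg.lstsq(features(X), shear, rcond=None)   # quadratic response surface

def predicted_shear(x):
    return (features(np.atleast_2d(x)) @ coeff).item()

res = minimize(lambda x: -predicted_shear(x), x0=[40.0, 15.0, 6.0],
               bounds=[(15, 60), (5, 30), (2, 10)])
print("predicted optimum (angle, height, orifice):", np.round(res.x, 2))
```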

    Continuous approximation schemes for stochastic programs

    One of the main methods for solving stochastic programs is approximation by discretizing the probability distribution. However, discretization may destroy the differentiability of expectational functionals. The complexity of discrete approximation schemes also increases exponentially as the dimension of the random vector increases. On the other hand, stochastic methods can solve stochastic programs of larger dimension, but their convergence holds only in the sense of probability one. In this paper, we study the differentiability of stochastic two-stage programs and discuss continuous approximation methods for stochastic programs. We present several ways to calculate and estimate the derivative of the expectational functional. We then design several continuous approximation schemes and study their convergence behavior and implementation. The methods include several types of truncation approximation, lower-dimensional approximation, and limited basis approximation.
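
    For orientation, a generic two-stage stochastic program with recourse can be written as below (a standard textbook form; the paper's exact formulation may differ). Continuous approximation replaces the distribution of the random vector by a continuous approximating density, so the expectational functional can retain differentiability in x.

```latex
\min_{x \in X} \; c^{\top}x + \mathbb{E}_{\xi}\left[ Q(x,\xi) \right],
\qquad
Q(x,\xi) \;=\; \min_{y \ge 0} \left\{ \, q(\xi)^{\top} y \;:\; W y = h(\xi) - T(\xi)\, x \, \right\}.
```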

    Development of a predictive risk model for all-cause mortality in patients with diabetes in Hong Kong

    Introduction Patients with diabetes mellitus are at risk of premature death. In this study, we developed a machine learning-driven predictive risk model for all-cause mortality among patients with type 2 diabetes mellitus using a multiparametric approach with data from different domains. Research design and methods This study used territory-wide data of patients with type 2 diabetes attending public hospitals or their associated ambulatory/outpatient facilities in Hong Kong between January 1, 2009 and December 31, 2009. The primary outcome was all-cause mortality. The association of risk variables with all-cause mortality was assessed using Cox proportional hazards models. Machine and deep learning approaches were used to improve overall survival prediction and were evaluated with a fivefold cross-validation method. Results A total of 273 678 patients (mean age: 65.4±12.7 years, male: 48.2%, median follow-up: 142 (IQR=106–142) months) were included, with 91 155 deaths occurring during follow-up (33.3%; annualized mortality rate: 3.4%/year; 2.7 million patient-years). Multivariate Cox regression identified the following significant predictors of all-cause mortality: age, male gender, baseline comorbidities, anemia, mean values of the neutrophil-to-lymphocyte ratio, high-density lipoprotein-cholesterol, total cholesterol, triglyceride, HbA1c and fasting blood glucose (FBG), and measures of variability of both HbA1c and FBG. These parameters were incorporated into a score-based predictive risk model with a c-statistic of 0.73 (95% CI 0.66 to 0.77), which improved to 0.86 (0.81 to 0.90) and 0.87 (0.84 to 0.91) using random survival forest and deep survival learning models, respectively. Conclusions A multiparametric model incorporating variables from different domains predicted all-cause mortality accurately in type 2 diabetes mellitus. Machine/deep learning survival analysis yielded more accurate predictions.
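
    A minimal sketch of the modelling step, on synthetic data: a Cox proportional hazards fit and its concordance index (c-statistic) using the lifelines package. The variables, effect sizes, and simulated cohort are illustrative assumptions, not the Hong Kong data.

```python
# Cox proportional hazards fit and c-statistic on a synthetic cohort (lifelines).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
n = 5000
df = pd.DataFrame({
    "age": rng.normal(65, 13, n),
    "male": rng.integers(0, 2, n),
    "hba1c_mean": rng.normal(7.5, 1.2, n),
    "hba1c_var": rng.gamma(2.0, 0.3, n),       # stand-in for HbA1c variability
})
# Simulated survival times whose hazard rises with age and HbA1c variability.
risk = 0.04*(df["age"] - 65) + 0.5*df["hba1c_var"]
time = rng.exponential(120.0*np.exp(-risk))
df["time"] = np.minimum(time, 142)             # administrative censoring at 142 months
df["event"] = (time <= 142).astype(int)

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
print("c-statistic:", round(cph.concordance_index_, 3))
```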

    Stochastic time-changed Lévy processes with their implementation

    We focus on the implementation details for Lévy processes and their extension to stochastic volatility models for pricing European vanilla options and exotic options. We calibrated five models to European options on the S&P 500 and used the calibrated models to price a cliquet option using Monte Carlo simulation. We provide the algorithms required to value the options when using Lévy processes. We found that these models were able to closely reproduce the market option prices for many strikes and maturities. We also found that the models we studied produced different prices for the cliquet option even though all of them produced the same prices for vanilla options. This highlighted the model uncertainty involved in valuing a cliquet option. Further research is required to develop tools to understand and manage this model uncertainty. We make a recommendation on how to proceed with this research by studying the cliquet option's sensitivity to the model parameters.
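
    A minimal sketch of a Monte Carlo cliquet valuation. Geometric Brownian motion is used here purely to keep the illustration short; the dissertation calibrates Lévy and stochastic-volatility models instead. The contract terms (monthly resets, local cap/floor, global floor) are assumptions for the example.

```python
# Monte Carlo value of a simple cliquet under GBM (illustrative dynamics only).
import numpy as np

rng = np.random.default_rng(4)
r, sigma = 0.03, 0.20
n_paths, n_periods, dt = 100_000, 12, 1.0/12            # monthly resets over one year
local_floor, local_cap, global_floor = 0.0, 0.08, 0.0   # assumed contract terms

# Risk-neutral GBM period returns for each monthly reset.
z = rng.standard_normal((n_paths, n_periods))
period_returns = np.exp((r - 0.5*sigma**2)*dt + sigma*np.sqrt(dt)*z) - 1.0

# Cliquet payoff: sum of locally capped/floored period returns, globally floored.
capped = np.clip(period_returns, local_floor, local_cap)
payoff = np.maximum(capped.sum(axis=1), global_floor)
price = np.exp(-r*n_periods*dt) * payoff.mean()
print("cliquet value (fraction of notional): %.4f" % price)
```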

    Variance-based sensitivity analysis: the quest for better estimators and designs between explorativity and economy

    Variance-based sensitivity indices have established themselves as a reference among practitioners of sensitivity analysis of model outputs. A variance-based sensitivity analysis typically produces the first-order sensitivity indices S_j and the so-called total-effect sensitivity indices T_j for the uncertain factors of the mathematical model under analysis. Computational cost is critical in sensitivity analysis. This cost depends upon the number of model evaluations needed to obtain stable and accurate values of the estimates. While efficient estimation procedures are available for S_j (Tarantola et al., 2006), this is less the case for T_j (Iooss and Lemaître, 2015). When estimating these indices, one can either use a sample-based approach, whose computational cost depends on the number of factors, or use approaches based on meta-modelling/emulators (e.g., Gaussian processes). The present work focuses on sample-based estimation procedures for T_j for independent inputs and tests different avenues to achieve an algorithmic improvement over the existing best practices. To improve the exploration of the space of the input factors (design) and the formula used to compute the indices (estimator), we propose strategies based on the concepts of economy and explorativity. We then discuss how several existing estimators perform along these characteristics. Numerical results are presented for a set of seven test functions corresponding to different settings (few important factors with low cross-factor interactions, all factors equally important with low cross-factor interactions, and all factors equally important with high cross-factor interactions). We conclude the following from these experiments: a) sample-based approaches based on the use of multiple matrices to enhance the economy are outperformed by designs using fewer matrices but with better explorativity; b) among the latter, asymmetric designs perform best and outperform symmetric designs having corrective terms for spurious correlations; c) improving on the existing best practices is fraught with difficulties; and d) ameliorating the results comes at the cost of introducing extra design parameters.
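
    A minimal sketch of a sample-based total-effect estimator (Jansen's formula with the usual A, B, A_B^(i) matrices) on the Ishigami test function. It illustrates the class of designs and estimators discussed, not the specific asymmetric designs proposed in the paper.

```python
# Jansen's total-effect estimator with the A/B/A_B^(i) design on the Ishigami function.
import numpy as np
from scipy.stats import qmc

def ishigami(x, a=7.0, b=0.1):
    return np.sin(x[:, 0]) + a*np.sin(x[:, 1])**2 + b*x[:, 2]**4*np.sin(x[:, 0])

k, N = 3, 2**14
u = qmc.Sobol(d=2*k, scramble=True, seed=5).random(N)
A = -np.pi + 2.0*np.pi*u[:, :k]              # base sample matrix A
B = -np.pi + 2.0*np.pi*u[:, k:]              # independent sample matrix B
fA, fB = ishigami(A), ishigami(B)
var = np.var(np.concatenate([fA, fB]))       # total output variance estimate

for i in range(k):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                      # A with column i taken from B
    T_i = np.mean((fA - ishigami(ABi))**2) / (2.0*var)   # Jansen (1999) estimator
    print("T_%d ~ %.3f" % (i + 1, T_i))
```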

    Microstructure modeling and crystal plasticity parameter identification for predicting the cyclic mechanical behavior of polycrystalline metals

    Computational homogenization makes it possible to capture the influence of the microstructure on the cyclic mechanical behavior of polycrystalline metals. In this work we investigate methods to compute Laguerre tessellations as computational cells of polycrystalline microstructures, propose a new method to assign crystallographic orientations to the Laguerre cells, and use Bayesian optimization to find suitable parameters for the underlying micromechanical model from macroscopic experiments.
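
    A minimal sketch of the parameter-identification step using Bayesian optimization via scikit-optimize's gp_minimize. The two parameters and the "simulation" below are toy placeholders, not the paper's crystal plasticity model or its computational-homogenization workflow.

```python
# Bayesian optimization of two toy "crystal plasticity" parameters with gp_minimize.
import numpy as np
from skopt import gp_minimize

strain = np.linspace(0.0, 0.02, 20)
true_tau0, true_h = 60.0, 800.0                       # hidden reference parameters (MPa)
reference_stress = true_tau0 + true_h*strain          # stand-in for the measured response

def simulate(tau0, h):
    # Placeholder for a computational-homogenization run returning a stress curve.
    return tau0 + h*strain

def misfit(params):
    tau0, h = params
    return float(np.mean((simulate(tau0, h) - reference_stress)**2))

result = gp_minimize(misfit, dimensions=[(10.0, 200.0), (100.0, 2000.0)],
                     n_calls=30, random_state=0)
print("identified (tau0, h):", np.round(result.x, 1))
```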