
    Empirical validation of models to compute solar irradiance on inclined surfaces for building energy simulation

    Accurately computing solar irradiance on external facades is a prerequisite for reliably predicting the thermal behavior and cooling loads of buildings. Validating the radiation models and algorithms implemented in building energy simulation codes is therefore essential for evaluating solar gain models. Seven solar radiation models implemented in four building energy simulation codes were investigated: (1) isotropic sky, (2) Klucher, (3) Hay-Davies, (4) Reindl, (5) Muneer, (6) 1987 Perez, and (7) 1990 Perez. The building energy simulation codes were EnergyPlus, DOE-2.1E, TRNSYS-TUD, and ESP-r. Solar radiation data from two 25-day periods in October and March/April, covering diverse atmospheric conditions and solar altitudes and measured on the EMPA campus in a suburban area of Duebendorf, Switzerland, were used for validation. Two of the three measured irradiance components (global horizontal, diffuse horizontal, and direct normal) were used as inputs for calculating global irradiance on a south-west façade. Several statistical parameters were employed to compare hourly measured and predicted global vertical irradiances. Mean absolute differences over both periods were: (1) 13.7% and 14.9% for the isotropic sky model, (2) 9.1% for the Hay-Davies model, (3) 9.4% for the Reindl model, (4) 7.6% for the Muneer model, (5) 13.2% for the Klucher model, (6) 9.0%, 7.7%, 6.6%, and 7.1% for the 1990 Perez models, and (7) 7.9% for the 1987 Perez model. Detailed sensitivity analyses, using Monte Carlo and fitted-effects N-way factorial methods, were applied to assess how uncertainties in the input parameters propagate through one of the building energy simulation codes and affect the output. The implications of deviations in computed solar irradiances for the predicted thermal behavior and cooling loads of buildings are discussed.
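    For orientation, the sketch below applies the simplest of the seven transposition approaches, the isotropic sky model, to combine beam, sky-diffuse, and ground-reflected components on a tilted plane. The function name, input names, and the albedo value are illustrative assumptions and do not reproduce the implementation in any of the four simulation codes.

```python
import math

def isotropic_tilted_irradiance(dni, dhi, ghi, incidence_deg, tilt_deg, albedo=0.2):
    """Global irradiance on a tilted plane under the isotropic sky model.

    dni, dhi, ghi : direct-normal, diffuse-horizontal and global-horizontal
                    irradiance [W/m^2]
    incidence_deg : angle between the sun and the surface normal [deg]
    tilt_deg      : surface tilt from horizontal (90 for a facade) [deg]
    albedo        : ground reflectance (assumed value)
    """
    beam = dni * max(math.cos(math.radians(incidence_deg)), 0.0)            # direct on the plane
    sky = dhi * (1.0 + math.cos(math.radians(tilt_deg))) / 2.0              # isotropic sky diffuse
    ground = ghi * albedo * (1.0 - math.cos(math.radians(tilt_deg))) / 2.0  # ground-reflected
    return beam + sky + ground

# Example: a vertical facade (tilt 90 deg) with hypothetical hourly inputs
print(isotropic_tilted_irradiance(dni=600.0, dhi=120.0, ghi=450.0,
                                  incidence_deg=35.0, tilt_deg=90.0))
```

    The anisotropic models in the list (Klucher, Hay-Davies, Reindl, Muneer, and the Perez formulations) replace the uniform sky-diffuse term with more detailed treatments of circumsolar and horizon brightening.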

    Emulating dynamic non-linear simulators using Gaussian processes

    The dynamic emulation of non-linear deterministic computer codes whose output is a time series, possibly multivariate, is examined. Such computer models simulate the evolution of some real-world phenomenon over time, for example the climate or the functioning of the human brain. The models of interest are highly non-linear and exhibit tipping points, bifurcations, and chaotic behaviour. However, each simulation run may be too time-consuming to permit analyses that require many runs, such as quantifying the variation in model output with respect to changes in the inputs. Gaussian process emulators are therefore used to approximate the output of the code. To do this, the flow map of the system under study is emulated over a short time period and then applied iteratively to predict the whole time series. A number of ways are proposed to account for the uncertainty of the emulator inputs, after fixed initial conditions, and for the correlation between them through the time series. The methodology is illustrated with two examples: the highly non-linear dynamical systems described by the Lorenz and Van der Pol equations. In both cases the predictive performance is relatively high, and the measure of uncertainty provided by the method reflects the extent of predictability in each system.
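    As a rough illustration of the emulate-then-iterate idea, the sketch below trains a Gaussian process on one-step flow-map evaluations of the Van der Pol system and then iterates the emulator from a fixed initial condition. The design size, time step, kernel, and the use of scikit-learn are assumptions for illustration, and the rollout propagates only the point prediction rather than the full predictive uncertainty treated in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Van der Pol oscillator: the "computer model" whose flow map we emulate
def van_der_pol(t, state, mu=1.0):
    x, y = state
    return [y, mu * (1.0 - x**2) * y - x]

def flow_map(state, dt=0.1):
    """Advance the system by one short time step dt (one run of the simulator)."""
    sol = solve_ivp(van_der_pol, (0.0, dt), state, rtol=1e-8, atol=1e-8)
    return sol.y[:, -1]

# Design: a modest set of starting states covering the region of interest
rng = np.random.default_rng(0)
X_train = rng.uniform(-3.0, 3.0, size=(60, 2))
Y_train = np.array([flow_map(s) for s in X_train])

# GP emulator of the one-step flow map (independent outputs, shared kernel)
kernel = ConstantKernel(1.0) * RBF(length_scale=[1.0, 1.0])
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_train, Y_train)

# Iterate the emulator from a fixed initial condition to predict a whole trajectory
state = np.array([1.0, 0.0])
trajectory = [state]
for _ in range(200):
    state = gp.predict(state.reshape(1, -1))[0]
    trajectory.append(state)
trajectory = np.array(trajectory)
print(trajectory[:5])
```

    Emulating the short-time flow map rather than the full trajectory keeps the emulator's input dimension low and lets a single emulator be reused at every step of the rollout.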

    Bounding rare event probabilities in computer experiments

    We are interested in bounding probabilities of rare events in the context of computer experiments. These rare events depend on the output of a physical model with random input variables. Since the model is only known through an expensive black-box function, standard efficient Monte Carlo methods designed for rare events cannot be used. We therefore propose a strategy based on importance sampling methods to deal with this difficulty. The proposal relies on Kriging metamodeling and is able to achieve sharp upper confidence bounds on the rare event probabilities. The variability due to the Kriging metamodeling step is properly taken into account. The proposed methodology is applied to a toy example and compared to more standard Bayesian bounds. Finally, a challenging real case study is analyzed: finding an upper bound on the probability that the trajectory of an airborne load will collide with the aircraft that released it.
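    The sketch below conveys the flavour of the approach on a toy problem: a Kriging (Gaussian process) metamodel is fitted to a small number of black-box runs, and a conservative probability estimate is obtained by inflating the surrogate prediction by two standard deviations before thresholding. The toy function, design size, and the 2-sigma inflation are illustrative assumptions only; the actual method in the paper combines importance sampling with a rigorous treatment of the Kriging error to obtain genuine upper confidence bounds.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Toy "expensive" black box: the rare event is {g(x) > threshold}
def g(x):
    return np.sum(x**2, axis=1)

threshold = 12.0
rng = np.random.default_rng(1)

# Small design of expensive runs
X_design = rng.normal(size=(40, 2)) * 2.0
y_design = g(X_design)

# Kriging (GP) metamodel of the black box
gp = GaussianProcessRegressor(ConstantKernel(1.0) * RBF(1.0), normalize_y=True)
gp.fit(X_design, y_design)

# Large, cheap Monte Carlo sample of the random inputs (here standard normal)
X_mc = rng.normal(size=(200_000, 2))
mean, std = gp.predict(X_mc, return_std=True)

# Plug-in estimate, and a conservative estimate that inflates the surrogate by
# 2 sigma as a crude stand-in for folding the Kriging error into the bound
p_plugin = np.mean(mean > threshold)
p_upper = np.mean(mean + 2.0 * std > threshold)
print(p_plugin, p_upper)
```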

    BEAST: Bayesian evolutionary analysis by sampling trees

    Background: The evolutionary analysis of molecular sequence variation is a statistical enterprise. This is reflected in the increased use of probabilistic models for phylogenetic inference, multiple sequence alignment, and molecular population genetics. Here we present BEAST: a fast, flexible software architecture for Bayesian analysis of molecular sequences related by an evolutionary tree. A large number of popular stochastic models of sequence evolution are provided, and tree-based models suitable for both within- and between-species sequence data are implemented. Results: BEAST version 1.4.6 consists of 81,000 lines of Java source code, 779 classes and 81 packages. It provides models for DNA and protein sequence evolution, highly parametric coalescent analysis, relaxed clock phylogenetics, non-contemporaneous sequence data, statistical alignment, and a wide range of options for prior distributions. BEAST source code is object-oriented, modular in design, and freely available at http://beast-mcmc.googlecode.com/ under the GNU LGPL license. Conclusion: BEAST is a powerful and flexible evolutionary analysis package for molecular sequence variation. It also provides a resource for the further development of new models and statistical methods of evolutionary analysis.
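    For readers unfamiliar with the class of models involved, the sketch below evaluates the log-likelihood of two aligned DNA sequences separated by a single branch under the Jukes-Cantor substitution model, one of the simplest models of the kind BEAST provides. It is purely illustrative Python, bears no relation to BEAST's Java implementation, and the sequences and branch length are made up.

```python
import math

def jukes_cantor_loglik(seq_a, seq_b, branch_length):
    """Log-likelihood of two aligned DNA sequences separated by a branch of
    length `branch_length` (expected substitutions per site), Jukes-Cantor model."""
    p_same = 0.25 + 0.75 * math.exp(-4.0 * branch_length / 3.0)
    p_diff = 0.25 - 0.25 * math.exp(-4.0 * branch_length / 3.0)
    logl = 0.0
    for a, b in zip(seq_a, seq_b):
        # stationary frequency 1/4 for the first sequence, transition probability for the second
        logl += math.log(0.25 * (p_same if a == b else p_diff))
    return logl

print(jukes_cantor_loglik("ACGTACGTAC", "ACGTTCGTAA", 0.1))
```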

    Knowledge Management: A Discovery Process

    Getting strategic about how you organize and redistribute knowledge can help just about anyone achieve their goals more efficiently. We at The McKnight Foundation often find ourselves at the center of meaty, data-rich, analytic conversations. This case study summarizes our yearlong exploration and planning to consume, organize, and share knowledge better.

    Statistical emulation of a tsunami model for sensitivity analysis and uncertainty quantification

    Due to the catastrophic consequences of tsunamis, early warnings need to be issued quickly in order to mitigate the hazard. Additionally, there is a need to represent the uncertainty in the predictions of tsunami characteristics arising from the uncertain trigger features (e.g. the position, shape, and speed of a landslide, or the sea floor deformation associated with an earthquake). Unfortunately, the computer models involved are expensive to run, which leads to significant delays in predictions and makes uncertainty quantification impractical. Statistical emulators run almost instantaneously and can represent the outputs of the computer model well. In this paper, we use the Outer Product Emulator to build a fast statistical surrogate of a landslide-generated tsunami computer model. This Bayesian framework enables us to build the emulator by combining prior knowledge of the computer model's properties with a few carefully chosen model evaluations. The good performance of the emulator is validated using the Leave-One-Out method.
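    A minimal sketch of the Leave-One-Out check is given below for a scalar Gaussian process emulator of a made-up function standing in for a single tsunami output; the Outer Product Emulator itself handles multivariate output and is not reproduced here. The design, kernel, and the use of scikit-learn are assumptions for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Stand-in for one scalar tsunami output (e.g. maximum wave height) vs. two trigger inputs
def simulator(x):
    return np.sin(3.0 * x[:, 0]) + 0.5 * x[:, 1] ** 2

rng = np.random.default_rng(2)
X = rng.uniform(0.0, 1.0, size=(25, 2))   # a carefully chosen design would go here
y = simulator(X)

kernel = ConstantKernel(1.0) * RBF([0.2, 0.2])

# Leave-One-Out: refit the emulator without each design point and predict it
errors = []
for i in range(len(X)):
    keep = np.arange(len(X)) != i
    gp = GaussianProcessRegressor(kernel, normalize_y=True).fit(X[keep], y[keep])
    mu, sd = gp.predict(X[i:i+1], return_std=True)
    errors.append((y[i] - mu[0]) / sd[0])   # standardized LOO error

errors = np.array(errors)
print("mean |standardized error|:", np.mean(np.abs(errors)))
# Most |errors| below about 2 suggests the emulator's predictions and
# uncertainty statements are consistent with the held-out simulator runs.
```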