Risk Analysis and Reliability Improvement of Mechanistic-Empirical Pavement Design
Reliability as used in the Mechanistic-Empirical Pavement Design Guide (MEPDG) is an aggregated indicator, defined as the probability that each of the key distress types and smoothness will remain below a selected critical level over the design period. For a system as complex as the MEPDG, which has no closed-form design equations, classical reliability methods are not applicable; a robust reliability analysis can instead rely on Monte Carlo Simulation (MCS). The ultimate goal of this study was to improve the reliability model of the MEPDG using surrogate modeling techniques and Monte Carlo simulation.
To achieve this goal, four tasks were accomplished in this research. First, local calibration using 38 pavement sections was completed to reduce the system bias and dispersion of the nationally calibrated MEPDG. Second, uncertainty and risk in the MEPDG were identified using Hierarchical Holographic Modeling (HHM). To determine the critical factors affecting pavement performance, this study applied not only the traditional sensitivity analysis method but also a risk assessment method using the Analytic Hierarchy Process (AHP). Third, response surface models were built to provide rapid distress predictions for alligator cracking, rutting, and smoothness. Fourth, a new reliability model based on Monte Carlo Simulation was proposed. Using surrogate models, 10,000 Monte Carlo simulations were run in minutes to develop the output ensemble, from which the predicted distresses at any reliability level were readily available. The method, including all data and algorithms, was packaged in a user-friendly software tool named ReliME.
Comparisons between the AASHTO 1993 Guide, the MEPDG, and ReliME were presented in three case studies. It was found that the smoothness model in the MEPDG had an extremely high level of variation. The product of this study was a consistent reliability model specific to local conditions, construction practices, and specifications. This framework also demonstrated the feasibility of adopting Monte Carlo Simulation for reliability analysis in future mechanistic-empirical pavement design software.
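The core idea, surrogate models cheap enough to run thousands of Monte Carlo draws and read a distress value off the ensemble at any reliability level, can be sketched as follows. The surrogate function, its coefficients, and the input distributions below are illustrative assumptions, not the paper's actual response surfaces:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical response-surface surrogate: rutting depth (in) as a
# quadratic function of AC thickness (in) and traffic (million ESALs).
# Coefficients are made up for illustration.
def rut_surrogate(thickness, esals):
    return 0.60 - 0.05 * thickness + 0.02 * esals + 0.001 * esals**2

# Input uncertainty: as-built thickness and traffic forecast are random.
n = 10_000
thickness = rng.normal(6.0, 0.4, n)          # construction variability
esals = rng.lognormal(np.log(8.0), 0.3, n)   # traffic forecast error

ensemble = rut_surrogate(thickness, esals)

# Predicted distress at a reliability level = that percentile of the ensemble.
rut_50 = np.percentile(ensemble, 50)   # mean (50 %) design
rut_90 = np.percentile(ensemble, 90)   # 90 % reliability design
print(f"rutting @50%: {rut_50:.3f} in, @90%: {rut_90:.3f} in")
```

Because the surrogate is a closed-form polynomial, the full 10,000-draw ensemble evaluates in milliseconds, which is what makes MCS practical where direct MEPDG runs would not be.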
Modeling of Pressure and Temperature Profiles for the Flow of CO2 through a Restriction
CO2 emissions due to massive industrialization have led to several environmental issues. Carbon Capture and Storage (CCS) is one of the most important technologies that can be used to reduce anthropogenic CO2 emissions worldwide. CCS projects mainly involve three processes: carbon capture, transportation, and storage. In the transportation process, the modeling of the flow in pipelines and the relationships between pressure, flow velocity, temperature, density, and phase stability are of significance. Orifice plates are a common tool used for flowrate measurements. Several standards provide the specifications and implementation approach for this type of equipment in pipelines. In this study, flow through an orifice plate is simulated with computational fluid dynamics (CFD) modeling software, namely ANSYS®, to obtain fluid pressure, velocity, and temperature profiles. Model geometry and fluid properties are defined that are suitable for making a comparison with ISO-5167 empirical correlations and a similar reference study, which are used to validate the simulation results. A mesh sensitivity analysis is conducted to ensure the correctness of the results. A reasonable agreement is found between the simulation results, empirical correlations, and previous studies. The Joule-Thomson cooling effect is also considered in this work for high-pressure CO2 cases, and the plots show good agreement with reference studies.
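The ISO 5167 empirical correlation used for validation relates mass flowrate to the measured differential pressure across the plate. A minimal sketch of that relation is below; the discharge coefficient, bore, pressure drop, and density values are illustrative assumptions (the standard's Reader-Harris/Gallagher equation would compute C from geometry and Reynolds number):

```python
import math

def orifice_mass_flow(C, beta, d, dp, rho, eps=1.0):
    """ISO 5167 orifice mass-flow relation.

    C    : discharge coefficient (dimensionless)
    beta : diameter ratio d/D
    d    : orifice bore diameter [m]
    dp   : differential pressure across the plate [Pa]
    rho  : upstream fluid density [kg/m^3]
    eps  : expansibility factor (1.0 = incompressible approximation)
    """
    return (C / math.sqrt(1.0 - beta**4)) * eps \
        * (math.pi / 4.0) * d**2 * math.sqrt(2.0 * dp * rho)

# Illustrative dense-phase CO2 case (values assumed, not from the paper):
qm = orifice_mass_flow(C=0.61, beta=0.5, d=0.05, dp=40e3, rho=800.0)
print(f"mass flow ~ {qm:.2f} kg/s")
```

In a CFD validation like the paper's, this correlation supplies the reference flowrate against which the simulated pressure profile is checked.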
A framework for the simulation of structural software evolution
This is the author's accepted manuscript. The final published article is available from the link below. Copyright © 2008 ACM.

As functionality is added to an aging piece of software, its original design and structure will tend to erode. This can lead to high coupling, low cohesion and other undesirable effects associated with spaghetti architectures. The underlying forces that cause such degradation have been the subject of much research. However, progress in this field is slow, as its complexity makes it difficult to isolate the causal flows leading to these effects. This is further complicated by the difficulty of generating enough empirical data, in sufficient quantity, and attributing such data to specific points in the causal chain. This article describes a framework for simulating the structural evolution of software. A complete simulation model is built by incrementally adding modules to the framework, each of which contributes an individual evolutionary effect. These effects are then combined to form a multifaceted simulation that evolves a fictitious code base in a manner approximating real-world behavior. We describe the underlying principles and structures of our framework from a theoretical and user perspective; a validation of a simple set of evolutionary parameters is then provided and three empirical software studies generated from open-source software (OSS) are used to support claims and generated results. The research illustrates how simulation can be used to investigate a complex and under-researched area of the development cycle. It also shows the value of incorporating certain human traits into a simulation: factors that, in real-world system development, can significantly influence evolutionary structures.
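The notion of composable "evolutionary effect" modules acting on a fictitious code base can be illustrated with a toy sketch. The effect below (developers preferentially coupling new code to already central modules, a crude stand-in for human shortcut-taking) is a hypothetical example, not one of the paper's actual modules:

```python
import random

random.seed(1)

# Fictitious code base: module id -> set of module ids it depends on.
deps = {0: set(), 1: {0}}

def in_degrees(deps):
    """Fan-in (how many modules depend on each module)."""
    d = dict.fromkeys(deps, 0)
    for targets in deps.values():
        for t in targets:
            d[t] += 1
    return d

def add_feature(deps, shortcut_prob=0.7):
    """One hypothetical evolutionary effect: a new module takes a
    dependency on an existing one, preferring already heavily
    depended-on modules with probability shortcut_prob."""
    indeg = in_degrees(deps)
    modules = list(deps)
    if random.random() < shortcut_prob:
        # preferential attachment, weighted by fan-in + 1
        target = random.choices(modules,
                                weights=[indeg[m] + 1 for m in modules])[0]
    else:
        target = random.choice(modules)
    deps[len(deps)] = {target}

for _ in range(50):
    add_feature(deps)

indeg = in_degrees(deps)
print("modules:", len(deps), "max fan-in:", max(indeg.values()))
```

Stacking several such effect modules, and tuning their parameters against real OSS histories, is the framework's route to a multifaceted simulation.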
Comparison of Gaussian process modeling software
Gaussian process fitting, or kriging, is often used to create a model from a
set of data. Many available software packages do this, but we show that very
different results can be obtained from different packages even when using the
same data and model. We describe the parameterization, features, and
optimization used by eight different fitting packages that run on four
different platforms. We then compare these eight packages using various data
functions and data sets, revealing that there are stark differences between the
packages. In addition to comparing the prediction accuracy, the predictive
variance--which is important for evaluating the precision of predictions and is
often used in stopping criteria--is also evaluated.
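One reason packages diverge is that defaults (kernel hyperparameters, optimizers, nugget handling) differ while the underlying kriging equations are the same. The sketch below implements plain zero-mean GP regression directly and shows how two hyperparameter choices, standing in for two packages' defaults, yield different predictive means and variances on identical data; the data and length-scales are illustrative assumptions:

```python
import numpy as np

def gp_predict(X, y, x_star, lengthscale, noise=1e-8):
    """Zero-mean GP regression (kriging) with an RBF kernel.
    Returns predictive mean and variance at points x_star."""
    def k(a, b):
        return np.exp(-(a[:, None] - b[None, :])**2 / (2 * lengthscale**2))
    K = k(X, X) + noise * np.eye(len(X))      # training covariance + jitter
    k_star = k(X, x_star)                     # train/test cross-covariance
    mean = k_star.T @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(k_star * np.linalg.solve(K, k_star), axis=0)
    return mean, var

X = np.array([0.0, 1.0, 2.0, 3.0])
y = np.sin(X)
x_star = np.array([1.5])

# Two hyperparameter choices standing in for two packages' defaults:
m_a, v_a = gp_predict(X, y, x_star, lengthscale=0.3)
m_b, v_b = gp_predict(X, y, x_star, lengthscale=1.5)
print("short lengthscale:", m_a, v_a)
print("long  lengthscale:", m_b, v_b)
```

Same data, same model family, yet the short length-scale fit reverts toward the prior between observations while the long one interpolates confidently, exactly the kind of stark difference the comparison documents.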
Physics-related epistemic uncertainties in proton depth dose simulation
A set of physics models and parameters pertaining to the simulation of proton
energy deposition in matter are evaluated in the energy range up to
approximately 65 MeV, based on their implementations in the Geant4 toolkit. The
analysis assesses several features of the models and the impact of their
associated epistemic uncertainties, i.e. uncertainties due to lack of
knowledge, on the simulation results. Possible systematic effects deriving from
uncertainties of this kind are highlighted; their relevance in relation to the
application environment and different experimental requirements are discussed,
with emphasis on the simulation of radiotherapy set-ups. By documenting
quantitatively the features of a wide set of simulation models and the related
intrinsic uncertainties affecting the simulation results, this analysis
provides guidance regarding the use of the concerned simulation tools in
experimental applications; it also provides indications for further
experimental measurements addressing the sources of such uncertainties.

Comment: To be published in IEEE Trans. Nucl. Sci.
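How alternative model parameterizations translate into a systematic spread in simulation results can be illustrated with the empirical Bragg-Kleeman range rule for protons in water, R = alpha * E^p. The first parameter set below is a commonly quoted fit; the second is a deliberately perturbed alternative, an assumption used only to expose the epistemic spread (the paper's analysis is of Geant4 model implementations, not this rule):

```python
# Bragg-Kleeman rule: CSDA range R (cm, in water) vs proton energy E (MeV).
def proton_range_cm(E_MeV, alpha, p):
    return alpha * E_MeV**p

E = 65.0  # upper end of the energy range considered

r1 = proton_range_cm(E, alpha=0.0022, p=1.77)  # commonly quoted fit
r2 = proton_range_cm(E, alpha=0.0020, p=1.80)  # perturbed alternative (assumption)

spread = abs(r1 - r2) / r1
print(f"range: {r1:.2f} cm vs {r2:.2f} cm, relative spread {spread:.1%}")
```

A few-percent range shift at 65 MeV is clinically meaningful in radiotherapy planning, which is why quantifying such model-to-model spreads matters for the application environments discussed.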
Combining different validation techniques for continuous software improvement - Implications in the development of TRNSYS 16
Validation using published, high-quality test suites can serve to identify different problems in simulation software: modeling and coding errors, missing features, and frequent sources of user confusion. This paper discusses the application of different published validation procedures during the development of a new TRNSYS version: BESTEST/ASHRAE 140 (building envelope), HVAC BESTEST (mechanical systems) and the IEA ECBCS Annex 21 / SHC Task 12 empirical validation (performance of a test cell with a very simple mechanical system). It is shown that each validation suite allowed different types of problems to be identified. Those validation tools were also used to diagnose and fix some of the identified problems, and to assess the influence of code modifications. The paper also discusses some limitations of the selected validation tools.
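A BESTEST-style comparative validation reduces, mechanically, to checking that a program's results fall inside the min/max envelope spanned by the reference programs for each test case. The sketch below shows that check; the case names and numeric ranges are illustrative placeholders, not actual ASHRAE 140 values:

```python
# Hypothetical published reference envelopes (min, max) per test case.
# Values are illustrative, not from ASHRAE 140.
reference_ranges = {
    "case600_heating_MWh": (4.3, 5.7),
    "case600_cooling_MWh": (6.1, 8.0),
}

def check(results, ranges):
    """Return (case, value, lo, hi) for every result outside its envelope."""
    failures = []
    for case, value in results.items():
        lo, hi = ranges[case]
        if not (lo <= value <= hi):
            failures.append((case, value, lo, hi))
    return failures

# One simulated program run: heating passes, cooling falls outside.
simulated = {"case600_heating_MWh": 5.1, "case600_cooling_MWh": 8.4}

for case, value, lo, hi in check(simulated, reference_ranges):
    print(f"{case}: {value} outside [{lo}, {hi}] -> investigate model/code")
```

An out-of-envelope result does not by itself distinguish a coding error from a modeling difference; that diagnosis, as the paper notes, is where the different suites complement each other.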
Model-driven performance evaluation for service engineering
Service engineering and service-oriented architecture as an
integration and platform technology is a recent approach to software systems integration. Software quality aspects such as performance are of central importance for the integration of heterogeneous, distributed service-based systems. Empirical performance evaluation is a process of
measuring and calculating performance metrics of the implemented software. We present an approach for the empirical, model-based performance evaluation of services and service compositions in the context of model-driven service engineering. Temporal database theory is utilised
for the empirical performance evaluation of service systems developed in a model-driven fashion.
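The temporal-database idea is that each service invocation is recorded with its timestamps, so performance metrics can be computed over any time window after the fact. A minimal sketch with an assumed schema (the service name and columns below are hypothetical, not the paper's):

```python
import sqlite3

# Assumed temporal schema: one row per service invocation with its
# start/end timestamps (seconds, illustrative values).
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE invocation (
    service TEXT, t_start REAL, t_end REAL)""")
rows = [("orderService", 0.00, 0.12),
        ("orderService", 1.00, 1.35),
        ("orderService", 2.00, 2.08)]
con.executemany("INSERT INTO invocation VALUES (?, ?, ?)", rows)

# Empirical metrics over a chosen time window [0, 2.5):
avg_ms, max_ms = con.execute(
    """SELECT AVG((t_end - t_start) * 1000), MAX((t_end - t_start) * 1000)
       FROM invocation
       WHERE t_start >= 0 AND t_end < 2.5 AND service = ?""",
    ("orderService",)).fetchone()
print(f"avg {avg_ms:.0f} ms, max {max_ms:.0f} ms")
```

Keeping the raw timed invocations, rather than pre-aggregated counters, is what lets the evaluation ask new questions (percentiles, per-window trends) without re-running the measurement.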
“An ethnographic seduction”: how qualitative research and Agent-based models can benefit each other
We provide a general analytical framework for empirically informed agent-based simulations. This methodology provides present-day agent-based models with a sound and proper insight into the behavior of social agents, an insight that statistical data often fall short of providing, at least at a micro level and for hidden and sensitive populations. In the other direction, simulations can provide qualitative researchers in sociology, anthropology and other fields with valuable tools for: (a) testing the consistency of, and pushing the boundaries of, specific theoretical frameworks; (b) replicating and generalizing results; and (c) providing a platform for cross-disciplinary validation of results.
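The direction from ethnography to simulation can be illustrated with a toy sketch: a qualitatively observed micro-rule ("agents prefer partners a trusted contact vouches for") is encoded as an ABM parameter, and the run checks whether the rule reproduces a macro-pattern. The rule, parameter values, and model below are entirely illustrative, not the paper's framework:

```python
import random

random.seed(0)

def run(n_agents=50, vouch_prob=0.8, steps=500):
    """Toy exchange-network ABM. With probability vouch_prob an agent
    seeks a vouched-for partner (friend of a friend); otherwise it
    approaches a stranger. Returns the partner network."""
    partners = {i: set() for i in range(n_agents)}
    for _ in range(steps):
        a = random.randrange(n_agents)
        if partners[a] and random.random() < vouch_prob:
            via = random.choice(sorted(partners[a]))   # trusted contact
            pool = partners[via] - {a}                 # who they vouch for
            b = (random.choice(sorted(pool)) if pool
                 else random.randrange(n_agents))
        else:
            b = random.randrange(n_agents)             # stranger
        if a != b:
            partners[a].add(b)
            partners[b].add(a)
    return partners

net = run()
avg_degree = sum(len(s) for s in net.values()) / len(net)
print(f"average partners per agent: {avg_degree:.1f}")
```

The ethnographic material grounds vouch_prob and the rule itself; the simulation, in turn, lets the researcher vary them and see which versions of the theory are consistent with the observed macro-structure.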