
    Improving Simulation Efficiency of MCMC for Inverse Modeling of Hydrologic Systems with a Kalman-Inspired Proposal Distribution

    Bayesian analysis is widely used in science and engineering for real-time forecasting, decision making, and unraveling the processes that explain observed data. These data are deterministic and/or stochastic transformations of the underlying parameters. A key task is then to summarize the posterior distribution of these parameters. When models become too difficult to analyze analytically, Monte Carlo methods can be used to approximate the target distribution. Of these, Markov chain Monte Carlo (MCMC) methods are particularly powerful. Such methods generate a random walk through the parameter space and, under strict conditions of reversibility and ergodicity, successively visit solutions with frequency proportional to the underlying target density. This requires a proposal distribution that generates candidate solutions starting from an arbitrary initial state. The speed at which the sampled chains converge to the target distribution deteriorates rapidly, however, with increasing parameter dimensionality. In this paper, we introduce a new proposal distribution that significantly enhances the efficiency of MCMC simulation for highly parameterized models. This proposal distribution exploits the cross-covariance of model parameters, measurements, and model outputs, and generates candidate states much like the analysis step of the Kalman filter. We embed the Kalman-inspired proposal distribution in the DREAM algorithm during burn-in, and present several numerical experiments with complex, high-dimensional, or multimodal target distributions. The results demonstrate that this new proposal distribution can greatly improve the simulation efficiency of MCMC. Specifically, we observe a speed-up on the order of 10-30 times for groundwater models with more than one hundred parameters.
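    The Kalman-analysis analogy in the abstract can be sketched in a few lines. The following is a minimal, hypothetical illustration, not the paper's or DREAM's actual API (all names are my own): it estimates the parameter-output cross-covariance from an ensemble of past chain states and uses a Kalman-like gain to map a perturbed data misfit into a candidate state.

```python
import numpy as np

def kalman_proposal(theta_ens, sim_ens, y_obs, obs_cov, rng):
    """Generate one candidate state from an ensemble of chain states.

    theta_ens : (N, d) parameter vectors (e.g. past chain states)
    sim_ens   : (N, m) corresponding simulated observations
    y_obs     : (m,)   measured data
    obs_cov   : (m, m) measurement-error covariance
    """
    dth = theta_ens - theta_ens.mean(axis=0)
    ds = sim_ens - sim_ens.mean(axis=0)
    n = theta_ens.shape[0]
    c_ts = dth.T @ ds / (n - 1)                 # cross-covariance params/outputs
    c_ss = ds.T @ ds / (n - 1)                  # output covariance
    gain = c_ts @ np.linalg.inv(c_ss + obs_cov)  # Kalman-like gain
    i = rng.integers(n)                          # start from a random member
    y_pert = y_obs + rng.multivariate_normal(np.zeros(len(y_obs)), obs_cov)
    return theta_ens[i] + gain @ (y_pert - sim_ens[i])
```

In an actual MCMC implementation the candidate would still be accepted or rejected with a Metropolis-type rule; this sketch covers only the proposal step.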

    Bayesian inference for multi-environment spatial individual-tree models with additive and full-sib family genetic effects for large forest genetic trials

    Context: The gain in accuracy of breeding values from single-trial spatial analysis is well known in forestry. However, spatial analysis methodology for single forest genetic trials must be adapted for use in combined analyses of forest genetic trials across sites. Aims: This paper extends a methodology for spatial analysis of a single forest genetic trial to a multi-environment trial (MET) setting. Methods: A two-stage spatial MET approach using an individual-tree model with additive and full-sib family genetic effects was developed. Dispersion parameters were estimated using Bayesian techniques via Gibbs sampling. The procedure is illustrated using height growth data at age 10 from eight large Tsuga heterophylla (Raf.) Sarg. second-generation full-sib progeny trials from two series established across seven sites in British Columbia (Canada) and one in Washington (USA). Results: The proposed multi-environment spatial mixed model displayed a consistent reduction in the posterior means of the error variances, and an increase in their precision, compared with the model based on Sets-in-Replicates or incomplete-block alpha designs. The multi-environment spatial model also provided an average increase in the posterior means of the narrow- and broad-sense individual-tree heritabilities (h2N and h2B, respectively). No consistent changes were observed in the posterior means of the additive genetic correlations (rAjj'). Conclusion: Although computationally demanding, all dispersion parameters were successfully estimated from the proposed multi-environment spatial individual-tree model using Bayesian techniques via Gibbs sampling. The proposed two-stage spatial MET approach produced better results than the commonly used non-spatial MET analysis.
    Authors: Eduardo Pablo Cappa (Instituto Nacional de Tecnología Agropecuaria, Centro de Investigación de Recursos Naturales, Argentina; Consejo Nacional de Investigaciones Científicas y Técnicas, Argentina); Alvin D. Yanchuk (British Columbia Forest Service, Canada); Charlie V. Cartwright (British Columbia Forest Service, Canada)
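    The Gibbs-sampling machinery behind the dispersion-parameter estimates alternates draws from full conditional distributions. As a deliberately minimal, hypothetical illustration (a single normal mean and variance with conjugate priors, rather than the paper's individual-tree mixed model), the alternation might look like:

```python
import numpy as np

def gibbs_normal(y, n_iter=2000, rng=None):
    """Minimal Gibbs sampler for the mean mu and variance s2 of normal data,
    with a flat prior on mu and an inverse-gamma(a0, b0) prior on s2.
    A stand-in for the (much larger) mixed-model samplers used in practice."""
    if rng is None:
        rng = np.random.default_rng()
    n, a0, b0 = len(y), 2.0, 1.0
    mu, s2 = y.mean(), y.var()
    draws = np.empty((n_iter, 2))
    for t in range(n_iter):
        mu = rng.normal(y.mean(), np.sqrt(s2 / n))   # draw mu | s2, y
        a = a0 + n / 2
        b = b0 + 0.5 * np.sum((y - mu) ** 2)
        s2 = b / rng.gamma(a)                        # draw s2 | mu, y (inverse-gamma)
        draws[t] = mu, s2
    return draws
```

After discarding a burn-in portion, the remaining draws approximate the joint posterior; the same alternation extends, conditional by conditional, to additive and family effects in the full model.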

    Stochastic forward and inverse groundwater flow and solute transport modeling

    Keywords: calibration, inverse modeling, stochastic modeling, nonlinear biodegradation, stochastic-convective, advective-dispersive, travel time, network design, non-Gaussian distribution, multimodal distribution, representers

    This thesis offers three new approaches that improve the ability of groundwater modelers to account for heterogeneity in physically based, fully distributed groundwater models. In both forward and inverse settings, it tackles major issues in handling heterogeneity and uncertainty, and thus extends the ability to deal correctly and effectively with heterogeneity in these situations. The first method uses the recently developed advective-dispersive streamtube approach in combination with a one-dimensional traveling-wave solution for nonlinear bioreactive transport to study the interplay between physical heterogeneity, local-scale dispersion, and nonlinear biodegradation, and to gain insight into the long-term asymptotic behavior of solute fronts, in order to deduce (the validity of) upscaling equations. Applying the method in synthetic small-scale numerical experiments shows that asymptotic front shapes are neither Fickian nor constant, which raises questions about the current practice of upscaling bioreactive transport. The second method enhances the management of heterogeneity by extending inverse theory (specifically, the representer-based inverse method) to determinations of groundwater age/travel time. A first-order methodology is proposed to include groundwater age or tracer arrival time determinations in measurement network design. In the applied synthetic numerical example, an age-estimation network outperforms equally sized head-measurement and conductivity-measurement networks, even when the age estimates are highly uncertain. The study thus suggests a high potential of travel time/groundwater age data to constrain groundwater models. Finally, the thesis extends the applicability of inverse methods to multimodal parameter distributions. Multimodal distributions arise when multiple statistical populations exist within one parameter field, each having a different mean and/or variance of the parameter of concern. No existing inverse methods can calibrate multimodal parameter distributions while preserving the geostatistical properties of the various statistical populations. The thesis proposes a method that resolves the difficulties existing inverse methods have with multimodal distributions, and applies it successfully to both synthetic and real-world cases.
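    To make the multimodal setting concrete, the following hedged sketch (names and values are illustrative only, and spatial correlation is omitted for brevity) draws a log-conductivity field from two statistical populations with different means and variances, which is the kind of field the proposed inverse method must calibrate while preserving each population's geostatistics.

```python
import numpy as np

def bimodal_logk_field(shape, p_sand=0.3, mu=(-4.0, -7.0), sd=(0.5, 0.8), rng=None):
    """Draw a crude (uncorrelated) multimodal log-conductivity field: an
    indicator selects the population (e.g. sand vs. clay), and each
    population has its own mean and standard deviation."""
    if rng is None:
        rng = np.random.default_rng()
    ind = rng.random(shape) < p_sand                   # facies indicator
    field = np.where(ind,
                     rng.normal(mu[0], sd[0], shape),  # population 1 (sand)
                     rng.normal(mu[1], sd[1], shape))  # population 2 (clay)
    return field, ind
```

A histogram of such a field shows two distinct modes; a calibration method that assumes a single Gaussian population would smear them into one, which is precisely the difficulty the thesis addresses.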

    Multi-Scale Methodologies for Probabilistic Resilience Assessment and Enhancement of Bridges and Transportation Systems

    When an extreme event such as an earthquake or a tsunami occurs, the socioeconomic losses caused by the reduced functionality of infrastructure systems over time are comparable to, or even higher than, the immediate losses caused by the event itself. Therefore, one of the highest priorities of owners, disaster-management officials, and decision makers in general is to have an a priori prediction of the disaster performance of lifelines and infrastructure under different scenarios, and to be able to restore functionality efficiently in the aftermath of a catastrophe, either to the normal condition or at least to an acceptable level during the emergency. In line with this need, academic research has focused on the concept of infrastructure resilience, which reflects the ability of structures, infrastructure systems, and communities both to withstand an extreme event and to quickly recover functionality afterwards. Among infrastructure systems, transportation networks are of utmost importance, as they allow people to move from damaged to safe areas and rescue/recovery teams to accomplish their missions effectively. Moreover, the functionality and restoration of several other infrastructure systems and socioeconomic units of the community are highly interdependent with transportation network performance. Among the components of transportation networks, bridges are among the most vulnerable and need particular attention. In this respect, this research focuses on the quantification and optimization of the functionality and resilience of bridges and transportation networks in the aftermath of extreme events, in particular earthquakes, considering the underlying uncertainties.
    The scope of the study includes: (i) accurate and efficient assessment of the seismic fragility of individual bridges; (ii) development of a technique for assessing bridge functionality and its probabilistic characteristics following an earthquake and during the restoration process; (iii) development of efficient optimization techniques for post-event restoration and pre-event retrofit prioritization of bridges; and (iv) development of metrics and formulations for realistic quantification of the functionality and resilience of bridges and transportation networks.
    The evaluation of damage and its probabilistic characteristics is the first step towards assessing the functionality of a bridge. In this regard, a simulation-based methodology was introduced for probabilistic seismic demand and fragility analyses, aimed at improving the accuracy of resilience and life-cycle loss assessment of highway bridges. The impact of different assumptions made on the demand model was assessed to determine whether they are acceptable. The results show that, among these assumptions, the power model and the constant-dispersion assumption introduce a considerable amount of error into the estimated probabilistic characteristics of demand and fragility. This error can be avoided using the introduced simulation-based technique, which takes advantage of the computational resources widely available nowadays.
    A new framework was presented to estimate probabilistic restoration functions of damaged bridges. This was accomplished by simulating different restoration project scenarios, considering the construction methods common in practice and the amount of available resources. Moreover, two scheduling schemes were proposed to handle the uncertainties in project scheduling and planning. The application of the proposed methodology was presented for the case of a bridge under a seismic scenario.
    The results show the critical impact of temporary repair solutions (e.g., temporary shoring) on the probabilistic characteristics of bridge functionality during restoration; the consideration of such solutions in probabilistic functionality and resilience analyses of bridges is therefore necessary. Also, considerable nonlinearity was recognized among restoration resource availability, restoration duration, and the bridge functionality level during the restoration process.
    A new tool called “Functionality-Fragility Surface” (FFS) was introduced for pre-event probabilistic recovery and resilience prediction of damaged structures, infrastructure systems, and communities. An FFS combines fragility and restoration functions and presents the probability of suffering a certain functionality loss at a certain time elapsed since the occurrence of the extreme event, given the intensity of the event. FFSs were developed for an archetype bridge to showcase the application of the proposed tool and formulation. Regarding network-level analysis, a novel evolutionary optimization methodology for scheduling independent tasks under resource and time constraints was proposed. The application of the proposed methodology to multi-phase resilience-optimal restoration of highway bridges was presented and discussed. The results show the superior performance of the presented technique compared to other formulations, both in terms of convergence rate and optimality of the solution. Also, the computed resilience-optimal restoration schedules are more practical and easier to interpret. Moreover, new connectivity-based metrics were introduced to measure the functionality and resilience of transportation networks, taking into account the priorities typically considered during the medium term of disaster management.
    A two-level simulation-based optimization framework for bridge retrofit prioritization is presented. The objectives of the upper-level optimization are the minimization of the cost of the bridge retrofit strategy and of the probability of resilience failure, defined as the probability that the post-event optimal resilience is less than a critical value. The combined effect of the uncertainties in the seismic event characteristics and the resulting damage states of bridges is taken into account using an advanced, efficient sampling technique and fragility analysis. The proposed methodology was applied to a transportation network, and different optimal bridge retrofit strategies were computed. The technique proved effective and efficient in computing the optimal bridge retrofit solutions for the example transportation network.
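    The resilience notion running through this abstract can be made concrete with the common area-under-the-functionality-curve definition from the resilience literature (a standard metric, not necessarily the dissertation's exact formulation):

```python
import numpy as np

def resilience_index(t, q, t0, th):
    """Resilience as the normalized area under the functionality curve q(t)
    (values in [0, 1]) over the control horizon [t0, th], computed with
    the trapezoidal rule. q = 1 means full functionality."""
    mask = (t >= t0) & (t <= th)
    tt, qq = t[mask], q[mask]
    area = np.sum(0.5 * (qq[1:] + qq[:-1]) * np.diff(tt))
    return area / (th - t0)
```

A structure that never loses functionality scores 1.0; one that recovers linearly from total loss over the whole horizon scores 0.5, so faster restoration schedules directly raise the index.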

    Identification of Pollutant Discharges and Adapted Monitoring Network Design in Estuaries

    In recent decades there have been thousands of accidental pollution spills as well as intentional illegal discharges into surface waters all over the world. The identification of pollution source parameters (e.g., the source location) has often proven difficult and depends heavily on pollutant concentration data collected after the incident. This thesis investigates how an adapted monitoring design can improve the identification of source parameters after a spill incident, especially in estuaries. Initially, the effect of the spatial and temporal monitoring design on parameter identifiability is analyzed using a synthetic unidirectional (river) and a bidirectional (estuary) test case. While the transport processes in the river could be represented by an analytical solution of the 2D advection-dispersion-reaction equation, a numerical transport model had to be set up with the Delft3D software suite to account for the tidal dynamics in the estuary. The results of the analysis indicate that dependencies exist between different source parameters, which can weaken the identifiability of the individual parameters. However, an appropriate monitoring design can improve parameter identifiability and consequently lead to more reliable parameter estimates. To identify the source parameters after potential pollution incidents, two optimization approaches were selected in this work and initially applied to the synthetic bidirectional test case. Both approaches achieved very good results for both perfect and noise-perturbed monitoring data. Subsequently, both optimization approaches were transferred to a real-world estuary, the Thi Vai Estuary in southern Vietnam. To simulate pollution scenarios, a 2D hydrodynamic transport model was set up in Delft3D and calibrated against monitoring data collected in the EWATEC-COAST research project.
    The synthetically generated monitoring data of an optimized monitoring network were then used to identify several theoretical spill incidents in the Thi Vai Estuary. Both optimization approaches performed generally well and correctly identified the source parameters in 80% of the considered scenarios.
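    The source-identification idea, fitting a transport model's predictions to post-incident concentration data, can be sketched with a 1D analytical advection-dispersion solution standing in for the 2D and Delft3D models used in the thesis. All names, parameter values, and the simple grid-search optimizer below are illustrative assumptions, not the thesis's actual approaches.

```python
import numpy as np

def conc(x, t, x0, m, v=0.5, d=1.0):
    """1D instantaneous point-source solution of the advection-dispersion
    equation (unit cross-section, no decay): concentration at position x
    and time t for a source of mass m released at x0 at t = 0."""
    return m / np.sqrt(4 * np.pi * d * t) * np.exp(-(x - x0 - v * t) ** 2 / (4 * d * t))

def identify_source(x_obs, t_obs, data):
    """Grid search for the source location x0 and mass m that minimize the
    squared misfit between simulated and measured concentrations."""
    best, best_p = np.inf, None
    for x0 in np.linspace(0.0, 20.0, 201):
        for m in np.linspace(1.0, 10.0, 91):
            r = np.sum((conc(x_obs, t_obs, x0, m) - data) ** 2)
            if r < best:
                best, best_p = r, (x0, m)
    return best_p
```

With noisy data and tidal (bidirectional) transport, the misfit surface develops the parameter dependencies the thesis analyzes, which is exactly where an adapted monitoring design helps.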

    Incorporating Physics-Based Patterns into Geophysical and Geostatistical Estimation Algorithms

    Geophysical imaging systems are inherently non-linear and plagued by the challenge of limited data. These drawbacks make the solution non-unique and sensitive to small data perturbations; hence, regularization is performed to stabilize the solution. Regularization applies a priori information about the target to modify the solution space and make the problem tractable. However, the traditionally applied regularization constraints are independent of the physical mechanisms driving the spatiotemporal evolution of the target parameters. To address this limitation, we introduce an innovative inversion scheme, basis-constrained inversion, which leverages advances in mechanistic modeling of physical phenomena to incorporate the physics of the target process into the regularization of hydrogeophysical and geostatistical estimation algorithms, for improved subsurface characterization. The fundamental protocol of the approach is the construction of basis vectors from training images, which are then used to constrain the optimization problem. The training dataset is generated via Monte Carlo simulations that mimic the perceived physics of the processes prevailing in the system of interest. Two statistical techniques for constructing optimal basis functions, Proper Orthogonal Decomposition (POD) and Maximum Covariance Analysis (MCA), are employed, leading to two inversion schemes: POD is a static imaging technique, whereas MCA is a dynamic inversion strategy. The efficacy of the proposed methodologies is demonstrated on hypothetical and lab-scale flow and transport experiments.
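    The POD step of the basis-construction protocol can be sketched generically via the singular value decomposition of the training-image snapshot matrix; this is the textbook SVD-based POD, and details of the actual implementation may differ.

```python
import numpy as np

def pod_basis(snapshots, energy=0.99):
    """Build a POD basis from training images (one flattened image per
    column): center the snapshots, take the SVD, and keep enough leading
    modes to capture `energy` of the total variance."""
    mean = snapshots.mean(axis=1, keepdims=True)
    u, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
    frac = np.cumsum(s ** 2) / np.sum(s ** 2)
    k = int(np.searchsorted(frac, energy)) + 1
    return u[:, :k], mean

def project(field, basis, mean):
    """Constrain a candidate field to the span of the basis -- the
    physics-informed regularization idea of the basis-constrained scheme."""
    c = basis.T @ (field - mean.ravel())
    return mean.ravel() + basis @ c
```

In the inversion, the optimization then searches over the few basis coefficients `c` rather than over every pixel, so candidate solutions automatically resemble the physics of the training ensemble.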

    International Colloquium on the Applications of Computer Science and Mathematics in Architecture and Civil Engineering: 20-22 July 2015, Bauhaus-Universität Weimar

    The 20th International Conference on the Applications of Computer Science and Mathematics in Architecture and Civil Engineering will be held at the Bauhaus-Universität Weimar from 20 to 22 July 2015. Architects, computer scientists, mathematicians, and engineers from all over the world will meet in Weimar for an interdisciplinary exchange of experiences, to report on their results in research, development, and practice, and to discuss them. The conference covers a broad range of research areas: numerical analysis, function-theoretic methods, partial differential equations, continuum mechanics, engineering applications, coupled problems, computer science, and related topics. Several plenary lectures in the aforementioned areas will take place during the conference. We invite architects, engineers, designers, computer scientists, mathematicians, planners, project managers, and software developers from business, science, and research to participate in the conference.

    Physics-based and Data-driven Methods with Compact Computing Emphasis for Structural Health Monitoring

    This doctoral dissertation contributes to both model-based and model-free data-interpretation techniques in vibration-based Structural Health Monitoring (SHM). In the model-based category, a surrogate-based finite element (FE) model updating algorithm is developed that improves computational efficiency by replacing the FE model with Response Surface (RS) polynomial models in the optimization problem of model calibration. In addition, an iterative, time-domain formulation of the problem is proposed to extract more information from measured signals and compensate for the error present in the regressed RS models. This methodology is applied to a numerical case study of a steel frame with global nonlinearity. Its performance in the presence of measurement noise is compared with a method based on sensitivity analysis, and it is observed that, while having comparable accuracy, the proposed method outperforms the sensitivity-based model updating procedure in terms of required time. Under the assumption of Gaussian measurement noise, it is also shown that this parameter estimation technique has low sensitivity to the standard deviation of the measurement noise. This is validated through several parametric sensitivity studies performed on numerical simulations of nonlinear systems with single and multiple degrees of freedom. The results show the least sensitivity to measurement noise level, the selected time window for model updating, and the location of the true model parameters in the RS regression domain when the vibration frequency of the system is outside the frequency bandwidth of the load. A further application of this method is presented through a case study of a steel frame with a bilinear material model under seismic loading. The results indicate the robustness of this parameter estimation technique across different cases of input excitation, measurement noise level, and true model parameters.
    In the model-free category, this dissertation presents data-driven damage identification and localization methods based on two-sample control statistics as well as damage-sensitive features extracted from single- and multivariate regression models. For this purpose, a sequential normalized likelihood ratio test and a two-sample t-test are adopted to detect changes in two families of damage features based on the coefficients of four different linear regression models. The performance of combinations of these damage features, regression models, and control statistics is compared on a scaled two-bay steel frame instrumented with a dense sensor network and excited by impact loading. It is shown that the presented methodologies successfully detect the timing and location of structural damage while maintaining acceptable false-detection quality. In addition, it is observed that incorporating multiple mathematical models, damage-sensitive features, and change-detection tests improves the overall performance of these model-free vibration-based structural damage detection procedures. To extend the scalability of the presented data-driven damage detection methods, a compressed-sensing damage localization algorithm is also proposed. The objective is accurate damage localization in a structural component instrumented with a dense sensor network while processing data from only a subset of sensors. In this method, a set of sensors from the network is first randomly sampled. Measurements from these sampled sensors are processed to extract damage-sensitive features. These features undergo statistical change-point analysis to establish a new boundary for a local search of the damage location. As the local search proceeds, the probability of the damage location is estimated through a Bayesian procedure with a bivariate Gaussian likelihood model. The decision boundary and the posterior probability of the damage location are updated as new sensors are added to the processing subset and more information about the damage location becomes available. This procedure continues until enough evidence has been collected to infer the damage location. The performance of this method is evaluated using an FE model of a cracked gusset-plate connection; pre- and post-damage strain distributions in the plate are used for damage diagnosis.
    Lastly, through a study of potential causes of damage to the Washington Monument during the 2011 Virginia earthquake, this dissertation demonstrates the role that SHM techniques play in improving the credibility of damage assessment and fragility analysis of constructed structures. An FE model of the Washington Monument is developed and updated based on the dynamic characteristics of the structure identified through ambient vibration measurements. The calibrated model is used to study the behavior of the Monument during the 2011 Virginia earthquake. This FE model is then modified to limit the tensile capacity of the grout material and previously cracked sections, in order to investigate the initiation and propagation of cracking in several future earthquake scenarios. The nonlinear FE model is subjected to two ensembles of site-compatible ground motions representing different seismic hazard levels for the Washington Monument, and the occurrence probability of several structural and non-structural damage states is investigated.
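    The two-sample change-detection idea can be sketched with Welch's t statistic applied to a sliding window of a damage-sensitive feature series. This is a simplification of the sequential likelihood-ratio and regression-coefficient machinery described above; the function names, window length, and threshold below are illustrative assumptions.

```python
import numpy as np

def two_sample_t(ref, cur):
    """Welch's two-sample t statistic between a reference (baseline)
    window and a current window of a damage-sensitive feature."""
    n1, n2 = len(ref), len(cur)
    v1, v2 = ref.var(ddof=1), cur.var(ddof=1)
    return (cur.mean() - ref.mean()) / np.sqrt(v1 / n1 + v2 / n2)

def detect_change(feature, window=50, threshold=4.0):
    """Slide a window over the feature series and flag the first index
    where the t statistic against the initial baseline window exceeds
    the threshold; return None if no change is detected."""
    ref = feature[:window]
    for i in range(window, len(feature) - window):
        if abs(two_sample_t(ref, feature[i:i + window])) > threshold:
            return i
    return None
```

A higher threshold lowers the false-detection rate at the cost of detection delay, which is the trade-off the dissertation tunes by combining several features and control statistics.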

    Seismic Waves

    The importance of seismic wave research lies not only in our ability to understand and predict earthquakes and tsunamis; seismic waves also reveal information on the Earth's composition and features, in much the same way as they led to the discovery of the Mohorovičić discontinuity. As our theoretical understanding of the physics behind seismic waves has grown, physical and numerical modeling have greatly advanced and now augment applied seismology for better prediction and engineering practices. This has led to novel applications such as using artificially induced shocks for exploration of the Earth's subsurface and seismic stimulation for increasing the productivity of oil wells. This book demonstrates the latest techniques and advances in seismic wave analysis, from theoretical approaches, data acquisition, and interpretation to analyses, numerical simulations, and research applications. The review process was conducted with the sincere support of Drs. Hiroshi Takenaka, Yoshio Murai, Jun Matsushima, and Genti Toyokuni.