
    Interval field methods with local gradient control

    This paper introduces a novel method to create an interval field based on measurement data. Such interval fields are typically used to describe a spatially distributed non-deterministic quantity, e.g., Young's modulus. The interval field is based on a number of measurement points, i.e., control points, extended throughout the domain by a set of basis functions. At the control points, the non-deterministic quantity is known and bounded by an interval. However, at these measurement points, information about the gradients might also be available. In addition, the non-deterministic quantity might be described better by estimating the gradients based on the other measurements. Hence, the proposed interval field method allows this gradient information to be incorporated. The method is based on Inverse Distance Weighting (IDW) with an additional set of basis functions: one set of basis functions interpolates the value, and the second set controls the gradient at the control points. The additional basis functions can be determined in two distinct ways: first, the gradients are available or can be measured directly at the control points, and second, a weighted average is taken with respect to all control points within the domain. In general, the proposed method provides a more versatile definition of an interval field compared to the standard implementation of inverse distance weighting. The application of the interval field is shown in a number of one-dimensional cases where a comparison with standard inverse distance weighting is made. In addition, a case study with a set of measurement data is used to illustrate the method and how different realisations are obtained.
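
    A minimal Python sketch of the idea (not the paper's exact basis functions): inverse distance weighting interpolates the interval bounds at three control points, and an assumed first-order correction term controls the gradient at each control point. All numbers and the power-2 weights are illustrative.

        import numpy as np

        def idw_weights(x, xc, p=2, eps=1e-12):
            # Normalized inverse-distance weights of query points x w.r.t. control points xc.
            d = np.abs(x[:, None] - xc[None, :]) + eps   # (n_query, n_control)
            w = 1.0 / d**p
            return w / w.sum(axis=1, keepdims=True)

        def interval_field(x, xc, lo, hi, grad=None):
            # Lower/upper bound fields by gradient-enhanced IDW (1D sketch).
            w = idw_weights(x, xc)
            lo_x, hi_x = w @ lo, w @ hi                  # plain IDW of the bounds
            if grad is not None:
                # Taylor-like correction that imposes the prescribed slope at each
                # control point; the term vanishes exactly at the control points.
                corr = (w * grad[None, :] * (x[:, None] - xc[None, :])).sum(axis=1)
                lo_x, hi_x = lo_x + corr, hi_x + corr
            return lo_x, hi_x

        # Three control points for, e.g., Young's modulus, bounded by intervals (GPa).
        xc = np.array([0.0, 0.5, 1.0])
        lo = np.array([190.0, 200.0, 185.0])
        hi = np.array([210.0, 220.0, 205.0])
        g  = np.array([30.0, -10.0, 0.0])    # assumed gradient information (GPa per unit length)

        x = np.linspace(0.0, 1.0, 101)
        lo_x, hi_x = interval_field(x, xc, lo, hi, grad=g)
        print(lo_x[50], hi_x[50])            # bounds recovered at the middle control point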

    Comparison of Bayesian and interval uncertainty quantification: application to the AIRMOD test structure

    This paper concerns the comparison of two inverse methods for the quantification of uncertain model parameters, based on experimentally obtained measurement data of the model's responses. Specifically, Bayesian inference is compared to a novel method for the quantification of multivariate interval uncertainty. The comparison is made by applying both methods to the AIRMOD measurement data set and critically comparing their results in terms of obtained information and computational expense. Since the computational cost of applying both methods to high-dimensional problems and realistic numerical models can become intractable, an artificial neural network (ANN) surrogate is used for both methods. The ANN proves to limit the computational cost to a large extent, even taking the generation of the training data set into account. Concerning the comparison of the two methods, it is found that the Bayesian identification provides less over-conservative bounds on the uncertainty in the responses of the AIRMOD model.
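
    A minimal sketch of the surrogate-based workflow, with a toy two-parameter model standing in for AIRMOD: the ANN is trained once, then reused both for interval propagation (bounding the response over an input box) and as a cheap likelihood inside Bayesian inference. The model, the bounds, and the network size are assumptions.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)

        # Toy stand-in for the expensive structural model (the paper uses AIRMOD).
        def model(theta):
            return np.sin(3.0 * theta[:, 0]) + 0.5 * theta[:, 1] ** 2

        # 1) Train the ANN surrogate once, on a modest design of experiments.
        X_train = rng.uniform(-1.0, 1.0, size=(500, 2))
        ann = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
        ann.fit(X_train, model(X_train))

        # 2) Interval propagation: bound the surrogate response over an input box.
        #    Brute-force sampling stands in for a proper global optimizer here.
        X_box = rng.uniform([-0.5, -0.5], [0.5, 0.5], size=(20000, 2))
        y_box = ann.predict(X_box)
        print("interval response bounds:", y_box.min(), y_box.max())

        # 3) Bayesian inference: the same surrogate makes every MCMC likelihood
        #    evaluation cheap; plug log_likelihood into any standard sampler.
        def log_likelihood(theta, y_obs=0.8, sigma=0.1):
            y_pred = ann.predict(np.atleast_2d(theta))[0]
            return -0.5 * ((y_obs - y_pred) / sigma) ** 2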

    Partially Bayesian active learning cubature for structural reliability analysis with extremely small failure probabilities

    The Bayesian failure probability inference (BFPI) framework provides a well-established Bayesian approach to quantifying our epistemic uncertainty about the failure probability resulting from a limited number of performance function evaluations. However, it is still challenging to perform Bayesian active learning of the failure probability by taking advantage of the BFPI framework. In this work, three Bayesian active learning methods are proposed under the name ‘partially Bayesian active learning cubature’ (PBALC), based on a clever use of the BFPI framework for structural reliability analysis, especially when small failure probabilities are involved. Since the posterior variance of the failure probability is computationally expensive to evaluate, the underlying idea is to exploit only the posterior mean of the failure probability to design two critical components for Bayesian active learning, i.e., the stopping criterion and the learning function. On this basis, three sets of stopping criteria and learning functions are proposed, resulting in the methods PBALC1, PBALC2 and PBALC3. Furthermore, the analytically intractable integrals involved in the stopping criteria are properly addressed from a numerical point of view. Five numerical examples are studied to demonstrate the performance of the three proposed methods. It is found empirically that the proposed methods can assess very small failure probabilities and significantly outperform several existing methods in terms of accuracy and efficiency.
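
    The PBALC stopping criteria and learning functions are not reproduced here. As a hedged illustration of the general pattern (surrogate the performance function, estimate the failure probability from the posterior mean only, and actively pick the next evaluation point), the sketch below uses a Gaussian process with the classic U-function as a stand-in learning function; the toy performance function and thresholds are assumptions.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        rng = np.random.default_rng(1)

        def g(x):                                  # toy performance function; failure: g < 0
            return 4.0 - x[:, 0] - x[:, 1]

        pool = rng.standard_normal((50000, 2))     # Monte Carlo population in standard space
        idx = rng.choice(len(pool), size=12, replace=False)
        X, y = pool[idx], g(pool[idx])             # small initial design

        for _ in range(50):
            gp = GaussianProcessRegressor(kernel=RBF(1.0), normalize_y=True).fit(X, y)
            mu, sd = gp.predict(pool, return_std=True)
            pf = np.mean(mu < 0.0)                 # plug-in estimate from the posterior mean
            U = np.abs(mu) / np.maximum(sd, 1e-12) # U-function as stand-in learning function
            if U.min() > 2.0:                      # classic stopping criterion
                break
            j = int(np.argmin(U))                  # most ambiguous sample -> run the model
            X, y = np.vstack([X, pool[j]]), np.append(y, g(pool[j:j + 1]))

        print(f"failure probability ~ {pf:.2e} with {len(y)} g-evaluations")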

    A material interpolation technique using the simplex polytope

    The Discrete Material Optimization (DMO) and Shape Function with Penalization (SFP) techniques constitute the state of the art in material interpolation for identifying, from a list of pre-defined candidate materials, the most suitable one(s) for the structural domain. The candidate materials are represented on this list through their mechanical properties and are interpolated within the domain of interest (DOI), whether that is the finite element (FE) domain or groups of FEs, so-called patches. Depending on the technique preferred to interpolate the mechanical properties within the DOI, a different type of weights is selected. The goal of the discrete material optimization problem (MOP) is to solve for these weights and determine for each FE/patch a unique material from the list. The current work extends the concept of the SFP technique by employing as weights the shape functions of a hyper-tetrahedral FE, the dimension of which is dynamically adapted depending on the number of candidate materials considered for the structural domain. This generalized hyper-tetrahedral FE constitutes what is defined as a simplex, and, similar to the SFP technique, each of its nodes is tied to a specific candidate material. In the context of discrete optimization, utilizing the shape functions of an abstract high-dimensional FE as weights for the candidate materials secures continuity in the number of candidate materials that can be considered for the structure, a feature lacking in the SFP technique. Additionally, given that the number of nodes forming the simplex FE is always one greater than the dimension of the space it is defined in, the dimension of the resulting MOP drops by one per DOI. The developed material interpolation technique is combined with the topology optimization problem (TOP) to formulate the concurrent material and topology optimization problem for compliance minimization of the structure. Finally, the latter is examined on the academic case study of the 3D Messerschmitt-Bölkow-Blohm (MBB) beam for the concurrent topology and discrete fiber orientation optimization problem.
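
    A minimal sketch of the simplex idea: for N candidate materials, the weights are the barycentric coordinates (shape functions) of an (N-1)-dimensional simplex with one node per material, so the design space dimension is always one less than the material count. The penalization exponent and the candidate moduli below are assumptions for illustration.

        import numpy as np

        def simplex_vertices(n_mat):
            # Reference simplex in R^(n_mat-1): the origin plus the unit vectors, so
            # the design space dimension is always one less than the material count.
            d = n_mat - 1
            return np.vstack([np.zeros(d), np.eye(d)])

        def shape_functions(xi, verts):
            # Barycentric coordinates of xi: the simplex shape functions (sum to 1).
            A = np.vstack([verts.T, np.ones(len(verts))])
            return np.linalg.solve(A, np.append(xi, 1.0))

        def interpolate_modulus(xi, E_cand, p=3.0):
            # Penalized interpolation E(xi) = sum_i N_i(xi)^p * E_i, pushing the
            # optimizer toward a unique material per FE/patch (p is an assumption).
            N = shape_functions(xi, simplex_vertices(len(E_cand)))
            return float(np.sum(np.clip(N, 0.0, 1.0) ** p * np.asarray(E_cand)))

        # Four candidate materials -> a tetrahedron in R^3, one node per material.
        E_candidates = [70e9, 110e9, 200e9, 10e9]   # illustrative moduli in Pa
        xi = np.array([0.1, 0.1, 0.7])              # design point near the fourth node
        print(interpolate_modulus(xi, E_candidates))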

    Bayesian parameter estimation of ligament properties based on tibio-femoral kinematics during squatting

    The objective of this study is to estimate the (probably correlated) ligament material properties and attachment sites in a highly non-linear, musculoskeletal knee model based on kinematic data of a knee rig experiment for seven specific specimens. Bayesian parameter estimation is used to account for uncertainty in the limited experimental data by exploring a high-dimensional input parameter space (50 parameters) consistent with all probable solutions. The set of solutions accounts for physiologically relevant ligament strain (ϵ < 6%). The transitional Markov chain Monte Carlo algorithm was used, with alterations introduced to avoid premature convergence. To perform the parameter estimation at feasible computational cost, a surrogate model of the knee model was trained. Results show that there is a large intra- and inter-specimen variability in ligament properties, and that multiple sets of ligament properties fit the experimentally measured tibio-femoral kinematics. Although all parameters were allowed to vary significantly, large interdependence is only found between the reference strain and the attachment sites. The large variation between specimens, and the interdependence between reference strain and attachment sites within one specimen, show the inability to identify a small range of ligament properties representative of the patient population. To limit ligament property uncertainty in clinical applications, research will need to invest in establishing patient-specific uncertainty ranges and/or accurate in vivo measuring methods of the attachment sites and reference strain and/or alternative (combinations of) movements that would allow identifying a unique solution.
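
    A minimal transitional MCMC sketch (without the paper's anti-premature-convergence alterations): the likelihood is tempered in stages, with each stage size chosen by bisection so the importance weights keep a target coefficient of variation, followed by resampling and one Metropolis move per chain. The flat prior and the toy two-parameter likelihood are assumptions.

        import numpy as np
        from scipy.optimize import brentq

        rng = np.random.default_rng(2)

        def tmcmc(log_like, sample_prior, n=2000, cov_target=1.0):
            # Minimal transitional MCMC: temper prior * like^beta with beta: 0 -> 1.
            # Assumes a flat prior over the sampled box, so MH uses only the likelihood.
            theta = sample_prior(n)
            ll = np.array([log_like(t) for t in theta])
            beta = 0.0
            while beta < 1.0:
                # Pick the next tempering step so the importance weights keep a
                # target coefficient of variation (bisection on the step size).
                def cov_w(db):
                    w = np.exp(db * (ll - ll.max()))
                    return np.std(w) / np.mean(w) - cov_target
                db = 1.0 - beta if cov_w(1.0 - beta) <= 0 else brentq(cov_w, 1e-10, 1.0 - beta)
                beta += db
                w = np.exp(db * (ll - ll.max())); w /= w.sum()
                # Resample by weight, then perturb each chain with one Metropolis step.
                idx = rng.choice(n, size=n, p=w)
                theta, ll = theta[idx], ll[idx]
                C = 0.04 * np.cov(theta.T) + 1e-12 * np.eye(theta.shape[1])
                prop = theta + rng.multivariate_normal(np.zeros(theta.shape[1]), C, size=n)
                ll_prop = np.array([log_like(t) for t in prop])
                acc = np.log(rng.uniform(size=n)) < beta * (ll_prop - ll)
                theta[acc], ll[acc] = prop[acc], ll_prop[acc]
            return theta

        # Toy usage: recover two parameters from a sharply peaked synthetic likelihood.
        true = np.array([0.3, -0.2])
        post = tmcmc(lambda t: -0.5 * np.sum((t - true) ** 2) / 0.05 ** 2,
                     lambda n: rng.uniform(-1.0, 1.0, size=(n, 2)))
        print(post.mean(axis=0))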

    Bounding the first excursion probability of linear structures subjected to imprecise stochastic loading

    This paper presents a highly efficient and accurate approach to determine the bounds on the first excursion probability of a linear structure that is subjected to an imprecise stochastic load. Traditionally, determining these bounds involves solving a double-loop problem, where the aleatory uncertainty has to be fully propagated for each realization of the epistemic uncertainty, or vice versa. When considering realistic structures such as buildings, whose numerical models often contain thousands of degrees of freedom, such an approach quickly becomes computationally intractable. In this paper, we introduce an approach to decouple this propagation by applying operator norm theory. In practice, the method determines those epistemic parameter values that yield the bounds on the probability of failure, given the epistemic uncertainty. The probability of failure, conditional on those epistemic parameters, is then computed using the recently introduced framework of Directional Importance Sampling. Two case studies involving a modulated Clough-Penzien spectrum are included to illustrate the efficiency and exactness of the proposed approach.
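
    A hedged sketch of the decoupling idea on a toy convolution model: a scalar norm proxy (the peak response standard deviation) is extremized over the epistemic interval first, and the aleatory uncertainty is then propagated only at the two extreme parameter values, replacing the double loop with two single loops. Crude Monte Carlo stands in for Directional Importance Sampling; the impulse response, the interval, and the threshold are assumptions.

        import numpy as np
        from scipy.linalg import toeplitz
        from scipy.optimize import minimize_scalar

        rng = np.random.default_rng(3)

        # Toy linear structure: the response is a convolution of the load with an
        # impulse response, r = H f. Real models have thousands of DOFs; H stands in.
        nt, dt = 200, 0.01
        t = np.arange(nt) * dt
        h = np.exp(-2.0 * t) * np.sin(10.0 * t)
        H = np.tril(toeplitz(h)) * dt

        def norm_proxy(theta):
            # Peak response standard deviation for a white-noise load of intensity
            # theta: a simple stand-in for the operator norm of the response map.
            return theta * np.sqrt((H ** 2).sum(axis=1)).max()

        def pf_conditional(theta, threshold=0.12, n_mc=20000):
            # First excursion probability conditional on the epistemic parameter,
            # by crude Monte Carlo (the paper uses Directional Importance Sampling).
            r = (theta * rng.standard_normal((n_mc, nt))) @ H.T
            return np.mean(np.abs(r).max(axis=1) > threshold)

        # Decoupling: extremize the norm proxy over the epistemic interval [0.5, 1.5],
        # then propagate the aleatory uncertainty only at the two extreme parameters.
        t_lo = minimize_scalar(norm_proxy, bounds=(0.5, 1.5), method="bounded").x
        t_hi = minimize_scalar(lambda q: -norm_proxy(q), bounds=(0.5, 1.5), method="bounded").x
        print("P_f bounds:", pf_conditional(t_lo), pf_conditional(t_hi))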

    Resilience Assessment under Imprecise Probability

    Resilience analysis of civil structures and infrastructure systems is a powerful approach to quantifying an object's ability to prepare for, recover from, and adapt to disruptive events. Resilience is typically measured probabilistically by integrating the time-variant performance function, which is by nature a stochastic process, as it is affected by many uncertain factors such as hazard occurrences and post-hazard recoveries. Resilience evaluation can be challenging in cases with imprecise probability information on the time-variant performance function. In this paper, a novel method for the assessment of imprecise resilience is presented, which deals with resilience problems with a non-probabilistic performance function. The proposed method, producing lower and upper bounds for imprecise resilience, builds on methods for imprecise reliability documented in the literature, motivated by the similarity between reliability and resilience. Two types of stochastic processes, namely log-Gamma and lognormal processes, are employed to model the performance function, with which the explicit form of resilience is derived. Moreover, for a planning horizon within which hazards may occur multiple times, the incompletely informed performance function results in "time-dependent imprecise resilience," which depends on the duration of the service period (e.g., life cycle) and can also be handled by the proposed method. The applicability of the proposed resilience bounding method is demonstrated by examining the time-dependent resilience of a strip foundation in a coastal area subjected to groundwater intrusion in a changing climate. The impact of imprecise probability information on resilience is quantified through sensitivity analysis.
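
    A minimal sketch of interval-valued resilience under an assumed lognormal performance process: with the process mean known only to lie in an interval, the time-averaged performance is bounded by sweeping that interval, with Monte Carlo standing in for the paper's explicit derivations. The correlation model and all numbers are assumptions.

        import numpy as np

        rng = np.random.default_rng(4)

        def resilience(mu, sigma, T=1.0, n_t=200, n_mc=5000, corr_len=0.2):
            # Mean time-averaged performance E[(1/T) int_0^T Q(t) dt] by Monte Carlo,
            # with Q(t) = min(exp(G(t)), 1) and G a stationary Gaussian process
            # (squared-exponential correlation; the model choice is an assumption).
            t = np.linspace(0.0, T, n_t)
            C = sigma ** 2 * np.exp(-(((t[:, None] - t[None, :]) / corr_len) ** 2))
            L = np.linalg.cholesky(C + 1e-10 * np.eye(n_t))
            G = mu + rng.standard_normal((n_mc, n_t)) @ L.T
            Q = np.minimum(np.exp(G), 1.0)          # performance normalized to [0, 1]
            return Q.mean()

        # Imprecise probability: the process mean is only known to lie in an interval,
        # so the resilience is itself an interval, bounded here by a parameter sweep.
        mus = np.linspace(-0.5, -0.1, 9)
        R = [resilience(m, 0.2) for m in mus]
        print("resilience bounds: [%.3f, %.3f]" % (min(R), max(R)))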