
    Adaptive construction of surrogate functions for various computational mechanics models

    In most science and engineering fields, numerical simulation models are used to replicate physical systems. Attempting to imitate the true behavior of complex systems results in computationally expensive simulation models, which are often associated with a number of parameters that may be uncertain or variable. Propagating variability from the input parameters of a simulation model to the output quantities is important for better understanding the system behavior. Variability propagation for complex systems requires repeated runs of costly simulation models with different inputs, which can be prohibitively expensive; for efficient propagation, the total number of model evaluations must therefore be kept as small as possible. An efficient way to account for variations in the output of interest with respect to these parameters is to develop black-box surrogates: the expensive high-fidelity simulation model is replaced with a much cheaper model (the surrogate) built from a limited number of high-fidelity simulations at a set of points called the design of experiments (DoE). The central challenge in surrogate modeling is to deal efficiently with simulation models that are expensive and contain a large number of uncertain parameters. Moreover, replicating different types of physical systems yields simulation models that vary in the type of output (discrete or continuous), in the extent of available output information (outputs, output gradients, or both), and in whether the model is stochastic or deterministic. These variations in available information from one model to the next demand different surrogate modeling algorithms for maximum efficiency. In this dissertation, simulation models related to application problems in solid mechanics are considered, belonging to each of the above-mentioned classes. Different surrogate modeling strategies are proposed for these models, and their performance is demonstrated and compared with existing surrogate modeling algorithms. Because of their non-intrusive nature, the developed algorithms can easily be extended to simulation models of similar classes in any other field of application.
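
    Below is a minimal sketch of the generic surrogate-modeling workflow the abstract describes (a small DoE, a handful of expensive runs, then a cheap emulator). It does not reproduce the dissertation's adaptive algorithms; `expensive_model` is a hypothetical stand-in for a costly simulation.

    ```python
    import numpy as np
    from scipy.stats import qmc
    from sklearn.gaussian_process import GaussianProcessRegressor

    def expensive_model(x):
        """Hypothetical stand-in for a costly high-fidelity simulation."""
        return np.sin(3 * x[:, 0]) + 0.5 * np.cos(5 * x[:, 1])

    # Design of experiments: a small Latin hypercube over the 2-D input space.
    doe = qmc.LatinHypercube(d=2, seed=0).random(n=30)
    y = expensive_model(doe)

    # Fit a Gaussian-process surrogate on the few available runs.
    surrogate = GaussianProcessRegressor(normalize_y=True).fit(doe, y)

    # The surrogate now replaces the simulator for cheap predictions; the
    # predictive standard deviation can drive adaptive selection of new points.
    x_new = qmc.LatinHypercube(d=2, seed=1).random(n=5)
    mean, std = surrogate.predict(x_new, return_std=True)
    ```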

    A stochastic method for representation, modelling and fusion of excavated material in mining

    The ability to safely and economically extract raw materials such as iron ore from remote, isolated, and possibly dangerous locations will become more pressing over the coming decades as easily accessible deposits are depleted. An autonomous mining system has the potential to make the mining process more efficient, predictable, and safe under these changing conditions. One of the key parts of the mining process is the estimation and tracking of bulk material through the mining production chain. Current state-of-the-art tracking and estimation systems use a deterministic representation of bulk material. This is problematic for wide-scale automation of mine processes, as the estimates carry no measure of their uncertainty. A probabilistic representation is critical for autonomous systems to correctly interpret and fuse the available data and make the most informed decision possible without human intervention. This thesis investigates whether bulk material properties can be represented probabilistically through a mining production chain to provide statistically consistent estimates of the material at each stage. Experiments and methods in this thesis focus on the load-haul-dump cycle. A representation of bulk material using lumped masses is developed, together with a method for tracking and estimating these lumped masses within the production chain using an 'Augmented State Kalman Filter' (ASKF). The method ensures that fusing new information at different stages yields statistically consistent estimates of the lumped masses. Particular attention is paid to the feasibility and practicality of implementing a solution on a production mine site given currently available sensing technology, and to how that technology can be adapted for use within the developed estimation system, with emphasis on remote sensing and volume estimation.
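
    As a concrete illustration of the augmented-state idea, the sketch below applies a textbook Kalman measurement update to a state vector holding two lumped-mass estimates. The numbers and the belt-scale measurement model are invented for illustration; the thesis's exact ASKF formulation is not reproduced.

    ```python
    import numpy as np

    def kf_update(x, P, z, H, R):
        """Standard Kalman measurement update on the augmented state."""
        S = H @ P @ H.T + R               # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
        x = x + K @ (z - H @ x)           # corrected state estimate
        P = (np.eye(len(x)) - K @ H) @ P  # corrected covariance
        return x, P

    # Augmented state: two lumped masses (tonnes) with prior uncertainty.
    x = np.array([100.0, 80.0])
    P = np.diag([25.0, 25.0])

    # A downstream belt scale weighs both masses together: z = m1 + m2 + noise.
    H = np.array([[1.0, 1.0]])
    z = np.array([172.0])
    R = np.array([[4.0]])

    x, P = kf_update(x, P, z, H, R)  # both estimates tighten jointly
    ```

    Because both masses live in one state vector, a single downstream measurement updates them jointly, which is what keeps the per-stage estimates statistically consistent.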

    Kernel-based fault diagnosis of inertial sensors using analytical redundancy

    Kernel methods exploit high-dimensional spaces for representational advantage while operating only implicitly in those spaces, thus incurring none of the computational cost of doing so. They have the potential to advance the state of the art in control and signal processing applications and are increasingly being adopted across these domains. Applications of kernel methods to fault detection and isolation (FDI) have been reported, but few in aerospace research, even though they offer a promising way to perform or enhance fault detection; it is mostly in process monitoring, for example in the chemical processing industry, that these techniques have found broader application. This research explores kernel-based solutions for model-based fault diagnosis in aerospace systems. Specifically, it investigates the application of these techniques to the detection and isolation of IMU/INS sensor faults, a canonical open problem in the aerospace field. Kernel PCA, a kernelised non-linear extension of the well-known principal component analysis (PCA) algorithm, is implemented to tackle IMU fault monitoring. An isolation scheme is derived from the strong duality known to exist between linear PCA and the parity space technique, probably the most widely practiced FDI method in the aerospace domain. The resulting algorithm, termed partial kernel PCA, combines the isolation properties of the parity space method with the non-linear approximation ability of kernel PCA. Further, a number of unscented non-linear filters for FDI are implemented, equipped with data-driven transition models based on Gaussian processes, a non-parametric Bayesian kernel method. A distributed estimation architecture is proposed that can perform sensor fusion concurrently with fault diagnosis and allows faulty sensors to be decoupled from the navigation solution.
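
    The sketch below shows the flavor of kernel-PCA-based fault monitoring: fit the model on fault-free data, then flag samples whose input-space reconstruction error exceeds a nominal threshold. It is a generic illustration with synthetic data, not the partial kernel PCA isolation scheme developed in this work.

    ```python
    import numpy as np
    from sklearn.decomposition import KernelPCA

    rng = np.random.default_rng(0)
    nominal = rng.normal(size=(500, 6))  # synthetic stand-in for healthy sensor residuals

    kpca = KernelPCA(n_components=3, kernel="rbf",
                     fit_inverse_transform=True).fit(nominal)

    def detection_statistic(X):
        """Input-space reconstruction error; grows under faults."""
        X_hat = kpca.inverse_transform(kpca.transform(X))
        return np.linalg.norm(X - X_hat, axis=1)

    # Threshold from the empirical nominal distribution (99th percentile).
    threshold = np.quantile(detection_statistic(nominal), 0.99)

    faulty = nominal.copy()
    faulty[:, 2] += 5.0                  # inject a bias fault on one channel
    alarms = detection_statistic(faulty) > threshold
    ```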

    Spatial statistics and analysis of Earth's ionosphere

    Thesis (Ph.D.)--Boston University
    The ionosphere, a layer of Earth's upper atmosphere characterized by energetic charged particles, serves as a natural plasma laboratory and supplies proxy diagnostics of space weather drivers in the magnetosphere and the solar wind. The ionosphere is a highly dynamic medium, and the spatial structure of observed features (such as auroral light emissions, charge density, and temperature) is rich with information when analyzed in the context of fluid, electromagnetic, and chemical models. Obtaining measurements with higher spatial and temporal resolution is clearly advantageous. For instance, measurements obtained with a new electronically steerable incoherent scatter radar (ISR) offer a unique space-time perspective compared to those of a dish-based ISR, but this modality has unique ambiguities that must be carefully considered. The ISR target is stochastic, and the fidelity of fitted parameters (ionospheric densities and temperatures) requires integrated sampling, creating a tradeoff between measurement uncertainty and spatio-temporal resolution. Spatial statistics formalizes the relationship between spatially dispersed observations and the underlying process(es) they represent: a spatial process is regarded as a random field whose distribution is structured (e.g., through a correlation function) such that data sampled over a spatial domain support inference or prediction of the process. Quantification of uncertainty, an important component of scientific data analysis, is a core value of spatial statistics. This research applies the formalism of spatial statistics to the analysis of Earth's ionosphere using remote sensing diagnostics. In the first part, we consider the problem of volumetric imaging using phased-array ISR based on optimal spatial prediction ("kriging"). In the second part, we develop a technique for reconstructing two-dimensional ion flow fields from line-of-sight projections using Tikhonov regularization. In the third part, we adapt our spatial statistical approach to global ionospheric imaging using total electron content (TEC) measurements derived from navigation satellite signals.
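
    As a small illustration of the first part's theme, the sketch below implements simple kriging with an assumed squared-exponential covariance: the predictor is a covariance-weighted combination of scattered observations, and the closed-form variance quantifies the uncertainty the abstract emphasizes. The covariance model and data are invented; nothing ISR-specific is reproduced.

    ```python
    import numpy as np

    def cov(a, b, length=0.5):
        """Assumed squared-exponential covariance (unit sill) between point sets."""
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
        return np.exp(-(d / length) ** 2)

    rng = np.random.default_rng(1)
    obs_xy = rng.uniform(0, 1, size=(40, 2))                 # scattered measurement sites
    obs_val = np.sin(4 * obs_xy[:, 0]) + 0.1 * rng.normal(size=40)

    grid = np.stack(np.meshgrid(np.linspace(0, 1, 50),
                                np.linspace(0, 1, 50)), axis=-1).reshape(-1, 2)

    # Kriging weights solve C_oo w = C_og; the prediction is w^T y, and the
    # kriging variance is the prior variance minus the explained part.
    C_oo = cov(obs_xy, obs_xy) + 1e-4 * np.eye(len(obs_xy))  # nugget for stability
    C_og = cov(obs_xy, grid)
    w = np.linalg.solve(C_oo, C_og)
    pred = w.T @ obs_val
    var = 1.0 - np.einsum("ij,ij->j", C_og, w)               # unit sill assumed
    ```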