104 research outputs found

    Quasi-Newton inversion of seismic first arrivals using source finite bandwidth assumption: Application to subsurface characterization of landslides

    Characterizing the internal structure of landslides is of primary importance for hazard assessment. Many geophysical techniques have been used in recent years to image these structures, among them seismic tomography. The objective of this work is to present a high-resolution seismic inversion algorithm for first-arrival times that minimizes the use of subjective regularization operators. A Quasi-Newton P-wave tomography inversion algorithm has been developed. It is based on a finite-frequency assumption for highly heterogeneous media, applies an objective inversion regularization (based on the wave propagation principle), and uses the entire source frequency spectrum to improve the tomography resolution. The Fresnel wavepaths calculated for different source frequencies are used to retropropagate the traveltime residuals, assuming that in highly heterogeneous media the first arrivals are only affected by velocity anomalies present in the first Fresnel zone. The performance of the algorithm is first evaluated on a synthetic dataset and then applied to a real dataset acquired at the Super-Sauze landslide, which is characterized by a complex bedrock geometry, a layering of different materials and important changes in soil porosity (e.g. surface fissures). The seismic P-wave velocity and the wave attenuation are calculated, and the two tomographies are compared to previous studies of the site.
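As a rough illustration of the first-Fresnel-zone assumption above, the textbook half-width of the first Fresnel zone in a homogeneous medium can be computed as follows (a simplified sketch only; the function name and values are illustrative, and the paper's frequency-dependent wavepath weighting is not reproduced here):

```python
import math

def fresnel_halfwidth(v, f, d1, d2):
    """Half-width of the first Fresnel zone at a point splitting the
    source-receiver ray into segments d1 and d2 (homogeneous medium)."""
    lam = v / f  # wavelength of the source frequency f in a medium of velocity v
    return math.sqrt(lam * d1 * d2 / (d1 + d2))

# Example: 1000 m/s medium, 50 Hz source, midpoint of a 100 m ray
r_mid = fresnel_halfwidth(1000.0, 50.0, 50.0, 50.0)
```

The zone is widest at the midpoint of the ray and shrinks toward either endpoint, which is why higher frequencies (shorter wavelengths) concentrate sensitivity near the geometric ray and improve resolution.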

    Correlated uncertainty arithmetic with application to fusion neutronics

    This thesis advances the idea of automatic and rigorous uncertainty propagation for computational science. The aim is to replace the deterministic arithmetic and logical operations composing a function or a computer program with their uncertain equivalents. In this thesis, uncertain computer variables are labelled uncertain numbers, which may be probability distributions, intervals, probability boxes, or possibility distributions. The individual models of uncertainty are surveyed in the context of imprecise probability theory, and their individual arithmetics are described and developed, with new results presented for each. The presented arithmetic framework allows random variables to be imprecisely characterised or partially defined. It is a common situation that input random variables are unknown or that only certain characteristics of the inputs are known. How uncertain numbers can be rigorously represented by a finite numerical discretisation is described. Further, it is shown how arithmetic operations are computed by numerical convolution, accounting both for the error from the inputs' discretisation and for the error from the numerical integration, yielding guaranteed bounds on computed uncertain numbers. One of the central topics of this thesis is stochastic dependence. Considering complex dependencies amongst uncertain numbers is necessary, as dependence plays a key role in operations: an arithmetic operation between two uncertain numbers is a function not only of the input numbers but also of how they are correlated, which is often more important than the marginal information. In the presented arithmetic, dependencies between uncertain numbers may also be partially defined or missing entirely. A major contribution of this thesis is a set of methods to propagate dependence information through functions alongside marginal information. The long-term goal is to solve probabilistic problems with partial knowledge about marginal distributions and dependencies using algorithms which were written deterministically. The developed arithmetic frameworks can be used individually, or may be combined into a larger uncertainty computing framework. We present an application of the developed method to a radiation transport algorithm for nuclear fusion neutronics problems.
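Two of the ideas above can be sketched in a few lines (function names are illustrative, not the thesis's API): interval addition yields guaranteed bounds no matter how the operands depend on each other, while bounds on the variance of a sum show how an unspecified correlation widens an answer.

```python
def interval_add(a, b):
    """Sum of two intervals (lo, hi); the bounds hold under any dependence."""
    return (a[0] + b[0], a[1] + b[1])

def var_sum_bounds(sx, sy):
    """Bounds on Var(X+Y) when only the standard deviations sx, sy are
    known and the correlation rho in [-1, 1] is left unspecified:
    Var(X+Y) = sx^2 + sy^2 + 2*rho*sx*sy."""
    lo = sx**2 + sy**2 - 2 * sx * sy  # attained at rho = -1
    hi = sx**2 + sy**2 + 2 * sx * sy  # attained at rho = +1
    return (lo, hi)
```

The gap between `lo` and `hi` is exactly the dependence information that the thesis proposes to propagate alongside the marginals.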

    Identification of low order models for large scale processes

    Many industrial chemical processes are complex, multi-phase and large scale in nature. These processes are characterized by various nonlinear physicochemical effects and fluid flows. Such processes often show coexistence of fast and slow dynamics during their time evolution. The increasing demand for flexible operation of a complex process, a pressing need to improve product quality, increasing energy costs and tightening environmental regulations make it rewarding to automate a large scale manufacturing process. Mathematical tools used for process modeling, simulation and control are useful to meet these challenges. Towards this purpose, the development of process models, either from first principles (conservation laws), i.e. rigorous models, or from input-output data, constitutes an important step. Both types of models have their own advantages and pitfalls. Rigorous process models can approximate the process behavior reasonably well. The ability to extrapolate rigorous process models and the physical interpretation of their states make them more attractive for automation purposes than input-output data based identified models. Therefore, the use of rigorous process models and rigorous model based predictive control (R-MPC) for online control and optimization of a process is very promising. However, due to several limitations, e.g. slow computation speed and high modeling effort, it is difficult to employ rigorous models in practice. This thesis aims to develop a methodology which results in smaller, less complex and computationally efficient process models, derived from the rigorous process models, which can be used in real time for online control and dynamic optimization of industrial processes. Such a methodology is commonly referred to as model (order) reduction. Model order reduction aims at removing model redundancy from rigorous process models.
The model order reduction methods investigated in this thesis are applied to two benchmark examples: an industrial glass manufacturing process and a tubular reactor. The complex, nonlinear, multi-phase fluid flow observed in a glass manufacturing process offers multiple challenges to any model reduction technique. Often, the rigorous first principle models of these benchmark examples are implemented as discretized partial differential equations whose solutions are computed using Computational Fluid Dynamics (CFD) numerical tools. Although these models are reliable representations of the underlying process, computation of their dynamic solutions requires significant computational effort in the form of CPU power and simulation time. The glass manufacturing process involves a large furnace whose walls wear out due to the high process temperature and the aggressive nature of the molten glass. It is shown here that the wearing of the glass furnace walls results in a change of the flow patterns of the molten glass inside the furnace. Therefore, it is also desired that the reduced order model approximate the process behavior under the influence of changes in the process parameters. In this thesis the change in flow patterns resulting from changes in a geometric parameter is treated as a bifurcation phenomenon. Such bifurcations exhibited by the full order model are detected using a novel framework of reduced order models and hybrid detection mechanisms. The reduced order models are obtained using the methods explained in the subsequent paragraphs. The model reduction techniques investigated in this thesis are based on the concept of Proper Orthogonal Decomposition (POD) of process measurements or simulation data. The POD method of model reduction involves a spectral decomposition of system solutions and arranges the spatio-temporal data in order of importance.
The spectral decomposition results in spatial and temporal patterns. The spatial patterns are often known as the POD basis, while the temporal patterns are known as the POD modal coefficients. Dominant spatio-temporal patterns are then chosen to construct the most relevant lower dimensional subspace. The subsequent step involves a Galerkin projection of the governing equations of the full order first principle model onto the resulting lower dimensional subspace. This thesis can be viewed as a contribution towards developing data-based nonlinear model reduction techniques for large scale processes. The major contribution of this thesis is presented in the form of two novel identification based approaches to model order reduction. The methods proposed here are based on the state information of a full order model and result in linear and nonlinear reduced order models. Similar to the POD method explained in the previous paragraph, the first step of the proposed identification based methods involves spectral decomposition. The second step is different and does not involve a Galerkin projection of the equation residuals. Instead, it involves identification of reduced order models to approximate the evolution of the POD modal coefficients. Towards this purpose, two different methods are presented. The first method involves identification of locally valid linear models to represent the dynamic behavior of the modal coefficients; global behavior is then represented by ‘blending’ the local models. The second method involves direct identification of nonlinear models to represent the dynamic evolution of the modal coefficients. In the first proposed model reduction method, the POD modal coefficients are treated as outputs of an unknown reduced order model that is to be identified. Using tools from the field of system identification, a black-box reduced order model is then identified as a linear map between the plant inputs and the modal coefficients.
Using this method, multiple local reduced LTI models corresponding to various working points of the process are identified. The working points cover the nonlinear operating range of the process, which describes the global process behavior. These reduced LTI models are then blended into a single Reduced-Order Linear Parameter Varying (RO-LPV) model. The weighted blending is based on nonlinear splines whose coefficients are estimated using the state information of the full order model. Along with the process nonlinearity, the nonlinearity arising due to the wear of the furnace wall is also approximated within the RO-LPV modeling framework. The second model reduction method proposed in this thesis allows approximation of a full order nonlinear model by various (linear or nonlinear) model structures. It is observed in this thesis that, for a certain class of full order models, the POD modal coefficients can be viewed as the states of the reduced order model. This knowledge is further used to approximate the dynamic behavior of the POD modal coefficients. In particular, reduced order nonlinear models in the form of tensorial (multi-variable polynomial) systems are identified. For these nonlinear tensorial models, stability and dissipativity are investigated. During the identification of the reduced order models, the physical interpretation of the states of the full order rigorous model is preserved. Due to their smaller dimension and reduced complexity, the reduced order models are computationally very efficient. The smaller computation time allows them to be used for online control and optimization of the process plant. The possibility of inferring reduced order models from the state information of a full order model alone, i.e. in the absence of access to the governing equations of the full order model (as is the case for many commercial software packages), makes the methods presented here attractive. The resulting reduced order models need further system theoretic analysis in order to estimate the model quality with respect to their usage in an online controller setting.
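The POD steps described above (spectral decomposition of snapshot data, truncation to the dominant modes, and recovery of the modal coefficients) can be sketched with a singular value decomposition; the snapshot matrix and rank below are illustrative, not taken from the thesis:

```python
import numpy as np

def pod_basis(snapshots, r):
    """POD of a snapshot matrix (rows: spatial points, columns: time
    samples), truncated to the r most energetic modes."""
    U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
    basis = U[:, :r]              # spatial patterns (POD basis)
    coeffs = basis.T @ snapshots  # temporal patterns (modal coefficients)
    return basis, coeffs

# Illustrative snapshot data of exact rank 5
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 50))
Phi, a = pod_basis(X, 5)          # here 5 modes capture the data exactly
```

The identification-based methods of the thesis then fit dynamic models to the columns of `a` instead of Galerkin-projecting the governing equations.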

    Decentralized Riemannian Particle Filtering with Applications to Multi-Agent Localization

    The primary focus of this research is to develop consistent nonlinear decentralized particle filtering approaches to the problem of multiple agent localization. A key aspect of our development is the use of Riemannian geometry to exploit the inherently non-Euclidean characteristics that are typical of multiple agent localization scenarios. A decentralized formulation is considered due to the practical advantages it provides over centralized fusion architectures. Inspiration is taken from the relatively new field of information geometry and the more established research field of computer vision. Differential geometric tools such as manifolds, geodesics, tangent spaces, and exponential and logarithmic mappings are used extensively to describe probabilistic quantities. Numerous probabilistic parameterizations were considered before settling on the efficient square-root probability density function parameterization. The square-root parameterization has the benefit of allowing filter calculations to be carried out on the well-studied Riemannian unit hypersphere. A key advantage of selecting the unit hypersphere is that it permits closed-form calculations, a characteristic that is not shared by current solution approaches. Through the use of the Riemannian geometry of the unit hypersphere, we are able to demonstrate the ability to produce estimates that are not overly optimistic. Results are presented that clearly show the ability of the proposed approaches to outperform current state-of-the-art decentralized particle filtering methods. In particular, results are presented that emphasize the achievable improvements in estimation error, estimator consistency, and required computational burden.
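The exponential and logarithmic mappings mentioned above have closed forms on the unit hypersphere, which is what makes closed-form filter calculations possible there; a minimal sketch of this standard sphere geometry (function names are illustrative, and this is not the dissertation's full filter):

```python
import numpy as np

def sphere_exp(p, v):
    """Exponential map on the unit hypersphere: move from unit point p
    along tangent vector v (with p . v = 0) by geodesic distance ||v||."""
    n = np.linalg.norm(v)
    if n < 1e-12:
        return p
    return np.cos(n) * p + np.sin(n) * (v / n)

def sphere_log(p, q):
    """Logarithmic map: tangent vector at p pointing toward q,
    with length equal to the geodesic (arc) distance from p to q."""
    c = np.clip(p @ q, -1.0, 1.0)
    theta = np.arccos(c)          # arc distance between p and q
    if theta < 1e-12:
        return np.zeros_like(p)
    u = q - c * p                 # component of q orthogonal to p
    return theta * u / np.linalg.norm(u)
```

These two maps let particle weights and square-root density parameters be pulled into a tangent space, averaged there, and pushed back onto the sphere without iterative solvers.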

    Analyzing Asymmetric Dependence in Exchange Rates using Copula

    In this paper I analyze the use of copulas in financial applications, namely to investigate the assumption of asymmetric dependence and to compute some measures of risk. For this purpose I use a portfolio consisting of four currencies from Central and Eastern Europe. Due to some stylized facts observed in exchange rate series, I filter the data with an ARMA-GJR model. The marginal distributions of the filtered residuals are fitted with a semi-parametric CDF, using a Gaussian kernel for the interior of the distribution and the Generalized Pareto Distribution for the tails. To obtain a better view of the dependence among the four currencies, I propose a decomposition of the large portfolio into three bivariate sub-portfolios. For each of them I compute Value-at-Risk and Conditional Value-at-Risk and then backtest the results.
    Keywords: Value-at-Risk, copula, Generalized Pareto Distribution
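The two risk measures computed above can be illustrated with a simple empirical estimator (a sketch on a raw return series; the paper works with copula-simulated, ARMA-GJR-filtered residuals, and the function name is illustrative):

```python
import numpy as np

def var_cvar(returns, alpha=0.99):
    """Empirical Value-at-Risk and Conditional VaR (expected shortfall)
    at confidence level alpha; losses are reported as positive numbers."""
    losses = -np.asarray(returns, dtype=float)
    var = np.quantile(losses, alpha)          # loss exceeded with prob. 1 - alpha
    cvar = losses[losses >= var].mean()       # average loss beyond VaR
    return var, cvar

# Illustrative: 100 equally spaced losses from 1 to 100
var, cvar = var_cvar(-np.arange(1.0, 101.0), alpha=0.99)
```

CVaR is always at least as large as VaR, since it averages only the tail losses beyond the VaR threshold.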

    Fundamentals

    Volume 1 establishes the foundations of this new field. It goes through all the steps, from data collection, summarization and clustering to different aspects of resource-aware learning, i.e., hardware, memory, energy, and communication awareness. Machine learning methods are inspected with respect to their resource requirements and to how scalability can be enhanced on diverse computing architectures, ranging from embedded systems to large computing clusters.

    Enhanced Power System Operational Performance with Anticipatory Control under Increased Penetration of Wind Energy

    As the world embraces a sustainable energy future, alternative energy resources such as wind power are increasingly being seen as an integral part of the future electric energy grid. Ultimately, integrating such a dynamic and variable mix of generation requires a better understanding of renewable generation output, in addition to power grid systems that improve power system operational performance in the presence of anticipated events such as wind power ramps. Because of the stochastic, uncontrollable nature of renewable resources, a thorough and accurate characterization of wind activity is necessary to maintain grid stability and reliability. Wind power ramps from an existing wind farm are studied to characterize persistence forecasting errors using extreme value analysis techniques. In addition, a novel metric that quantifies the amount of non-stationarity in time series wind power data is proposed and used in a real-time algorithm to provide a rigorous method that adaptively determines the training data for forecasts. Lastly, large swings in generation or load can cause system frequency and tie-line flows to deviate from nominal values, so an anticipatory MPC-based secondary control scheme was designed and integrated into an automatic generation control loop to improve the ability of an interconnection to respond to anticipated large events and fluctuations in the power system.
    Doctoral Dissertation, Electrical Engineering, 201
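Persistence forecasting, the baseline whose errors are characterized above, simply predicts that the value at the next time step equals the current one; a minimal sketch (function name and data are illustrative):

```python
import numpy as np

def persistence_errors(power, horizon=1):
    """Errors of a persistence forecast: the prediction for time t + horizon
    is the value at time t, so the error equals the realized change."""
    series = np.asarray(power, dtype=float)
    forecast = series[:-horizon]   # persisted values
    actual = series[horizon:]      # realized values
    return actual - forecast

# Illustrative wind power series (MW); the large error marks a ramp
errs = persistence_errors([10.0, 12.0, 30.0, 28.0])
```

The extreme values of these errors are exactly the ramp events to which extreme value analysis is applied.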