
    Interpolation and Extrapolation of Toeplitz Matrices via Optimal Mass Transport

    In this work, we propose a novel method for quantifying distances between Toeplitz-structured covariance matrices. By exploiting the spectral representation of Toeplitz matrices, the proposed distance measure is defined via an optimal mass transport problem in the spectral domain. This may then be interpreted in the covariance domain, suggesting a natural way of interpolating and extrapolating Toeplitz matrices such that their positive semi-definiteness and Toeplitz structure are preserved. The proposed distance measure is also shown to be contractive with respect to both additive and multiplicative noise, and thereby quantifies how the distance between signals decreases when they are corrupted by noise. Finally, we illustrate how this approach can be used for several applications in signal processing. In particular, we consider interpolation and extrapolation of Toeplitz matrices, as well as clustering problems and the tracking of slowly varying stochastic processes.
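    The construction lends itself to a compact numerical sketch. The Python fragment below illustrates the general idea rather than the paper's exact formulation: it discretizes the spectra of two Toeplitz covariances on a uniform frequency grid, treats the circle as the interval [0, 2*pi), and performs 1-D displacement interpolation through the quantile functions; the function names and grid parameters are illustrative choices.

```python
import numpy as np

def spectrum_from_toeplitz(c, n_grid=1024):
    """Discretized power spectrum from the first row c = (c_0, ..., c_m)
    of a real Toeplitz covariance, via the truncated Fourier series
    f(theta) = c_0 + 2 * sum_k c_k * cos(k * theta)."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_grid, endpoint=False)
    k = np.arange(1, len(c))
    f = c[0] + 2.0 * np.cos(np.outer(theta, k)) @ np.asarray(c[1:], float)
    return theta, np.maximum(f, 1e-12)   # clip so the measure stays positive

def ot_interpolated_covariance(c0, c1, tau, n_u=4000):
    """Displacement interpolation at tau in [0, 1] between the spectra
    of two Toeplitz covariances; returns the interpolated covariance
    sequence, the moment sequence of a positive measure."""
    theta, f0 = spectrum_from_toeplitz(c0)
    _, f1 = spectrum_from_toeplitz(c1)
    dth = theta[1] - theta[0]
    m0, m1 = f0.sum() * dth, f1.sum() * dth             # total spectral masses
    u = (np.arange(n_u) + 0.5) / n_u                    # quantile levels
    q0 = np.interp(u, np.cumsum(f0) / f0.sum(), theta)  # quantile functions
    q1 = np.interp(u, np.cumsum(f1) / f1.sum(), theta)
    q_tau = (1.0 - tau) * q0 + tau * q1                 # transported support
    m_tau = (1.0 - tau) * m0 + tau * m1                 # linearly mixed mass
    k = np.arange(len(c0))
    return m_tau * np.cos(np.outer(k, q_tau)).mean(axis=1)
```

    Because the interpolant is the moment sequence of a positive measure, the resulting matrix is automatically Toeplitz and positive semi-definite, which is exactly the structure-preservation property the abstract highlights.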

    Uncertainty Bounds for Spectral Estimation

    The purpose of this paper is to study metrics suitable for assessing uncertainty of power spectra when these are based on finite second-order statistics. The family of power spectra which is consistent with a given range of values for the estimated statistics represents the uncertainty set about the "true" power spectrum. Our aim is to quantify the size of this uncertainty set using suitable notions of distance, and in particular, to compute the diameter of the set, since this represents an upper bound on the distance between any choice of a nominal element in the set and the "true" power spectrum. Since the uncertainty set may contain power spectra with lines and discontinuities, it is natural to quantify distances in the weak topology (the topology defined by continuity of moments). We provide examples of such weakly-continuous metrics and focus on particular metrics for which we can explicitly quantify spectral uncertainty. We then consider certain high-resolution techniques which utilize filter banks for pre-processing, and compute worst-case a priori uncertainty bounds solely on the basis of the filter dynamics. This allows the a priori tuning of the filter banks for improved resolution over selected frequency bands.
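    As a concrete instance of a weakly-continuous metric, the sketch below computes the first-order transport (Monge-Kantorovich) distance between two discretized spectra. The paper studies a family of such metrics; this particular choice and the equal-mass assumption are illustrative only.

```python
import numpy as np

def w1_spectral_distance(theta, f0, f1):
    """Monge-Kantorovich (W1) distance between two discretized power
    spectra of equal total mass: the L1 distance between their
    cumulative functions.  Being weakly continuous, it stays finite and
    meaningful even for spectra with lines or discontinuities."""
    dth = theta[1] - theta[0]          # uniform frequency grid assumed
    F0 = np.cumsum(f0) * dth           # cumulative spectral functions
    F1 = np.cumsum(f1) * dth
    return np.sum(np.abs(F0 - F1)) * dth
```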

    New Directions for Contact Integrators

    Contact integrators are a family of geometric numerical schemes which guarantee the conservation of the contact structure. In this work we review the construction of both the variational and Hamiltonian versions of these methods. We illustrate some of the advantages of geometric integration in the dissipative setting by focusing on models inspired by recent studies in celestial mechanics and cosmology. (To appear as Chapter 24 in GSI 2021, Springer LNCS 1282.)
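    To make the idea concrete, here is a minimal sketch of one member of the family: a first-order Lie-splitting integrator for the contact Hamiltonian H(q, p, s) = p^2/2 + V(q) + alpha*s, which models mechanical systems with linear dissipation. Each sub-step is the exact flow of one term and hence a contact transformation, so their composition preserves the contact structure. This is an illustrative scheme under those assumptions, not necessarily the variational or Hamiltonian constructions reviewed in the chapter.

```python
import numpy as np

def contact_splitting_step(q, p, s, dt, V, grad_V, alpha):
    """One step of a first-order splitting contact integrator for
    H(q, p, s) = p**2/2 + V(q) + alpha*s."""
    # Dissipation flow of alpha*s: p and s decay exponentially (exact).
    decay = np.exp(-alpha * dt)
    p, s = p * decay, s * decay
    # Potential flow of V(q): q is frozen, so this update is exact.
    p = p - dt * grad_V(q)
    s = s - dt * V(q)
    # Kinetic flow of p**2/2: p is frozen, so this update is exact too.
    q = q + dt * p
    s = s + dt * 0.5 * p * p
    return q, p, s

# Demo: damped harmonic oscillator with V(q) = q**2 / 2.
q, p, s = 1.0, 0.0, 0.0
for _ in range(10_000):
    q, p, s = contact_splitting_step(q, p, s, 0.01,
                                     V=lambda q: 0.5 * q * q,
                                     grad_V=lambda q: q,
                                     alpha=0.1)
```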

    Digital Filtering Algorithms for Decorrelation within Large Least Squares Problems

    The GOCE (Gravity Field and steady-state Ocean Circulation Explorer) mission is dedicated to the determination of the Earth's gravity field. During the mission period of at least one year, the GOCE satellite will collect approximately 100 million highly correlated observations. The gravity field will be described in terms of approximately 70,000 spherical harmonic coefficients. This leads to a least squares adjustment in which the design matrix occupies 51 terabytes, while the covariance matrix of the observations requires 72,760 terabytes of memory. The very large design matrix is typically computed in parallel using supercomputers like the JUMP (Juelich Multi Processor) supercomputer in Jülich, Germany. However, such a brute-force approach does not work for the covariance matrix. Here, certain features of the observations must be exploited, e.g. that the observations can be interpreted as a stationary time series. This allows for a very sparse representation of the covariance matrix by digital filters. This thesis is concerned with the use of digital filters for decorrelation within large least squares problems. First, it is analyzed which conditions the observations must meet so that digital filters can be used to represent their covariance matrix. After that, different filter implementations are introduced and compared with each other, especially with respect to the calculation time of filtering. This is of special concern, as for many applications the very large design matrix has to be filtered at least once. One special problem arising from the use of digital filters is the so-called warm-up effect. For the first time, methods are developed in this thesis for determining the length of this effect and for avoiding it. Next, a new algorithm is developed to deal with the problem of short data gaps within the observation time series. Finally, it is investigated which filter methods are best suited to the GOCE application scenario, and several numerical simulations are performed.
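    The decorrelation idea can be sketched compactly: if the noise admits a stationary autoregressive (AR) representation, the AR polynomial acts as a whitening filter applied to both the observation vector and every column of the design matrix before an ordinary least-squares solve. The fragment below is a minimal illustration under that AR assumption; the filter order, the Yule-Walker fit, and the function names are illustrative, and the discarded leading rows correspond to the warm-up effect analyzed in the thesis.

```python
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import lfilter

def ar_whitening_filter(e, order):
    """Yule-Walker estimate of an AR(order) model from a noise sample e;
    returns the decorrelating FIR filter [1, -a1, ..., -ap]."""
    r = np.correlate(e, e, mode="full")[len(e) - 1:] / len(e)
    a = solve_toeplitz(r[:order], r[1:order + 1])
    return np.concatenate(([1.0], -a))

def decorrelated_least_squares(A, y, filt):
    """Filter the observations and each design-matrix column, then
    solve the resulting (approximately) white-noise problem."""
    y_f = lfilter(filt, [1.0], y)
    A_f = lfilter(filt, [1.0], A, axis=0)
    k = len(filt) - 1          # drop rows distorted by the filter warm-up
    x, *_ = np.linalg.lstsq(A_f[k:], y_f[k:], rcond=None)
    return x
```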

    Reservoir characterization using seismic inversion data

    Reservoir architecture may be inferred from analogs and geologic concepts, seismic surveys, and well data. Stochastically inverted seismic data are uninformative about meter-scale features, but aid downscaling by constraining coarse-scale interval properties such as total thickness and average porosity. Well data reveal detailed facies and vertical trends (and may indicate lateral trends), but cannot specify intrawell stratal geometry. Consistent geomodels can be generated for flow simulation by systematically considering the precision and density of the different data. Because seismic inversion, conceptual stacking, and lateral variability of the facies are uncertain, stochastic ensembles of geomodels are needed to capture variability. In this research, geomodels integrate stochastic seismic inversions. At each trace, constraints represent means and variances for the inexact constraint algorithms, or can be posed as exact constraints. These models also include stratigraphy (a stacking framework from prior geomodels), well data (core and wireline logs to constrain meter-scale structure at the wells), and geostatistics (for correlated variability). These elements are combined in a Bayesian framework. This geomodeling process creates prior models with plausible bedding geometries and facies successions. These prior models of stacking are updated using well and seismic data to generate the posterior model, and Markov chain Monte Carlo methods sample the posteriors. Plausible subseismic features are introduced into flow models, whilst avoiding overtuning to seismic data or conceptual geologic models. Fully integrated cornerpoint flow models are created, and methods for screening and simulation studies are discussed. The updating constraints on total thickness and average porosity need not come from a seismic survey: any spatially dense estimates of these properties may be used.
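    The inexact-constraint update can be illustrated at a single trace. The sketch below places lognormal priors on layer thicknesses, imposes the seismically derived total thickness as a Gaussian (mean/variance) constraint, and samples the posterior with random-walk Metropolis. Real geomodels add porosity, stacking priors, and geostatistical correlation; all names and distributions here are simplified stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_posterior(thick, prior_mu, prior_sd, seis_mean, seis_var):
    """Lognormal priors on layer thicknesses plus an inexact (Gaussian)
    seismic constraint on the coarse-scale total thickness."""
    if np.any(thick <= 0.0):
        return -np.inf
    lp = -0.5 * np.sum(((np.log(thick) - prior_mu) / prior_sd) ** 2)
    lp -= 0.5 * (thick.sum() - seis_mean) ** 2 / seis_var
    return lp

def metropolis(n_iter, thick0, step, *args):
    """Random-walk Metropolis over the layer thicknesses of one trace."""
    thick, lp = thick0.copy(), log_posterior(thick0, *args)
    samples = []
    for _ in range(n_iter):
        prop = thick + step * rng.standard_normal(thick.size)
        lp_prop = log_posterior(prop, *args)
        if np.log(rng.random()) < lp_prop - lp:   # accept/reject
            thick, lp = prop, lp_prop
        samples.append(thick.copy())
    return np.asarray(samples)

# Demo: five layers; seismic says ~50 m total thickness, 4 m^2 variance.
post = metropolis(5000, np.full(5, 10.0), 0.5,
                  np.log(10.0), 0.3, 50.0, 4.0)
```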

    Hybrid structural health monitoring using data-driven modal analysis and model-based Bayesian inference

    Civil infrastructures are valuable assets for the public and their owners and must be adequately and periodically maintained to guarantee safety and continuous service and to avoid economic losses. Vibration-based structural health monitoring (VBSHM) has been a significant tool for assessing the structural performance of civil infrastructures over the last decades. Challenges in VBSHM exist in two areas: operational modal analysis (OMA) and finite element model updating (FEMU). The former aims to extract natural frequencies, damping ratios, and mode shapes from vibration data under normal operation; the latter focuses on minimizing the discrepancies between measurements and model predictions. The main impediments to real-world application of VBSHM are that 1) uncertainties are inevitably involved due to measurement noise and modeling error; 2) analyzing massive data and high-fidelity models is computationally burdensome; 3) coupled structural parameters, e.g., mass and stiffness, are difficult to update. The Bayesian model updating approach (BMUA) is an advanced FEMU technique that updates structural parameters using modal data while accounting for the underlying uncertainties. However, traditional BMUA generally assumes the mass is precisely known and updates only the stiffness to circumvent the coupling effect of mass and stiffness. Simultaneously updating mass and stiffness is necessary to fully understand the structural integrity, especially when the mass has a relatively large variation. To tackle these challenges, this dissertation proposes a hybrid framework using data-driven and model-based approaches in two sequential phases: automated OMA and a BMUA with added mass/stiffness. Automated stochastic subspace identification (SSI) and Bayesian modal identification are first developed to acquire modal properties. Following this, in a novel BMUA, new eigen-equations based on two sets of modal data, from the original system and from a modified system with added mass or stiffness, are derived to address the coupling effect of structural parameters such as mass and stiffness. To avoid multi-dimensional integrals, an asymptotic optimization method and the Differential Evolution Adaptive Metropolis (DREAM) sampling algorithm are employed for Bayesian inference. To alleviate the computational burden, variance-based global sensitivity analysis to reduce model dimensionality and a Kriging model to substitute the time-consuming FEM are integrated into the BMUA. The proposed VBSHM framework is verified and illustrated using numerical, laboratory, and field test data, achieving the following goals: 1) properly treating parameter uncertainties; 2) substantially reducing the computational cost; 3) simultaneously updating structural parameters while addressing the coupling effect; 4) performing probabilistic damage identification at an accurate level.
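    The added-mass eigen-equation idea reduces, in the simplest single-degree-of-freedom case, to two measured frequencies resolving the mass-stiffness coupling in closed form: omega1^2 = k/m for the original system and omega2^2 = k/(m + dm) after a known mass dm is added. The sketch below is only this toy version; the dissertation derives the corresponding multi-degree-of-freedom eigen-equations inside a Bayesian posterior.

```python
import numpy as np

def identify_mass_stiffness(omega1, omega2, dm):
    """Resolve the mass-stiffness coupling of a 1-DOF system from two
    natural frequencies: omega1 (original) and omega2 (with a known
    added mass dm).  From k = m*omega1**2 = (m + dm)*omega2**2."""
    m = dm * omega2**2 / (omega1**2 - omega2**2)
    k = m * omega1**2
    return m, k

# Demo with exact values m = 2.0 kg, k = 800.0 N/m, dm = 0.5 kg.
m_true, k_true, dm = 2.0, 800.0, 0.5
w1 = np.sqrt(k_true / m_true)
w2 = np.sqrt(k_true / (m_true + dm))
print(identify_mass_stiffness(w1, w2, dm))   # recovers (2.0, 800.0)
```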

    Flutter Suppression by Active Controller of a Two-Dimensional Wing with a Flap

    Flutter is a divergent oscillation of an aeroelastic structure, and one of a family of aeroelastic instability phenomena, that results from the interaction of the elastic and inertial forces of the structure with the surrounding aerodynamic forces. Airfoil flutter is important due to its catastrophic effect on the durability and operational safety of the structure. Traditionally, flutter is prevented within an aircraft's flight envelope using passive approaches such as optimizing the stiffness distribution, mass balancing, or modifying the geometry during the design phase. Although these methods are effective, they lead to heavier airfoil designs. Active control methods, on the other hand, allow for less weight and higher manoeuvring capabilities. The main objective of this study is to investigate the potential effectiveness of Model Predictive Control (MPC) as an active control strategy to suppress flutter. Lagrange's energy method and Theodorsen's unsteady aerodynamic theory were employed to derive the equations of motion of a typical 2D wing section with a flap. Using MATLAB®, the airspeed at which flutter occurs for a specific set of wing parameters was found to be 23.96 m/s, at a frequency of 6.12 Hz. A Linear Quadratic Gaussian (LQG) compensator was designed and simulated. MATLAB® was also used to design and simulate a discrete MPC using Laguerre orthonormal functions. The simulated results for state regulation and reference tracking tasks in the flutter airspeed region from both controllers were compared and discussed in terms of quantitative performance measures and performance indices. The results showed that both LQG and MPC are powerful in suppressing flutter, in addition to being effective in tracking a reference input rapidly and accurately with zero steady-state error. The superiority of the constrained MPC is manifested by the comparison: MPC saved more than 40% of the settling time needed for the state regulation task. Furthermore, it performed the job with much less control energy, as indicated by the ISE and ISU indices. On top of that, the key advantage of MPC, namely the ability to perform real-time optimization with hard constraints on the input variables, was confirmed.
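    The receding-horizon computation at the heart of MPC can be sketched in a few lines for the unconstrained linear case: stack the N-step state predictions X = F x0 + G U of the discretized model, minimize the quadratic cost in closed form, and apply only the first input of the optimal sequence. The study's controller additionally uses Laguerre orthonormal functions and hard input constraints, which this minimal version deliberately omits; A, B, Q, R stand for a generic discrete-time model, not the paper's specific wing parameters.

```python
import numpy as np

def mpc_gain(A, B, Q, R, N):
    """Unconstrained receding-horizon gain.  Predictions over the
    horizon are X = F x0 + G U; minimizing sum(x'Qx + u'Ru) gives
    U* = -(G'QG + R)^(-1) G'QF x0, of which only u0 is applied."""
    n, m = B.shape
    F = np.vstack([np.linalg.matrix_power(A, i + 1) for i in range(N)])
    G = np.zeros((N * n, N * m))
    for i in range(N):
        for j in range(i + 1):
            G[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B
    Qb = np.kron(np.eye(N), Q)     # block-diagonal stage costs
    Rb = np.kron(np.eye(N), R)
    K = np.linalg.solve(G.T @ Qb @ G + Rb, G.T @ Qb @ F)
    return K[:m, :]                # receding horizon: keep only u0

# Closed loop: u_k = -K @ x_k regulates the aeroelastic states to zero.
```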

    In-Orbit Performance of the GRACE Accelerometers and Microwave Ranging Instrument

    The Gravity Recovery and Climate Experiment (GRACE) satellite mission has provided global long-term observations of mass transport in the Earth system, with applications in numerous geophysical fields. In this paper, we targeted the in-orbit performance of the GRACE key instruments, the ACCelerometers (ACC) and the MicroWave ranging Instrument (MWI). For the ACC data, we followed a transplant approach, analyzing the residual accelerations obtained by transplanting the accelerations of one of the two satellites to the other. For the MWI data, we analyzed the post-fit residuals of the monthly GFZ GRACE RL06 solutions with a focus on stationarity. Based on the analyses for the two test years 2007 and 2014, we derived stochastic models for the two instruments and a combined ACC+MWI stochastic model. While all three ACC axes performed worse than their preflight specifications, the ACC performance in 2007 was better than in 2014 by a factor of 3.6, as the satellite thermal control had been switched off in the meantime. The GRACE MWI noise showed white-noise behavior for frequencies above 10 mHz. In the combined ACC+MWI noise model, the ACC part dominated the frequencies below 10 mHz, while the MWI part dominated above 10 mHz. We applied the combined ACC+MWI stochastic models for 2007 and 2014 to the monthly GFZ GRACE RL06 processing. This improved the formal errors, bringing them to a level comparable with the noise of the estimated gravity field parameters. Furthermore, the need for co-estimating empirical parameters was reduced.
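    A combined instrument noise model of this kind can be assembled from residual time series with standard spectral estimation. The sketch below uses Welch periodograms and simply adds the two PSDs, assuming both residual series are expressed in the same observable and sampled at the same rate; the actual processing fits analytic models to the empirical PSDs before building decorrelation weights.

```python
import numpy as np
from scipy.signal import welch

def combined_noise_psd(acc_residuals, mwi_residuals, fs, nperseg=4096):
    """Empirical PSDs of the ACC and MWI residual series and their sum
    as a combined noise model; per the text above, the ACC part
    dominates below 10 mHz and the MWI part above 10 mHz."""
    f, p_acc = welch(acc_residuals, fs=fs, nperseg=nperseg)
    _, p_mwi = welch(mwi_residuals, fs=fs, nperseg=nperseg)
    return f, p_acc + p_mwi    # observation weights would scale as 1/PSD
```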