    A reduced basis ensemble Kalman method

    In the process of reproducing the state dynamics of parameter-dependent distributed systems, data from physical measurements can be incorporated into the mathematical model to reduce the parameter uncertainty and, consequently, improve the state prediction. Such a data assimilation process must deal with the data and model misfit arising from experimental noise as well as from model inaccuracies and uncertainties. In this work, we focus on the ensemble Kalman method (EnKM), a particle-based iterative regularization method designed for a posteriori analysis of time series. The method is gradient-free and, like the ensemble Kalman filter (EnKF), relies on a sample of parameters, or particle ensemble, to identify the state that best reproduces the physical observations while preserving the physics of the system as described by the best-knowledge model. We consider systems described by parameterized parabolic partial differential equations and employ model order reduction techniques to generate surrogate models of different accuracy with uncertain parameters. Their use in combination with the EnKM introduces a model bias, which constitutes a new source of systematic error. To mitigate its impact, an algorithm adjustment is proposed that accounts for a prior estimation of the bias in the data. The resulting RB-EnKM is tested in different conditions, including different ensemble sizes and increasing levels of experimental noise. The results are compared to those obtained with the standard EnKF and with the unadjusted algorithm.
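The ensemble Kalman update described above can be sketched in a few lines of NumPy. This is a generic ensemble Kalman iteration in which the paper's bias adjustment is reduced to a simple shift of the data by a prior bias estimate; the function name, the `bias` parameter, and the linear test model are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def enkm_update(ensemble, forward, y, gamma, bias=0.0, rng=None):
    """One iteration of a generic ensemble Kalman method (EnKM).

    ensemble : (J, p) array of parameter particles
    forward  : maps a (p,) parameter vector to (m,) predicted observations
    y        : (m,) measured data
    gamma    : (m, m) observation-noise covariance
    bias     : prior estimate of the surrogate-model bias in the data
               (a simple stand-in for the RB-EnKM adjustment; 0 gives
               the plain, unadjusted method)
    """
    if rng is None:
        rng = np.random.default_rng()
    J = ensemble.shape[0]
    G = np.array([forward(u) for u in ensemble])      # (J, m) predictions
    du = ensemble - ensemble.mean(axis=0)
    dg = G - G.mean(axis=0)
    C_ug = du.T @ dg / (J - 1)                        # cross-covariance (p, m)
    C_gg = dg.T @ dg / (J - 1)                        # output covariance (m, m)
    K = C_ug @ np.linalg.inv(C_gg + gamma)            # Kalman gain (p, m)
    # perturbed, bias-corrected observations, one copy per particle
    y_pert = (y - bias) + rng.multivariate_normal(np.zeros_like(y), gamma, J)
    return ensemble + (y_pert - G) @ K.T
```

On a linear forward model, iterating this update drives the ensemble mean toward parameters that fit the data, which is the gradient-free behaviour the abstract refers to.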

    Context-Aware Model Hierarchies for Quantifying Higher-Dimensional Uncertainty

    We formulate four novel context-aware algorithms based on model hierarchies, aimed at enabling an efficient quantification of uncertainty in complex, computationally expensive problems such as fluid-structure interaction and plasma microinstability simulations. Our results show that our algorithms are more efficient than standard approaches and that they are able to cope with the challenges of quantifying uncertainty in higher-dimensional, complex problems.
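As an illustration of the model-hierarchy idea, the sketch below shows a generic two-level (multifidelity) Monte Carlo estimator: a cheap low-fidelity model carries most of the sampling burden, and a small number of high-fidelity runs corrects its bias. This is a textbook construction assumed for illustration, not one of the four context-aware algorithms of the thesis.

```python
import numpy as np

def two_level_estimate(f_hi, f_lo, sampler, n_hi, n_lo, rng):
    """Two-level multifidelity Monte Carlo estimate of E[f_hi].

    The telescoping identity
        E[f_hi] = E[f_lo] + E[f_hi - f_lo]
    is sampled with many cheap low-fidelity runs for the first term
    and only a few expensive high-fidelity runs for the correction.
    """
    x_lo = sampler(n_lo, rng)                         # cheap samples
    x_hi = sampler(n_hi, rng)                         # expensive samples
    coarse = np.mean([f_lo(x) for x in x_lo])
    correction = np.mean([f_hi(x) - f_lo(x) for x in x_hi])
    return coarse + correction
```

When the two models are strongly correlated, the correction term has small variance, so far fewer high-fidelity evaluations are needed than in plain Monte Carlo.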

    Space Point Calibration of the ALICE TPC with Track Residuals

    In the upcoming LHC Run 3, the upgraded Time Projection Chamber (TPC) of the ALICE experiment will record Pb-Pb collisions in a continuous readout mode at an interaction rate of up to 50 kHz. These conditions will lead to the accumulation of space charge in the detector volume, which in turn induces distortions of the electron drift lines of several centimeters that fluctuate in time. This work describes the correction of these distortions via a calibration procedure that uses the information of the Inner Tracking System (ITS), located inside the TPC, and of the Transition Radiation Detector (TRD) and the Time-Of-Flight system (TOF), located around it. The required online tracking algorithm for the TRD, which is based on a Kalman filter, is the main result of this work. The procedure matches extrapolated ITS-TPC tracks to TRD space points utilizing GPUs. The new online tracking algorithm has a performance comparable to that of the offline tracking algorithm used in Runs 1 and 2 for tracks with transverse momenta above 1.5 GeV/c, while fulfilling the computing speed requirements for Run 3. The second part of this work describes the extraction of time-averaged TPC cluster residuals with respect to interpolated ITS-TRD-TOF tracks in order to create a map of space-charge distortions. Regular updates of the correction map compensate for changes in the TPC conditions. The map is applied in the final reconstruction of the data.
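A Kalman filter such as the one underlying the TRD online tracking alternates a prediction step with a measurement update. The sketch below is a generic linear Kalman step exercised on a toy constant-velocity track; it shows the structure of the algorithm only and is not the ALICE implementation.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One Kalman-filter predict/update cycle.

    x, P : current state estimate and its covariance
    z    : new measurement
    F, H : state-transition and measurement matrices
    Q, R : process- and measurement-noise covariances
    """
    # predict: propagate the state and inflate the covariance
    x = F @ x
    P = F @ P @ F.T + Q
    # update: blend the prediction with the measurement
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```

In track following, the state would hold track parameters and `z` the next detector hit; the gain `K` weights the hit against the extrapolated track according to their uncertainties.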

    State Estimation with Model Reduction and Shape Variability. Application to biomedical problems

    We develop a mathematical and numerical framework to solve state estimation problems for applications that present variations in the shape of the spatial domain. This situation arises typically in a biomedical context, where inverse problems are posed on certain organs or portions of the body that inevitably involve morphological variations. If one wants to provide fast reconstruction methods, the algorithms must take the geometric variability into account. We develop and analyze a method that accounts for this variability without requiring any a priori knowledge of a parametrization of the geometrical variations. For this, we rely on morphometric techniques involving Multidimensional Scaling, and couple them with reconstruction algorithms that make use of reduced model spaces pre-computed on a database of geometries. We prove the potential of the method on a synthetic test problem inspired by the reconstruction of blood flows and quantities of medical interest with Doppler ultrasound imaging.
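Classical Multidimensional Scaling, the morphometric ingredient mentioned above, embeds a set of geometries into a low-dimensional space using only their pairwise distances. A minimal sketch, assuming a precomputed (Euclidean-like) distance matrix between shapes:

```python
import numpy as np

def classical_mds(D, dim=2):
    """Classical multidimensional scaling.

    D : (n, n) matrix of pairwise distances between shapes/geometries
    Returns an (n, dim) embedding whose Euclidean distances approximate
    D (exactly, when D is Euclidean of intrinsic dimension <= dim).
    """
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]            # keep the largest eigenvalues
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))
```

The resulting coordinates can then serve as low-dimensional descriptors of the geometric variability, without any explicit shape parametrization.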

    Performance of the LHCb vertex locator

    The Vertex Locator (VELO) is a silicon microstrip detector that surrounds the proton-proton interaction region in the LHCb experiment. The performance of the detector during the first years of its physics operation is reviewed. The system is operated in vacuum, uses a bi-phase CO2 cooling system, and the sensors are moved to 7 mm from the LHC beam for physics data taking. The performance and stability of these characteristic features of the detector are described, and details of the material budget are given. The calibration of the timing and the data processing algorithms that are implemented in FPGAs are described. The system performance is fully characterised. The sensors have a signal-to-noise ratio of approximately 20, and a best hit resolution of 4 μm is achieved at the optimal track angle. The typical detector occupancy for minimum bias events in standard operating conditions in 2011 is around 0.5%, and the detector has less than 1% of faulty strips. The proximity of the detector to the beam means that the inner regions of the n+-on-n sensors have undergone space-charge sign inversion due to radiation damage. The VELO performance parameters that drive the experiment's physics sensitivity are also given. The track finding efficiency of the VELO is typically above 98%, and the modules have been aligned to a precision of 1 μm for translations in the plane transverse to the beam. A primary vertex resolution of 13 μm in the transverse plane and 71 μm along the beam axis is achieved for vertices with 25 tracks. An impact parameter resolution of less than 35 μm is achieved for particles with transverse momentum greater than 1 GeV/c.

    Structure-Preserving Hyper-Reduction and Temporal Localization for Reduced Order Models of Incompressible Flows

    A novel hyper-reduction method is proposed that conserves kinetic energy and momentum for reduced order models of the incompressible Navier-Stokes equations. The main advantage of conserving kinetic energy is that it endows the hyper-reduced order model (hROM) with a nonlinear stability property. The new method poses the discrete empirical interpolation method (DEIM) as a minimization problem and subsequently imposes constraints to conserve kinetic energy. Two methods are proposed to improve the robustness of the new method against error accumulation: oversampling and Mahalanobis regularization. Mahalanobis regularization has the benefit of not requiring additional measurement points. Furthermore, a novel method is proposed to perform structure-preserving temporal localization with the principal interval decomposition: new interface conditions are derived such that energy and momentum are conserved over the full time integration instead of only during separate intervals. The performance of the new structure-preserving hyper-reduction methods and the structure-preserving temporal localization method is analysed using two convection-dominated test cases: a shear-layer roll-up and two-dimensional homogeneous isotropic turbulence. It is found that both Mahalanobis regularization and oversampling allow hyper-reduction of these test cases. Moreover, Mahalanobis regularization provides comparable robustness while being more efficient than oversampling.
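For reference, the standard (unconstrained) DEIM point selection that the paper reformulates as a constrained minimization can be sketched as follows. This is the textbook greedy algorithm, not the structure-preserving variant proposed in the work.

```python
import numpy as np

def deim_indices(U):
    """Greedy DEIM point selection for a snapshot basis U of shape (n, m).

    Returns m row indices p such that a nonlinear term f with values
    close to the range of U is approximated from only those entries:
        f ~= U @ solve(U[p, :], f[p])
    Each new index is where the current basis vector is worst
    represented by the previously selected points.
    """
    p = [int(np.argmax(np.abs(U[:, 0])))]
    for j in range(1, U.shape[1]):
        # interpolate column j at the points chosen so far ...
        c = np.linalg.solve(U[np.ix_(p, range(j))], U[p, j])
        # ... and pick the point of largest interpolation residual
        r = U[:, j] - U[:, :j] @ c
        p.append(int(np.argmax(np.abs(r))))
    return np.array(p)
```

The constrained formulation in the paper augments this interpolation with energy-conservation conditions; oversampling corresponds to selecting more than m points and replacing the solve with a least-squares fit.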

    kPCA-Based Parametric Solutions Within the PGD Framework

    Parametric solutions make possible fast and reliable real-time simulations which, in turn, allow real-time optimization, simulation-based control and uncertainty propagation. This opens unprecedented possibilities for robust and efficient design and real-time decision making. The construction of such parametric solutions was addressed in our former works in the context of models whose parameters were easily identified and known in advance. In this work we address more complex scenarios in which the parameters do not appear explicitly in the model (complex microstructures, for instance). In these circumstances the parametric model solution requires combining a technique to find the relevant model parameters with a solution procedure able to cope with high-dimensional models, avoiding the well-known curse of dimensionality. In this work, kPCA (kernel Principal Component Analysis) is used for extracting the hidden model parameters, whereas the PGD (Proper Generalized Decomposition) is used for calculating the resulting parametric solution.
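A minimal kernel PCA sketch illustrates how hidden low-dimensional coordinates can be extracted from snapshot data; the RBF kernel and the parameter `gamma` are assumptions for illustration, not necessarily the paper's setup.

```python
import numpy as np

def kpca(X, dim=2, gamma=1.0):
    """Kernel PCA with an RBF (Gaussian) kernel.

    X : (n, d) data snapshots (e.g. microstructure descriptors)
    Returns (n, dim) coordinates that can act as candidate hidden
    model parameters for a subsequent parametric (e.g. PGD) solution.
    """
    sq = np.sum(X ** 2, axis=1)
    # RBF kernel matrix K_ij = exp(-gamma * ||x_i - x_j||^2)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))
    n = X.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                              # center in feature space
    w, V = np.linalg.eigh(Kc)
    idx = np.argsort(w)[::-1][:dim]             # leading components
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))
```

The leading kPCA coordinates order the snapshots along their dominant nonlinear modes of variation, which is what makes them usable as surrogate parameters when no explicit parametrization exists.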