
    Data-driven and Model-based Verification: a Bayesian Identification Approach

    This work develops a measurement-driven, model-based formal verification approach applicable to systems with partly unknown dynamics. We provide a principled method, grounded in reachability analysis and Bayesian inference, to compute the confidence that a physical system, driven by external inputs and accessed through noisy measurements, satisfies a temporal logic property. A case study investigates the bounded- and unbounded-time safety of a partly unknown linear time-invariant system.
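    The general idea can be illustrated in miniature: given a posterior over an uncertain dynamics parameter, the confidence in a bounded-time safety property is the posterior mass of parameter values whose trajectories stay safe. The sketch below is a Monte Carlo rendering of that idea under assumed toy dynamics and an assumed Gaussian posterior, not the paper's reachability-based algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D linear system x[k+1] = a * x[k]; the true 'a' is unknown.
# Posterior over 'a' after noisy measurements (assumed Gaussian for the sketch).
posterior_samples = rng.normal(loc=0.9, scale=0.05, size=10_000)

def safe(a, x0=1.0, bound=2.0, horizon=20):
    """Bounded-time safety: |x[k]| stays below 'bound' for all k <= horizon."""
    x = x0
    for _ in range(horizon):
        x = a * x
        if abs(x) >= bound:
            return False
    return True

# Confidence that the system satisfies the safety property = posterior mass
# of parameter values whose trajectories are safe.
confidence = np.mean([safe(a) for a in posterior_samples])
```

    In the paper, the per-parameter safety check is done by reachability analysis rather than simulation, but the confidence is still a probability computed under the Bayesian posterior.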

    From Nonlinear Identification to Linear Parameter Varying Models: Benchmark Examples

    Linear parameter-varying (LPV) models form a powerful model class for analyzing and controlling a (nonlinear) system of interest. Identifying an LPV model of a nonlinear system can be challenging because the scheduling variable(s) must be selected a priori, which is difficult when a first-principles understanding of the system is unavailable. This paper presents a systematic LPV embedding approach that starts from nonlinear fractional representation models. A nonlinear system is first identified using a nonlinear block-oriented linear fractional representation (LFR) model. This nonlinear LFR model class is then embedded into the LPV model class by factorizing the static nonlinear block present in the model, which yields an LPV-LFR or an LPV state-space model with affine dependency. The approach facilitates selecting the scheduling variable from a data-driven perspective. Furthermore, the estimation is not affected by measurement noise on the scheduling variables, an issue often left untreated by LPV model identification methods. The proposed approach is illustrated on two well-established nonlinear modeling benchmark examples.
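    The factorization step admits a compact illustration: a static nonlinearity f(w) is rewritten as f(w) = φ(w)·w, so that φ(w) becomes the scheduling signal and the block is linear in w for each frozen scheduling value. The function f below is a hypothetical stand-in, not one of the paper's benchmark nonlinearities.

```python
import numpy as np

# Hypothetical static nonlinearity inside the LFR block.
def f(w):
    return np.tanh(w)

# Factorize f(w) = phi(w) * w; phi is the scheduling function.
# At w = 0 use the limit phi(0) = f'(0) = 1 to avoid 0/0.
def phi(w):
    w = np.asarray(w, dtype=float)
    return np.where(np.abs(w) > 1e-12,
                    np.tanh(w) / np.where(w == 0, 1, w),
                    1.0)

# For any trajectory w(t), the nonlinear output f(w) equals the LPV
# output p(t) * w(t) with scheduling signal p(t) = phi(w(t)).
w = np.linspace(-3, 3, 7)
p = phi(w)
```

    Because the scheduling signal is constructed from the identified model rather than measured directly, measurement noise on it does not enter the estimation, which is the robustness property the abstract highlights.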

    Boosting Bayesian Parameter Inference of Nonlinear Stochastic Differential Equation Models by Hamiltonian Scale Separation

    Parameter inference is a fundamental problem in data-driven modeling. Given observed data believed to be a realization of some parameterized model, the aim is to find parameter values that explain the observed data. In many situations, the dominant sources of uncertainty must be included in the model to make reliable predictions, which naturally leads to stochastic models. Stochastic models render parameter inference much harder, as the aim is then to find a distribution of likely parameter values. In Bayesian statistics, a consistent framework for data-driven learning, this so-called posterior distribution can be used to make probabilistic predictions. We propose a novel, exact, and very efficient approach for generating posterior parameter distributions for stochastic differential equation models calibrated to measured time series. The algorithm is inspired by re-interpreting the posterior distribution as a statistical mechanics partition function of an object akin to a polymer, where the measurements are mapped onto beads that are heavier than those of the simulated data. To draw samples from the distribution, we employ a Hamiltonian Monte Carlo approach combined with multiple time-scale integration. A separation of time scales arises naturally if either the number of measurement points or the number of simulation points becomes large. Furthermore, at least for 1D problems, we can decouple the harmonic modes between measurement points and solve the fastest part of their dynamics analytically. Our approach is applicable to a wide range of inference problems and is highly parallelizable.
    Comment: 15 pages, 8 figures
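    For readers unfamiliar with the sampler underlying this work, a minimal single-scale Hamiltonian Monte Carlo step with a leapfrog integrator is sketched below for a standard-normal target. The paper's contributions (multiple time-scale integration, analytic treatment of harmonic modes) sit on top of this basic scheme and are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Target: standard normal, i.e. potential U(x) = x^2 / 2, so grad U(x) = x.
def grad_U(x):
    return x

def hmc_step(x, eps=0.2, L=10):
    """One Hamiltonian Monte Carlo step using a leapfrog integrator."""
    p = rng.normal()                      # resample momentum
    x_new, p_new = x, p
    p_new -= 0.5 * eps * grad_U(x_new)    # initial half step for momentum
    for _ in range(L - 1):
        x_new += eps * p_new              # full step for position
        p_new -= eps * grad_U(x_new)      # full step for momentum
    x_new += eps * p_new
    p_new -= 0.5 * eps * grad_U(x_new)    # final half step for momentum
    # Metropolis accept/reject on the Hamiltonian H = U + K.
    H_old = 0.5 * x**2 + 0.5 * p**2
    H_new = 0.5 * x_new**2 + 0.5 * p_new**2
    return x_new if rng.random() < np.exp(H_old - H_new) else x

samples = []
x = 0.0
for _ in range(5000):
    x = hmc_step(x)
    samples.append(x)
```

    In the paper's setting, the "position" variable is the whole simulated trajectory between measurement points, which is what makes a multiple time-scale integrator worthwhile.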

    Data-Driven Adaptive Reynolds-Averaged Navier-Stokes k-ω Models for Turbulent Flow-Field Simulations

    Data-driven adaptive algorithms are explored as a means of increasing the accuracy of Reynolds-averaged turbulence models. This dissertation presents two new data-driven adaptive computational models for simulating turbulent flow when partial but incomplete measurement data are available. These models automatically adjust (i.e., adapt) the closure coefficients of the Reynolds-averaged Navier-Stokes (RANS) k-ω turbulence equations to improve agreement between the simulated flow and a set of prescribed measurement data. The first approach is the data-driven adaptive RANS k-ω (D-DARK) model. It is validated with three canonical flow geometries: pipe flow, the backward-facing step, and flow around an airfoil. For all three test cases, the D-DARK model improves agreement with experimental data relative to a non-adaptive RANS k-ω model that uses standard values of the closure coefficients. The second approach is the Retrospective Cost Adaptation (RCA) k-ω model. The key enabling technology is retrospective cost adaptation, which was developed for real-time adaptive control but is used here for data-driven model adaptation. The algorithm conducts an optimization that seeks to minimize a surrogate performance measure and, by extension, the real flow-field error. The advantage of the RCA approach over the D-DARK approach is that it can adapt to unsteady measurements. The RCA-RANS k-ω model is verified with a statistically steady test case (pipe flow) as well as two unsteady test cases: vortex shedding from a surface-mounted cube and flow around a square cylinder. The RCA-RANS k-ω model effectively adapts to both averaged steady and unsteady measurement data.
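    The adaptation idea can be shown in miniature: tune a closure coefficient by gradient descent so that a surrogate model output matches measurement data. The toy linear "model" below is purely illustrative and is neither the D-DARK nor the RCA algorithm; 0.09 is chosen because it is the standard value of the k-ω closure coefficient β*.

```python
import numpy as np

# Toy surrogate: the model output depends linearly on a closure coefficient c.
# "Measurements" come from a true coefficient the adaptation must recover.
x = np.linspace(0.0, 1.0, 50)
c_true = 0.09             # standard k-omega closure value beta* = 0.09
y_meas = c_true * x

def model(c):
    return c * x

c = 0.05                  # start from a deliberately wrong coefficient
lr = 0.5
for _ in range(200):
    err = model(c) - y_meas
    grad = 2.0 * np.mean(err * x)   # d/dc of the mean squared error
    c -= lr * grad                   # gradient-descent update of the coefficient
```

    Real RANS adaptation differs in that the map from coefficients to flow field is given by a PDE solver, which is why the dissertation's methods work through surrogate costs rather than explicit gradients.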

    Functional Size Measurement and Model Verification for Software Model-Driven Developments: A COSMIC-based Approach

    Historically, software production methods and tools have had a single goal: to produce high-quality software. Since the goal of Model-Driven Development (MDD) methods is no different, MDD methods have emerged to take advantage of conceptual models for producing high-quality software. In such MDD contexts, conceptual models are used as input to automatically generate the final applications. We therefore advocate that there is a relation between the quality of the final software product and the quality of the models used to generate it. The quality of conceptual models can be influenced by many factors. In this thesis, we focus on the accuracy of the techniques used to predict the characteristics of the development process and the generated products. Regarding prediction techniques for software development processes, it is widely accepted that knowing the functional size of applications is essential for successfully applying effort and budget models. For evaluating the quality of the generated applications, defect detection is considered the most suitable technique. The research goal of this thesis is to provide an accurate measurement procedure based on COSMIC for the automatic sizing of object-oriented OO-Method MDD applications. To achieve this goal, it is necessary to accurately measure the conceptual models used in the generation of object-oriented applications. It is also very important that these models are free of defects so that the applications to be measured are correctly represented. In this thesis, we present the OOmCFP (OO-Method COSMIC Function Points) measurement procedure. This procedure makes a twofold contribution: the accurate measurement of object-oriented applications generated in MDD environments from the conceptual models involved, and the verification of those conceptual models to allow the complete generation of correct final applications.
    The OOmCFP procedure has been systematically designed, applied, and automated. The measurement procedure has been validated for conformance to the ISO 14143 standard and to the metrology concepts defined in the ISO VIM, and the accuracy of the measurements obtained has been validated according to ISO 5725. The procedure has also been validated through empirical studies, whose results demonstrate that OOmCFP can obtain accurate measures of the functional size of applications generated in MDD environments from the corresponding conceptual models.
    Marín Campusano, BM. (2011). Functional Size Measurement and Model Verification for Software Model-Driven Developments: A COSMIC-based Approach [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/11237
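    For context, COSMIC measures functional size by counting data movements, each worth one COSMIC Function Point (CFP): Entries, eXits, Reads, and Writes per functional process. The sketch below shows that tallying rule on a hypothetical "create customer" process; it is an illustration of the COSMIC counting principle, not the OOmCFP mapping from OO-Method conceptual models.

```python
# COSMIC sizing in miniature: each data movement (Entry, eXit, Read, Write)
# contributes 1 CFP; a functional process's size is the sum of its movements.
MOVEMENT_TYPES = {"E", "X", "R", "W"}

def cosmic_size(process_movements):
    """Sum CFP over the data movements of one functional process."""
    assert all(m in MOVEMENT_TYPES for m in process_movements)
    return len(process_movements)

# Hypothetical "create customer" process: one Entry (the form data),
# one Read (duplicate check), one Write (persist), one eXit (confirmation).
create_customer = ["E", "R", "W", "X"]
total_cfp = cosmic_size(create_customer)
```

    OOmCFP's contribution is to derive these movement counts automatically from the conceptual models used for code generation, so the size of the generated application is measured before it exists.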

    On the calculation of time alignment errors in data management platforms for distribution grid data

    The operation and planning of distribution grids require the joint processing of measurements from different grid locations. Since measurement devices in low- and medium-voltage grids lack precise clock synchronization, data management platforms of distribution system operators must be able to account for the impact of non-ideal clocks on measurement data. This paper formally introduces a metric, termed the Additive Alignment Error, that captures the impact of misaligned averaging intervals of electrical measurements. A trace-driven retrieval of this metric would be computationally costly for measurement devices, so an online estimation procedure in the data collection platform is required. To avoid transmitting high-resolution measurement data, this paper proposes and assesses an extension of a Markov-modulated process for modeling electrical traces, from which a closed-form matrix-analytic formula for the Additive Alignment Error is derived. A trace-driven assessment confirms the accuracy of the model-based approach. In addition, the paper describes practical settings where the model can be used in data management platforms with significant reductions in the computational demands on measurement devices.
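    The trace-driven baseline the paper wants to avoid running on devices can be sketched directly: compute interval averages of a high-resolution trace with and without a clock offset and compare them. Note that the exact definition of the Additive Alignment Error is the paper's; the mean-absolute-difference rendering below is a plausible stand-in, flagged as an assumption, and the trace itself is synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic high-resolution trace of a measured quantity (e.g. active power),
# one sample per second over an hour.
trace = rng.normal(loc=100.0, scale=10.0, size=3600)

def interval_averages(x, width, offset=0):
    """Averages over consecutive windows of 'width' samples, shifted by 'offset'."""
    x = x[offset:]
    n = len(x) // width
    return x[:n * width].reshape(n, width).mean(axis=1)

# Ideal averages (perfect clock) vs. averages from a device whose clock is
# 'offset' samples late; the alignment error here is the mean absolute
# difference between matching intervals (an assumed rendering of the metric,
# not the paper's exact definition).
ideal = interval_averages(trace, width=60)
skewed = interval_averages(trace, width=60, offset=5)
n = min(len(ideal), len(skewed))
alignment_error = np.mean(np.abs(ideal[:n] - skewed[:n]))
```

    The paper's point is that this computation needs the full-resolution trace; its Markov-modulated model reproduces the same quantity in closed form from compact model parameters.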

    Batch Nonlinear Continuous-Time Trajectory Estimation as Exactly Sparse Gaussian Process Regression

    In this paper, we revisit batch state estimation through the lens of Gaussian process (GP) regression. We consider continuous-discrete estimation problems wherein a trajectory is viewed as a one-dimensional GP, with time as the independent variable. Our continuous-time prior can be defined by any nonlinear, time-varying stochastic differential equation driven by white noise; this allows the possibility of smoothing our trajectory estimates using a variety of vehicle dynamics models (e.g., 'constant-velocity'). We show that this class of prior results in an inverse kernel matrix (i.e., covariance matrix between all pairs of measurement times) that is exactly sparse (block-tridiagonal), and that this can be exploited to carry out GP regression (and interpolation) very efficiently. When the prior is based on a linear, time-varying stochastic differential equation and the measurement model is also linear, this GP approach is equivalent to classical, discrete-time smoothing (at the measurement times); when a nonlinearity is present, we iterate over the whole trajectory to maximize accuracy. We test the approach experimentally on a simultaneous trajectory estimation and mapping problem using a mobile robot dataset.
    Comment: Submitted to Autonomous Robots on 20 November 2014, manuscript # AURO-D-14-00185, 16 pages, 7 figures
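    The sparsity the paper exploits can be demonstrated on a scalar toy problem: for a random-walk (Wiener-process) prior, the inverse kernel (precision) matrix over the measurement times is tridiagonal, so the MAP trajectory comes from a sparse linear solve instead of a dense kernel inversion. The sketch below builds that precision explicitly (with a dense solver, for brevity) on synthetic data; the paper's constant-velocity prior gives the analogous block-tridiagonal structure.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic 1-D trajectory with a random-walk prior and noisy measurements.
N, q, r = 100, 0.1, 1.0          # points, process noise var, measurement noise var
truth = np.cumsum(rng.normal(0.0, np.sqrt(q), N))
y = truth + rng.normal(0.0, np.sqrt(r), N)

# Tridiagonal prior precision assembled from the random-walk increments
# x[k+1] - x[k] ~ N(0, q): each increment couples only neighbors in time.
P = np.zeros((N, N))
for k in range(N - 1):
    P[k, k] += 1.0 / q
    P[k + 1, k + 1] += 1.0 / q
    P[k, k + 1] -= 1.0 / q
    P[k + 1, k] -= 1.0 / q

# Measurements add diagonal information; the MAP trajectory solves
# (P + I/r) x = y / r, a tridiagonal system solvable in O(N).
A = P + np.eye(N) / r
x_map = np.linalg.solve(A, y / r)
```

    Away from the diagonal band, A is identically zero, which is exactly why the paper's batch GP regression scales linearly in the number of measurement times.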