
    Surrogate Accelerated Bayesian Inversion for the Determination of the Thermal Diffusivity of a Material

    Determination of the thermal properties of a material is an important task in many scientific and engineering applications. How a material behaves when subjected to high or fluctuating temperatures can be critical to the safety and longevity of a system's essential components. The laser flash experiment is a well-established technique for indirectly measuring the thermal diffusivity, and hence the thermal conductivity, of a material. In previous works, optimization schemes have been used to find estimates of the thermal conductivity and other quantities of interest that best fit a given model to experimental data. Adopting a Bayesian approach allows prior beliefs about uncertain model inputs to be conditioned on experimental data to determine a posterior distribution, but probing this distribution using sampling techniques such as Markov chain Monte Carlo methods can be extremely computationally intensive. This difficulty is especially acute for forward models consisting of time-dependent partial differential equations. We pose the problem of determining the thermal conductivity of a material via the laser flash experiment as a Bayesian inverse problem in which the laser intensity is also treated as uncertain. We introduce a parametric surrogate model that takes the form of a stochastic Galerkin finite element approximation, also known as a generalized polynomial chaos expansion, and show how it can be used to sample efficiently from the approximate posterior distribution. This approach gives access not only to the sought-after estimate of the thermal conductivity but also to important information about its relationship to the laser intensity, and to information for uncertainty quantification. We also investigate the effects of the spatial profile of the laser on the estimated posterior distribution for the thermal conductivity.
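    To make the sampling step concrete, below is a minimal, hypothetical Python sketch of the surrogate-accelerated idea. It replaces the paper's stochastic Galerkin finite element surrogate with a simple least-squares polynomial fit, and the PDE solve with a toy `forward` function; the parameter ranges, noise level and synthetic datum are placeholders, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the expensive PDE solve: maps the uncertain inputs
# (thermal conductivity kappa, laser intensity q) to one observable.
def forward(theta):
    kappa, q = theta
    return q * np.exp(-1.0 / kappa)

# Offline stage: fit a cheap polynomial surrogate by least squares.
train = rng.uniform([0.5, 0.5], [2.0, 2.0], size=(200, 2))
y_train = np.array([forward(t) for t in train])

def basis(theta):
    k, q = theta
    return np.array([1.0, k, q, k * q, k**2, q**2])  # total degree 2

Phi = np.array([basis(t) for t in train])
coef, *_ = np.linalg.lstsq(Phi, y_train, rcond=None)
surrogate = lambda theta: basis(theta) @ coef

# Online stage: random-walk Metropolis on the surrogate posterior,
# never touching the expensive forward model again.
datum, sigma = 0.9, 0.05  # synthetic observation and noise level

def log_post(theta):
    if np.any(theta < 0.5) or np.any(theta > 2.0):  # uniform prior box
        return -np.inf
    return -0.5 * ((datum - surrogate(theta)) / sigma) ** 2

theta, chain = np.array([1.0, 1.0]), []
for _ in range(20000):
    prop = theta + 0.05 * rng.standard_normal(2)
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    chain.append(theta)
chain = np.array(chain)
print("posterior mean (kappa, q):", chain[5000:].mean(axis=0))
```

    The joint chain over (kappa, q) is what exposes the conductivity-intensity relationship mentioned in the abstract, in addition to the marginal conductivity estimate.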

    Informative Path Planning for Active Field Mapping under Localization Uncertainty

    Information gathering algorithms play a key role in unlocking the potential of robots for efficient data collection in a wide range of applications. However, most existing strategies neglect the fundamental problem of robot pose uncertainty, even though accounting for it is an implicit requirement for creating robust, high-quality maps. To address this issue, we introduce an informative planning framework for active mapping that explicitly accounts for the pose uncertainty in both the mapping and planning tasks. Our strategy exploits a Gaussian Process (GP) model to capture a target environmental field given the uncertainty in its inputs. For planning, we formulate a new utility function that couples the localization and field mapping objectives in GP-based mapping scenarios in a principled way, without relying on any manually tuned parameters. Extensive simulations show that our approach outperforms existing strategies, with reductions in mean pose uncertainty and map error. We also present a proof of concept in an indoor temperature mapping scenario.
    Comment: 8 pages, 7 figures, submission (revised) to Robotics & Automation Letters (and IEEE International Conference on Robotics and Automation).
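    As an illustration of GP field mapping under input (pose) uncertainty, here is a small, self-contained Python sketch. It is not the authors' framework: the expected-kernel Monte Carlo approximation, the 1-D field, the pose noise level and all numbers are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf(a, b, ell=1.0):
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

# Field samples taken at poses that are only known up to Gaussian noise.
x_hat = np.linspace(0.0, 10.0, 15)   # estimated measurement poses
pose_std = 0.3                       # localization uncertainty (assumed)
y = np.sin(x_hat) + 0.05 * rng.standard_normal(15)

def expected_K(a, b, a_std, b_std, n_mc=300):
    # Monte Carlo estimate of E[k(a + ea, b + eb)] under pose noise.
    K = np.zeros((len(a), len(b)))
    for _ in range(n_mc):
        K += rbf(a + a_std * rng.standard_normal(len(a)),
                 b + b_std * rng.standard_normal(len(b)))
    return K / n_mc

K = expected_K(x_hat, x_hat, pose_std, pose_std) + 0.05**2 * np.eye(15)
x_star = np.linspace(0.0, 10.0, 100)
K_s = expected_K(x_star, x_hat, 0.0, pose_std)  # query points are exact

mu = K_s @ np.linalg.solve(K, y)                          # posterior mean
var = 1.0 - np.sum(K_s * np.linalg.solve(K, K_s.T).T, axis=1)
print("largest predictive std on the map:", np.sqrt(var.max()))
```

    Averaging the kernel over the pose noise makes the map honestly less certain near poorly localized measurements, which is the kind of signal a utility function coupling localization and mapping objectives can then exploit.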

    Bridging the Gap Between Training and Inference for Spatio-Temporal Forecasting

    Spatio-temporal sequence forecasting is one of the fundamental tasks in spatio-temporal data mining. It facilitates many real-world applications such as precipitation nowcasting, citywide crowd flow prediction and air pollution forecasting. Recently, a few Seq2Seq based approaches have been proposed, but one of the drawbacks of Seq2Seq models is that small errors can accumulate quickly along the generated sequence at the inference stage, owing to the different distributions of the training and inference phases. This is because Seq2Seq models minimise only single-step errors during training, whereas the entire sequence must be generated at inference time, which creates a discrepancy between training and inference. In this work, we propose a novel curriculum learning based strategy named Temporal Progressive Growing Sampling to effectively bridge the gap between training and inference for spatio-temporal sequence forecasting, by transforming the training process from a fully supervised manner which utilises all available previous ground-truth values to a less supervised manner which replaces some of the ground-truth context with generated predictions. To do so, we sample the target sequence from the intermediate outputs of models trained with bigger timescales, through a carefully designed decaying strategy. Experimental results demonstrate that our proposed method better models long-term dependencies and outperforms baseline approaches on two competitive datasets.
    Comment: ECAI 2020 accepted, preprint.
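    The curriculum idea can be sketched with plain scheduled sampling, a close relative of the strategy above. The paper's Temporal Progressive Growing Sampling additionally draws targets from intermediate models trained at larger timescales, which this toy PyTorch example omits; the model, schedule constant and data below are invented.

```python
import math
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy one-step decoder standing in for a Seq2Seq spatio-temporal model.
cell = nn.GRUCell(input_size=1, hidden_size=32)
head = nn.Linear(32, 1)
opt = torch.optim.Adam(list(cell.parameters()) + list(head.parameters()),
                       lr=1e-2)

def ground_truth_prob(epoch, k=5.0):
    # Inverse-sigmoid decay: start fully teacher-forced, end free-running.
    return k / (k + math.exp(epoch / k))

T = 12
seq = torch.sin(torch.linspace(0.0, 6.28, T + 1)).unsqueeze(1)  # toy target

for epoch in range(60):
    h = torch.zeros(1, 32)
    inp = seq[0].unsqueeze(0)            # first ground-truth frame
    loss = torch.zeros(())
    p = ground_truth_prob(epoch)
    for t in range(T):
        h = cell(inp, h)
        pred = head(h)                   # one-step prediction of frame t+1
        loss = loss + ((pred - seq[t + 1]) ** 2).sum()
        # Curriculum: with decaying probability keep the ground truth as
        # the next input, otherwise feed back the model's own prediction,
        # so training gradually matches the free-running inference regime.
        inp = seq[t + 1].unsqueeze(0) if torch.rand(1).item() < p \
            else pred.detach()
    opt.zero_grad(); loss.backward(); opt.step()

print("final sequence loss:", loss.item())
```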

    Improving self-calibration

    Response calibration is the process of inferring how much the measured data depend on the signal one is interested in. It is essential for any quantitative signal estimation on the basis of the data. Here, we investigate self-calibration methods for linear signal measurements and linear dependence of the response on the calibration parameters. The common practice is to augment an external calibration solution, obtained using a known reference signal, with an internal calibration on the unknown measurement signal itself. Contemporary self-calibration schemes try to find a self-consistent solution for signal and calibration by exploiting redundancies in the measurements. This can be understood in terms of maximizing the joint probability of signal and calibration. However, these schemes do not take into account the full uncertainty structure of this joint probability around its maximum. Therefore, better schemes -- in the sense of minimal squared error -- can be designed by accounting for asymmetries in the uncertainty of signal and calibration. We argue that at least a systematic correction of the common self-calibration scheme should be applied in many measurement situations in order to properly treat uncertainties of the signal on which one calibrates. Otherwise the calibration solutions suffer from a systematic bias, which consequently distorts the signal reconstruction. Furthermore, we argue that non-parametric, signal-to-noise filtered calibration should provide more accurate reconstructions than the common bin averages, and we provide a new, improved self-calibration scheme. We illustrate our findings with a simplistic numerical example.
    Comment: 17 pages, 3 figures, revised version, title changed.
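    The bias argument can be reproduced in a few lines. The following deliberately simplistic Python sketch (scalar calibration, unit-variance signal prior, invented numbers, not the paper's setup) alternates the two point estimates that a joint-MAP self-calibration uses; because the Wiener-filtered signal is shrunk toward zero and its remaining uncertainty is ignored, the calibration estimate settles above the true value.

```python
import numpy as np

rng = np.random.default_rng(2)

n, noise = 200, 0.2
s_true = rng.standard_normal(n)       # signal with unit-variance prior
c_true = 1.3                          # unknown calibration coefficient
d = c_true * s_true + noise * rng.standard_normal(n)

c0, c_std = 1.0, 0.1                  # Gaussian prior on the calibration
c = c0
for _ in range(100):
    # Wiener-filter (MAP) signal estimate given the current calibration.
    s = c * d / (c**2 + noise**2)
    # MAP calibration update given the point estimate of the signal;
    # the signal's posterior uncertainty is ignored, as in common schemes.
    c = (d @ s / noise**2 + c0 / c_std**2) / (s @ s / noise**2 + 1 / c_std**2)

# The estimate overshoots c_true: the shrunk signal estimate forces the
# calibration to inflate, illustrating the systematic bias discussed above.
print(f"self-calibration estimate c = {c:.3f}  (true c = {c_true})")
```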

    Atrial signal extraction in atrial fibrillation ECGs exploiting spatial constraints

    The accuracy in the extraction of the atrial activity (AA) from electrocardiogram (ECG) signals recorded during atrial fibrillation (AF) episodes plays an important role in the analysis and characterization of atrial arrhythmias. The present contribution puts forward a new method for automatic AA signal extraction based on a blind source separation (BSS) formulation that exploits spatial information about the AA during the T-Q segments. This prior knowledge is used to optimize the spectral content of the AA signal estimated by BSS on the full ECG recording. The comparative performance of the method is evaluated on real data recorded from AF sufferers. The AA extraction quality of the proposed technique is comparable to that of previous algorithms, but is achieved at a reduced cost and without manual selection of parameters.
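    For a flavour of the BSS step, here is a minimal Python sketch using FastICA on synthetic data. It is not the paper's algorithm: the spatial T-Q prior is replaced here by a crude spectral criterion (dominant power in a nominal 3-9 Hz AF band), and the toy signals and mixing matrix are invented.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(3)
fs = 250
t = np.arange(0, 10, 1 / fs)                       # 10 s at 250 Hz

# Toy 4-lead "ECG": an atrial-like oscillation near 6 Hz mixed with a
# spiky ventricular component and sensor noise (placeholder for AF data).
aa = np.sin(2 * np.pi * 6 * t) + 0.3 * np.sin(2 * np.pi * 12 * t)
va = np.sign(np.sin(2 * np.pi * 1.2 * t)) * np.exp(-(t % (1 / 1.2)) * 20)
mixing = rng.standard_normal((4, 2))
X = mixing @ np.vstack([aa, va]) + 0.05 * rng.standard_normal((4, len(t)))

# Blind source separation on the full multichannel recording.
S = FastICA(n_components=2, random_state=0).fit_transform(X.T).T

def af_band_fraction(s):
    # Fraction of spectral power in a nominal 3-9 Hz atrial band.
    f = np.fft.rfftfreq(len(s), 1 / fs)
    p = np.abs(np.fft.rfft(s)) ** 2
    return p[(f > 3) & (f < 9)].sum() / p.sum()

scores = [af_band_fraction(s) for s in S]
aa_est = S[int(np.argmax(scores))]                 # estimated AA source
print("AF-band power fraction of selected source: %.2f" % max(scores))
```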