
    Sensitivity analysis based dimension reduction of multiscale models

    In this paper, the sensitivity analysis of a single-scale model is employed to reduce the input dimensionality of the related multiscale model, thereby improving the efficiency of its uncertainty estimation. The approach is illustrated with two examples: a reaction model and the standard Ornstein–Uhlenbeck process. Additionally, a counterexample shows that an uncertain input should not be excluded from uncertainty quantification without estimating the response sensitivity to this parameter. In particular, an analysis of the function defining the relation between single-scale components is required to understand whether single-scale sensitivity analysis can be used to reduce the dimensionality of the overall multiscale model input space.
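    The abstract gives no code; as a rough illustration of the variance-based sensitivity screening it alludes to, the sketch below estimates first-order Sobol indices of a toy single-scale model with a pick-freeze Monte Carlo estimator. The model, sample size, and the idea of dropping low-index inputs are assumptions for illustration, not the paper's setup.

# Hypothetical sketch: first-order Sobol indices of a toy single-scale model,
# used to flag inputs that might be dropped from the multiscale uncertainty
# quantification. Model and sample sizes are illustrative, not from the paper.
import numpy as np

rng = np.random.default_rng(0)

def single_scale_model(x):
    """Toy single-scale response: strongly driven by x1 and x2, weakly by x3."""
    return np.sin(x[:, 0]) + 0.7 * x[:, 1] ** 2 + 0.05 * x[:, 2]

def first_order_sobol(model, dim, n=20000):
    """Pick-freeze Monte Carlo estimator of first-order Sobol indices."""
    A = rng.uniform(-1, 1, size=(n, dim))
    B = rng.uniform(-1, 1, size=(n, dim))
    fA, fB = model(A), model(B)
    var_y = np.var(np.concatenate([fA, fB]))
    indices = []
    for i in range(dim):
        ABi = A.copy()
        ABi[:, i] = B[:, i]               # replace column i by the B sample
        indices.append(np.mean(fB * (model(ABi) - fA)) / var_y)
    return np.array(indices)

S = first_order_sobol(single_scale_model, dim=3)
print("first-order Sobol indices:", S)
# Inputs with negligible indices are candidates for exclusion -- but, as the
# paper's counterexample warns, only after checking how the single-scale
# output enters the coupled multiscale model.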

    Model Reduction for Multiscale Lithium-Ion Battery Simulation

    In this contribution we are concerned with efficient model reduction for multiscale problems arising in lithium-ion battery modeling with spatially resolved porous electrodes. We present new results on the application of the reduced basis method to the resulting instationary 3D battery model, which involves strong non-linearities due to Butler-Volmer kinetics. Empirical operator interpolation is used to deal with this issue efficiently. Furthermore, we present the localized reduced basis multiscale method for parabolic problems applied to a thermal model of batteries with resolved porous electrodes. Numerical experiments are given that demonstrate the reduction capabilities of the presented approaches for these real-world applications.
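    As background for the projection-based reduction mentioned above, here is a minimal POD-Galerkin sketch on a generic linear time-stepping problem. The battery geometry, Butler-Volmer nonlinearity, empirical operator interpolation, and the localized reduced basis multiscale method are not reproduced; all dimensions and operators below are stand-ins.

# Minimal sketch of snapshot-based model reduction (POD / reduced basis idea)
# on a generic linear time-stepping problem; not the paper's battery model.
import numpy as np

n, steps, dt, r = 400, 200, 1e-3, 10

# Full-order model: explicit Euler steps of u' = A u with a diffusion-like A.
main = -2.0 * np.ones(n)
off = np.ones(n - 1)
A = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) * (n ** 2) * 1e-4
u = np.exp(-50 * (np.linspace(0, 1, n) - 0.5) ** 2)

# 1) Collect snapshots of the full-order solution.
snapshots = [u.copy()]
for _ in range(steps):
    u = u + dt * (A @ u)
    snapshots.append(u.copy())
S = np.array(snapshots).T                     # shape (n, steps + 1)

# 2) POD basis from the leading left singular vectors of the snapshot matrix.
U, sigma, _ = np.linalg.svd(S, full_matrices=False)
V = U[:, :r]                                  # reduced basis of dimension r

# 3) Galerkin-project the operator and time-step in the reduced space.
A_r = V.T @ A @ V
u_r = V.T @ snapshots[0]
for _ in range(steps):
    u_r = u_r + dt * (A_r @ u_r)

rel_err = np.linalg.norm(V @ u_r - S[:, -1]) / np.linalg.norm(S[:, -1])
print("relative error of reduced solution at final time:", rel_err)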

    Kernel Analog Forecasting: Multiscale Test Problems

    Data-driven prediction is becoming increasingly widespread as the volume of available data grows and as algorithmic development matches this growth. The nature of the predictions made, and the manner in which they should be interpreted, depend crucially on the extent to which the variables chosen for prediction are Markovian, or approximately Markovian. Multiscale systems provide a framework in which this issue can be analyzed. In this work, kernel analog forecasting methods are studied from the perspective of data generated by multiscale dynamical systems. The problems chosen exhibit a variety of different Markovian closures, obtained through both averaging and homogenization; settings where scale separation is not present and the predicted variables are non-Markovian are also considered. The studies provide guidance for the interpretation of data-driven prediction methods when used in practice.
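    For orientation, the sketch below implements the simplest kernel-weighted analog forecast: a new state is compared to historical states with a Gaussian kernel, and the forecast is the kernel-weighted average of what followed those analogs. The paper's full kernel analog forecasting construction, its multiscale test problems, and the averaging/homogenization closures are not reproduced; the training signal, bandwidth, and lead time are arbitrary choices.

# Toy kernel-weighted analog forecast on a scalar Ornstein-Uhlenbeck path.
import numpy as np

rng = np.random.default_rng(2)

# Training data: an OU trajectory as a stand-in for a multiscale observable.
dt, n = 0.01, 20000
x = np.zeros(n)
for k in range(n - 1):
    x[k + 1] = x[k] - x[k] * dt + 0.5 * np.sqrt(dt) * rng.standard_normal()

lead = 50                                   # forecast horizon in time steps
X_train, Y_train = x[:-lead], x[lead:]      # pairs (state, state `lead` steps later)

def kernel_analog_forecast(x_new, bandwidth=0.1):
    """Forecast the observable `lead` steps ahead of the current state x_new."""
    w = np.exp(-((X_train - x_new) ** 2) / (2 * bandwidth ** 2))
    return np.sum(w * Y_train) / np.sum(w)

print("forecast from state 0.8:", kernel_analog_forecast(0.8))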

    Multiscale Dictionary Learning for Estimating Conditional Distributions

    Nonparametric estimation of the conditional distribution of a response given high-dimensional features is a challenging problem. It is important to allow not only the mean but also the variance and shape of the response density to change flexibly with the features, which are massive-dimensional. We propose a multiscale dictionary learning model, which expresses the conditional response density as a convex combination of dictionary densities, with the densities used and their weights dependent on the path through a tree decomposition of the feature space. A fast graph partitioning algorithm is applied to obtain the tree decomposition, with Bayesian methods then used to adaptively prune and average over different sub-trees in a soft probabilistic manner. The algorithm scales efficiently to approximately one million features. State-of-the-art predictive performance is demonstrated on toy examples and two neuroscience applications with up to a million features.
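    To make the mixture-of-dictionary-densities idea concrete, the toy sketch below partitions a synthetic feature space with a single split, fits a Gaussian response density in each node, and forms p(y | x) as a convex combination along the root-to-leaf path. The paper's graph partitioning, Bayesian pruning and averaging, and million-feature scaling are not attempted; the data, split, and mixture weight are illustrative assumptions.

# Toy sketch: conditional response density as a convex combination of
# node-level densities along a (single-split) tree path.
import numpy as np

rng = np.random.default_rng(3)

# Synthetic data: response mean and spread both depend on feature 0.
X = rng.uniform(-1, 1, size=(2000, 5))
y = np.where(X[:, 0] > 0, 2.0, -2.0) + (0.2 + 0.5 * (X[:, 0] > 0)) * rng.standard_normal(2000)

def gaussian_pdf(y_grid, mu, sigma):
    return np.exp(-0.5 * ((y_grid - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def fit_node(mask):
    """Fit a simple Gaussian response density on the samples in this node."""
    return y[mask].mean(), y[mask].std() + 1e-6

# One split on feature 0 at its median; root plus two leaves.
split_dim, split_val = 0, np.median(X[:, split_dim := 0])
root = fit_node(np.ones(len(y), dtype=bool))
left = fit_node(X[:, split_dim] <= split_val)
right = fit_node(X[:, split_dim] > split_val)

def conditional_density(y_grid, x, w_root=0.3):
    """p(y | x) as a convex combination of the root and leaf densities."""
    leaf = left if x[split_dim] <= split_val else right
    return w_root * gaussian_pdf(y_grid, *root) + (1 - w_root) * gaussian_pdf(y_grid, *leaf)

y_grid = np.linspace(-4, 4, 200)
print(conditional_density(y_grid, X[0])[:5])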