Multilevel Markov Chain Monte Carlo Method for High-Contrast Single-Phase Flow Problems
In this paper we propose a general framework for the uncertainty
quantification of quantities of interest for high-contrast single-phase flow
problems. It is based on the generalized multiscale finite element method
(GMsFEM) and multilevel Monte Carlo (MLMC) methods. The former provides a
hierarchy of approximations of different resolution, whereas the latter gives
an efficient way to estimate quantities of interest using samples on different
levels. The number of basis functions in the online GMsFEM stage can be varied
to determine the solution resolution and the computational cost, and to
efficiently generate samples at different levels. In particular, it is cheap to
generate samples on coarse grids but with low resolution, and it is expensive
to generate samples on fine grids with high accuracy. By suitably choosing the
number of samples at different levels, one can shift the expensive computation
from the larger fine-grid spaces toward the smaller coarse-grid spaces, while
retaining the accuracy of the final Monte Carlo estimate. Further, we describe
a multilevel Markov chain Monte Carlo method, which sequentially screens the
proposal with different levels of approximations and reduces the number of
evaluations required on fine grids, while combining the samples at different
levels to arrive at an accurate estimate. The framework seamlessly integrates
the multiscale features of the GMsFEM with the multilevel feature of the MLMC
methods following the work in \cite{ketelson2013}, and our numerical
experiments illustrate its efficiency and accuracy in comparison with standard
Monte Carlo estimates.
Comment: 29 pages, 6 figures
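The level-wise sample allocation described in this abstract can be sketched in a few lines. The following is a minimal numpy illustration only, not the paper's GMsFEM-based solver: `qoi` is a hypothetical stand-in for a solve whose discretization bias shrinks with the level, and the shared random draws couple consecutive levels so the level corrections have small variance.

```python
import numpy as np

def qoi(level, z):
    # Hypothetical stand-in for a multiscale solve at a given resolution:
    # the discretization error is O(h) with mesh size h = 2^-level.
    h = 2.0 ** (-level)
    return np.sin(z) + h * z**2

def mlmc_estimate(n_samples, rng):
    """E[Q_L] = E[Q_0] + sum_l E[Q_l - Q_{l-1}]: many cheap coarse samples,
    few expensive fine ones, identical randomness on coupled levels."""
    est = 0.0
    for level, n in enumerate(n_samples):
        z = rng.standard_normal(n)  # shared draws couple level l and l-1
        if level == 0:
            est += qoi(0, z).mean()
        else:
            est += (qoi(level, z) - qoi(level - 1, z)).mean()
    return est

# Decreasing sample counts on increasingly fine (more expensive) levels.
print(mlmc_estimate([10000, 1000, 100], np.random.default_rng(0)))
```

The telescoping sum reproduces the finest-level expectation while most samples are spent on the cheap coarse level, which is the cost-shifting idea of the abstract.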
One-bit Distributed Sensing and Coding for Field Estimation in Sensor Networks
This paper formulates and studies a general distributed field reconstruction
problem using a dense network of noisy one-bit randomized scalar quantizers in
the presence of additive observation noise of unknown distribution. A
constructive quantization, coding, and field reconstruction scheme is developed
and an upper bound on the associated mean squared error (MSE) at any point and
any snapshot is derived in terms of the local spatio-temporal smoothness
properties of the underlying field. It is shown that when the noise, sensor
placement pattern, and the sensor schedule satisfy certain weak technical
requirements, it is possible to drive the MSE to zero with increasing sensor
density at points of field continuity while ensuring that the per-sensor
bitrate and sensing-related network overhead rate simultaneously go to zero.
The proposed scheme achieves the order-optimal MSE versus sensor density
scaling behavior for the class of spatially constant spatio-temporal fields.
Comment: Fixed typos, otherwise same as V2. 27 pages (in one-column review
format), 4 figures. Submitted to IEEE Transactions on Signal Processing.
Current version is updated for journal submission: revised author list,
modified formulation and framework. Previous version appeared in Proceedings
of the Allerton Conference on Communication, Control, and Computing 200
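The core mechanism of one-bit randomized quantization can be illustrated compactly. This is a simplified dithered-quantizer sketch, not the paper's coding and reconstruction scheme: each sensor reports a single sign bit, and the random dither makes the bit average an unbiased, consistent estimate of the field value as sensor density grows (the noise is taken zero-mean here for simplicity).

```python
import numpy as np

def one_bit_estimate(f, n_sensors, amp, rng):
    """Estimate a field value f from n one-bit randomized (dithered) sensors.

    Each sensor reports only sign(f + noise + dither). With dither uniform on
    [-amp, amp] and |f + noise| < amp, the bit average is an unbiased estimate
    of (amp + f) / (2 amp), so rescaling recovers f as density grows.
    """
    noise = 0.1 * rng.standard_normal(n_sensors)   # additive observation noise
    dither = rng.uniform(-amp, amp, n_sensors)     # randomized quantizer offsets
    bits = (f + noise + dither > 0).astype(float)  # one bit per sensor
    return amp * (2.0 * bits.mean() - 1.0)

print(one_bit_estimate(0.3, 20000, 1.0, np.random.default_rng(1)))
```

The MSE of this estimate is driven to zero by increasing sensor density alone, while each sensor still contributes only one bit, mirroring the abstract's per-sensor-bitrate claim.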
High-resolution distributed sampling of bandlimited fields with low-precision sensors
The problem of sampling a discrete-time sequence of spatially bandlimited
fields with a bounded dynamic range in a distributed,
communication-constrained processing environment is addressed. A central unit,
having access to the data gathered by a dense network of fixed-precision
sensors, operating under stringent inter-node communication constraints, is
required to reconstruct the field snapshots to maximum accuracy. Both
deterministic and stochastic field models are considered. For stochastic
fields, results are established in the almost-sure sense. The feasibility of
having a flexible tradeoff between the oversampling rate (sensor density) and
the analog-to-digital converter (ADC) precision, while achieving an exponential
accuracy in the number of bits per Nyquist-interval per snapshot is
demonstrated. This exposes an underlying "conservation of bits" principle:
the bit-budget per Nyquist-interval per snapshot (the rate) can be distributed
along the amplitude axis (sensor-precision) and space (sensor density) in an
almost arbitrary discrete-valued manner, while retaining the same (exponential)
distortion-rate characteristics. Achievable information scaling laws for field
reconstruction over a bounded region are also derived: with N one-bit sensors
per Nyquist-interval, the maximum pointwise distortion goes to zero as the
sensor density and total network bitrate grow. This is shown to be possible
with only nearest-neighbor communication, distributed coding, and appropriate
interpolation algorithms. For a fixed, nonzero target distortion, the number of
fixed-precision sensors and the network rate needed is always finite.
Comment: 17 pages, 6 figures; paper withdrawn from IEEE Transactions on Signal
Processing and re-submitted to the IEEE Transactions on Information Theory
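The precision-versus-density tradeoff can be made concrete with a deliberately naive comparison (a sketch only, not the paper's scheme): the same 8-bit budget is spent either on one high-precision ADC or on 256 one-bit dithered sensors. Note that plain bit-averaging only decays like 1/sqrt(N); the distributed coding in the paper is what closes the gap to exponential accuracy in the bit budget.

```python
import numpy as np

def quantize(x, bits, lo=-1.0, hi=1.0):
    """Uniform mid-rise ADC with the given precision over [lo, hi]."""
    levels = 2 ** bits
    step = (hi - lo) / levels
    idx = np.clip(np.floor((x - lo) / step), 0, levels - 1)
    return lo + (idx + 0.5) * step

f = 0.37                     # field sample inside the dynamic range [-1, 1]

# Option A: amplitude axis -- one 8-bit sensor (error bounded by step/2).
err_precision = abs(quantize(f, 8) - f)

# Option B: space axis -- 256 one-bit dithered sensors, the same 8-bit
# budget spent on sensor density instead of ADC precision.
dither = np.random.default_rng(0).uniform(-1.0, 1.0, 256)
err_density = abs((2.0 * (f + dither > 0).mean() - 1.0) - f)

print(err_precision, err_density)
```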
Information-theoretic analysis of MIMO channel sounding
The large majority of commercially available multiple-input multiple-output
(MIMO) radio channel measurement devices (sounders) is based on time-division
multiplexed switching (TDMS) of a single transmit/receive radio-frequency chain
into the elements of a transmit/receive antenna array. While being
cost-effective, such a solution can cause significant measurement errors due to
phase noise and frequency offset in the local oscillators. In this paper, we
systematically analyze the resulting errors and show that, in practice,
overestimation of channel capacity by several hundred percent can occur.
Overestimation is caused by phase noise (and to a lesser extent frequency
offset) leading to an increase of the MIMO channel rank. Our analysis
furthermore reveals that the impact of phase errors is, in general, most
pronounced if the physical channel has low rank (typical for line-of-sight or
poor scattering scenarios). The extreme case of a rank-1 physical channel is
analyzed in detail. Finally, we present measurement results obtained from a
commercially employed TDMS-based MIMO channel sounder. In the light of the
findings of this paper, the results obtained through MIMO channel measurement
campaigns using TDMS-based channel sounders should be interpreted with great
care.
Comment: 99 pages, 14 figures, submitted to IEEE Transactions on Information
Theory
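The rank-inflation mechanism behind the overestimation is easy to reproduce numerically. The snippet below uses a deliberately simplified error model (an independent random phase per measured entry), not the paper's actual phase-noise process: a rank-1 line-of-sight channel, measured entry by entry through a switched chain, acquires artificial rank and hence inflated capacity.

```python
import numpy as np

def capacity(H, snr=100.0):
    """MIMO capacity (bits/s/Hz) with equal power allocation across antennas."""
    s = np.linalg.svd(H, compute_uv=False)
    return float(np.sum(np.log2(1.0 + snr / H.shape[1] * s**2)))

# Rank-1 line-of-sight channel: the extreme case analyzed in the paper.
a = np.ones((4, 1), dtype=complex)
H = (a @ a.conj().T) / 2.0

# TDMS sounding measures each entry at a different switching instant, so
# oscillator phase noise multiplies every entry by an independent phase term.
rng = np.random.default_rng(0)
H_meas = H * np.exp(1j * 0.3 * rng.standard_normal(H.shape))

# The measured channel is generically full rank: capacity is overestimated.
print(capacity(H), capacity(H_meas))
```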
An error indicator-based adaptive reduced order model for nonlinear structural mechanics -- application to high-pressure turbine blades
The industrial application motivating this work is the fatigue computation of
aircraft engines' high-pressure turbine blades. The material model involves
nonlinear elastoviscoplastic behavior laws, for which the parameters depend on
the temperature. For this application, the temperature loading is not
accurately known and can reach values relatively close to the creep
temperature: important nonlinear effects occur and the solution depends
strongly on the thermal loading used. We consider a nonlinear reduced order
model able to compute, in the exploitation phase, the behavior of the blade for
a new temperature field loading. The sensitivity of the solution to the
temperature makes the classical unenriched proper orthogonal decomposition
method fail. In this work, we propose a new error indicator that quantifies the
error made by the reduced order model at a computational complexity independent
of the size of the high-fidelity reference model. In our framework, when the
error indicator exceeds a given tolerance, the reduced order model is updated
using a single time-step solution of the high-fidelity reference model. The
approach is illustrated on a series of academic test cases and
applied to a setting of industrial complexity involving 5 million degrees of
freedom, where the whole procedure is computed in parallel with distributed
memory.
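The indicator-driven update loop can be sketched in a few lines. Everything here is a stand-in: a linear toy model replaces the elastoviscoplastic solve, and the indicator is a hypothetical residual proxy (the part of the reduced step leaving the basis), not the paper's indicator. What the sketch shows is the control flow: trust the reduced model until the indicator exceeds the tolerance, then take one high-fidelity step and enrich the basis with it.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Toy high-fidelity model: linear dynamics x_{k+1} = A x_k, a cheap stand-in
# for the nonlinear solve; only the adaptive update logic matters here.
A = np.eye(n) + 0.01 * rng.standard_normal((n, n)) / np.sqrt(n)
x = rng.standard_normal(n)

V = np.linalg.qr(x[:, None])[0]   # initial reduced basis: one mode
tol = 1e-3

for step in range(50):
    x_red = V.T @ x                        # project onto the reduced basis
    x_rom = V @ (V.T @ (A @ (V @ x_red)))  # advance inside the reduced space
    # Error indicator: norm of the reduced step's component outside the basis.
    indicator = np.linalg.norm(A @ (V @ x_red) - x_rom)
    if indicator > tol:
        x = A @ x                          # one high-fidelity step...
        V = np.linalg.qr(np.hstack([V, x[:, None]]))[0]  # ...enriches the basis
    else:
        x = x_rom
```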
Adaptive multiscale model reduction with Generalized Multiscale Finite Element Methods
In this paper, we discuss a general multiscale model reduction framework
based on multiscale finite element methods. We give a brief overview of related
multiscale methods. Due to page limitations, the overview focuses on a few
related methods and is not intended to be comprehensive. We present a general
adaptive multiscale model reduction framework, the Generalized Multiscale
Finite Element Method. Besides the method's basic outline, we discuss some
important ingredients needed for the method's success. We also discuss several
applications. The proposed method allows performing local model reduction in
the presence of high contrast and without scale separation.
Cluster-based reduced-order modelling of a mixing layer
We propose a novel cluster-based reduced-order modelling (CROM) strategy of
unsteady flows. CROM combines the cluster analysis pioneered in Gunzburger's
group (Burkardt et al. 2006) and the transition matrix models introduced in
fluid dynamics by Eckhardt's group (Schneider et al. 2007). CROM constitutes a
potential alternative to POD models and generalises the Ulam-Galerkin method
classically used in dynamical systems to determine a finite-rank approximation
of the Perron-Frobenius operator. The proposed strategy processes a
time-resolved sequence of flow snapshots in two steps. First, the snapshot data
are clustered into a small number of representative states, called centroids,
in the state space. These centroids partition the state space in complementary
non-overlapping regions (centroidal Voronoi cells). Departing from the standard
algorithm, the probabilities of the clusters are determined, and the states are
sorted by analysis of the transition matrix. Secondly, the transitions between
the states are dynamically modelled using a Markov process. Physical mechanisms
are then distilled by a refined analysis of the Markov process, e.g. using
finite-time Lyapunov exponent and entropic methods. This CROM framework is
applied to the Lorenz attractor (as illustrative example), to velocity fields
of the spatially evolving incompressible mixing layer and the three-dimensional
turbulent wake of a bluff body. For these examples, CROM is shown to identify
non-trivial quasi-attractors and transition processes in an unsupervised
manner. CROM has numerous potential applications for the systematic
identification of physical mechanisms of complex dynamics, for comparison of
flow evolution models, for the identification of precursors to desirable and
undesirable events, and for flow control applications exploiting nonlinear
actuation dynamics.
Comment: 48 pages, 30 figures. Revised version with additional material.
Accepted for publication in Journal of Fluid Mechanics
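The two-step CROM pipeline can be sketched end to end on the paper's own illustrative example, the Lorenz system. This is a minimal numpy sketch under simplifying assumptions: forward-Euler integration, plain k-means as the clustering step, and a raw transition-count Markov matrix, without the cluster sorting or refined Markov analysis described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Surrogate snapshot sequence: the Lorenz system with classical parameters
# (forward-Euler integration is used here purely for brevity).
def lorenz_traj(n_steps, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x = np.empty((n_steps, 3))
    x[0] = (1.0, 1.0, 1.0)
    for k in range(n_steps - 1):
        a, b, c = x[k]
        x[k + 1] = x[k] + dt * np.array([sigma * (b - a),
                                         a * (rho - c) - b,
                                         a * b - beta * c])
    return x

snapshots = lorenz_traj(5000)
k = 10                                    # number of centroids (clusters)

# Step 1: cluster snapshots into representative centroids (plain k-means).
centroids = snapshots[rng.choice(len(snapshots), k, replace=False)]
for _ in range(50):
    labels = ((snapshots[:, None] - centroids) ** 2).sum(-1).argmin(1)
    for j in range(k):
        if np.any(labels == j):
            centroids[j] = snapshots[labels == j].mean(axis=0)
labels = ((snapshots[:, None] - centroids) ** 2).sum(-1).argmin(1)

# Step 2: model the cluster-to-cluster dynamics as a Markov process.
P = np.zeros((k, k))
for a, b in zip(labels[:-1], labels[1:]):
    P[a, b] += 1.0
P /= np.maximum(P.sum(axis=1, keepdims=True), 1.0)  # row-stochastic matrix
```

The row-stochastic matrix `P` is the finite-rank transition model on which the refined analysis (quasi-attractors, precursors) operates.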
Randomized Dynamic Mode Decomposition
This paper presents a randomized algorithm for computing the near-optimal
low-rank dynamic mode decomposition (DMD). Randomized algorithms are emerging
techniques to compute low-rank matrix approximations at a fraction of the cost
of deterministic algorithms, easing the computational challenges arising in the
area of `big data'. The idea is to derive a small matrix from the
high-dimensional data, which is then used to efficiently compute the dynamic
modes and eigenvalues. The algorithm is presented in a modular probabilistic
framework, and the approximation quality can be controlled via oversampling and
power iterations. The effectiveness of the resulting randomized DMD algorithm
is demonstrated on several benchmark examples of increasing complexity,
providing an accurate and efficient approach to extract spatiotemporal coherent
structures from big data in a framework that scales with the intrinsic rank of
the data, rather than the ambient measurement dimension. For this work we
assume that the dynamics of the problem under consideration evolve on a
low-dimensional subspace that is well characterized by a fast-decaying singular
value spectrum.
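The sketch-then-decompose idea can be written compactly. This is an illustrative sketch of one common randomized-DMD recipe (random range finder with oversampling `p` and `q` power iterations, then exact DMD in the compressed space, for real-valued data), not necessarily the exact algorithm of the paper.

```python
import numpy as np

def randomized_dmd(X, Y, rank, p=10, q=2, seed=0):
    """Randomized DMD: compress the snapshot pair (X, Y) with a random range
    finder (oversampling p, power iterations q), then run exact DMD there."""
    rng = np.random.default_rng(seed)
    # Randomized range finder for the column space of X.
    Q = np.linalg.qr(X @ rng.standard_normal((X.shape[1], rank + p)))[0]
    for _ in range(q):                 # power iterations sharpen the basis
        Q = np.linalg.qr(X @ (X.T @ Q))[0]
    Xs, Ys = Q.T @ X, Q.T @ Y          # small, sketched snapshot matrices
    U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
    U, s, Vt = U[:, :rank], s[:rank], Vt[:rank]
    A_tilde = U.T @ Ys @ Vt.T / s      # compressed linear operator
    evals, W = np.linalg.eig(A_tilde)
    modes = Q @ (Ys @ Vt.T / s @ W)    # lift DMD modes back to full dimension
    return evals, modes

# Usage: a tall data matrix driven by two oscillatory modes.
t = np.linspace(0.0, 4.0 * np.pi, 61)
space = np.random.default_rng(1).standard_normal((500, 2))
data = space @ np.vstack([np.sin(t), np.cos(t)])
evals, modes = randomized_dmd(data[:, :-1], data[:, 1:], rank=2)
print(np.abs(evals))   # oscillatory dynamics: eigenvalues on the unit circle
```

All expensive factorizations act on the sketched matrices, so the cost scales with the intrinsic rank rather than the ambient dimension, as the abstract describes.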