The GNAT method for nonlinear model reduction: effective implementation and application to computational fluid dynamics and turbulent flows
The Gauss-Newton with approximated tensors (GNAT) method is a nonlinear
model reduction method that operates on fully discretized computational models.
It achieves dimension reduction by a Petrov-Galerkin projection associated
with residual minimization; it delivers computational efficiency by a
hyper-reduction procedure based on the 'gappy POD' technique. Originally
presented in Ref. [1], where it was applied to implicit nonlinear
structural-dynamics models, this method is further developed here and applied
to the solution of a benchmark turbulent viscous flow problem. To begin, this
paper develops global state-space error bounds that justify the method's design
and highlight its advantages in terms of minimizing components of these error
bounds. Next, the paper introduces a 'sample mesh' concept that enables a
distributed, computationally efficient implementation of the GNAT method in
finite-volume-based computational-fluid-dynamics (CFD) codes. The suitability
of GNAT for parameterized problems is highlighted with the solution of an
academic problem featuring moving discontinuities. Finally, the capability of
this method to reduce by orders of magnitude the core-hours required for
large-scale CFD computations, while preserving accuracy, is demonstrated with
the simulation of turbulent flow over the Ahmed body. For an instance of this
benchmark problem with over 17 million degrees of freedom, GNAT outperforms
several other nonlinear model-reduction methods, reduces the required
computational resources by more than two orders of magnitude, and delivers a
solution that differs by less than 1% from its high-dimensional counterpart.
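The 'gappy POD' hyper-reduction named in this abstract can be illustrated as a masked least-squares fit: reduced coordinates are estimated from only a few sampled entries of the state, and the full state is then reconstructed from the POD basis. The sketch below uses synthetic data; the basis, sample count, and state are illustrative assumptions, not the paper's CFD setup.

```python
import numpy as np

rng = np.random.default_rng(0)

n, k = 200, 5                                       # full and reduced dimensions (illustrative)
Phi, _ = np.linalg.qr(rng.standard_normal((n, k)))  # orthonormal stand-in for a POD basis

x_true = Phi @ rng.standard_normal(k)               # a state lying in the POD subspace
mask = rng.choice(n, size=20, replace=False)        # the few sampled ("gappy") entries

# Least-squares fit of the reduced coordinates using only the sampled rows,
# then reconstruction of the full state from the basis.
c, *_ = np.linalg.lstsq(Phi[mask], x_true[mask], rcond=None)
x_rec = Phi @ c

print(np.allclose(x_rec, x_true))  # exact up to round-off: x_true lies in span(Phi)
```

Because only the sampled rows of the basis enter the solve, the cost of evaluating the fit is independent of the full dimension n, which is the point of the 'sample mesh' implementation described above.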
Parameter Learning of Logic Programs for Symbolic-Statistical Modeling
We propose a logical/mathematical framework for statistical parameter
learning of parameterized logic programs, i.e. definite clause programs
containing probabilistic facts with a parameterized distribution. It extends
the traditional least Herbrand model semantics in logic programming to
distribution semantics, a possible-world semantics with a probability
distribution that is unconditionally applicable to arbitrary logic programs
including ones for HMMs, PCFGs and Bayesian networks. We also propose a new EM
algorithm, the graphical EM algorithm, that runs for a class of parameterized
logic programs representing sequential decision processes where each decision
is exclusive and independent. It runs on a new data structure called support
graphs describing the logical relationship between observations and their
explanations, and learns parameters by computing inside and outside probabilities
generalized to logic programs. The complexity analysis shows that when
combined with OLDT search for all explanations for observations, the graphical
EM algorithm, despite its generality, has the same time complexity as existing
EM algorithms, i.e. the Baum-Welch algorithm for HMMs, the Inside-Outside
algorithm for PCFGs, and the one for singly connected Bayesian networks, each
of which was developed independently in its own field. Learning experiments
with PCFGs using two corpora of moderate size indicate that the graphical EM
algorithm can significantly outperform the Inside-Outside algorithm.
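The E-step/M-step pattern that the graphical EM algorithm generalizes can be shown on a much smaller model: a mixture of two biased coins, where each trial's hidden coin choice plays the role of an exclusive, independent explanation. All data and starting values below are made up for illustration; the paper's algorithm replaces this flat enumeration with inside/outside probabilities computed over support graphs.

```python
import math

flips = 10
heads = [9, 8, 9, 1, 2, 8, 1, 0, 9, 2]  # heads per trial (made-up data)

def binom(h, p):
    # probability of h heads in `flips` tosses of a coin with bias p
    return math.comb(flips, h) * p**h * (1 - p)**(flips - h)

pA, pB, w = 0.6, 0.5, 0.5  # initial biases and mixing weight
for _ in range(50):
    # E-step: posterior responsibility of coin A for each trial
    r = [w * binom(h, pA) / (w * binom(h, pA) + (1 - w) * binom(h, pB))
         for h in heads]
    # M-step: re-estimate parameters from expected counts
    w = sum(r) / len(r)
    pA = sum(ri * h for ri, h in zip(r, heads)) / (flips * sum(r))
    pB = sum((1 - ri) * h for ri, h in zip(r, heads)) / (flips * sum(1 - ri for ri in r))

print(round(pA, 2), round(pB, 2))  # converges to roughly 0.86 and 0.12 for this data
```

The inside/outside generalization matters because enumerating explanations explicitly, as done here with the list `r`, is exponential for PCFG-like programs; sharing subcomputations on the support graph restores Baum-Welch/Inside-Outside time complexity.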
Modeling of Phenomena and Dynamic Logic of Phenomena
Modeling of complex phenomena such as the mind presents tremendous
computational complexity challenges. Modeling field theory (MFT) addresses
these challenges in a non-traditional way. The main idea behind MFT is to match
levels of uncertainty of the model (also, problem or theory) with levels of
uncertainty of the evaluation criterion used to identify that model. When a
model becomes more certain, then the evaluation criterion is adjusted
dynamically to match that change in the model. This model-construction process
is called the Dynamic Logic of Phenomena (DLP); it mimics processes of the mind
and of natural evolution. This paper provides a formal description of
DLP by specifying its syntax, semantics, and reasoning system. We also outline
links between DLP and other logical approaches. Computational complexity issues
that motivate this work are presented using an example of polynomial models.
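The core dynamic-logic idea, matching model uncertainty with the vagueness of the evaluation criterion and tightening both together as the fit improves, can be sketched as annealed fuzzy clustering. The data, Gaussian similarity measure, and annealing schedule below are illustrative assumptions, not the paper's formal DLP system.

```python
import numpy as np

rng = np.random.default_rng(1)
# two hidden regimes in one-dimensional data (illustrative)
data = np.concatenate([rng.normal(-2, 0.3, 50), rng.normal(3, 0.3, 50)])

means = np.array([-0.5, 0.5])  # vague initial models
sigma = 5.0                    # high initial uncertainty
for _ in range(30):
    # fuzzy association of each point with each model; the current sigma
    # sets how vague the evaluation criterion is
    d = np.exp(-(data[:, None] - means[None, :])**2 / (2 * sigma**2))
    f = d / d.sum(axis=1, keepdims=True)
    # update each model from its fuzzy supporters
    means = (f * data[:, None]).sum(axis=0) / f.sum(axis=0)
    # dynamic logic: reduce the criterion's uncertainty as the models sharpen
    sigma = max(0.3, 0.8 * sigma)

print(np.sort(means))  # the models settle near the two regime centers
```

The annealing step is what avoids the combinatorial matching of crisp models to data that motivates the complexity discussion above: early vagueness keeps every association open cheaply, and certainty is earned gradually.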