Numerical solution of the unsteady Navier-Stokes equation
The construction and the analysis of nonoscillatory shock capturing methods for the approximation of hyperbolic conservation laws are discussed. These schemes share many desirable properties with total variation diminishing schemes, but TVD schemes have at most first-order accuracy, in the sense of truncation error, at extrema of the solution. In this paper a uniformly second-order approximation is constructed, which is nonoscillatory in the sense that the number of extrema of the discrete solution is not increasing in time. This is achieved via a nonoscillatory piecewise linear reconstruction of the solution from its cell averages, time evolution through an approximate solution of the resulting initial value problem, and averaging of this approximate solution over each cell.
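The reconstruct-evolve-average cycle described above can be sketched numerically. The toy step below (hypothetical code, not from the paper) uses a minmod-limited piecewise linear reconstruction with a Godunov flux for Burgers' equation; note that minmod is a TVD choice, so unlike the paper's nonoscillatory reconstruction it clips slopes to first order at extrema:

```python
import numpy as np

def minmod(a, b):
    """Limited slope: zero at extrema, the smaller slope elsewhere."""
    return np.where(a * b > 0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def step_burgers(u, dx, dt):
    """One reconstruct-evolve-average step for u_t + (u^2/2)_x = 0,
    periodic boundary conditions, u holding cell averages."""
    f = lambda v: 0.5 * v * v
    # reconstruct: piecewise linear slopes from cell averages
    s = minmod(np.roll(u, -1) - u, u - np.roll(u, 1))
    uL = u + 0.5 * s                              # left state at face i+1/2
    uR = np.roll(u, -1) - 0.5 * np.roll(s, -1)    # right state at face i+1/2
    # evolve: Godunov flux for the convex flux f(u) = u^2/2
    flux = np.where(uL > uR,
                    np.maximum(f(uL), f(uR)),     # shock
                    f(np.clip(0.0, np.minimum(uL, uR), np.maximum(uL, uR))))
    # average: conservative update of the cell means
    return u - dt / dx * (flux - np.roll(flux, 1))
```

The conservative flux-difference form guarantees that the cell averages sum to the same total before and after each step.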
Optimal Data Collection For Informative Rankings Expose Well-Connected Graphs
Given a graph where vertices represent alternatives and arcs represent
pairwise comparison data, the statistical ranking problem is to find a
potential function, defined on the vertices, such that the gradient of the
potential function agrees with the pairwise comparisons. Our goal in this paper
is to develop a method for collecting data for which the least squares
estimator for the ranking problem has maximal Fisher information. Our approach,
based on experimental design, is to view data collection as a bi-level
optimization problem where the inner problem is the ranking problem and the
outer problem is to identify data which maximizes the informativeness of the
ranking. Under certain assumptions, the data collection problem decouples,
reducing to a problem of finding multigraphs with large algebraic connectivity.
This reduction of the data collection problem to graph-theoretic questions is
one of the primary contributions of this work. As an application, we study the
Yahoo! Movie user rating dataset and demonstrate that the addition of a small
number of well-chosen pairwise comparisons can significantly increase the
Fisher informativeness of the ranking. As another application, we study the
2011-12 NCAA football schedule and propose schedules with the same number of
games which are significantly more informative. Using spectral clustering
methods to identify highly-connected communities within the division, we argue
that the NCAA could improve its notoriously poor rankings by simply scheduling
more out-of-conference games. Comment: 31 pages, 10 figures, 3 tables.
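The core pipeline, fitting a least-squares potential to pairwise comparisons and scoring the comparison graph by its algebraic connectivity, can be sketched as follows (a minimal illustration with hypothetical helper names, not the authors' code):

```python
import numpy as np

def rank_from_comparisons(n, comparisons):
    """Least-squares potential from arcs (i, j, y_ij), where y_ij is the
    observed margin by which alternative j beats alternative i."""
    rows, b = [], []
    for i, j, y in comparisons:
        r = np.zeros(n)
        r[j], r[i] = 1.0, -1.0      # gradient of the potential along arc i -> j
        rows.append(r)
        b.append(y)
    B = np.array(rows)              # arc-vertex incidence matrix
    phi, *_ = np.linalg.lstsq(B, np.array(b), rcond=None)
    return phi - phi.mean()         # fix the additive constant

def algebraic_connectivity(n, edges):
    """Second-smallest eigenvalue of the graph Laplacian (lambda_2)."""
    L = np.zeros((n, n))
    for i, j, _ in edges:
        L[i, i] += 1; L[j, j] += 1
        L[i, j] -= 1; L[j, i] -= 1
    return np.sort(np.linalg.eigvalsh(L))[1]
```

Under the paper's assumptions, collecting informative data then reduces to choosing new comparison arcs that increase lambda_2; for example, closing a path of four teams into a cycle raises it.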
PDE Generalization of In-Context Operator Networks: A Study on 1D Scalar Nonlinear Conservation Laws
Can we build a single large model for a wide range of PDE-related scientific
learning tasks? Can this model generalize to new PDEs, even of new forms,
without any fine-tuning? In-context operator learning and the corresponding
model In-Context Operator Networks (ICON) represent an initial exploration of
these questions. The capability of ICON regarding the first question has been
demonstrated previously. In this paper, we present a detailed methodology for
solving PDE problems with ICON, and show how a single ICON model can make
forward and reverse predictions for different equations with different strides,
provided with appropriately designed data prompts. We present positive
evidence for the second question, i.e., that ICON can generalize well to some
PDEs with new forms without any fine-tuning. This is exemplified through a study on
1D scalar nonlinear conservation laws, a family of PDEs with temporal
evolution. We also show how to broaden the range of problems that an ICON model
can address by transforming functions and equations into ICON's capability
scope. We believe that the progress in this paper is a significant step towards
the goal of training a foundation model for PDE-related tasks under the
in-context operator learning framework.
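As a rough illustration of what "appropriately designed data prompts" for forward prediction might look like, the sketch below pairs condition functions with quantities of interest generated by a simple Godunov solver. All names and the prompt layout here are hypothetical; ICON's actual tokenized prompt format is more involved, and the trained model never sees the solver:

```python
import numpy as np

def godunov_step(u, dx, dt):
    """First-order Godunov step for u_t + (u^2/2)_x = 0, periodic BCs
    (a stand-in data generator for this sketch)."""
    f = lambda v: 0.5 * v * v
    uL, uR = u, np.roll(u, -1)
    flux = np.where(uL > uR,
                    np.maximum(f(uL), f(uR)),
                    f(np.clip(0.0, np.minimum(uL, uR), np.maximum(uL, uR))))
    return u - dt / dx * (flux - np.roll(flux, 1))

def make_forward_prompt(init_conds, query, stride, dx, dt):
    """Assemble demo (condition, QoI) pairs that exhibit the solution
    operator over `stride` solver steps, plus an unanswered query."""
    def evolve(u):
        for _ in range(stride):
            u = godunov_step(u, dx, dt)
        return u
    return {"examples": [(u0, evolve(u0)) for u0 in init_conds],
            "question": query}
```

Changing `stride` changes the operator the prompt demonstrates, which is how one model can make forward (and, with reversed pairs, reverse) predictions with different time spans.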
PDEs with Compressed Solutions
Sparsity plays a central role in recent developments in signal processing,
linear algebra, statistics, optimization, and other fields. In these
developments, sparsity is promoted through the addition of an L1 norm (or
related quantity) as a constraint or penalty in a variational principle. We
apply this approach to partial differential equations that come from a
variational quantity, either by minimization (to obtain an elliptic PDE) or by
gradient flow (to obtain a parabolic PDE). Also, we show that some PDEs can be
rewritten in an L1 form, such as the divisible sandpile problem and
signum-Gordon. Addition of an L1 term in the variational principle leads to
a modified PDE where a subgradient term appears. It is known that modified PDEs
of this form will often have solutions with compact support, which corresponds
to the discrete solution being sparse. We show that this is advantageous
numerically through the use of efficient algorithms for solving L1-based
problems. Comment: 21 pages, 15 figures.
An L1 Penalty Method for General Obstacle Problems
We construct an efficient numerical scheme for solving obstacle problems in
divergence form. The numerical method is based on a reformulation of the
obstacle constraint in terms of an L1-like penalty on the variational problem. The
reformulation is an exact regularizer in the sense that for large (but finite)
penalty parameter, we recover the exact solution. Our formulation is applied to
classical elliptic obstacle problems as well as some related free boundary
problems, for example the two-phase membrane problem and the Hele-Shaw model.
One advantage of the proposed method is that the free boundary inherent in the
obstacle problem arises naturally in our energy minimization without any need
for problem specific or complicated discretization. In addition, our scheme
also works for nonlinear variational inequalities arising from convex
minimization problems. Comment: 20 pages, 18 figures.
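A minimal 1D version of such a penalty scheme (our own illustrative discretization, not the paper's implementation) replaces the constraint u >= phi by the penalty mu * sum(max(phi - u, 0)), whose proximal map simply lifts infeasible values toward the obstacle:

```python
import numpy as np

def prox_obstacle(v, phi, c):
    """Prox of c * max(phi - u, 0): raise values below the obstacle by c,
    but never past the obstacle itself."""
    return np.where(v >= phi, v, np.minimum(v + c, phi))

def penalized_obstacle(phi, mu, n_iter=5000):
    """Minimize 0.5 u^T A u + mu * sum(max(phi - u, 0)) by proximal
    gradient, A = 1D Dirichlet Laplacian on len(phi) interior points."""
    n = len(phi)
    h = 1.0 / (n + 1)
    def Au(u):
        up = np.concatenate(([0.0], u, [0.0]))
        return (-up[:-2] + 2 * up[1:-1] - up[2:]) / h**2
    L = 4.0 / h**2
    u = np.zeros(n)
    for _ in range(n_iter):
        u = prox_obstacle(u - Au(u) / L, phi, mu / L)
    return u
```

For a tent-shaped obstacle phi(x) = 0.2 - |x - 0.5| with zero boundary data, the membrane touches the obstacle only at the peak; with mu larger than the discrete contact force the penalty is exact, and the free boundary (the single contact point) emerges from the minimization with no special discretization.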
Fine-Tune Language Models as Multi-Modal Differential Equation Solvers
In the growing domain of scientific machine learning, in-context operator
learning has shown notable potential in building foundation models, as in this
framework the model is trained to learn operators and solve differential
equations using prompted data, during the inference stage without weight
updates. However, the current model's overdependence on function data overlooks
invaluable human insight into the operator. To address this, we present a
transformation of in-context operator learning into a multi-modal paradigm. In
particular, we take inspiration from the recent success of large language
models, and propose using "captions" to integrate human knowledge about the
operator, expressed through natural language descriptions and equations. Also,
we introduce a novel approach to train a language-model-like architecture, or
directly fine-tune existing language models, for in-context operator learning.
We beat the baseline on single-modal learning tasks, and also demonstrate the
effectiveness of multi-modal learning in enhancing performance and reducing
function data requirements. The proposed method not only significantly advances
the development of the in-context operator learning paradigm, but also opens
a new path for the application of language models.
- …