Likelihood-informed dimension reduction for nonlinear inverse problems
The intrinsic dimensionality of an inverse problem is affected by prior
information, the accuracy and number of observations, and the smoothing
properties of the forward operator. From a Bayesian perspective, changes from
the prior to the posterior may, in many problems, be confined to a relatively
low-dimensional subspace of the parameter space. We present a dimension
reduction approach that defines and identifies such a subspace, called the
"likelihood-informed subspace" (LIS), by characterizing the relative influences
of the prior and the likelihood over the support of the posterior distribution.
This identification enables new and more efficient computational methods for
Bayesian inference with nonlinear forward models and Gaussian priors. In
particular, we approximate the posterior distribution as the product of a
lower-dimensional posterior defined on the LIS and the prior distribution
marginalized onto the complementary subspace. Markov chain Monte Carlo sampling
can then proceed in lower dimensions, with significant gains in computational
efficiency. We also introduce a Rao-Blackwellization strategy that
de-randomizes Monte Carlo estimates of posterior expectations for additional
variance reduction. We demonstrate the efficiency of our methods using two
numerical examples: inference of permeability in a groundwater system governed
by an elliptic PDE, and an atmospheric remote sensing problem based on Global
Ozone Monitoring by Occultation of Stars (GOMOS) observations.
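For a linearized problem with forward-model Jacobian J, noise covariance
Gamma_obs, and Gaussian prior covariance Gamma_pr, the LIS is spanned by the
dominant directions of the prior-preconditioned Gauss-Newton Hessian of the
data misfit. The Python sketch below illustrates only that linearized
construction; it is not the authors' code, and the paper's averaging of this
local information over the posterior is omitted. The Jacobian, covariances,
and threshold tau are illustrative assumptions.

    import numpy as np

    def likelihood_informed_subspace(J, Gamma_obs, Gamma_pr, tau=1.0):
        """Basis for the LIS of a linearized Gaussian inverse problem.

        J         : (m, n) Jacobian of the forward model
        Gamma_obs : (m, m) observation-noise covariance
        Gamma_pr  : (n, n) Gaussian prior covariance
        tau       : keep directions with eigenvalue >= tau, i.e. where
                    the likelihood dominates the prior
        """
        H = J.T @ np.linalg.solve(Gamma_obs, J)  # Gauss-Newton Hessian of the misfit
        L = np.linalg.cholesky(Gamma_pr)         # Gamma_pr = L @ L.T
        lam, V = np.linalg.eigh(L.T @ H @ L)     # prior-preconditioned spectrum
        keep = lam >= tau
        return L @ V[:, keep], lam[keep]         # basis in parameter space

    # A smoothing forward operator damps most directions below tau, so the
    # LIS stays low-dimensional even for a 100-dimensional parameter.
    rng = np.random.default_rng(0)
    m, n = 40, 100
    J = rng.standard_normal((m, n)) * np.exp(-np.arange(n) / 5.0)
    basis, lam = likelihood_informed_subspace(J, 0.1 * np.eye(m), np.eye(n))
    print("LIS dimension:", basis.shape[1], "of", n)

MCMC can then be run on the coordinates of this basis while the complementary
directions are handled by the prior, which is the factorization described above.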
Deploying Web-based Visual Exploration Tools on the Grid
We discuss a web-based portal for the exploration, encapsulation, and dissemination of visualization results over the Grid. This portal integrates three components: an interface client for structured visualization exploration, a visualization web application to manage the generation and capture of visualization results, and a centralized portal application server to access and manage Grid resources. Our approach uses standard web technologies to make the system accessible with minimal user setup. We demonstrate the usefulness of the system with an example of Adaptive Mesh Refinement (AMR) data visualization.
On the benefits of tasking with OpenMP
Tasking promises a model for programming parallel applications with intuitive semantics. For tasks with dependences, it also promises better load balancing by removing global synchronizations (barriers) and potential for improved locality. Still, the adoption of tasking in production HPC codes has been slow: despite OpenMP supporting tasks, most codes rely on worksharing-loop constructs alongside MPI primitives. This paper provides insights on the benefits of tasking over the worksharing-loop model by reporting on the experience of taskifying an adaptive mesh refinement proxy application, miniAMR. The performance evaluation shows that the taskified implementation is 15–30% faster than the loop-parallel one for certain thread counts across four systems, three architectures, and four compilers, thanks to better load balancing and system utilization. Dynamic scheduling of loops narrows the gap but still falls short of tasking due to serial sections between loops. Locality improvements are incidental, owing to the lack of locality-aware scheduling. Overall, the introduction of asynchrony with tasking lives up to its promises, provided that programmers parallelize beyond individual loops and across application phases.
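OpenMP tasking itself is expressed through C/C++/Fortran pragmas; as a
language-neutral illustration of the barrier-removal argument only, the Python
sketch below contrasts two phases separated by a global barrier with per-block
task chains in which fast blocks proceed without waiting for the slowest one.
The block count and timings are invented for illustration and do not model
miniAMR.

    from concurrent.futures import ThreadPoolExecutor, wait
    import random
    import time

    BLOCKS = 8

    def phase1(b):
        time.sleep(random.uniform(0.01, 0.05))  # imbalanced work per block

    def phase2(b):
        time.sleep(random.uniform(0.01, 0.05))

    def loop_parallel(pool):
        # Worksharing-loop style: an implicit barrier separates the phases,
        # so the slowest block in phase 1 delays every block's phase 2.
        wait([pool.submit(phase1, b) for b in range(BLOCKS)])
        wait([pool.submit(phase2, b) for b in range(BLOCKS)])

    def task_parallel(pool):
        # Task style: each block's phase 2 depends only on its own phase 1,
        # so blocks flow through both phases without a global barrier.
        def chain(b):
            phase1(b)
            phase2(b)
        wait([pool.submit(chain, b) for b in range(BLOCKS)])

    with ThreadPoolExecutor(max_workers=4) as pool:
        for fn in (loop_parallel, task_parallel):
            start = time.perf_counter()
            fn(pool)
            print(fn.__name__, f"{time.perf_counter() - start:.3f}s")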
Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age
Simultaneous Localization and Mapping (SLAM) consists of the concurrent
construction of a model of the environment (the map), and the estimation of the
state of the robot moving within it. The SLAM community has made astonishing
progress over the last 30 years, enabling large-scale real-world applications,
and witnessing a steady transition of this technology to industry. We survey
the current state of SLAM. We start by presenting what is now the de-facto
standard formulation for SLAM. We then review related work, covering a broad
set of topics including robustness and scalability in long-term mapping, metric
and semantic representations for mapping, theoretical performance guarantees,
active SLAM and exploration, and other new frontiers. This paper serves both
as a position paper and as a tutorial for users of SLAM. By looking at the
published research with a critical eye, we delineate open challenges and new
research issues that still deserve careful scientific investigation. The paper
also contains the authors' take on two questions that often animate discussions
during robotics conferences: Do robots need SLAM? and Is SLAM solved?
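For readers who have not seen it, the de-facto standard formulation the survey
refers to is maximum a posteriori estimation over a factor graph; a common
rendering (paraphrased here, not quoted from the paper) under Gaussian noise is

    X^\star = \operatorname{arg\,max}_X \; p(X \mid Z)
            = \operatorname{arg\,min}_X \; \sum_k \lVert h_k(X_k) - z_k \rVert^2_{\Omega_k},

where X collects the robot trajectory and map variables, each measurement z_k
is predicted by a model h_k acting on the subset X_k of variables it involves,
and \Omega_k is the corresponding information matrix.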
Research and Education in Computational Science and Engineering
Over the past two decades the field of computational science and engineering
(CSE) has penetrated both basic and applied research in academia, industry, and
laboratories to advance discovery, optimize systems, support decision-makers,
and educate the scientific and engineering workforce. Informed by centuries of
theory and experiment, CSE performs computational experiments to answer
questions that neither theory nor experiment alone is equipped to answer. CSE
provides scientists and engineers of all persuasions with algorithmic
inventions and software systems that transcend disciplines and scales. Carried
on a wave of digital technology, CSE brings the power of parallelism to bear on
troves of data. Mathematics-based advanced computing has become a prevalent
means of discovery and innovation in essentially all areas of science,
engineering, technology, and society; and the CSE community is at the core of
this transformation. However, a combination of disruptive
developments---including the architectural complexity of extreme-scale
computing, the data revolution that engulfs the planet, and the specialization
required to follow the applications to new frontiers---is redefining the scope
and reach of the CSE endeavor. This report describes the rapid expansion of CSE
and the challenges to sustaining its bold advances. The report also presents
strategies and directions for CSE research and education for the next decade.
Multi-Agent Reinforcement Learning for Adaptive Mesh Refinement
Adaptive mesh refinement (AMR) is necessary for efficient finite element
simulations of complex physical phenomena, as it allocates limited
computational budget based on the need for higher or lower resolution, which
varies over space and time. We present a novel formulation of AMR as a
fully-cooperative Markov game, in which each element is an independent agent
who makes refinement and de-refinement choices based on local information. We
design a novel deep multi-agent reinforcement learning (MARL) algorithm called
Value Decomposition Graph Network (VDGN), which solves the two core challenges
that AMR poses for MARL: posthumous credit assignment due to agent creation and
deletion, and unstructured observations due to the diversity of mesh
geometries. For the first time, we show that MARL enables anticipatory
refinement of regions that will encounter complex features at future times,
thereby unlocking entirely new regions of the error-cost objective landscape
that are inaccessible by traditional methods based on local error estimators.
Comprehensive experiments show that VDGN policies significantly outperform
error threshold-based policies in global error and cost metrics. We show that
learned policies generalize to test problems with physical features, mesh
geometries, and longer simulation times that were not seen in training. We also
extend VDGN with multi-objective optimization capabilities to find the Pareto
front of the tradeoff between cost and error.
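As its name suggests, VDGN belongs to the value-decomposition family of
cooperative MARL methods, in which a team action-value is represented
additively over per-agent utilities so that a single global error/cost reward
can train decentralized policies. A generic rendering of that ansatz
(VDN-style; the paper's graph-network parameterization and its handling of
agent creation and deletion go beyond this) is

    Q_{\mathrm{tot}}(s, a_1, \ldots, a_N) = \sum_{i=1}^{N} Q_i(\tau_i, a_i),

where \tau_i is element i's local observation and a_i its refinement or
de-refinement choice.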
On optimization of heterogeneous materials for enhanced resistance to bulk fracture
We propose a novel approach to optimize the design of heterogeneous
materials, with the goal of enhancing their effective fracture toughness under
mode-I loading. The method employs a Gaussian processes-based Bayesian
optimization framework to determine the optimal shapes and locations of stiff
elliptical inclusions within a periodic microstructure in two dimensions. To
model crack propagation, the phase-field fracture method with an efficient
interior-point monolithic solver and adaptive mesh refinement is used. To
account for the high sensitivity of fracture properties to initial crack
location with respect to heterogeneities, we consider multiple initial crack
locations and optimize the material for the worst-case scenario. We also impose a
minimum clearance constraint between the inclusions to ensure design
feasibility. Numerical experiments demonstrate that the method significantly
improves the fracture toughness of the material compared to the homogeneous
case.
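The outer loop described above is Gaussian-process Bayesian optimization of a
worst-case (minimum over initial cracks) objective under a feasibility
constraint. The Python sketch below is a minimal stand-in, not the authors'
implementation: simulate_toughness is a hypothetical placeholder for the
phase-field fracture solve, the design is reduced to two inclusion centers,
and the upper-confidence-bound acquisition over random candidates is one
simple choice among many.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern

    def feasible(x, min_clearance=0.1):
        # x packs two inclusion centers as (x1, y1, x2, y2); enforce spacing
        return np.hypot(x[0] - x[2], x[1] - x[3]) >= min_clearance

    def simulate_toughness(x, cracks):
        # Hypothetical placeholder for the phase-field fracture solve; returns
        # the WORST-CASE effective toughness over the candidate initial cracks.
        return min(float(np.sin(3.0 * x.sum())) + 0.1 * c for c in cracks)

    rng = np.random.default_rng(1)
    cracks = [0.0, 0.5, 1.0]          # candidate initial crack positions
    X = [x for x in rng.uniform(0, 1, (8, 4)) if feasible(x)]
    y = [simulate_toughness(x, cracks) for x in X]

    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    for _ in range(20):
        gp.fit(np.array(X), np.array(y))
        cand = np.array([c for c in rng.uniform(0, 1, (512, 4)) if feasible(c)])
        mu, sd = gp.predict(cand, return_std=True)
        x_next = cand[np.argmax(mu + 2.0 * sd)]  # upper-confidence-bound pick
        X.append(x_next)
        y.append(simulate_toughness(x_next, cracks))

    best = int(np.argmax(y))
    print("best worst-case toughness:", y[best])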