Preparing sparse solvers for exascale computing.
Sparse solvers provide essential functionality for a wide variety of scientific applications. Highly parallel sparse solvers are essential for continuing advances in high-fidelity, multi-physics and multi-scale simulations, especially as we target exascale platforms. This paper describes the challenges, strategies and progress of the US Department of Energy Exascale Computing Project towards providing sparse solvers for exascale computing platforms. We address the demands of systems with thousands of high-performance node devices, where exposing concurrency, hiding latency and creating alternative algorithms become essential. The efforts described here are works in progress, highlighting current successes and upcoming challenges. This article is part of a discussion meeting issue 'Numerical algorithms for high-performance computational science'.
Modeling Non-Stationary Processes Through Dimension Expansion
In this paper, we propose a novel approach to modeling nonstationary spatial fields. The proposed method works by expanding the geographic plane over which these processes evolve into higher-dimensional spaces, transforming and clarifying complex patterns in the physical plane. By combining aspects of multi-dimensional scaling, the group lasso, and latent variable models, a dimensionally sparse projection is found in which the originally nonstationary field exhibits stationarity. Following a comparison with existing methods in a simulated environment, dimension expansion is studied on a classic test-bed data set historically used to study nonstationary models. Following this, we explore the use of dimension expansion in modeling air pollution in the United Kingdom, a process known to be strongly influenced by rural/urban effects, among others, which give rise to a nonstationary field.
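To make the expansion step concrete, here is a minimal sketch under stated assumptions: it fits latent extra coordinates so that a stationary variogram explains the empirical one in the expanded space, it assumes an exponential variogram form, and it hands the non-smooth group-lasso term to a generic smooth optimizer where a proximal method would be used in practice. The function name dimension_expansion and all parameters are hypothetical, not the authors' code.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.spatial.distance import pdist

    def dimension_expansion(coords, gamma_hat, n_extra=2, lam=1.0):
        """Hypothetical sketch: learn latent extra coordinates Z so that an
        isotropic (stationary) variogram fits the empirical variogram
        gamma_hat in the expanded space [coords, Z].

        coords    : (n, 2) observed geographic locations
        gamma_hat : (n*(n-1)/2,) empirical variogram, ordered as scipy's pdist
        n_extra   : number of candidate extra dimensions
        lam       : group-lasso weight; larger values zero out whole dimensions
        """
        n = coords.shape[0]

        def objective(theta):
            sigma2, rho = np.exp(theta[:2])            # variogram parameters > 0
            Z = theta[2:].reshape(n, n_extra)          # latent coordinates
            d = pdist(np.hstack([coords, Z]))          # distances in expanded space
            gamma = sigma2 * (1.0 - np.exp(-d / rho))  # assumed exponential variogram
            fit = np.sum((gamma_hat - gamma) ** 2)
            # group lasso: one group per added dimension -> dimension sparsity
            penalty = lam * np.sum(np.linalg.norm(Z, axis=0))
            return fit + penalty

        rng = np.random.default_rng(0)
        theta0 = np.concatenate([np.zeros(2), 0.1 * rng.standard_normal(n * n_extra)])
        res = minimize(objective, theta0, method="L-BFGS-B")
        return res.x[2:].reshape(n, n_extra)

Penalizing each added coordinate column as a group drives entire dimensions to zero, which is what makes the learned projection dimensionally sparse rather than merely low-rank.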
Generalized Bregman Divergence and Gradient of Mutual Information for Vector Poisson Channels
We investigate connections between information-theoretic and estimation-theoretic quantities in vector Poisson channel models. In particular, we generalize the gradient of mutual information with respect to key system parameters from the scalar to the vector Poisson channel model. We also propose, as another contribution, a generalization of the classical Bregman divergence that offers a means to encapsulate under a unifying framework the gradient of mutual information results for scalar and vector Poisson and Gaussian channel models. The so-called generalized Bregman divergence is also shown to exhibit various properties akin to the properties of the classical version. The vector Poisson channel model is drawing considerable attention in view of its application in various domains: as an example, the availability of the gradient of mutual information can be used in conjunction with gradient descent methods to effect compressive-sensing projection designs in emerging X-ray and document classification applications.
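For reference, the classical object being generalized here is the Bregman divergence of a differentiable convex function \varphi:

    \[
      D_\varphi(x, y) \;=\; \varphi(x) - \varphi(y) - \langle \nabla \varphi(y),\, x - y \rangle .
    \]

Choosing \varphi(x) = \lVert x \rVert^2 recovers the squared Euclidean distance, and \varphi(x) = \sum_i x_i \log x_i (negative entropy) recovers the Kullback-Leibler divergence. The paper's vector-valued generalization of this construction is given in the paper itself, not reproduced here.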
Alternating Minimization, Scaling Algorithms, and the Null-Cone Problem from Invariant Theory
Alternating minimization heuristics seek to solve a (difficult) global optimization task through iteratively solving a sequence of (much easier) local optimization tasks on different parts (or blocks) of the input parameters. While popular and widely applicable, very few examples of this heuristic are rigorously shown to converge to optimality, and even fewer to do so efficiently.
In this paper we present a general framework which is amenable to rigorous analysis, and expose its applicability. Its main feature is that the local optimization domains are each a group of invertible matrices, together naturally acting on tensors, and the optimization problem is minimizing the norm of an input tensor under this joint action. The solution of this optimization problem captures a basic problem in Invariant Theory, called the null-cone problem.
This algebraic framework turns out to encompass natural computational problems in combinatorial optimization, algebra, analysis, quantum information theory, and geometric complexity theory. It includes and extends to high dimensions the recent advances on (2-dimensional) operator scaling.
Our main result is a fully polynomial time approximation scheme for this general problem, which may be viewed as a multi-dimensional scaling algorithm. This directly leads to progress on some of the problems in the areas above, and a unified view of others. We explain how faster convergence of an algorithm for the same problem will allow resolving central open problems.
Our main techniques come from Invariant Theory, and include its rich non-commutative duality theory, and new bounds on the bitsizes of coefficients of invariant polynomials. They enrich the algorithmic toolbox of this very computational field of mathematics, and are directly related to some challenges in geometric complexity theory (GCT).
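The simplest instance of this alternating-minimization scaling paradigm is the classical Sinkhorn algorithm for matrix scaling, the commutative precursor of the (2-dimensional) operator scaling the abstract extends. The sketch below is illustrative only, not the paper's tensor algorithm:

    import numpy as np

    def sinkhorn(A, iters=500, tol=1e-9):
        # Alternately normalize rows and columns of a positive matrix.
        # Each block step is an easy local problem solved exactly in
        # closed form; the hard global target is a doubly stochastic matrix.
        A = np.array(A, dtype=float)
        for _ in range(iters):
            A /= A.sum(axis=1, keepdims=True)  # row block: row sums -> 1
            A /= A.sum(axis=0, keepdims=True)  # column block: column sums -> 1
            # after the column step, columns are exact; stop once rows stay put
            if np.abs(A.sum(axis=1) - 1.0).max() < tol:
                break
        return A

    # e.g. sinkhorn(np.random.rand(4, 4)) is (near) doubly stochastic

Each local step is a closed-form normalization, while the global task, reaching a doubly stochastic matrix (or, in the paper's setting, minimizing the norm of a tensor under the joint group action), is the hard part; the paper's analysis concerns when and how fast such iterations converge.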
The future of computing beyond Moore's Law.
Moore's Law is a techno-economic model that has enabled the information technology industry to double the performance and functionality of digital electronics roughly every 2 years within a fixed cost, power and area. Advances in silicon lithography have enabled this exponential miniaturization of electronics, but, as transistors reach atomic scale and fabrication costs continue to rise, the classical technological driver that has underpinned Moore's Law for 50 years is failing and is anticipated to flatten by 2025. This article provides an updated view of what a post-exascale system will look like and the challenges ahead, based on our most recent understanding of technology roadmaps. It also discusses the tapering of historical improvements, and how it affects options available to continue scaling of successors to the first exascale machine. Lastly, this article covers the many different opportunities and strategies available to continue computing performance improvements in the absence of historical technology drivers. This article is part of a discussion meeting issue 'Numerical algorithms for high-performance computational science'.