A User-Friendly Hybrid Sparse Matrix Class in C++
When implementing functionality which requires sparse matrices, there are
numerous storage formats to choose from, each with advantages and
disadvantages. To achieve good performance, several formats may need to be used
in one program, requiring explicit selection and conversion between the
formats. This can be both tedious and error-prone, especially for non-expert
users. Motivated by this issue, we present a user-friendly sparse matrix class
for the C++ language, with a high-level application programming interface
deliberately similar to the widely used MATLAB language. The class internally
uses two main approaches to achieve efficient execution: (i) a hybrid storage
framework, which automatically and seamlessly switches between three underlying
storage formats (compressed sparse column, coordinate list, Red-Black tree)
depending on which format is best suited for specific operations, and (ii)
template-based meta-programming to automatically detect and optimise execution
of common expression patterns. To facilitate relatively quick conversion of
research code into production environments, the class and its associated
functions provide a suite of essential sparse linear algebra functionality
(e.g., arithmetic operations, submatrix manipulation) as well as high-level
functions for sparse eigendecompositions and linear equation solvers. The
latter are achieved by providing easy-to-use abstractions of the low-level
ARPACK and SuperLU libraries. The source code is open and provided under the
permissive Apache 2.0 license, allowing unencumbered use in commercial
products.
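The hybrid storage framework described in the abstract can be sketched in a few lines. The toy class below (all names hypothetical, not the paper's actual interface) keeps element-wise writes in an ordered map standing in for the Red-Black tree format, and converts once to compressed sparse column (CSC) only when a bulk operation needs it:

```cpp
#include <cassert>
#include <cstddef>
#include <map>
#include <utility>
#include <vector>

// Illustrative sketch only: element insertion goes into an ordered map (a
// stand-in for the Red-Black tree format), and bulk operations trigger a
// one-off conversion to compressed sparse column (CSC).
class HybridSpMat {
public:
  HybridSpMat(std::size_t rows, std::size_t cols) : n_rows(rows), n_cols(cols) {}

  // Element-wise writes stay in the tree format (cheap random insertion).
  void set(std::size_t r, std::size_t c, double v) {
    tree[{c, r}] = v;          // keyed column-major so CSC conversion is a scan
    csc_valid = false;
  }

  double get(std::size_t r, std::size_t c) const {
    auto it = tree.find({c, r});
    return it == tree.end() ? 0.0 : it->second;
  }

  // A bulk operation (here: column sums) switches to CSC on demand.
  std::vector<double> col_sums() {
    if (!csc_valid) to_csc();
    std::vector<double> sums(n_cols, 0.0);
    for (std::size_t c = 0; c < n_cols; ++c)
      for (std::size_t k = col_ptr[c]; k < col_ptr[c + 1]; ++k)
        sums[c] += values[k];
    return sums;
  }

  std::size_t n_rows, n_cols;

private:
  void to_csc() {
    values.clear(); row_idx.clear();
    col_ptr.assign(n_cols + 1, 0);
    for (const auto& kv : tree) ++col_ptr[kv.first.first + 1];
    for (std::size_t c = 0; c < n_cols; ++c) col_ptr[c + 1] += col_ptr[c];
    for (const auto& kv : tree) {     // map is already column-major ordered
      values.push_back(kv.second);
      row_idx.push_back(kv.first.second);
    }
    csc_valid = true;
  }

  std::map<std::pair<std::size_t, std::size_t>, double> tree;  // (col,row) -> value
  std::vector<double> values;
  std::vector<std::size_t> row_idx;
  std::vector<std::size_t> col_ptr;
  bool csc_valid = false;
};
```

A real implementation would also keep the coordinate-list format for batch construction and convert back when the access pattern changes; the point here is only that the format switch is hidden entirely behind the class interface.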
Practical Sparse Matrices in C++ with Hybrid Storage and Template-Based Expression Optimisation
Despite the importance of sparse matrices in numerous fields of science,
software implementations remain difficult to use for non-expert users,
generally requiring the understanding of underlying details of the chosen
sparse matrix storage format. In addition, to achieve good performance, several
formats may need to be used in one program, requiring explicit selection and
conversion between the formats. This can be both tedious and error-prone,
especially for non-expert users. Motivated by these issues, we present a
user-friendly and open-source sparse matrix class for the C++ language, with a
high-level application programming interface deliberately similar to the widely
used MATLAB language. This facilitates prototyping directly in C++ and aids the
conversion of research code into production environments. The class internally
uses two main approaches to achieve efficient execution: (i) a hybrid storage
framework, which automatically and seamlessly switches between three underlying
storage formats (compressed sparse column, Red-Black tree, coordinate list)
depending on which format is best suited and/or available for specific
operations, and (ii) a template-based meta-programming framework to
automatically detect and optimise execution of common expression patterns.
Empirical evaluations on large sparse matrices with various densities of
non-zero elements demonstrate the advantages of the hybrid storage framework
and the expression optimisation mechanism.
Comment: extended and revised version of an earlier conference paper
arXiv:1805.0338
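The template-based expression-optimisation idea mentioned in the abstract can be illustrated with a minimal expression-template toy (generic, not the paper's actual implementation): operations build lightweight expression objects instead of temporaries, and the whole pattern is evaluated in a single fused loop at assignment time.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Minimal expression-template sketch: a + b + c builds a nested AddExpr
// object, and no intermediate vectors are allocated until assignment.
struct Vec;

template <typename L, typename R>
struct AddExpr {
  const L& l; const R& r;
  double operator[](std::size_t i) const { return l[i] + r[i]; }
  std::size_t size() const { return l.size(); }
};

struct Vec {
  std::vector<double> data;
  explicit Vec(std::size_t n, double v = 0.0) : data(n, v) {}
  double operator[](std::size_t i) const { return data[i]; }
  double& operator[](std::size_t i) { return data[i]; }
  std::size_t size() const { return data.size(); }

  // Assignment from any expression: one pass, no temporaries.
  template <typename E>
  Vec& operator=(const E& e) {
    for (std::size_t i = 0; i < size(); ++i) data[i] = e[i];
    return *this;
  }
};

template <typename L, typename R>
AddExpr<L, R> operator+(const L& l, const R& r) { return {l, r}; }
```

In the same spirit, a compile-time pattern such as `trans(A) * B` can be recognised as a type and dispatched to a specialised kernel, which is what the abstract means by detecting and optimising common expression patterns.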
Elements of Design for Containers and Solutions in the LinBox Library
We describe in this paper new design techniques used in the C++ exact linear
algebra library LinBox, intended to make the library safer and easier to use,
while keeping it generic and efficient. First, we review the new simplified
structure for containers, based on our "founding scope allocation" model. We
explain design choices and their impact on coding: unification of our matrix
classes, a clearer model for matrices and submatrices, etc. Then we present a
variation of the "strategy" design pattern that is comprised of a
controller-plugin system: the controller (solution) chooses among plug-ins
(algorithms) that always call back the controller for subtasks. We give
examples using the solution mul. Finally, we present a benchmark architecture
that serves two purposes: providing the user with easier ways to produce
graphs, and creating a framework for automatically tuning the library and
supporting regression testing.
Comment: 8 pages, 4th International Congress on Mathematical Software, Seoul,
Korea, Republic of (2014)
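The controller-plugin variation of the strategy pattern described above can be sketched with a toy "solution" in place of LinBox's mul (all names here are illustrative, not LinBox's API): the controller picks a plug-in based on the input, and plug-ins recurse by calling the controller back for subtasks.

```cpp
#include <cassert>

// Controller (the "solution"): dispatches to plug-ins.
long power(long base, unsigned exp);

namespace plugin {
// Plug-in 1: trivial base cases.
long power_trivial(long base, unsigned exp) {
  return exp == 0 ? 1 : base;
}
// Plug-in 2: divide and conquer. Note the call back into the controller,
// which re-dispatches for the subtask -- the key point of the pattern.
long power_squaring(long base, unsigned exp) {
  long half = power(base, exp / 2);
  return half * half * (exp % 2 ? base : 1);
}
}  // namespace plugin

// The controller chooses among plug-ins (algorithms) per call.
long power(long base, unsigned exp) {
  if (exp <= 1) return plugin::power_trivial(base, exp);
  return plugin::power_squaring(base, exp);
}
```

Because every subtask passes back through the controller, a plug-in never commits the whole computation to one algorithm: each recursive step can be routed to whichever plug-in suits it.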
Preparing sparse solvers for exascale computing.
Sparse solvers provide essential functionality for a wide variety of scientific applications. Highly parallel sparse solvers are essential for continuing advances in high-fidelity, multi-physics and multi-scale simulations, especially as we target exascale platforms. This paper describes the challenges, strategies and progress of the US Department of Energy Exascale Computing project towards providing sparse solvers for exascale computing platforms. We address the demands of systems with thousands of high-performance node devices, where exposing concurrency, hiding latency and creating alternative algorithms become essential. The efforts described here are works in progress, highlighting current successes and upcoming challenges. This article is part of a discussion meeting issue 'Numerical algorithms for high-performance computational science'.
StocHy: automated verification and synthesis of stochastic processes
StocHy is a software tool for the quantitative analysis of discrete-time
stochastic hybrid systems (SHS). StocHy accepts a high-level description of
stochastic models and constructs an equivalent SHS model. The tool allows users
to (i) simulate the SHS evolution over a given time horizon and to automatically
construct formal abstractions of the SHS. Abstractions are then employed for
(ii) formal verification or (iii) control (policy, strategy) synthesis. StocHy
allows for modular modelling, and has separate simulation, verification and
synthesis engines, which are implemented as independent libraries. This allows
for libraries to be easily used and for extensions to be easily built. The tool
is implemented in C++ and employs manipulations based on vector calculus, the
use of sparse matrices, the symbolic construction of probabilistic kernels, and
multi-threading. Experiments show StocHy's markedly improved performance when
compared to existing abstraction-based approaches: in particular, StocHy beats
state-of-the-art tools in terms of precision (abstraction error) and
computational effort, and finally attains scalability to large-sized models (12
continuous dimensions). StocHy is available at www.gitlab.com/natchi92/StocHy.
GHOST: Building blocks for high performance sparse linear algebra on heterogeneous systems
While many of the architectural details of future exascale-class high
performance computer systems are still a matter of intense research, there
appears to be a general consensus that they will be strongly heterogeneous,
featuring "standard" as well as "accelerated" resources. Today, such resources
are available as multicore processors, graphics processing units (GPUs), and
other accelerators such as the Intel Xeon Phi. Any software infrastructure that
claims usefulness for such environments must be able to meet their inherent
challenges: massive multi-level parallelism, topology, asynchronicity, and
abstraction. The "General, Hybrid, and Optimized Sparse Toolkit" (GHOST) is a
collection of building blocks that targets algorithms dealing with sparse
matrix representations on current and future large-scale systems. It implements
the "MPI+X" paradigm, has a pure C interface, and provides hybrid-parallel
numerical kernels, intelligent resource management, and truly heterogeneous
parallelism for multicore CPUs, Nvidia GPUs, and the Intel Xeon Phi. We
describe the details of its design with respect to the challenges posed by
modern heterogeneous supercomputers and recent algorithmic developments.
Implementation details which are indispensable for achieving high efficiency
are pointed out and their necessity is justified by performance measurements or
predictions based on performance models. The library code and several
applications are available as open source. We also provide instructions on how
to make use of GHOST in existing software packages, together with a case study
which demonstrates the applicability and performance of GHOST as a component
within a larger software stack.
Comment: 32 pages, 11 figures
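The kind of node-level building block GHOST provides can be illustrated with a sparse matrix-vector product (SpMV) over CSR storage, with the "X" in MPI+X expressed here via std::thread over row blocks. GHOST's real kernels use their own data structures and a pure C interface; this is only a sketch of the idea, with all names hypothetical.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <thread>
#include <vector>

// Plain CSR storage: row_ptr has n_rows + 1 entries.
struct Csr {
  std::size_t n_rows;
  std::vector<std::size_t> row_ptr;
  std::vector<std::size_t> col_idx;
  std::vector<double> values;
};

// y = A * x, with rows partitioned into contiguous blocks, one per thread.
// Threads write disjoint rows of y, so no synchronisation is needed.
std::vector<double> spmv(const Csr& A, const std::vector<double>& x,
                         unsigned n_threads = 2) {
  std::vector<double> y(A.n_rows, 0.0);
  auto worker = [&](std::size_t r0, std::size_t r1) {
    for (std::size_t r = r0; r < r1; ++r)
      for (std::size_t k = A.row_ptr[r]; k < A.row_ptr[r + 1]; ++k)
        y[r] += A.values[k] * x[A.col_idx[k]];
  };
  std::vector<std::thread> pool;
  std::size_t chunk = (A.n_rows + n_threads - 1) / n_threads;
  for (unsigned t = 0; t < n_threads; ++t) {
    std::size_t r0 = t * chunk;
    std::size_t r1 = std::min(A.n_rows, r0 + chunk);
    if (r0 < r1) pool.emplace_back(worker, r0, r1);
  }
  for (auto& th : pool) th.join();
  return y;
}
```

In the MPI+X setting, each MPI process would own a block of rows and run a kernel like this locally, overlapping the communication of remote x entries with computation on local rows.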
SDLS: a Matlab package for solving conic least-squares problems
This document is an introduction to the Matlab package SDLS (Semi-Definite
Least-Squares) for solving least-squares problems over convex symmetric cones.
The package is shortly presented through the addressed problem, a sketch of the
implemented algorithm, the syntax and calling sequences, a simple numerical
example and some more advanced features. The implemented method consists of
solving the dual problem with a quasi-Newton algorithm. We note that SDLS is
not the most competitive implementation of this algorithm: efficient, robust,
commercial implementations are available (contact the authors). Our main goal
with this Matlab SDLS package is to provide simple, user-friendly software
for solving and experimenting with semidefinite least-squares problems. To the
best of our knowledge, no such freeware exists to date.
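For context, a standard statement of the conic least-squares problem and its dual is sketched below; the package's exact formulation may differ in details, so this is only a plausible reading of "solving the dual problem with a quasi-Newton algorithm". With $\mathcal{K}$ a symmetric cone, $\mathcal{A}$ a linear map and $P_{\mathcal{K}}$ the projection onto $\mathcal{K}$:

```latex
% Primal: project C onto the feasible set.
\min_{X \in \mathcal{K}} \; \tfrac{1}{2}\,\|X - C\|_F^2
\quad \text{s.t.} \quad \mathcal{A}(X) = b .

% Dual: an unconstrained, differentiable concave maximisation in y,
% amenable to quasi-Newton methods.
\max_{y} \;\; \theta(y) \;=\; b^{\top} y
  \;-\; \tfrac{1}{2}\,\big\| P_{\mathcal{K}}\big(C + \mathcal{A}^{*}(y)\big) \big\|_F^2
  \;+\; \tfrac{1}{2}\,\|C\|_F^2 ,

% with the primal solution recovered from a dual maximiser y as
X(y) \;=\; P_{\mathcal{K}}\big(C + \mathcal{A}^{*}(y)\big) .
```

The appeal of the dual route is that each evaluation of $\theta$ and its gradient requires only one projection onto $\mathcal{K}$ (an eigendecomposition in the semidefinite case), so an off-the-shelf quasi-Newton method can drive the whole solve.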
Learning Large-Scale Bayesian Networks with the sparsebn Package
Learning graphical models from data is an important problem with wide
applications, ranging from genomics to the social sciences. Nowadays datasets
often have upwards of thousands---sometimes tens or hundreds of thousands---of
variables and far fewer samples. To meet this challenge, we have developed a
new R package called sparsebn for learning the structure of large, sparse
graphical models with a focus on Bayesian networks. While there are many
existing software packages for this task, this package focuses on the unique
setting of learning large networks from high-dimensional data, possibly with
interventions. As such, the methods provided place a premium on scalability and
consistency in a high-dimensional setting. Furthermore, in the presence of
interventions, the methods implemented here achieve the goal of learning a
causal network from data. Additionally, the sparsebn package is fully
compatible with existing software packages for network analysis.
Comment: To appear in the Journal of Statistical Software, 39 pages, 7 figures