GraphBLAST: A High-Performance Linear Algebra-based Graph Framework on the GPU
High-performance graph algorithms are challenging to implement on new parallel hardware such as GPUs for three reasons: (1) the difficulty of identifying the right graph building blocks, (2) load imbalance on parallel hardware, and (3) the low arithmetic intensity of graph problems.
To address some of these challenges, GraphBLAS is an innovative, ongoing effort by the graph analytics community to propose building blocks based on sparse linear algebra, which will allow graph algorithms to be expressed in a performant, succinct, composable, and portable manner. In this paper, we examine
the performance challenges of a linear-algebra-based approach to building graph
frameworks and describe new design principles for overcoming these bottlenecks.
Among the new design principles is exploiting input sparsity, which allows users to write graph algorithms without specifying the push or pull traversal direction. Exploiting output sparsity allows users to tell the backend which output values of a single vectorized computation they do not want computed.
Load balancing distributes work evenly among parallel workers; we describe the load-balancing features needed to handle graphs with widely varying characteristics. The design principles described in this paper
have been implemented in "GraphBLAST", the first open-source, high-performance linear-algebra-based graph framework on NVIDIA GPUs. The results
show that on a single GPU, GraphBLAST achieves, on average, at least an order-of-magnitude speedup over the previous GraphBLAS implementations SuiteSparse and GBTL, performance comparable to the fastest hardwired GPU primitives and to the graph frameworks Ligra and Gunrock, and better performance than any other GPU graph framework, while offering a simpler and more concise
programming model.
Comment: 50 pages, 14 figures, 14 tables
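
To make the linear-algebra formulation concrete, here is a minimal breadth-first-search sketch written as repeated sparse matrix-vector products with a visited mask. It uses plain Python/SciPy rather than GraphBLAST's actual API; the function name bfs_levels and the masking step are illustrative stand-ins for GraphBLAS vectors and output-sparsity masks.

    import numpy as np
    import scipy.sparse as sp

    def bfs_levels(A: sp.csr_matrix, source: int) -> np.ndarray:
        """BFS in linear-algebra form: each step is one sparse
        matrix-vector product; masking out visited vertices plays the
        role of GraphBLAS output sparsity. A[i, j] != 0 means edge i->j."""
        n = A.shape[0]
        levels = np.full(n, -1)               # -1 marks "not yet reached"
        frontier = np.zeros(n)
        frontier[source] = 1.0
        level = 0
        while frontier.any():
            levels[frontier > 0] = level
            reached = A.T @ frontier          # advance the whole frontier at once
            frontier = np.where(levels == -1, reached, 0.0)  # masked output
            level += 1
        return levels

    # Toy example: the path graph 0 -> 1 -> 2
    A = sp.csr_matrix(np.array([[0, 1, 0],
                                [0, 0, 1],
                                [0, 0, 0]], dtype=float))
    print(bfs_levels(A, 0))                   # [0 1 2]
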
The Mathematical Universe
I explore physics implications of the External Reality Hypothesis (ERH) that
there exists an external physical reality completely independent of us humans.
I argue that with a sufficiently broad definition of mathematics, it implies
the Mathematical Universe Hypothesis (MUH) that our physical world is an
abstract mathematical structure. I discuss various implications of the ERH and
MUH, ranging from standard physics topics like symmetries, irreducible
representations, units, free parameters, randomness and initial conditions to
broader issues like consciousness, parallel universes, and Gödel incompleteness. I hypothesize that only computable and decidable (in Gödel's sense) structures exist, which alleviates the cosmological measure problem and helps explain why
our physical laws appear so simple. I also comment on the intimate relation
between mathematical structures, computations, simulations and physical
systems.
Comment: Replaced to match the accepted Found. Phys. version, 31 pages, 5 figures; more details at http://space.mit.edu/home/tegmark/toe.htm
Don't mind the gap: Bridging network-wide objectives and device-level configurations
We reflect on the historical context that led to Propane, a high-level language and compiler that helps network operators bridge the gap between network-wide routing objectives and the low-level configurations of devices running complex, distributed protocols. We also highlight the primary contributions that Propane made to the networking literature and describe ongoing challenges. We conclude with an important lesson learned from the experience.
The Future of Computation
"The purpose of life is to obtain knowledge, use it to live with as much satisfaction as possible, and pass it on with improvements and modifications to the next generation." This may sound philosophical, and the interpretation of the words may be subjective, yet it is fairly clear that this is what all living organisms, from bacteria to human beings, do in their lifetime. Indeed, this can be adopted as the information-theoretic definition of life. Over billions
of years, biological evolution has experimented with a wide range of physical
systems for acquiring, processing and communicating information. We are now in
a position to make the principles behind these systems mathematically precise,
and then extend them as far as laws of physics permit. Therein lies the future
of computation, of ourselves, and of life.
Comment: 7 pages, RevTeX. Invited lecture at the Workshop on Quantum Information, Computation and Communication (QICC-2005), IIT Kharagpur, India, February 2005
Computable decision making on the reals and other spaces via partiality and nondeterminism
Though many safety-critical software systems use floating point to represent
real-world input and output, programmers usually have idealized versions in
mind that compute with real numbers. Significant deviations from the ideal can
cause errors and jeopardize safety. Some programming systems implement exact
real arithmetic, which resolves this matter but complicates others, such as
decision making. In these systems, it is impossible to compute (total and
deterministic) discrete decisions based on connected spaces such as the reals ℝ. We present programming-language semantics based on constructive
topology with variants allowing nondeterminism and/or partiality. Either
nondeterminism or partiality suffices to allow computable decision making on
connected spaces such as ℝ. We then introduce pattern matching on
spaces, a language construct for creating programs on spaces, generalizing
pattern matching in functional programming, where patterns need not represent
decidable predicates and also may overlap or be inexhaustive, giving rise to
nondeterminism or partiality, respectively. Nondeterminism and/or partiality
also yield formal logics for constructing approximate decision procedures. We
implemented these constructs in the Marshall language for exact real
arithmetic.
Comment: This is an extended version of a paper due to appear in the proceedings of the ACM/IEEE Symposium on Logic in Computer Science (LICS) in July 2018
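
To see why partiality or nondeterminism helps, consider comparing two exact reals: a total, deterministic test x < y cannot be computed when x = y. The sketch below shows the standard workaround, an eps-tolerant comparison that is total but nondeterministic in the overlap region. It is plain Python under an assumed approximation-oracle representation of reals, not the paper's Marshall semantics; the names Real, soft_lt, and sqrt2 are hypothetical.

    import math
    from fractions import Fraction
    from typing import Callable

    # Assumed representation: a computable real is an oracle that, given n,
    # returns a rational within 2**-n of the true value.
    Real = Callable[[int], Fraction]

    def soft_lt(x: Real, y: Real, eps: Fraction) -> bool:
        """Total eps-tolerant comparison (requires eps > 0): a True result
        guarantees x < y + eps; a False result guarantees y < x + eps.
        When |x - y| < eps either answer is allowed; this nondeterminism
        is what makes the test computable on the connected space of reals."""
        n = 0
        while Fraction(1, 2 ** n) >= eps / 4:   # refine until 2 * 2^-n < eps / 2
            n += 1
        # Both approximations are now within 2^-n of their reals, so the
        # smaller approximation pins its real to within eps of the other.
        return x(n) <= y(n)

    # sqrt(2) as an approximation oracle, via integer square roots.
    def sqrt2(n: int) -> Fraction:
        scale = 2 ** n
        return Fraction(math.isqrt(2 * scale * scale), scale)

    three_halves = lambda n: Fraction(3, 2)           # the constant 3/2
    print(soft_lt(sqrt2, three_halves, Fraction(1, 100)))   # True: sqrt(2) < 1.5
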
Quantum machine learning: a classical perspective
Recently, increased computational power and data availability, as well as
algorithmic advances, have led machine learning techniques to impressive
results in regression, classification, data-generation and reinforcement
learning tasks. Despite these successes, the proximity to the physical limits
of chip fabrication alongside the increasing size of datasets are motivating a
growing number of researchers to explore the possibility of harnessing the
power of quantum computation to speed up classical machine learning algorithms.
Here we review the literature in quantum machine learning and discuss
perspectives for a mixed readership of classical machine learning and quantum
computation experts. Particular emphasis will be placed on clarifying the
limitations of quantum algorithms, how they compare with their best classical
counterparts and why quantum resources are expected to provide advantages for
learning problems. Learning in the presence of noise and certain
computationally hard problems in machine learning are identified as promising
directions for the field. Practical questions, like how to upload classical
data into quantum form, will also be addressed.
Comment: v3, 33 pages; typos corrected and references added
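
On the data-uploading point: one standard scheme the literature discusses is amplitude encoding, which packs a length-2^n classical vector into the amplitudes of an n-qubit state. The sketch below only constructs the target state vector and deliberately ignores the often costly circuit needed to prepare it on hardware; the function name amplitude_encode is hypothetical.

    import numpy as np

    def amplitude_encode(x: np.ndarray) -> np.ndarray:
        """Map a classical vector of length 2**n to the amplitude vector
        of an n-qubit state |psi> = sum_i (x_i / ||x||) |i>.  This builds
        the state mathematically; preparing it on a quantum device is a
        separate (and generally expensive) problem."""
        dim = len(x)
        if dim == 0 or dim & (dim - 1) != 0:
            raise ValueError("length must be a power of two")
        norm = np.linalg.norm(x)
        if norm == 0:
            raise ValueError("cannot encode the zero vector")
        return x / norm

    x = np.array([3.0, 0.0, 4.0, 0.0])       # 4 numbers -> 2 qubits
    psi = amplitude_encode(x)                 # amplitudes [0.6, 0, 0.8, 0]
    print(np.vdot(psi, psi))                  # 1.0: a normalized quantum state
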