Communication-optimal Parallel and Sequential Cholesky Decomposition
Numerical algorithms have two kinds of costs: arithmetic and communication,
by which we mean either moving data between levels of a memory hierarchy (in
the sequential case) or over a network connecting processors (in the parallel
case). Communication costs often dominate arithmetic costs, so it is of
interest to design algorithms minimizing communication. In this paper we first
extend known lower bounds on the communication cost (both for bandwidth and for
latency) of conventional O(n^3) matrix multiplication to Cholesky
factorization, which is used for solving dense symmetric positive definite
linear systems. Second, we compare the costs of various Cholesky decomposition
implementations to these lower bounds and identify the algorithms and data
structures that attain them. In the sequential case, we consider both the
two-level and hierarchical memory models. Combined with prior results in [13,
14, 15], this gives a set of communication-optimal algorithms for O(n^3)
implementations of the three basic factorizations of dense linear algebra: LU
with pivoting, QR and Cholesky. But it goes beyond this prior work on
sequential LU by optimizing communication for any number of levels of memory
hierarchy.

Comment: 29 pages, 2 tables, 6 figures
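As a rough illustration of the bandwidth bound discussed above (a sketch with made-up sizes, not code from the paper): Cholesky performs about n^3/3 flops, so the generic Omega(#flops / sqrt(M)) bound says that doubling the fast memory M shrinks the required data movement only by a factor of sqrt(2).

```python
# Sketch (illustrative, not from the paper): evaluate the generic
# bandwidth lower bound Omega(#flops / sqrt(M)) for Cholesky,
# whose leading-order arithmetic cost is n^3 / 3 flops.
import math

def cholesky_bandwidth_lower_bound(n: int, M: int) -> float:
    """Words moved between slow and fast memory, up to a constant factor."""
    flops = n ** 3 / 3            # leading term of Cholesky's flop count
    return flops / math.sqrt(M)

# Doubling fast memory M cuts the bound by sqrt(2), not by 2:
b1 = cholesky_bandwidth_lower_bound(4096, 2 ** 20)
b2 = cholesky_bandwidth_lower_bound(4096, 2 ** 21)
```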
Estimating the spatial and temporal distribution of species richness within Sequoia and Kings Canyon National Parks.
Evidence for significant losses of species richness or biodiversity, even within protected natural areas, is mounting. Managers are increasingly being asked to monitor biodiversity, yet estimating biodiversity is often prohibitively expensive. As a cost-effective option, we estimated the spatial and temporal distribution of species richness for four taxonomic groups (birds, mammals, herpetofauna (reptiles and amphibians), and plants) within Sequoia and Kings Canyon National Parks using only existing biological studies undertaken within the Parks and the Parks' long-term wildlife observation database. We used a rarefaction approach to model species richness for the four taxonomic groups and analyzed those groups by habitat type, elevation zone, and time period. We then mapped the spatial distributions of species richness values for the four taxonomic groups, as well as total species richness, for the Parks. We also estimated changes in species richness for birds, mammals, and herpetofauna since 1980. The modeled patterns of species richness either peaked at mid elevations (mammals, plants, and total species richness) or declined consistently with increasing elevation (herpetofauna and birds). Plants reached maximum species richness values at much higher elevations than did vertebrate taxa, and non-flying mammals reached maximum species richness values at higher elevations than did birds. Alpine plant communities, including sagebrush, had higher species richness values than did subalpine plant communities located below them in elevation. These results are supported by other papers published in the scientific literature. Perhaps reflecting climate change, birds and herpetofauna displayed declines in species richness since 1980 at low and middle elevations, and mammals displayed declines in species richness since 1980 at all elevations.
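The rarefaction approach mentioned above can be sketched with the standard individual-based rarefaction estimator (a generic illustration with hypothetical counts; the Parks study's exact model may differ): the expected richness of a random subsample of n individuals is the sum, over species, of the probability that each species appears in the subsample.

```python
# Sketch of individual-based rarefaction (a standard estimator; the
# study's exact model may differ). Given per-species abundance counts,
# estimate expected species richness in a random subsample of n of the
# N total individuals, drawn without replacement (hypergeometric).
from math import comb

def rarefied_richness(counts: list[int], n: int) -> float:
    N = sum(counts)
    # Species i is entirely absent from the subsample with probability
    # C(N - N_i, n) / C(N, n); sum each species' presence probability.
    return sum(1 - comb(N - Ni, n) / comb(N, n) for Ni in counts)

# Hypothetical example: 3 species with abundances 10, 5, 1;
# expected richness in a subsample of 8 of the 16 individuals.
est = rarefied_richness([10, 5, 1], 8)
```

Comparing estimates at a common subsample size is what makes richness comparable across habitat types or time periods with unequal survey effort.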
Strong Scaling of Matrix Multiplication Algorithms and Memory-Independent Communication Lower Bounds
A parallel algorithm has perfect strong scaling if its running time on P
processors is linear in 1/P, including all communication costs.
Distributed-memory parallel algorithms for matrix multiplication with perfect
strong scaling have only recently been found. One is based on classical matrix
multiplication (Solomonik and Demmel, 2011), and one is based on Strassen's
fast matrix multiplication (Ballard, Demmel, Holtz, Lipshitz, and Schwartz,
2012). Both algorithms scale perfectly, but only up to some number of
processors where the inter-processor communication no longer scales.
We obtain a memory-independent communication cost lower bound on classical
and Strassen-based distributed-memory matrix multiplication algorithms. These
bounds imply that no classical or Strassen-based parallel matrix multiplication
algorithm can strongly scale perfectly beyond the ranges already attained by
the two parallel algorithms mentioned above. The memory-independent bounds and
the strong scaling bounds generalize to other algorithms.

Comment: 4 pages, 1 figure
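Why perfect strong scaling must eventually break can be sketched numerically (a model with illustrative numbers, not the paper's analysis): the per-processor computation for classical matrix multiplication shrinks as n^3/P, but a memory-independent communication lower bound of the form n^2/P^(2/3) shrinks more slowly, so communication's share of the runtime grows as P increases.

```python
# Sketch (illustrative constants dropped): compare how the computation
# term and the memory-independent communication lower bound scale with P
# for classical n x n matrix multiplication.

def flops_per_proc(n: int, P: int) -> float:
    return n ** 3 / P             # computation shrinks linearly in 1/P

def comm_lower_bound(n: int, P: int) -> float:
    return n ** 2 / P ** (2 / 3)  # communication shrinks only as P^(-2/3)

# The ratio comm/flops grows like P^(1/3): communication eventually
# dominates, ending the perfect-strong-scaling range.
n = 4096
ratio_small = comm_lower_bound(n, 8) / flops_per_proc(n, 8)
ratio_large = comm_lower_bound(n, 4096) / flops_per_proc(n, 4096)
```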
Unlocking Thinking Through and About GPS
The article offers information about the global positioning system (GPS) and geocaching, which started when GPS satellite signals were opened to the public in the U.S. in 2000. Topics discussed include the Small Footprints nature education trail at a college, consisting of seven outdoor learning stations on campus used through geocaching; the benefits of using GPS in education; and the use of GPS to develop critical thinking skills.
Nominalism In Mathematics - Modality And Naturalism
I defend modal nominalism in philosophy of mathematics - under which quantification over mathematical ontology is replaced with various modal assertions - against two sources of resistance: that modal nominalists face difficulties justifying the modal assertions that figure in their theories, and that modal nominalism is incompatible with mathematical naturalism.
Shapiro argues that modal nominalists invoke primitive modal concepts and that they are thereby unable to justify the various modal assertions that figure in their theories. The platonist, meanwhile, can appeal to the set-theoretic reduction of modality, and so can justify assertions about what is logically possible through an appeal to what exists in the set-theoretic hierarchy. In chapter one, I illustrate the modal involvement of the major modal nominalist views (Chihara's Constructibility Theory, Field's fictionalism, and Hellman's Modal Structuralism). Chapter two provides an analysis of Shapiro's criticism, and a partial response to it. A response is provided in full in chapter three, in which I argue that reducing modality does not provide a means for justifying modal assertions, vitiating the accusation that modal nominalists are particularly burdened by their inability to justify modal assertions.
Chapter four discusses Burgess's naturalistic objection that nominalism is unscientific. I argue that Burgess's naturalism is inadequately resourced to expose nominalism (modal or otherwise) as unscientific in a way that would compel a naturalist to reject nominalism. I also argue that Burgess's favored moderate platonism is also guilty of being unscientific. Chapter five discusses some objections derived from Maddy's naturalism, one according to which modal nominalism fails to affirm or support mathematical method, and a second according to which modal nominalism fails to be contained or accommodated by mathematical method. Though both objections serve as evidence that modal nominalism is incompatible with Maddy's naturalism, I argue that Maddy's naturalism is implausibly strong and that modal nominalism is compatible with forms of naturalism that relax the stronger of Maddy's naturalistic principles.
Minimizing Communication in Linear Algebra
In 1981 Hong and Kung proved a lower bound on the amount of communication
needed to perform dense matrix multiplication using the conventional
algorithm, where the input matrices were too large to fit in the small, fast
memory. In 2004 Irony, Toledo and Tiskin gave a new proof of this result and
extended it to the parallel case. In both cases the lower bound may be
expressed as Omega(#arithmetic operations / sqrt(M)), where M is the size
of the fast memory (or local memory in the parallel case). Here we generalize
these results to a much wider variety of algorithms, including LU
factorization, Cholesky factorization, LDL^T factorization, QR factorization,
algorithms for eigenvalues and singular values, i.e., essentially all direct
methods of linear algebra. The proof works for dense or sparse matrices, and
for sequential or parallel algorithms. In addition to lower bounds on the
amount of data moved (bandwidth) we get lower bounds on the number of messages
required to move it (latency). We illustrate how to extend our lower bound
technique to compositions of linear algebra operations (like computing powers
of a matrix), to decide whether it is enough to call a sequence of simpler
optimal algorithms (like matrix multiplication) to minimize communication, or
if we can do better. We give examples of both. We also show how to extend our
lower bounds to certain graph theoretic problems.
We point out recently designed algorithms for dense LU, Cholesky, QR,
eigenvalue and the SVD problems that attain these lower bounds; implementations
of LU and QR show large speedups over conventional linear algebra algorithms in
standard libraries like LAPACK and ScaLAPACK. Many open problems remain.

Comment: 27 pages, 2 tables
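A quick model of how blocking attains the bandwidth bound above (a sketch with hypothetical sizes, not code from the paper): with b-by-b tiles chosen so three tiles fit in a fast memory of M words, the total traffic of blocked matrix multiplication lands within a constant factor of Omega(n^3 / sqrt(M)).

```python
# Sketch (illustrative, not from the paper): traffic model for blocked
# matrix multiplication with b x b tiles, where three tiles fit in a
# fast memory of M words. Blocking is the classic way to attain the
# O(n^3 / sqrt(M)) bandwidth bound for the conventional algorithm.
import math

def blocked_matmul_traffic(n: int, M: int) -> int:
    """Words read from slow memory: three tile loads per tile-multiply."""
    b = math.isqrt(M // 3)        # tile size: three b*b tiles fit in M
    t = math.ceil(n / b)          # tiles per matrix dimension
    return t ** 3 * 3 * b * b     # t^3 tile-multiplies, 3 tiles each

n, M = 4096, 2 ** 20
traffic = blocked_matmul_traffic(n, M)
bound = n ** 3 / math.sqrt(M)     # the Omega(n^3 / sqrt(M)) lower bound
```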