GraphLab: A New Framework for Parallel Machine Learning
Designing and implementing efficient, provably correct parallel machine
learning (ML) algorithms is challenging. Existing high-level parallel
abstractions like MapReduce are insufficiently expressive, while low-level tools
like MPI and Pthreads leave ML experts repeatedly solving the same design
challenges. By targeting common patterns in ML, we developed GraphLab, which
improves upon abstractions like MapReduce by compactly expressing asynchronous
iterative algorithms with sparse computational dependencies while ensuring data
consistency and achieving a high degree of parallel performance. We demonstrate
the expressiveness of the GraphLab framework by designing and implementing
parallel versions of belief propagation, Gibbs sampling, Co-EM, Lasso and
Compressed Sensing. We show that using GraphLab we can achieve excellent
parallel performance on large-scale real-world problems.
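The asynchronous, dynamically scheduled pattern the abstract describes can be illustrated with a toy PageRank loop in which a vertex update reschedules its out-neighbors only when its own value changes appreciably. This is a single-threaded sketch of the scheduling idea only; the function name and graph encoding are hypothetical, not GraphLab's actual API.

```python
from collections import deque

def pagerank_graphlab_style(out_edges, damping=0.85, tol=1e-6):
    """Asynchronous, dynamically scheduled PageRank in the spirit of a
    GraphLab update function (illustrative sketch, not the real API).

    out_edges: dict mapping each vertex to a list of its out-neighbors;
    every vertex is assumed to have at least one out-edge.
    """
    vertices = list(out_edges)
    in_edges = {v: [] for v in vertices}
    for u, nbrs in out_edges.items():
        for v in nbrs:
            in_edges[v].append(u)
    rank = {v: 1.0 for v in vertices}
    queue = deque(vertices)          # scheduler: vertices awaiting update
    queued = set(vertices)
    while queue:
        v = queue.popleft()
        queued.discard(v)
        new = (1 - damping) + damping * sum(
            rank[u] / len(out_edges[u]) for u in in_edges[v])
        change = abs(new - rank[v])
        rank[v] = new
        if change > tol:             # dynamic scheduling: wake neighbors
            for w in out_edges[v]:
                if w not in queued:
                    queue.append(w)
                    queued.add(w)
    return rank
```

Because updates run as soon as a vertex is rescheduled rather than in synchronized rounds, convergence information propagates along the sparse dependency structure instead of waiting for a global barrier.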
Tin-selenium compounds at ambient and high pressures
SnxSey crystalline compounds consisting of Sn and Se atoms of varying
composition are systematically investigated at pressures from 0 to 100 GPa
using the first-principles evolutionary crystal structure search method based
on density functional theory (DFT). All known experimental phases of SnSe and
SnSe2 are found without any prior input. A second-order polymorphic phase
transition from the SnSe-Pnma phase to the SnSe-Cmcm phase is predicted at
2.5 GPa. Initially semiconducting, this phase becomes metallic at 7.3 GPa.
Upon further increase of pressure to 36.6 GPa, the SnSe-Cmcm phase transforms
into the CsCl-type SnSe-Pm-3m phase, which remains stable at even higher
pressures. A metallic compound with a different stoichiometry, Sn3Se4-I-43d,
is found to be thermodynamically stable from 18 GPa to 70 GPa. The known
semiconducting tin diselenide SnSe2-P-3m1 phase is found to be
thermodynamically stable from ambient pressure up to 18 GPa. Initially
semiconducting, it undergoes metallization at pressures above 8 GPa.
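Thermodynamic stability under pressure is decided by comparing enthalpies H = E + PV of the competing phases; a transition pressure is the pressure at which two enthalpy curves cross. A minimal bisection sketch follows; the linear enthalpy functions in the test are invented toy inputs, not the paper's DFT results, with coefficients chosen only so the crossing lands at an illustrative 2.5 GPa.

```python
def transition_pressure(h1, h2, p_lo, p_hi, tol=1e-6):
    """Find the pressure where the enthalpy difference h1(p) - h2(p)
    changes sign, by bisection.

    h1, h2: enthalpy per atom as callables of pressure (in a DFT study
    these come from E(V) equations of state; here they are placeholders).
    """
    f = lambda p: h1(p) - h2(p)
    if f(p_lo) * f(p_hi) >= 0:
        raise ValueError("no enthalpy crossing in the given bracket")
    while p_hi - p_lo > tol:
        mid = 0.5 * (p_lo + p_hi)
        if f(p_lo) * f(mid) <= 0:   # sign change in the lower half
            p_hi = mid
        else:
            p_lo = mid
    return 0.5 * (p_lo + p_hi)
```

Since dH/dP = V, the denser phase always wins at sufficiently high pressure, which is why the low-pressure phase has the steeper enthalpy slope in such comparisons.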
Distributed GraphLab: A Framework for Machine Learning in the Cloud
While high-level data parallel frameworks, like MapReduce, simplify the
design and implementation of large-scale data processing systems, they do not
naturally or efficiently support many important data mining and machine
learning algorithms and can lead to inefficient learning systems. To help fill
this critical void, we introduced the GraphLab abstraction which naturally
expresses asynchronous, dynamic, graph-parallel computation while ensuring data
consistency and achieving a high degree of parallel performance in the
shared-memory setting. In this paper, we extend the GraphLab framework to the
substantially more challenging distributed setting while preserving strong data
consistency guarantees. We develop graph-based extensions to pipelined locking
and data versioning to reduce network congestion and mitigate the effect of
network latency. We also introduce fault tolerance to the GraphLab abstraction
using the classic Chandy-Lamport snapshot algorithm and demonstrate how it can
be easily implemented by exploiting the GraphLab abstraction itself. Finally,
we evaluate our distributed implementation of the GraphLab abstraction on a
large Amazon EC2 deployment and show 1-2 orders of magnitude performance gains
over Hadoop-based implementations.
Comment: VLDB201
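The Chandy-Lamport algorithm mentioned above obtains a consistent global snapshot by flooding marker messages: a process records its own state on the first marker it sees, then logs each incoming channel until a marker arrives on that channel. The simulation below is a teaching sketch under that description; the money-transfer scenario and all class names are illustrative, not GraphLab code. A correct snapshot conserves the total balance even with payments in flight.

```python
from collections import deque

class Net:
    """FIFO channels between processes (simulation only)."""
    def __init__(self, pids):
        self.q = {(a, b): deque() for a in pids for b in pids if a != b}

    def send(self, src, dst, msg):
        self.q[(src, dst)].append(msg)

class Process:
    """One process in a Chandy-Lamport snapshot (teaching sketch)."""
    def __init__(self, pid, balance, peers):
        self.pid, self.state, self.peers = pid, balance, peers
        self.recorded_state = None
        self.open_channels = set()             # in-channels still recording
        self.channel_msgs = {p: [] for p in peers}

    def begin(self, net):
        """Record local state and flood markers on all outgoing channels."""
        self.recorded_state = self.state
        self.open_channels = set(self.peers)
        for p in self.peers:
            net.send(self.pid, p, ("MARKER", None))

    def on_receive(self, net, src, msg):
        kind, amount = msg
        if kind == "MARKER":
            if self.recorded_state is None:    # first marker seen
                self.begin(net)
            self.open_channels.discard(src)    # channel src->self done
        else:                                  # application message
            self.state += amount
            if src in self.open_channels:      # in flight at snapshot time
                self.channel_msgs[src].append(amount)

def snapshot_demo():
    net = Net([0, 1])
    procs = {0: Process(0, 100, [1]), 1: Process(1, 50, [0])}
    # both processes have payments in flight when P0 starts the snapshot
    procs[0].state -= 30; net.send(0, 1, ("PAY", 30))
    procs[1].state -= 20; net.send(1, 0, ("PAY", 20))
    procs[0].begin(net)                        # P0 initiates
    delivered = True
    while delivered:                           # drain channels in FIFO order
        delivered = False
        for (src, dst), q in net.q.items():
            if q:
                procs[dst].on_receive(net, src, q.popleft())
                delivered = True
    # consistent cut: recorded balances plus recorded in-flight money
    return sum(p.recorded_state for p in procs.values()) + sum(
        m for p in procs.values() for ms in p.channel_msgs.values() for m in ms)
```

The paper's observation that the snapshot "can be easily implemented by exploiting the GraphLab abstraction itself" fits this shape: both state recording and marker handling are per-node reactions to incoming messages, which is exactly what a vertex update function expresses.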
Peaks in the Cosmic Microwave Background: flat versus open models
We present properties of the peaks (maxima) of the CMB anisotropies expected
in flat and open CDM models. We obtain analytical expressions of several
topological descriptors: mean number of maxima and the probability distribution
of the gaussian curvature and the eccentricity of the peaks. These quantities
are calculated as functions of the radiation power spectrum, assuming a
gaussian distribution of temperature anisotropies. We present results for
angular resolutions ranging from 5' to 20' (antenna FWHM), scales that are
relevant for the MAP and COBRAS/SAMBA space missions and the ground-based
interferometer experiments. Our analysis also includes the effects of noise. We
find that the number of peaks can discriminate between standard CDM models, and
that the gaussian curvature distribution provides a useful test for these
various models, whereas the eccentricity distribution cannot distinguish
between them.
Comment: 13 pages LaTeX file using aasms4.sty + 3 tables + 2 postscript
figures, to appear in ApJ (March 1997)
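The descriptors above can be estimated numerically from a simulated Gaussian map: smooth white noise with a Gaussian beam, locate the local maxima, and take each peak's eccentricity from the eigenvalues of its Hessian. The sketch below uses one common convention, e = sqrt(1 - lambda_max/lambda_min) for a negative-definite Hessian; this is an illustration of the kind of statistic involved, not necessarily the paper's exact definition.

```python
import numpy as np

def gaussian_field(n, fwhm_pix, seed=0):
    """Simulated Gaussian temperature map: white noise smoothed by a
    Gaussian beam of the given FWHM (in pixels), applied via FFT."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((n, n))
    k = np.fft.fftfreq(n)                     # cycles per pixel
    kx, ky = np.meshgrid(k, k)
    sigma = fwhm_pix / (2 * np.sqrt(2 * np.log(2)))
    beam = np.exp(-2 * (np.pi * sigma) ** 2 * (kx ** 2 + ky ** 2))
    return np.real(np.fft.ifft2(np.fft.fft2(noise) * beam))

def peak_stats(field):
    """Count interior local maxima and return their eccentricities
    from the discrete Hessian (central differences)."""
    f = field
    fxx = np.gradient(np.gradient(f, axis=1), axis=1)
    fyy = np.gradient(np.gradient(f, axis=0), axis=0)
    fxy = np.gradient(np.gradient(f, axis=0), axis=1)
    count, eccs = 0, []
    for i in range(1, f.shape[0] - 1):
        for j in range(1, f.shape[1] - 1):
            patch = f[i - 1:i + 2, j - 1:j + 2]
            is_strict_max = (f[i, j] == patch.max()
                             and np.count_nonzero(patch == f[i, j]) == 1)
            if is_strict_max:
                count += 1
                H = np.array([[fxx[i, j], fxy[i, j]],
                              [fxy[i, j], fyy[i, j]]])
                l_min, l_max = np.linalg.eigvalsh(H)   # ascending order
                if l_max < 0:                          # true maximum
                    eccs.append(np.sqrt(1 - l_max / l_min))
    return count, eccs
```

A real analysis would tie the beam FWHM to the 5'-20' antenna scales, add a noise realization, and compare the resulting peak counts and shape distributions across cosmological models, as the abstract describes.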
Multiscale analysis of morphology and mechanics in tail tendon from the ZDSD rat model of type 2 diabetes
Type 2 diabetes (T2D) impacts multiple organ systems including the circulatory, renal, nervous and musculoskeletal systems. In collagen-based tissues, one mechanism that may be responsible for detrimental mechanical impacts of T2D is the formation of advanced glycation end products (AGEs) leading to increased collagen stiffness and decreased toughness, resulting in brittle tissue behavior. The purpose of this study was to investigate tendon mechanical properties from normal and diabetic rats at two distinct length scales, testing the hypothesis that increased stiffness and strength and decreased toughness at the fiber level would be associated with alterations in nanoscale morphology and mechanics. Individual fascicles from female Zucker diabetic Sprague-Dawley (ZDSD) rats had no differences in fascicle-level mechanical properties but had increased material-level strength and stiffness versus control rats (CD). At the nanoscale, collagen fibril D-spacing was shifted towards higher spacing values in diabetic ZDSD fibrils. The distribution of nanoscale modulus values was also shifted to higher values. Material-level strength and stiffness from whole fiber tests were increased in ZDSD tails. Correlations between nanoscale and microscale properties indicate a direct positive relationship between the two length scales, most notably in the relationship between nanoscale and microscale modulus. These findings indicate that diabetes-induced changes in material strength and modulus were driven by alterations at the nanoscale.
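The cross-scale relationship reported above is a correlation across specimens between nanoscale and microscale modulus. A minimal sketch of the Pearson correlation coefficient such a comparison rests on; the per-sample modulus values below are invented for illustration, not the study's data.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# hypothetical per-specimen moduli: nanoscale (e.g. AFM) vs microscale (fiber)
nano  = [3.1, 3.4, 3.9, 4.2, 4.8, 5.1]    # GPa, invented
micro = [0.41, 0.44, 0.52, 0.55, 0.63, 0.66]  # GPa, invented
```

A value of r near +1 across specimens is the kind of evidence behind the statement that nanoscale alterations drive the material-level changes.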