Accelerated Event-by-Event Neutrino Oscillation Reweighting with Matter Effects on a GPU
Oscillation probability calculations are becoming increasingly CPU-intensive
in modern neutrino oscillation analyses. The independence of reweighting
individual events in a Monte Carlo sample lends itself to parallel
implementation on a Graphics Processing Unit. The library "Prob3++" was ported
to the GPU using the CUDA C API, allowing large-scale parallelized
calculation of neutrino oscillation probabilities through matter of constant
density and decreasing the execution time by a factor of 75 compared to
performance on a single CPU.
Comment: Final update (post-submission): quantified the difference in event
rates for binned and event-by-event reweighting with a typical binning
scheme; improved formatting of references.
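To illustrate why this workload parallelizes so well, here is a minimal sketch using the standard two-flavor vacuum oscillation probability, not the full three-flavor matter calculation that Prob3++ implements; the function name and the parameter values for the mixing angle and mass splitting are illustrative, not taken from the paper. Each event's weight depends only on that event's own kinematics, so the per-event map has no cross-event dependencies — exactly what a one-thread-per-event GPU kernel exploits.

```python
import math

def osc_prob(l_over_e, theta=0.85, dm2=2.5e-3):
    """Two-flavor vacuum oscillation probability:
    P = sin^2(2*theta) * sin^2(1.267 * dm2 * L/E),
    with L/E in km/GeV and dm2 in eV^2.
    theta and dm2 here are illustrative values, not fit results."""
    return math.sin(2 * theta) ** 2 * math.sin(1.267 * dm2 * l_over_e) ** 2

# Each Monte Carlo event carries its own L/E; the probabilities are
# mutually independent, so this map is trivially parallel -- on a GPU,
# one thread per event.
events = [500.0, 800.0, 1200.0]  # L/E in km/GeV, illustrative
weights = [osc_prob(le) for le in events]
```

On a GPU the loop body becomes the kernel and the event index becomes the thread index; the factor-of-75 speedup quoted above comes from running this map over a large sample in parallel.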
Accelerated large-scale multiple sequence alignment
<p>Abstract</p> <p>Background</p> <p>Multiple sequence alignment (MSA) is a fundamental analysis method used in bioinformatics and many comparative genomic applications. Prior MSA acceleration attempts with reconfigurable computing have only addressed the first stage of progressive alignment and consequently exhibit performance limitations according to Amdahl's Law. This work is the first known to accelerate the third stage of progressive alignment on reconfigurable hardware.</p> <p>Results</p> <p>We reduce subgroups of aligned sequences into discrete profiles before they are pairwise aligned on the accelerator. Using an FPGA accelerator, an overall speedup of up to 150 has been demonstrated on a large data set when compared to a 2.4 GHz Core2 processor.</p> <p>Conclusions</p> <p>Our parallel algorithm and architecture accelerates large-scale MSA with reconfigurable computing and allows researchers to solve the larger problems that confront biologists today. Program source is available from <url>http://dna.cs.byu.edu/msa/</url>.</p>
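The profile-reduction step described above can be sketched as follows. This is a minimal illustration under the textbook definition of a sequence profile (per-column residue frequencies, gaps included), not the authors' FPGA representation; the function name and example sequences are invented for demonstration.

```python
from collections import Counter

def to_profile(aligned_seqs):
    """Collapse a group of already-aligned sequences into a per-column
    residue-frequency profile (the gap character '-' counts as a residue),
    so the group can later be aligned against another profile."""
    n = len(aligned_seqs)
    return [
        {res: count / n for res, count in Counter(col).items()}
        for col in zip(*aligned_seqs)
    ]

# Three aligned sequences of equal length; column 0 mixes A and G,
# while columns 1, 3, and 4 are fully conserved.
profile = to_profile(["AC-GT", "ACAGT", "GC-GT"])
```

Reducing each subgroup to a fixed-size profile is what makes the third progressive-alignment stage amenable to a hardware accelerator: the pairwise profile alignment no longer depends on how many sequences the subgroup contains.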
Distributed Block Coordinate Descent for Minimizing Partially Separable Functions
In this work we propose a distributed randomized block coordinate descent
method for minimizing a convex function with a huge number of
variables/coordinates. We analyze its complexity under the assumption that the
smooth part of the objective function is partially block separable, and show
that the degree of separability directly influences the complexity. This
extends the results in [Richtarik, Takac: Parallel coordinate descent methods
for big data optimization] to a distributed environment. We first show that
partially block separable functions admit an expected separable
overapproximation (ESO) with respect to a distributed sampling, compute the ESO
parameters, and then specialize complexity results from recent literature that
hold under the generic ESO assumption. We describe several approaches to
distribution and synchronization of the computation across a cluster of
multi-core computers and provide promising computational results.
Comment: in Recent Developments in Numerical Analysis and Optimization, 201
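The core iteration can be sketched in a serial, single-machine form. This is a toy illustration of randomized block coordinate descent on a fully separable quadratic, not the paper's distributed method with its ESO step sizes; all names and values below are illustrative, and the paper's algorithm samples many blocks per iteration across a cluster rather than one.

```python
import random

def block_cd(grad_blocks, lipschitz, x, steps=200, seed=0):
    """Randomized block coordinate descent: at each step, pick one
    coordinate block uniformly at random and take a gradient step
    scaled by that block's Lipschitz constant."""
    rng = random.Random(seed)
    for _ in range(steps):
        i = rng.randrange(len(x))
        x[i] -= grad_blocks[i](x) / lipschitz[i]
    return x

# Toy objective f(x) = sum_i a_i * (x_i - b_i)^2 / 2, fully separable,
# so each block update is exact and the iterates converge to b.
a = [1.0, 4.0, 2.0]
b = [3.0, -1.0, 0.5]
grads = [lambda x, i=i: a[i] * (x[i] - b[i]) for i in range(3)]
x_opt = block_cd(grads, lipschitz=a, x=[0.0, 0.0, 0.0])
```

For a fully separable function the blocks never interact; the paper's point is that *partially* separable functions still admit an expected separable overapproximation, so independent block updates computed on different machines remain safe with suitably inflated step-size parameters.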
