One-Dimensional Population Density Approaches to Recurrently Coupled Networks of Neurons with Noise
Mean-field systems have been previously derived for networks of coupled,
two-dimensional, integrate-and-fire neurons, such as the Izhikevich, adaptive
exponential (AdEx), and quartic integrate-and-fire (QIF) models, among others.
Unfortunately, these mean-field systems exhibit a degree of frequency error,
and the networks analyzed often omit noise when adaptation is present. Here, we
derive a one-dimensional partial differential equation (PDE) approximation for
the marginal voltage density under a first-order moment closure for coupled
networks of integrate-and-fire neurons with white noise inputs. The PDE has
substantially less frequency error than the mean-field system, and provides a
great deal more information, at the cost of analytical tractability. The
convergence properties of the mean-field system in the low noise limit are
elucidated. A novel method for the analysis of the stability of the
asynchronous tonic firing solution is also presented and implemented. Unlike
previous attempts at stability analysis with these network types, information
about the marginal densities of the adaptation variables is used. This method
can in principle be applied to other systems with nonlinear partial
differential equations.
Comment: 26 pages, 6 figures
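For orientation, a minimal sketch of the kind of one-dimensional density
equation involved, under illustrative assumptions: a generic subthreshold
drift $F(v)$, white-noise strength $\sigma$, and the adaptation variable
closed at its population mean $\langle w \rangle(t)$, as in a first-order
moment closure. This is the standard Fokker-Planck form for such models, not
the paper's exact PDE:

```latex
% Fokker-Planck equation for the marginal voltage density rho(v,t) of an
% integrate-and-fire population with white-noise input; the adaptation
% variable is replaced by its mean <w>(t) (first-order moment closure).
% F(v), sigma, v_th, and v_r are generic placeholders, not the paper's terms.
\frac{\partial \rho(v,t)}{\partial t}
  = -\frac{\partial}{\partial v}
      \Bigl[\bigl(F(v) - \langle w \rangle(t)\bigr)\,\rho(v,t)\Bigr]
    + \frac{\sigma^{2}}{2}\,
      \frac{\partial^{2} \rho(v,t)}{\partial v^{2}},
\qquad
r(t) = -\left.\frac{\sigma^{2}}{2}\,
       \frac{\partial \rho(v,t)}{\partial v}\right|_{v = v_{\mathrm{th}}}.
```

The probability flux absorbed at the threshold $v_{\mathrm{th}}$ defines the
population firing rate $r(t)$ and is reinjected at the reset potential
$v_{\mathrm{r}}$; recurrent coupling feeds back into the drift through $r(t)$.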
An Asynchronous Parallel Approach to Sparse Recovery
Asynchronous parallel computing and sparse recovery are two areas that have
received recent interest. Asynchronous algorithms are often studied to solve
optimization problems where the cost function takes the form
$f(x) = \sum_{i=1}^{M} f_i(x)$, with a common assumption that each $f_i$ is
sparse; that is, each $f_i$ acts only on a small number of components of
$x \in \mathbb{R}^n$. Sparse recovery problems, such as compressed sensing,
can be formulated as optimization problems; however, the cost functions $f_i$
are dense with respect to the components of $x$, and instead the signal $x$
is assumed to be sparse, meaning that it has only $s$ non-zeros where
$s \ll n$. Here we address how one may use an asynchronous parallel
architecture when the cost functions $f_i$ are not sparse in $x$, but rather
the signal $x$ is sparse. We propose an
asynchronous parallel approach to sparse recovery via a stochastic greedy
algorithm, where multiple processors asynchronously update a vector in shared
memory containing information on the estimated signal support. We include
numerical simulations that illustrate the potential benefits of our proposed
asynchronous method.
Comment: 5 pages, 2 figures
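The abstract leaves the greedy update unspecified; as a rough illustration of
the shared-memory access pattern, here is a minimal Hogwild-style sketch in
which several threads run stochastic iterative hard-thresholding steps against
a lock-free shared iterate. All sizes, seeds, and step settings are
hypothetical, and this is not the authors' algorithm.

```python
import numpy as np
import threading

# Toy compressed-sensing instance (all sizes and seeds are illustrative):
# recover an s-sparse signal x_true from m < n linear measurements y = A x.
rng = np.random.default_rng(0)
n, m, s = 200, 80, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
y = A @ x_true

x = np.zeros(n)  # shared iterate; threads read and write it without locks

def hard_threshold(v, k):
    """Keep the k largest-magnitude entries of v, zero out the rest."""
    out = np.zeros_like(v)
    keep = np.argpartition(np.abs(v), -k)[-k:]
    out[keep] = v[keep]
    return out

def worker(seed, iters=2000, batch=10, step=0.5):
    global x
    local = np.random.default_rng(seed)      # per-thread RNG
    for _ in range(iters):
        i = local.choice(m, batch, replace=False)  # random mini-batch of rows
        g = A[i].T @ (A[i] @ x - y[i])       # stochastic gradient of 0.5*||Ax-y||^2
        x = hard_threshold(x - step * g, s)  # greedy projection onto s-sparse set

threads = [threading.Thread(target=worker, args=(t,)) for t in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

Note that CPython's GIL serializes these threads, so the sketch only
illustrates the lock-free access pattern, not a genuine parallel speedup.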
Maiter: An Asynchronous Graph Processing Framework for Delta-based Accumulative Iterative Computation
A myriad of graph-based algorithms in machine learning and data mining require
parsing relational data iteratively. These algorithms are implemented in a
large-scale distributed environment in order to scale to massive data sets. To
accelerate these large-scale graph-based iterative computations, we propose
delta-based accumulative iterative computation (DAIC). Different from
traditional iterative computations, which iteratively update the result based
on the result from the previous iteration, DAIC updates the result by
accumulating the "changes" between iterations. With DAIC, we can process only
the "changes" and skip negligible updates. Furthermore, we can perform DAIC
asynchronously to bypass the high-cost synchronous barriers in heterogeneous
distributed environments. Based on the DAIC model, we design and implement an
asynchronous graph processing framework, Maiter. We evaluate Maiter on a local
cluster as well as on the Amazon EC2 cloud. The results show that Maiter
achieves up to a 60x speedup over Hadoop and outperforms other
state-of-the-art frameworks.
Comment: ScienceCloud 2012, TKDE 201
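To make the contrast with traditional iteration concrete, here is a minimal
single-machine sketch of the delta idea applied to a PageRank-style
computation; the graph, damping factor, and tolerances are illustrative, and
Maiter itself runs this pattern asynchronously across a distributed cluster.

```python
# Delta-based accumulative iteration (DAIC), sketched on PageRank: instead of
# recomputing every rank from the previous iteration's full result, each
# vertex accumulates only the "change" (delta) it has received and propagates
# scaled deltas to its out-neighbors. Graph and tolerances are illustrative.
graph = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}  # vertex -> out-neighbors
d, n = 0.85, len(graph)

rank = {v: 0.0 for v in graph}           # accumulated result
delta = {v: (1 - d) / n for v in graph}  # pending changes (initial seed)

while max(delta.values()) > 1e-10:
    for v in graph:
        dv, delta[v] = delta[v], 0.0
        if dv < 1e-12:
            continue                     # negligible update: skip it
        rank[v] += dv                    # accumulate the change
        for u in graph[v]:               # propagate scaled delta downstream
            delta[u] += d * dv / len(graph[v])

print({v: round(r, 4) for v, r in rank.items()})
```

Because each delta's contribution shrinks geometrically, the accumulated ranks
converge to the same fixed point as the traditional iteration, while vertices
with no pending change do no work at all.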
Asynchronous Parallel Stochastic Gradient Descent - A Numeric Core for Scalable Distributed Machine Learning Algorithms
The implementation of the vast majority of machine learning (ML) algorithms
boils down to solving a numerical optimization problem. In this context,
Stochastic Gradient Descent (SGD) methods have long proven to provide good
results, both in terms of convergence and accuracy. Recently, several
parallelization approaches have been proposed in order to scale SGD to solve
very large ML problems. At their core, most of these approaches follow a
map-reduce scheme. This paper presents a novel parallel updating algorithm for
SGD, which utilizes the asynchronous single-sided communication paradigm.
Compared to existing methods, Asynchronous Parallel Stochastic Gradient
Descent (ASGD) provides faster (or at least equal) convergence, close-to-linear
scaling, and stable accuracy.
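As a rough shared-memory stand-in for the update rule (the paper's
distributed, single-sided-communication machinery is not reproduced here),
a minimal asynchronous-SGD sketch on a hypothetical least-squares task, with
all data, learning rates, and thread counts illustrative:

```python
import numpy as np
import threading

# Hypothetical least-squares task: fit w so that X @ w approximates y.
rng = np.random.default_rng(1)
X = rng.standard_normal((1000, 10))
w_true = rng.standard_normal(10)
y = X @ w_true + 0.01 * rng.standard_normal(1000)

w = np.zeros(10)  # shared parameters; workers update them without locks

def worker(seed, steps=5000, lr=0.01):
    global w
    local = np.random.default_rng(seed)   # per-thread RNG
    for _ in range(steps):
        i = local.integers(len(X))        # draw one training example
        grad = (X[i] @ w - y[i]) * X[i]   # stochastic gradient of 0.5*(x_i.w - y_i)^2
        w = w - lr * grad                 # asynchronous update; stale reads tolerated

threads = [threading.Thread(target=worker, args=(t,)) for t in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("parameter error:", np.linalg.norm(w - w_true))
```

The design point the sketch mirrors is that no worker ever waits at a barrier:
each applies its gradient as soon as it is computed, accepting occasionally
stale parameter reads in exchange for continuous progress.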