MapReduce is Good Enough? If All You Have is a Hammer, Throw Away Everything That's Not a Nail!
Hadoop is currently the large-scale data analysis "hammer" of choice, but
there exist classes of algorithms that aren't "nails", in the sense that they
are not particularly amenable to the MapReduce programming model. To address
this, researchers have proposed MapReduce extensions or alternative programming
models in which these algorithms can be elegantly expressed. This essay
espouses a very different position: that MapReduce is "good enough", and that
instead of trying to invent screwdrivers, we should simply get rid of
everything that's not a nail. To be more specific, much discussion in the
literature surrounds the fact that iterative algorithms are a poor fit for
MapReduce: the simple solution is to find alternative non-iterative algorithms
that solve the same problem. This essay captures my personal experiences as an
academic researcher as well as a software engineer in a "real-world" production
analytics environment. From this combined perspective I reflect on the current
state and future of "big data" research.
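To make the essay's prescription concrete, here is a toy sketch (my illustration, not the author's) of swapping an iterative algorithm for a single-pass one: least-squares regression fit in one MapReduce-style pass by accumulating the sufficient statistics X'X and X'y across data shards, rather than iterating gradient descent over the data.

```python
# Toy sketch: a "non-iterative alternative" in MapReduce style.
import numpy as np
from functools import reduce

def map_shard(shard):
    """Mapper: emit this shard's contribution to X'X and X'y."""
    X, y = shard
    return X.T @ X, X.T @ y

def reduce_stats(a, b):
    """Reducer: sum the partial sufficient statistics."""
    return a[0] + b[0], a[1] + b[1]

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
shards = []
for _ in range(4):                      # four "nodes", each holding a shard
    X = rng.normal(size=(250, 2))
    y = X @ true_w + 0.1 * rng.normal(size=250)
    shards.append((X, y))

# One map pass and one reduce: no iteration over the data is needed.
xtx, xty = reduce(reduce_stats, map(map_shard, shards))
w_hat = np.linalg.solve(xtx, xty)
print(w_hat)                            # close to [2.0, -1.0]
```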
A view of Estimation of Distribution Algorithms through the lens of Expectation-Maximization
We show that a large class of Estimation of Distribution Algorithms,
including, but not limited to, Covariance Matrix Adaptation, can be written as a
Monte Carlo Expectation-Maximization algorithm, and as exact EM in the limit of
infinite samples. Because EM sits on a rigorous statistical foundation and has
been thoroughly analyzed, this connection provides a new coherent framework
with which to reason about EDAs.
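As a hedged sketch of the claimed correspondence (my notation; the paper's construction may differ in detail), one generation of a Gaussian EDA with truncation selection can be read as a Monte Carlo EM step: the E-step weights sampled candidates by a selection indicator, and the M-step refits the search distribution to the weighted sample, recovering the familiar mean/covariance update.

```latex
% E-step: draw candidates from the current search distribution and
% weight them by a selection indicator at threshold gamma_t.
\begin{align*}
  w_i &\propto \mathbf{1}\!\left[f(x_i) \ge \gamma_t\right],
      \qquad x_i \sim \mathcal{N}(\mu_t, \Sigma_t), \\
% M-step: refit the Gaussian to the weighted samples -- exactly the
% standard EDA mean and covariance update.
  \mu_{t+1} &= \frac{\sum_i w_i\, x_i}{\sum_i w_i}, \qquad
  \Sigma_{t+1} = \frac{\sum_i w_i\,(x_i-\mu_{t+1})(x_i-\mu_{t+1})^{\top}}{\sum_i w_i}.
\end{align*}
```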
Scalable Data Augmentation for Deep Learning
Scalable Data Augmentation (SDA) provides a framework for training deep
learning models using auxiliary hidden layers. Scalable MCMC is available for
network training and inference. SDA provides a number of computational
advantages over traditional algorithms: it avoids backtracking and local
modes, and it can perform optimization with stochastic gradient descent
(SGD) in TensorFlow. Standard deep neural networks with logit, ReLU and SVM
activation functions are straightforward to implement. To illustrate our
architectures and methodology, we use Pólya-Gamma logit data augmentation for
a number of standard datasets. Finally, we conclude with directions for
future research.
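For readers unfamiliar with the augmentation the abstract builds on, the following is a minimal sketch (mine, not the paper's code) of Pólya-Gamma data augmentation for plain Bayesian logistic regression, using the truncated infinite-sum representation to draw PG(1, c); an exact sampler (e.g. Polson, Scott and Windle's) would replace `sample_pg1` in practice.

```python
# Polya-Gamma Gibbs sampler for Bayesian logistic regression (sketch).
import numpy as np

rng = np.random.default_rng(0)

def sample_pg1(c, trunc=200):
    """Approximate PG(1, c) draw via its truncated infinite-sum form."""
    k = np.arange(1, trunc + 1)
    g = rng.exponential(size=(len(c), trunc))            # g_k ~ Gamma(1, 1)
    denom = (k - 0.5) ** 2 + (c[:, None] / (2 * np.pi)) ** 2
    return (g / denom).sum(axis=1) / (2 * np.pi ** 2)

# Synthetic logit data.
n, d = 500, 3
X = rng.normal(size=(n, d))
beta_true = np.array([1.5, -2.0, 0.5])
y = (rng.random(n) < 1 / (1 + np.exp(-X @ beta_true))).astype(float)

B_inv = np.eye(d)            # prior precision: beta ~ N(0, I)
kappa = y - 0.5
beta = np.zeros(d)
for _ in range(1000):        # Gibbs sweeps
    # omega_i | beta ~ PG(1, |x_i' beta|); PG depends on c only via c^2.
    omega = sample_pg1(np.abs(X @ beta))
    # beta | omega, y ~ N(m, V) with V = (X' Omega X + B^-1)^-1, m = V X' kappa.
    V = np.linalg.inv(X.T @ (omega[:, None] * X) + B_inv)
    m = V @ (X.T @ kappa)
    beta = rng.multivariate_normal(m, V)
print(beta)                  # posterior draw, near beta_true
```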
- …