CSWA: Aggregation-Free Spatial-Temporal Community Sensing
In this paper, we present a novel community sensing paradigm -- Community
Sensing Without Aggregation (CSWA). CSWA is designed to obtain environmental
information (e.g., air pollution or temperature) in each subarea of the target
area, without aggregating sensor and location data collected by community
members. CSWA operates on top of a secured peer-to-peer network formed by the
community members and introduces a novel Decentralized Spatial-Temporal
Compressive Sensing framework based on Parallelized Stochastic Gradient
Descent. By learning the low-rank structure via distributed
optimization, CSWA approximates the value of the sensor data in each subarea
(both covered and uncovered) for each sensing cycle using the sensor data
locally stored in each member's mobile device. Simulation experiments based on
real-world datasets demonstrate that CSWA exhibits low approximation error
in both a city-wide temperature sensing task and an urban air pollution (PM2.5)
sensing task, and performs comparably to (sometimes better than)
state-of-the-art algorithms based on data aggregation and centralized
computation.
Comment: This paper has been accepted by AAAI 2018. The first two authors
contributed equally.
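The abstract does not give implementation details; as a rough sketch of the core idea -- recovering a low-rank subarea-by-cycle sensing matrix from partially observed entries with stochastic gradient descent -- one might write the following. The function name, hyperparameters, and update rule are illustrative assumptions, not the paper's actual algorithm:

```python
import numpy as np

def recover_low_rank(observed, mask, rank=2, lr=0.01, epochs=2000, reg=0.01):
    """Sketch: factor the (subarea x cycle) sensing matrix as U @ V.T and
    fit the factors with SGD on observed entries only, so unobserved
    subareas are filled in by the learned low-rank structure."""
    rng = np.random.default_rng(0)
    n_rows, n_cols = observed.shape
    U = 0.1 * rng.standard_normal((n_rows, rank))
    V = 0.1 * rng.standard_normal((n_cols, rank))
    rows, cols = np.nonzero(mask)
    for _ in range(epochs):
        for i, j in zip(rows, cols):
            u_i = U[i].copy()                 # keep old value for V's update
            err = observed[i, j] - u_i @ V[j]
            U[i] += lr * (err * V[j] - reg * u_i)
            V[j] += lr * (err * u_i - reg * V[j])
    return U @ V.T                            # estimates for all entries
```

In a decentralized setting such as CSWA's, each device would hold only its own rows of the observed data and exchange factor updates over the peer-to-peer network rather than raw readings.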
EdgeSense: Edge-Mediated Spatial-Temporal Crowdsensing
Edge computing has recently become increasingly popular due to the growth of data volumes and the need to sense with reduced reliance on a central server. Based on the edge computing architecture, we propose a novel crowdsensing framework called Edge-Mediated Spatial-Temporal Crowdsensing (EdgeSense). EdgeSense aims to obtain environmental information such as air pollution, temperature, and traffic flow in each part of the target area without aggregating sensor data together with its location information. Specifically, EdgeSense operates on top of a secured peer-to-peer network consisting of the participants and proposes a novel Decentralized Spatial-Temporal Crowdsensing framework based on Parallelized Stochastic Gradient Descent. To approximate the sensing data in each part of the target area in each sensing cycle, EdgeSense uses the sensor data stored locally on participants' mobile devices to learn the low-rank structure and then recovers the sensing data from it. We evaluate EdgeSense on real-world datasets (temperature [1] and PM2.5 [2]), where our algorithm achieves low approximation error and competes with baseline algorithms designed with a centralized and aggregated mechanism.
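Neither abstract specifies how participants reach agreement without a central aggregator. One standard building block for such peer-to-peer computation is gossip averaging, sketched below as a hypothetical illustration (not the authors' algorithm): each node repeatedly averages its local value with its neighbors', and a connected topology drives all nodes to the global mean with no aggregation point.

```python
import numpy as np

def gossip_average(values, neighbors, rounds=60):
    """Decentralized averaging sketch: node i replaces its value with the
    mean of itself and its neighbors each round. On a connected, regular
    topology all nodes converge to the global average."""
    values = [np.asarray(v, dtype=float).copy() for v in values]
    for _ in range(rounds):
        new = []
        for i, v in enumerate(values):
            neigh = [values[j] for j in neighbors[i]]
            new.append((v + sum(neigh)) / (1 + len(neigh)))
        values = new
    return values
```

For example, four peers on a ring holding 1.0, 2.0, 3.0, and 4.0 all converge to 2.5 after a few dozen rounds, without any node ever collecting the raw data of the others.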
Accelerating recurrent neural network training using sequence bucketing and multi-GPU data parallelization
An efficient algorithm for recurrent neural network training is presented.
The approach increases training speed for tasks where the length of the input
sequence may vary significantly. The proposed approach is based on optimal
batch bucketing by input sequence length and data parallelization on multiple
graphical processing units. The baseline training performance without sequence
bucketing is compared with the proposed solution for a different number of
buckets. An example is given for the online handwriting recognition task using
an LSTM recurrent neural network. The evaluation is performed in terms of the
wall clock time, number of epochs, and validation loss value.
Comment: 4 pages, 5 figures. 2016 IEEE First International
Conference on Data Stream Mining & Processing (DSMP), Lviv, 201
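The bucketing idea described above can be sketched in a few lines: sort sequences by length, split the sorted order into buckets, and pad each batch only up to its local maximum rather than the global one. Names and parameters below are illustrative assumptions, not the paper's implementation:

```python
def bucket_batches(sequences, n_buckets, batch_size, pad=0):
    """Group variable-length sequences into length-sorted buckets, then cut
    each bucket into batches. Padding within a batch only reaches the local
    maximum length, which cuts wasted computation on padded timesteps."""
    order = sorted(range(len(sequences)), key=lambda i: len(sequences[i]))
    bucket_size = (len(order) + n_buckets - 1) // n_buckets
    batches = []
    for b in range(0, len(order), bucket_size):
        bucket = order[b:b + bucket_size]
        for s in range(0, len(bucket), batch_size):
            batch = bucket[s:s + batch_size]
            max_len = max(len(sequences[i]) for i in batch)
            batches.append([list(sequences[i]) + [pad] * (max_len - len(sequences[i]))
                            for i in batch])
    return batches
```

With eight handwriting sequences of lengths 1-9, two buckets, and batch size 2, the batch widths come out as 1, 3, 5, and 9 instead of every batch being padded to 9; each batch can then be dispatched to a separate GPU for data parallelism.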
Balancing the Communication Load of Asynchronously Parallelized Machine Learning Algorithms
Stochastic Gradient Descent (SGD) is the standard numerical method used to
solve the core optimization problem for the vast majority of machine learning
(ML) algorithms. In the context of large-scale learning, as required by many
Big Data applications, efficient parallelization of SGD is a focus of
active research. Recently, we were able to show that the asynchronous
communication paradigm can be applied to achieve a fast and scalable
parallelization of SGD. Asynchronous Stochastic Gradient Descent (ASGD)
outperforms other, mostly MapReduce-based, parallel algorithms for solving
large-scale machine learning problems. In this paper, we investigate the impact of
asynchronous communication frequency and message size on the performance of
ASGD applied to large-scale ML on HTC clusters and cloud environments. We
introduce a novel algorithm for automatic balancing of the asynchronous
communication load, which allows ASGD to adapt to changing network bandwidths
and latencies.
Comment: arXiv admin note: substantial text overlap with arXiv:1505.0495
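The trade-off the abstract studies -- communication frequency versus message size -- can be illustrated with a minimal worker sketch. The function, objective, and `comm_every` knob below are assumptions for illustration, not the authors' ASGD implementation; a load balancer would tune `comm_every` to the measured bandwidth and latency:

```python
import numpy as np

def asgd_worker(data, w_init, send, lr=0.1, comm_every=5):
    """ASGD worker sketch on a least-squares objective: takes local SGD
    steps, but pushes only the *accumulated* parameter delta every
    `comm_every` steps. A larger `comm_every` means fewer, larger messages
    (more staleness); a smaller one means chattier, fresher communication."""
    w = np.asarray(w_init, dtype=float).copy()
    delta = np.zeros_like(w)
    for t, (x, y) in enumerate(data, start=1):
        grad = (w @ x - y) * x            # gradient of 0.5 * (w.x - y)^2
        w -= lr * grad
        delta -= lr * grad
        if t % comm_every == 0:
            send(delta.copy())            # asynchronous push of accumulated update
            delta[:] = 0.0
    return w
```

Because the pushed deltas sum to the worker's total parameter change, a parameter server (or peer) can apply them in any order, which is what makes the asynchronous paradigm tolerant of network jitter.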