SAFA: a semi-asynchronous protocol for fast federated learning with low overhead
Federated learning (FL) has attracted increasing attention as a promising approach to bringing artificial intelligence to a vast number of end devices. However, it is challenging to guarantee the efficiency of FL given the unreliable nature of end devices and the non-negligible cost of device-server communication. In this paper, we propose SAFA, a semi-asynchronous FL protocol, to address problems in federated learning such as low round efficiency and poor convergence rate under extreme conditions (e.g., clients frequently dropping offline). We introduce novel designs in the steps of model distribution, client selection and global aggregation to mitigate the impact of stragglers, crashes and model staleness, in order to boost efficiency and improve the quality of the global model. We have conducted extensive experiments with typical machine learning tasks. The results demonstrate that the proposed protocol is effective in shortening federated round duration, reducing local resource wastage, and improving the accuracy of the global model at an acceptable communication cost.
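The core idea of tolerating, but bounding, model staleness during global aggregation can be sketched as follows. This is an illustrative simplification, not the paper's exact algorithm: the function names and the staleness tolerance parameter `tau` are assumptions for the example.

```python
# Sketch of semi-asynchronous aggregation with a staleness bound.
# Client updates are (model_delta, round_trained) pairs; deltas older
# than `tau` rounds are discarded so stale models do not pollute the
# global model, while moderately stale updates are still used.

def aggregate(global_model, updates, current_round, tau=2):
    """Apply the average of sufficiently fresh client deltas."""
    fresh = [delta for delta, r in updates if current_round - r <= tau]
    if not fresh:
        return global_model  # no usable updates this round
    n = len(fresh)
    # element-wise average of the accepted deltas
    avg = [sum(vals) / n for vals in zip(*fresh)]
    return [w + d for w, d in zip(global_model, avg)]
```

With `tau=2` and `current_round=5`, an update trained at round 1 (staleness 4) is dropped, while updates from rounds 3-5 are averaged in; raising `tau` trades convergence quality for round efficiency.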
Towards a Benchmark for Fog Data Processing
Fog data processing systems provide key abstractions to manage data and event
processing in the geo-distributed and heterogeneous fog environment. The lack
of standardized benchmarks for such systems, however, hinders their development
and deployment, as different approaches cannot be compared quantitatively.
Existing cloud data benchmarks are inadequate for fog computing, as their focus
on workload specification ignores the tight integration of application and
infrastructure inherent in fog computing.
In this paper, we outline an approach to a fog-native data processing
benchmark that combines workload specifications with infrastructure
specifications. This holistic approach allows researchers and engineers to
quantify how a software approach performs for a given workload on given
infrastructure. Further, by basing our benchmark on a realistic IoT sensor
network scenario, we can combine paradigms such as low-latency event
processing, machine learning inference, and offline data analytics, and analyze
the performance impact of their interplay in a fog data processing system.
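The holistic idea of pairing a workload specification with an infrastructure specification can be sketched as below. All field and class names are hypothetical, chosen only to illustrate how the two specifications might be combined into one benchmark configuration.

```python
from dataclasses import dataclass

@dataclass
class WorkloadSpec:
    sensors: int           # emulated IoT sensors producing events
    event_rate_hz: float   # events per sensor per second
    tasks: tuple           # processing paradigms exercised together

@dataclass
class InfrastructureSpec:
    edge_nodes: int
    cloud_nodes: int
    edge_cloud_latency_ms: float

@dataclass
class BenchmarkRun:
    """A benchmark run is defined by both specifications jointly."""
    workload: WorkloadSpec
    infrastructure: InfrastructureSpec

    def total_event_rate(self) -> float:
        # aggregate ingest rate the system under test must sustain
        return self.workload.sensors * self.workload.event_rate_hz

run = BenchmarkRun(
    WorkloadSpec(sensors=100, event_rate_hz=10.0,
                 tasks=("event processing", "ML inference", "batch analytics")),
    InfrastructureSpec(edge_nodes=8, cloud_nodes=2,
                       edge_cloud_latency_ms=45.0),
)
```

Because a run names both the workload and the infrastructure, two systems can only be compared on identical `BenchmarkRun` configurations, which is precisely what cloud-only benchmarks with fixed infrastructure cannot express.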