On Money as a Means of Coordination between Network Packets
In this work, we apply a common economic tool, namely money, to coordinate
network packets. In particular, we present a network economy, called
PacketEconomy, where each flow is modeled as a population of rational network
packets, and these packets can self-regulate their access to network resources
by mutually trading their positions in router queues. Every packet of the
economy has its price, and this price determines if and when the packet will
agree to buy or sell a better position. We consider a corresponding Markov
model of trade and show that there are Nash equilibria (NE) where queue
positions and money are exchanged directly between the network packets. This
simple approach, interestingly, delivers improvements even when fiat money is
used. We present theoretical arguments and experimental results to support our
claims.
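A minimal sketch (in Python) of the kind of position trade the abstract describes: packets in a router queue swap places when the buyer values the slot more than the price and the seller less. The Packet fields, the single trading price, and the benefit condition are illustrative assumptions, not the actual PacketEconomy model or its Markov dynamics.

```python
class Packet:
    """A network packet with a money balance and a per-slot delay cost (illustrative)."""
    def __init__(self, pid, money, delay_cost):
        self.pid = pid
        self.money = money            # fiat money held by the packet
        self.delay_cost = delay_cost  # value the packet puts on moving up one slot

def trade_step(queue, price):
    """One pass over a router queue: the packet behind buys the slot in front of it
    whenever both sides strictly benefit at the given price and it can afford it."""
    for i in range(len(queue) - 1):
        front, back = queue[i], queue[i + 1]
        if back.delay_cost > price > front.delay_cost and back.money >= price:
            back.money -= price
            front.money += price
            queue[i], queue[i + 1] = back, front  # positions are exchanged for money

# Tiny usage example with made-up numbers.
queue = [Packet(0, 10, 1.0), Packet(1, 10, 5.0), Packet(2, 10, 3.0)]
trade_step(queue, price=2.0)
print([p.pid for p in queue])  # packets that value time more drift toward the head
```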
Federated Learning for 5G Base Station Traffic Forecasting
Mobile traffic prediction is of great importance for enabling 5G
mobile networks to perform smart and efficient infrastructure planning and
management. However, available data are limited to base station logging
information. Hence, training methods for generating high-quality predictions
that can generalize to new observations on different parties are in demand.
Traditional approaches require collecting measurements from different base
stations and sending them to a central entity, followed by performing machine
learning operations using the received data. The dissemination of local
observations raises privacy, confidentiality, and performance concerns,
hindering the applicability of machine learning techniques. Various distributed
learning methods have been proposed to address this issue, but their
application to traffic prediction has yet to be explored. In this work, we
study the effectiveness of federated learning applied to raw base station
aggregated LTE data for time-series forecasting. We evaluate one-step
predictions using five different neural network architectures trained in a
federated setting on non-iid data. The presented algorithms have been submitted
to the Global Federated Traffic Prediction for 5G and Beyond Challenge. Our
results show that the learning architectures adapted to the federated setting
achieve prediction error equivalent to that of the centralized setting, that
pre-processing techniques on base stations lead to higher forecasting accuracy,
and that state-of-the-art aggregators do not outperform simple approaches.
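A hedged sketch of the federated setup the abstract describes, using PyTorch, a toy MLP forecaster, synthetic per-station series, and plain FedAvg aggregation; the model, window length, and hyperparameters are placeholders rather than the architectures submitted to the challenge.

```python
import torch
import torch.nn as nn

class MLP(nn.Module):
    """One-step-ahead forecaster over a fixed history window (illustrative)."""
    def __init__(self, window=8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(window, 16), nn.ReLU(), nn.Linear(16, 1))
    def forward(self, x):
        return self.net(x)

def local_train(global_model, series, window=8, epochs=2, lr=1e-2):
    """Train a copy of the global model on one base station's traffic series."""
    model = MLP(window)
    model.load_state_dict(global_model.state_dict())
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    X = torch.stack([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:].unsqueeze(1)
    for _ in range(epochs):
        opt.zero_grad()
        nn.functional.mse_loss(model(X), y).backward()
        opt.step()
    return model.state_dict(), len(X)

def fed_avg(updates):
    """FedAvg: weighted average of client state dicts by local sample count."""
    total = sum(n for _, n in updates)
    return {k: sum(sd[k] * (n / total) for sd, n in updates) for k in updates[0][0]}

# Synthetic, non-iid per-station series stand in for the real LTE measurements.
stations = [torch.randn(120) * (i + 1) for i in range(3)]
global_model = MLP()
for _ in range(5):  # federated rounds
    global_model.load_state_dict(fed_avg([local_train(global_model, s) for s in stations]))
```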
Federated Learning for Early Dropout Prediction on Healthy Ageing Applications
The provision of social care applications is crucial for improving the quality
of life of elderly people and enables operators to deliver early interventions.
Accurate prediction of user dropout in healthy ageing applications is
essential, since dropout is directly related to individual health
status. Machine Learning (ML) algorithms have enabled highly accurate
predictions, outperforming traditional statistical methods that struggle to
cope with individual patterns. However, ML requires a substantial amount of
data for training, which is challenging due to the presence of personally
identifiable information (PII) and the fragmentation imposed by regulations. In
this paper, we present a federated machine learning (FML) approach that
minimizes privacy concerns and enables distributed training, without
transferring individual data. We employ collaborative training by considering
individuals and organizations under FML, which models both cross-device and
cross-silo learning scenarios. Our approach is evaluated on a real-world
dataset with non-independent and identically distributed (non-iid) data among
clients, class imbalance and label ambiguity. Our results show that data
selection and class imbalance handling techniques significantly improve the
predictive accuracy of models trained under FML, achieving predictive
performance comparable or superior to that of traditional ML models.
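A minimal sketch of combining per-client class imbalance handling with federated averaging, in PyTorch; the tiny logistic model, the inverse-frequency positive-class weighting, and the synthetic clients are illustrative assumptions, not the exact pipeline or dataset used in the paper.

```python
import torch
import torch.nn as nn

class DropoutClassifier(nn.Module):
    """Tiny logistic model over a handful of engagement features (illustrative)."""
    def __init__(self, n_features=4):
        super().__init__()
        self.linear = nn.Linear(n_features, 1)
    def forward(self, x):
        return self.linear(x)

def local_update(global_model, X, y, epochs=3, lr=0.1):
    """One client: weight the positive (dropout) class by its inverse frequency."""
    model = DropoutClassifier(X.shape[1])
    model.load_state_dict(global_model.state_dict())
    pos_weight = (y == 0).sum().float() / (y == 1).sum().clamp(min=1).float()
    loss_fn = nn.BCEWithLogitsLoss(pos_weight=pos_weight)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(X).squeeze(1), y.float()).backward()
        opt.step()
    return model.state_dict(), len(y)

def aggregate(updates):
    """FedAvg-style weighted averaging of client state dicts."""
    total = sum(n for _, n in updates)
    return {k: sum(sd[k] * n / total for sd, n in updates) for k in updates[0][0]}

# Synthetic clients with different class balances stand in for non-iid real data.
clients = [(torch.randn(50, 4), (torch.rand(50) < p).long()) for p in (0.1, 0.3, 0.5)]
global_model = DropoutClassifier()
for _ in range(5):  # federated rounds
    global_model.load_state_dict(aggregate([local_update(global_model, X, y) for X, y in clients]))
```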
Intelligent Client Selection for Federated Learning using Cellular Automata
Federated Learning (FL) has emerged as a promising solution for
privacy enhancement and latency minimization in various real-world
applications, such as transportation, communications, and healthcare. FL
endeavors to bring Machine Learning (ML) down to the edge by harnessing data
from millions of devices and IoT sensors, thus enabling rapid responses to
dynamic environments and yielding highly personalized results. However, the
increasing number of sensors across diverse applications poses challenges in
terms of communication and resource allocation, hindering the participation of
all devices in the federated process and prompting the need for effective FL
client selection. To address this issue, we propose Cellular Automaton-based
Client Selection (CA-CS), a novel client selection algorithm, which leverages
Cellular Automata (CA) as models to effectively capture spatio-temporal changes
in a fast-evolving environment. CA-CS considers the computational resources and
communication capacity of each participating client, while also accounting for
inter-client interactions between neighbors during the client selection
process, enabling intelligent client selection for online FL processes on data
streams that closely resemble real-world scenarios. In this paper, we present a
thorough evaluation of the proposed CA-CS algorithm using MNIST and CIFAR-10
datasets, while making a direct comparison against a uniformly random client
selection scheme. Our results demonstrate that CA-CS achieves comparable
accuracy to the random selection approach, while effectively avoiding
high-latency clients.
Comment: 18th IEEE International Workshop on Cellular Nanoscale Networks and their Applications
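A rough, assumption-laden sketch of cellular-automaton-style client selection: clients sit on a lattice, and a cell is selected when its own compute, link capacity, and latency pass thresholds and its neighborhood is not already saturated with selected clients. The grid layout, thresholds, and update rule below are placeholders, not the actual CA-CS rules.

```python
import numpy as np

rng = np.random.default_rng(0)
GRID = 8  # clients arranged on an 8x8 lattice (one cell per client)

# Per-client state: available compute, link capacity, and observed latency.
compute = rng.uniform(0, 1, (GRID, GRID))
capacity = rng.uniform(0, 1, (GRID, GRID))
latency = rng.uniform(0, 1, (GRID, GRID))

def neighborhood_load(selected):
    """Fraction of selected clients in each cell's Moore neighborhood (self included)."""
    load = np.zeros_like(selected, dtype=float)
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            load += np.roll(np.roll(selected, dx, axis=0), dy, axis=1)
    return load / 9.0

def ca_select(rounds=3):
    """Iteratively select clients with good resources, low latency, and
    a neighborhood that is not already crowded with selected clients."""
    selected = np.zeros((GRID, GRID), dtype=float)
    for _ in range(rounds):
        load = neighborhood_load(selected)
        selected = ((compute > 0.4) & (capacity > 0.4) &
                    (latency < 0.7) & (load < 0.5)).astype(float)
    return selected

print(int(ca_select().sum()), "of", GRID * GRID, "clients selected")
```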
Towards Energy-Aware Federated Traffic Prediction for Cellular Networks
Cellular traffic prediction is a crucial activity for optimizing
fifth-generation (5G) networks and beyond, as accurate forecasting is essential
for intelligent network design, resource allocation and anomaly mitigation.
Although machine learning (ML) is a promising approach to effectively predict
network traffic, the centralization of massive data in a single data center
raises issues regarding confidentiality, privacy and data transfer demands. To
address these challenges, federated learning (FL) emerges as an appealing ML
training framework which offers highly accurate predictions through parallel
distributed computations. However, the environmental impact of these methods is
often overlooked, which calls into question their sustainability. In this
paper, we address the trade-off between accuracy and energy consumption in FL
by proposing a novel sustainability indicator that allows assessing the
feasibility of ML models. Then, we comprehensively evaluate state-of-the-art
deep learning (DL) architectures in a federated scenario using real-world
measurements from base station (BS) sites in the area of Barcelona, Spain. Our
findings indicate that larger ML models achieve marginally improved performance
but have a significant environmental impact in terms of carbon footprint, which
makes them impractical for real-world applications.
Comment: International Symposium on Federated Learning Technologies and Applications (FLTA), 202
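The abstract does not spell out the sustainability indicator, so the snippet below is a purely hypothetical illustration of one way to fold prediction error and estimated training energy into a single feasibility score, just to make the accuracy-versus-footprint trade-off concrete.

```python
def sustainability_score(mae, energy_kwh, carbon_intensity=0.25,
                         error_weight=0.5, carbon_weight=0.5):
    """Hypothetical indicator: lower is better.
    mae              -- forecasting error of the model (e.g. MAE on a test set)
    energy_kwh       -- estimated energy consumed by federated training
    carbon_intensity -- assumed kg CO2 emitted per kWh of the local grid
    """
    carbon_kg = energy_kwh * carbon_intensity
    return error_weight * mae + carbon_weight * carbon_kg

# Illustrative comparison: a large model with marginally lower error can still
# score worse once its training energy footprint is taken into account.
small = sustainability_score(mae=0.120, energy_kwh=2.0)
large = sustainability_score(mae=0.115, energy_kwh=9.0)
print(f"small model: {small:.3f}  large model: {large:.3f}")
```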
Approximation schemes for scheduling and covering on unrelated machines
We examine the problem of assigning n independent jobs to m unrelated parallel machines, so that each job is processed without interruption on one of the machines, and at any time, every machine processes at most one job. We focus on the case where m is a fixed constant, and present a new rounding approach that yields approximation schemes for multi-objective minimum makespan scheduling with a fixed number of linear cost constraints. The same approach gives approximation schemes for covering problems like maximizing the minimum load on any machine, and for assigning specific or equal loads to the machines.
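To make the two objectives concrete, here is a brute-force toy example on a tiny unrelated-machines instance (this is not the rounding-based approximation scheme itself, only an exhaustive check that is feasible when n and m are very small): minimize the makespan, and, for the covering variant, maximize the minimum machine load.

```python
from itertools import product

# p[j][i]: processing time of job j on machine i (unrelated machines, m fixed).
p = [[3, 5], [2, 4], [7, 1], [4, 4]]
n, m = len(p), len(p[0])

def loads(assignment):
    """Machine loads induced by assigning job j to machine assignment[j]."""
    load = [0] * m
    for j, i in enumerate(assignment):
        load[i] += p[j][i]
    return load

# Exhaustive search over all m**n assignments, only viable for toy instances;
# the paper's schemes instead use a rounding approach to reach (1+eps)-accuracy
# in polynomial time when m is a fixed constant.
best_makespan = min(product(range(m), repeat=n), key=lambda a: max(loads(a)))
best_cover = max(product(range(m), repeat=n), key=lambda a: min(loads(a)))

print("min makespan:", max(loads(best_makespan)), "via", best_makespan)
print("max min-load:", min(loads(best_cover)), "via", best_cover)
```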