Federated learning (FL) has emerged as a privacy solution for collaborative
distributed learning where clients train AI models directly on their devices
instead of sharing their data with a centralized (potentially adversarial)
server. Although FL preserves local data privacy to some extent, it has been
shown that information about clients' data can still be inferred from model
updates. In recent years, various privacy-preserving schemes have been
developed to address this privacy leakage. However, they often provide privacy
at the expense of model performance or system efficiency, and balancing these trade-offs is a crucial challenge in implementing FL schemes. In this
manuscript, we propose a Privacy-Preserving Federated Learning (PPFL) framework
built on the synergy of matrix encryption and system immersion tools from
control theory. The idea is to immerse the learning algorithm, Stochastic
Gradient Descent (SGD), into a higher-dimensional system (the so-called target
system) and to design the dynamics of the target system so that: (i) the
trajectories of the original SGD are immersed/embedded in its trajectories; and
(ii) it learns on encrypted data (here, random matrix encryption). Matrix encryption is
reformulated at the server as a random change of coordinates that maps original
parameters to a higher-dimensional parameter space and enforces that the target
SGD converges to an encrypted version of the original SGD's optimal solution. The
server decrypts the aggregated model using the left inverse of the immersion
map. We show that our algorithm provides the same accuracy and convergence
rate as standard FL, at a negligible computational cost, while revealing no
information about the clients' data.
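The change-of-coordinates idea can be sketched in a few lines of NumPy. This is a minimal illustration with toy names (M, theta, z, and the quadratic loss are all hypothetical choices, not from the manuscript), and it only demonstrates the immersion/decryption relation: a random full-column-rank matrix M lifts the parameters, gradient iterations run in the lifted space, and the left inverse of M recovers the original trajectory. In the actual scheme the clients evaluate gradients on encrypted data rather than by decrypting locally.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, eta, steps = 3, 5, 0.1, 200

# Random immersion map (full column rank almost surely) and its left inverse,
# playing the roles of encryption and decryption keys.
M = rng.standard_normal((m, n))
M_inv = np.linalg.pinv(M)              # left inverse: M_inv @ M == I_n

theta_star = np.array([1.0, -2.0, 0.5])     # optimum of a toy quadratic loss
grad = lambda th: th - theta_star           # gradient of 0.5 * ||th - theta_star||^2

theta = np.zeros(n)                    # original iterate
z = M @ theta                          # lifted (encrypted) iterate

for _ in range(steps):
    theta = theta - eta * grad(theta)          # original dynamics
    z = z - eta * M @ grad(M_inv @ z)          # target (immersed) dynamics

# The decrypted target trajectory coincides with the original one, and the
# target converges to the encrypted optimum M @ theta_star.
assert np.allclose(M_inv @ z, theta)
assert np.allclose(z, M @ theta_star, atol=1e-6)
```

Because M_inv @ M is the identity, decrypting the target iterate reproduces the original SGD iterate exactly at every step, which is the sense in which the original trajectories are embedded in the target system's trajectories.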