Recent years have witnessed the rapid growth of federated learning (FL), an
emerging privacy-aware machine learning paradigm that allows collaborative
learning over isolated datasets distributed across multiple participants. The
salient feature of FL is that the participants can keep their private datasets
local and only share model updates. Very recently, some research efforts have
been initiated to explore the applicability of FL for matrix factorization
(MF), a prevalent method used in modern recommendation systems and services. It
has been shown that sharing the gradient updates in federated MF entails
privacy risks on revealing users' personal ratings, posing a demand for
protecting the shared gradients. Prior art is limited in that it incurs
notable accuracy loss or relies on heavyweight cryptosystems under a weak
threat model. In this paper, we propose VPFedMF, a new design aimed at
privacy-preserving and verifiable federated MF. VPFedMF guarantees the
confidentiality of individual gradient updates in federated MF through
lightweight secure aggregation. Moreover, VPFedMF newly supports correctness
verification of the aggregation results produced by the coordinating server in
federated MF. Experiments on a real-world movie rating
dataset demonstrate the practical performance of VPFedMF in terms of
computation, communication, and accuracy.
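To make the secure-aggregation idea concrete, the following is a minimal sketch (not VPFedMF's actual protocol) of pairwise-masked aggregation of quantized MF gradient updates: each pair of clients derives a shared mask that one adds and the other subtracts, so the server learns only the sum of the updates. All names, the modulus, and the toy gradients are illustrative assumptions.

```python
# Hedged sketch of pairwise-masked secure aggregation (illustrative only;
# not the VPFedMF protocol). Gradients are assumed quantized to integers mod P.
import random

P = 2**31 - 1  # illustrative modulus for masking

def mask(update, me, peers, seeds):
    # For each client pair (i, j), both derive the same pseudorandom stream
    # from a shared seed; the smaller-id client adds it, the other subtracts,
    # so all masks cancel when the server sums the masked updates.
    out = list(update)
    for j in peers:
        if j == me:
            continue
        rng = random.Random(seeds[frozenset((me, j))])
        for k in range(len(out)):
            r = rng.randrange(P)
            out[k] = (out[k] + (r if me < j else -r)) % P
    return out

# Three clients with toy quantized gradient vectors.
updates = {0: [5, 1, 9], 1: [2, 7, 3], 2: [4, 4, 4]}
clients = list(updates)
seeds = {frozenset((i, j)): random.randrange(2**32)
         for i in clients for j in clients if i < j}

masked = {i: mask(updates[i], i, clients, seeds) for i in clients}
# The server sums the masked updates; pairwise masks cancel, revealing
# only the aggregate, never any individual client's gradient.
agg = [sum(m[k] for m in masked.values()) % P for k in range(3)]
print(agg)  # equals the plain sum [11, 12, 16]
```

The server never sees an unmasked update, yet the recovered aggregate matches the plaintext sum; a real deployment would add dropout handling and the verifiability layer the paper describes.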