Performance Optimization for Federated Person Re-identification via Benchmark Analysis
Federated learning is a privacy-preserving machine learning technique that
learns a shared model across decentralized clients. It can alleviate the
privacy concerns of person re-identification, an important computer vision
task. In this work, we apply federated learning to person re-identification
(FedReID) and optimize its performance under the statistical heterogeneity of
real-world scenarios. We first construct a new benchmark to investigate the
performance of FedReID. This benchmark consists of (1) nine datasets with
different volumes sourced from different domains to simulate the heterogeneous
situation in reality, (2) two federated scenarios, and (3) an enhanced
federated algorithm for FedReID. The benchmark analysis shows that the
client-edge-cloud architecture, represented by the federated-by-dataset
scenario, outperforms the client-server architecture in FedReID. It
also reveals the bottlenecks of FedReID under the real-world scenario,
including poor performance of large datasets caused by unbalanced weights in
model aggregation and challenges in convergence. Then we propose two
optimization methods: (1) To address the unbalanced weight problem, we propose
a new method to dynamically change the weights according to the scale of model
changes in clients in each training round; (2) To facilitate convergence, we
adopt knowledge distillation to refine the server model with knowledge
generated from client models on a public dataset. Experimental results
demonstrate that our strategies can achieve much better convergence with
superior performance on all datasets. We believe that our work will inspire the
community to further explore the implementation of federated learning on more
computer vision tasks in real-world scenarios.
Comment: ACMMM'2
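The first optimization above, re-weighting model aggregation by the scale of each client's model change, can be sketched as follows. This is a minimal illustration only: the function name and the exact weighting formula (L2 norm of the parameter change, normalized across clients) are assumptions, not the paper's implementation.

```python
import numpy as np

def aggregate_by_model_change(global_weights, client_weights):
    """Aggregate client models, weighting each client by the scale of its
    model change this round instead of by its dataset size.

    global_weights: dict mapping parameter name -> np.ndarray (server model)
    client_weights: list of such dicts (one updated model per client)
    """
    # Scale of change per client: L2 distance from the current global model.
    changes = [
        np.linalg.norm(np.concatenate(
            [(cw[k] - global_weights[k]).ravel() for k in global_weights]
        ))
        for cw in client_weights
    ]
    total = sum(changes)
    coeffs = [c / total for c in changes]  # normalized aggregation weights
    # Weighted average of client models with the dynamic coefficients.
    return {
        k: sum(coeff * cw[k] for coeff, cw in zip(coeffs, client_weights))
        for k in global_weights
    }
```

With two clients whose updates differ in magnitude, the client that changed more contributes proportionally more to the aggregated model, which is the behavior the abstract attributes to the dynamic-weight method.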
Latent Relational Metric Learning via Memory-based Attention for Collaborative Ranking
This paper proposes a new neural architecture for collaborative ranking with
implicit feedback. Our model, LRML (\textit{Latent Relational Metric Learning})
is a novel metric learning approach for recommendation. More specifically,
instead of simple push-pull mechanisms between user and item pairs, we propose
to learn latent relations that describe each user-item interaction. This helps
to alleviate the potential geometric inflexibility of existing metric learning
approaches. This enables not only better performance but also greater
modeling capability, allowing our model to scale to a larger number of
interactions. To do so, we employ an augmented memory module and learn
to attend over these memory blocks to construct latent relations. The
memory-based attention module is controlled by the user-item interaction,
making the learned relation vector specific to each user-item pair. Hence, this
can be interpreted as learning an exclusive and optimal relational translation
for each user-item interaction. The proposed architecture demonstrates
state-of-the-art performance across multiple recommendation benchmarks. LRML
outperforms other metric learning models in terms of Hits@10 and
nDCG@10 on large datasets such as Netflix and MovieLens20M. Moreover,
qualitative studies also provide evidence that our proposed model is able
to infer and encode explicit sentiment, temporal and attribute information
despite being only trained on implicit feedback. As such, this ascertains the
ability of LRML to uncover hidden relational structure within implicit
datasets.
Comment: WWW 201
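The memory-based attention described above can be sketched as a scoring function: the user-item interaction forms a key, attention over memory slots builds a per-pair relation vector, and the score is the negative translation distance between user, relation, and item. The function and parameter names are illustrative assumptions, not the paper's code.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

def lrml_score(u, v, memory_keys, memory_slots):
    """Score a user-item pair with a memory-attended relation vector.

    u, v:          user and item embeddings, shape (d,)
    memory_keys:   attention keys, shape (m, d)
    memory_slots:  memory slot vectors, shape (m, d)
    """
    s = u * v                           # joint user-item key (Hadamard product)
    attn = softmax(memory_keys @ s)     # attend over the m memory slots
    r = attn @ memory_slots             # latent relation for this pair
    return -np.linalg.norm(u + r - v)   # translation-based score: -||u + r - v||
```

Because the attention weights depend on the specific user-item pair, each interaction receives its own relation vector, matching the abstract's description of an exclusive relational translation per pair.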
Navigating the Audit Landscape: A Framework for Developing Transparent and Auditable XR
Funding Information: The Compliant & Accountable Systems Group acknowledges the financial support of UK Research & Innovation (grants EP/P024394/1, EP/R033501/1, ES/T006315/1), The Alan Turing Institute, and Microsoft, through the Microsoft Cloud Computing Research Centre