130 research outputs found
Securing Cyber-Physical Social Interactions on Wrist-worn Devices
Since ancient Greece, handshaking has been commonly practiced between two people as a friendly gesture to express trust and respect, or to form a mutual agreement. In this article, we show that such physical contact can be used to bootstrap secure cyber contact between the smart devices worn by users. The key observation is that during handshaking, although they belong to two different users, the two hands involved in the shaking are often rigidly connected and therefore exhibit very similar motion patterns. We propose a novel key generation system that harvests motion data during handshaking from wrist-worn smart devices such as smartwatches or fitness bands, and exploits the matching motion patterns to generate symmetric keys on both sides. The generated keys can then be used to establish a secure communication channel for exchanging data between the devices. This provides a much more natural and user-friendly alternative for many applications, e.g., exchanging/sharing contact details, friending on social networks, or even making payments, since it involves no extra bespoke hardware and does not require the users to perform pre-defined gestures. We implement the proposed key generation system on off-the-shelf smartwatches, and extensive evaluation shows that it can reliably generate 128-bit symmetric keys after only around 1 s of handshaking (with a success rate >99%), and is resilient to different types of attacks, including impersonation-based mimicking attacks, impersonation-based passive attacks, and eavesdropping attacks. Specifically, for real-time mimicking attacks, the Equal Error Rate (EER) in our experiments is only 1.6% on average. We also show that the proposed key generation system can be extremely lightweight and is able to run in situ on resource-constrained smartwatches without incurring excessive resource consumption.
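The core idea, two rigidly coupled wrists observing near-identical motion and quantizing it into matching key bits, can be sketched as follows. This is a minimal illustrative scheme (threshold the windowed signal mean against its median), not the paper's actual encoder; signal shape, noise level, and window size are assumptions.

```python
import numpy as np

def signal_to_bits(samples, window=4):
    """Quantize a motion signal into key bits: emit 1 when a window's
    mean exceeds the signal's overall median, else 0.
    (Illustrative scheme only -- not the paper's actual encoder.)"""
    samples = np.asarray(samples, dtype=float)
    median = np.median(samples)
    bits = []
    for i in range(0, len(samples) - window + 1, window):
        bits.append(1 if samples[i:i + window].mean() > median else 0)
    return bits

# Two wrists rigidly coupled during a handshake see near-identical
# motion; simulate this with a shared signal plus small sensor noise.
rng = np.random.default_rng(0)
shared = 2.0 * np.sin(np.linspace(0, 8 * np.pi, 256))
key_a = signal_to_bits(shared + rng.normal(0, 0.05, 256))
key_b = signal_to_bits(shared + rng.normal(0, 0.05, 256))
agreement = np.mean(np.array(key_a) == np.array(key_b))
```

In a full protocol, the small fraction of disagreeing bits would be reconciled (e.g., via error-correcting codes) before deriving the final 128-bit key.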
Continuous Input Embedding Size Search For Recommender Systems
Latent factor models are the most popular backbones for today's recommender
systems owing to their prominent performance. Latent factor models represent
users and items as real-valued embedding vectors for pairwise similarity
computation, and all embeddings are traditionally restricted to a uniform size
that is relatively large (e.g., 256-dimensional). With the exponentially
expanding user base and item catalog in contemporary e-commerce, this design is
admittedly becoming memory-inefficient. To facilitate lightweight
recommendation, reinforcement learning (RL) has recently opened up
opportunities for identifying varying embedding sizes for different
users/items. However, challenged by search efficiency and the difficulty of
learning an optimal RL policy, existing RL-based methods are restricted to
highly discrete, predefined embedding size choices. This leads to a largely
overlooked potential
of introducing finer granularity into embedding sizes to obtain better
recommendation effectiveness under a given memory budget. In this paper, we
propose continuous input embedding size search (CIESS), a novel RL-based method
that operates on a continuous search space with arbitrary embedding sizes to
choose from. In CIESS, we further present an innovative random walk-based
exploration strategy to allow the RL policy to efficiently explore more
candidate embedding sizes and converge to a better decision. CIESS is also
model-agnostic and hence generalizable to a variety of latent factor RSs,
whilst experiments on two real-world datasets have shown state-of-the-art
performance of CIESS under different memory budgets when paired with three
popular recommendation models.
Comment: To appear in SIGIR'2
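The random walk-based exploration described above can be sketched as follows: starting from the policy's current size choice, take a few bounded random steps over the continuous interval of feasible embedding sizes to propose nearby candidates. This is an illustrative sketch under assumed bounds and step size, not the actual CIESS actor/critic machinery.

```python
import random

def random_walk_explore(size, step=8.0, d_min=1.0, d_max=256.0,
                        n_candidates=5):
    """Random walk over a continuous range of embedding sizes: perturb
    the current size by a bounded uniform step and clamp to the feasible
    interval. (Sketch only; bounds and step size are assumptions.)"""
    candidates = []
    current = size
    for _ in range(n_candidates):
        current = min(d_max, max(d_min, current + random.uniform(-step, step)))
        candidates.append(current)
    return candidates

random.seed(42)
cands = random_walk_explore(128.0)
```

The RL policy would then evaluate these candidates and move toward the one with the best estimated reward under the memory budget.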
HeteFedRec: Federated Recommender Systems with Model Heterogeneity
Owing to their privacy-preserving nature, federated recommender systems
(FedRecs) have garnered increasing interest in the realm of on-device
recommender systems. However, most existing FedRecs only allow participating
clients to collaboratively train a recommendation model of the same public
parameter size. Training a model of the same size for all clients can lead to
suboptimal performance since clients possess varying resources. For example,
clients with limited training data may prefer to train a smaller recommendation
model to avoid excessive data consumption, while clients with sufficient data
would benefit from a larger model to achieve higher recommendation accuracy. To
address the above challenge, this paper introduces HeteFedRec, a novel FedRec
framework that enables the assignment of personalized model sizes to
participants. In HeteFedRec, we present a heterogeneous recommendation model
aggregation strategy, including a unified dual-task learning mechanism and a
dimensional decorrelation regularization, to allow knowledge aggregation among
recommender models of different sizes. Additionally, a relation-based ensemble
knowledge distillation method is proposed to effectively distil knowledge from
heterogeneous item embeddings. Extensive experiments conducted on three
real-world recommendation datasets demonstrate the effectiveness and efficiency
of HeteFedRec in training federated recommender systems under heterogeneous
settings.
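The central aggregation problem, combining recommendation models of different sizes, can be sketched as follows: pad each client's embedding matrix to the widest dimension and average each coordinate only over the clients that actually cover it. This is a simplified sketch of heterogeneous aggregation; HeteFedRec's actual strategy additionally uses dual-task learning, decorrelation regularization, and relation-based distillation.

```python
import numpy as np

def aggregate_hetero(embeddings):
    """Aggregate item-embedding matrices of different widths:
    zero-pad to the largest dimension, then average each coordinate
    over the clients that contribute to it. (Simplified sketch of
    size-heterogeneous federated aggregation.)"""
    n_items = embeddings[0].shape[0]
    d_max = max(e.shape[1] for e in embeddings)
    total = np.zeros((n_items, d_max))
    count = np.zeros(d_max)
    for e in embeddings:
        d = e.shape[1]
        total[:, :d] += e
        count[:d] += 1
    return total / count  # each column divided by its coverage count

# Three clients with 3 items each; two train 4-dim models, one a 2-dim model.
clients = [np.ones((3, 2)), 2 * np.ones((3, 4)), 3 * np.ones((3, 4))]
global_emb = aggregate_hetero(clients)
```

Note that the leading dimensions are shared by all clients, while the trailing dimensions are learned only by the larger models, which is why per-column averaging is needed.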
Lightweight Embeddings for Graph Collaborative Filtering
Graph neural networks (GNNs) are currently one of the most performant
collaborative filtering methods. Meanwhile, owing to the use of an embedding
table to represent each user/item as a distinct vector, GNN-based recommenders
have inherited the long-standing defect of parameter inefficiency. As a common
practice for scalable embeddings, parameter sharing enables the use of fewer
embedding vectors (i.e., meta-embeddings). When assigning meta-embeddings, most
existing methods use a heuristically designed, predefined mapping from each
user's/item's ID to the corresponding meta-embedding indexes, thus simplifying
the optimization problem into learning only the meta-embeddings. However, in
the context of GNN-based collaborative filtering, such a fixed mapping omits
the semantic correlations between entities that are evident in the user-item
interaction graph, leading to suboptimal recommendation performance. To this
end, we propose Lightweight Embeddings for Graph Collaborative Filtering
(LEGCF), a parameter-efficient embedding framework dedicated to GNN-based
recommenders. LEGCF innovatively introduces an assignment matrix as an extra
learnable component on top of meta-embeddings. To jointly optimize these two
heavily entangled components, aside from learning the meta-embeddings by
minimizing the recommendation loss, LEGCF further performs efficient assignment
update by enforcing a novel semantic similarity constraint and finding its
closed-form solution based on matrix pseudo-inverse. The meta-embeddings and
assignment matrix are alternately updated, where the latter is sparsified on
the fly to ensure negligible storage overhead. Extensive experiments on three
benchmark datasets have verified LEGCF's smallest trade-off between size and
performance, with consistent accuracy gain over state-of-the-art baselines. The
codebase of LEGCF is available at https://github.com/xurong-liang/LEGCF.
Comment: Accepted by SIGIR '2
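The closed-form assignment update can be sketched as follows: solve the least-squares problem min_A ||A M - E||_F via the Moore-Penrose pseudo-inverse of the meta-embedding matrix M, then sparsify A on the fly by keeping only the top-k weights per row. The target matrix here is random for illustration; in LEGCF it is derived from the semantic similarity constraint on the interaction graph, and k is an assumption.

```python
import numpy as np

def update_assignment(target, meta, k=2):
    """Closed-form assignment update: A = target @ pinv(meta) solves
    min_A ||A @ meta - target||_F; then zero all but the k
    largest-magnitude entries per row to keep A sparse.
    (Sketch of the idea, not the full LEGCF procedure.)"""
    A = target @ np.linalg.pinv(meta)            # least-squares solution
    # Sparsify on the fly: indices of all but the k largest per row.
    idx = np.argsort(np.abs(A), axis=1)[:, :-k]
    np.put_along_axis(A, idx, 0.0, axis=1)
    return A

rng = np.random.default_rng(1)
meta = rng.normal(size=(8, 16))      # 8 meta-embeddings of dimension 16
target = rng.normal(size=(100, 16))  # desired embeddings for 100 entities
A = update_assignment(target, meta, k=2)
```

Each entity's embedding is then reconstructed as its sparse row of A times the meta-embedding table, so storage stays near k indices per entity.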
Self-Supervised Hypergraph Convolutional Networks for Session-based Recommendation
Session-based recommendation (SBR) focuses on next-item prediction at a
certain time point. As user profiles are generally not available in this
scenario, capturing the user intent lying in the item transitions plays a
pivotal role. Recent graph neural networks (GNNs) based SBR methods regard the
item transitions as pairwise relations, which neglect the complex high-order
information among items. Hypergraph provides a natural way to capture
beyond-pairwise relations, while its potential for SBR has remained unexplored.
In this paper, we fill this gap by modeling session-based data as a hypergraph
and then propose a hypergraph convolutional network to improve SBR. Moreover,
to enhance hypergraph modeling, we devise another graph convolutional network
which is based on the line graph of the hypergraph and then integrate
self-supervised learning into the training of the networks by maximizing mutual
information between the session representations learned via the two networks,
serving as an auxiliary task to improve the recommendation task. Since the two
networks are both based on the hypergraph and can be seen as two channels for
hypergraph modeling, we name our model DHCN (Dual Channel
Hypergraph Convolutional Networks). Extensive experiments on three benchmark
datasets demonstrate the superiority of our model over the SOTA methods, and
the results validate the effectiveness of hypergraph modeling and the
self-supervised task. The implementation of our model is available at
https://github.com/xiaxin1998/DHCN
Comment: 9 pages, 4 figures, accepted by AAAI'2
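The dual-channel view, a hypergraph where each session is a hyperedge over its items, plus the line graph connecting sessions that share items, can be sketched as follows. This illustrates only the graph construction; DHCN's actual convolution weights and normalization are not reproduced here.

```python
import numpy as np

def hypergraph_and_line_graph(sessions, n_items):
    """Build the item-session incidence matrix H (items x hyperedges)
    and the line-graph adjacency over sessions, connecting two sessions
    iff they share at least one item. (Illustrative of the dual-channel
    view; not DHCN's exact convolution.)"""
    H = np.zeros((n_items, len(sessions)))
    for j, session in enumerate(sessions):
        for item in session:
            H[item, j] = 1.0
    overlap = H.T @ H                        # shared-item counts
    line_adj = (overlap > 0).astype(float)   # binarize overlaps
    np.fill_diagonal(line_adj, 0.0)          # drop self-loops
    return H, line_adj

# Three sessions over 5 items; sessions 0 and 1 share item 2.
sessions = [[0, 1, 2], [2, 3], [4]]
H, L = hypergraph_and_line_graph(sessions, n_items=5)
```

One channel then convolves item representations through H, the other convolves session representations through L, and the two session views are contrasted via mutual-information maximization.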