Trustworthy Recommender Systems
Recommender systems (RSs) aim to help users effectively retrieve items of
interest from a large catalogue. For a long time, researchers and practitioners
focused on developing accurate RSs. Recent years, however, have witnessed an
increasing number of threats to RSs, coming from attacks, system- and
user-generated noise, and system bias. As a result, it has become clear that a
strict focus on RS accuracy is limiting and that research must consider other
important factors, e.g., trustworthiness. For end users, a trustworthy RS (TRS)
should not only be accurate but also transparent, unbiased, and fair, as well as
robust to noise and attacks. These observations have led to a paradigm shift in
RS research: from accuracy-oriented RSs to TRSs. However, researchers lack a
systematic overview and discussion of the literature in this novel and
fast-developing field. To this end, this paper provides an overview of TRSs,
including a discussion of their motivation and basic concepts, a presentation of
the challenges in building TRSs, and a perspective on future directions in this
area. We also provide a novel conceptual framework to support the construction
of TRSs.
Learning Informative Representation for Fairness-aware Multivariate Time-series Forecasting: A Group-based Perspective
Performance unfairness among variables widely exists in multivariate time
series (MTS) forecasting models, since such models may attend to, and thus be
biased toward, certain (advantaged) variables. Addressing this unfairness is
important for attending equally to all variables and for avoiding harmful model
biases and risks. However, fair MTS forecasting is challenging and has been
little studied in the literature. To bridge this significant gap, we formulate
the fairness modeling problem as learning informative representations that
attend to both advantaged and disadvantaged variables. Accordingly, we propose
a novel framework, named
FairFor, for fairness-aware MTS forecasting. FairFor is based on adversarial
learning to generate both group-independent and group-relevant representations
for the downstream forecasting. The framework first leverages a spectral
relaxation of the K-means objective to infer variable correlations and thus to
group variables. Then, it utilizes a filtering & fusion component to filter the
group-relevant information and generate group-independent representations via
orthogonality regularization. Together, the group-independent and
group-relevant representations form highly informative representations,
facilitating knowledge sharing from advantaged to disadvantaged variables to
guarantee fairness. Extensive experiments on four public datasets demonstrate
the effectiveness of our proposed FairFor for fair forecasting and significant
performance improvement.
Comment: 13 pages, 5 figures, accepted by IEEE Transactions on Knowledge and
Data Engineering (TKDE).
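The grouping step described above (inferring variable correlations and grouping variables via a spectral relaxation of the K-means objective) can be sketched as follows. The function name, the correlation-based affinity, and the post-hoc K-means rounding with farthest-point initialization are illustrative assumptions, not the paper's code.

```python
import numpy as np

def group_variables(series, n_groups):
    """Group MTS variables by correlation via a spectral relaxation of K-means.

    series: array of shape (timesteps, n_vars). The function name and the
    post-hoc K-means rounding are illustrative assumptions, not FairFor's code.
    """
    # Variable-wise affinity: absolute correlation between every pair of variables.
    affinity = np.abs(np.corrcoef(series.T))            # (n_vars, n_vars)
    # Spectral relaxation: the top-k eigenvectors of the affinity matrix relax
    # the discrete K-means cluster-indicator matrix (eigh sorts ascending).
    _, eigvecs = np.linalg.eigh(affinity)
    embedding = eigvecs[:, -n_groups:]                  # (n_vars, n_groups)
    # Recover discrete groups with plain K-means on the spectral embedding,
    # using farthest-point initialization to keep the initial centers apart.
    centers = [embedding[0]]
    for _ in range(1, n_groups):
        dists = np.min([((embedding - c) ** 2).sum(-1) for c in centers], axis=0)
        centers.append(embedding[np.argmax(dists)])
    centers = np.array(centers)
    for _ in range(50):
        labels = np.argmin(((embedding[:, None] - centers) ** 2).sum(-1), axis=1)
        for g in range(n_groups):
            if np.any(labels == g):
                centers[g] = embedding[labels == g].mean(axis=0)
    return labels
```

On strongly correlated variable groups, the rows of the spectral embedding separate cleanly, so the K-means rounding recovers the intended groups.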
Design and Comprehensive Analysis of a Noise-Tolerant ZNN Model With Limited-Time Convergence for Time-Dependent Nonlinear Minimization
Zeroing neural network (ZNN) is a powerful tool for addressing the mathematical and optimization problems that arise broadly in science and engineering. Convergence and robustness are always co-pursued in ZNN design. However, no related work on ZNNs for time-dependent nonlinear minimization simultaneously achieves limited-time convergence and inherent noise suppression. In this article, to satisfy these two requirements, a limited-time robust neural network (LTRNN) is devised and presented to solve time-dependent nonlinear minimization under various external disturbances. Different from previous ZNN models for this problem, which offer either limited-time convergence or noise suppression, the proposed LTRNN model possesses both characteristics simultaneously. Besides, rigorous theoretical analyses are given to prove the superior performance of the LTRNN model when adopted to solve time-dependent nonlinear minimization under external disturbances. Comparative results also substantiate the effectiveness and advantages of LTRNN in solving a time-dependent nonlinear minimization problem.
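For intuition, a basic ZNN for time-dependent minimization can be sketched on a toy problem. The sketch below uses the classical linear activation, so it only gives exponential (not limited-time) convergence and no built-in noise suppression; the toy objective f(x, t) = (x - sin t)^2 and all names are our own illustrative choices, not the LTRNN design.

```python
import math

def znn_track(gamma=10.0, dt=1e-3, T=5.0, x0=3.0):
    """Basic ZNN for time-dependent minimization of f(x, t) = (x - sin t)^2.

    Drives the error e(t) = df/dx toward zero via the design formula
    de/dt = -gamma * e, integrated with forward Euler. This is the classical
    linear-activation ZNN, not the paper's LTRNN.
    """
    x, t = x0, 0.0
    for _ in range(int(T / dt)):
        grad = 2.0 * (x - math.sin(t))     # e(t) = df/dx, driven to zero
        dgrad_dt = -2.0 * math.cos(t)      # partial e / partial t
        hess = 2.0                         # d^2 f / dx^2
        # de/dt = hess * x_dot + dgrad_dt = -gamma * e  =>  solve for x_dot.
        x_dot = (-gamma * grad - dgrad_dt) / hess
        x += dt * x_dot                    # forward-Euler integration
        t += dt
    return x, math.sin(t)                  # state and the moving minimizer
```

With the time-derivative term included, the state tracks the moving minimizer sin t with only a small discretization error, rather than lagging behind it as a plain gradient flow would.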
Frequency-domain MLPs are More Effective Learners in Time Series Forecasting
Time series forecasting plays a key role in many industries, including finance,
traffic, energy, and healthcare. While the existing literature has designed
many sophisticated architectures based on RNNs, GNNs, or Transformers, another
line of approaches based on multi-layer perceptrons (MLPs) has been proposed,
featuring simple structure, low complexity, and superior performance. However,
most MLP-based forecasting methods suffer from point-wise mappings and an
information bottleneck, which largely hinder forecasting performance. To
overcome this problem, we explore a novel direction: applying MLPs in the
frequency domain for time series forecasting. We investigate the learned
patterns of frequency-domain MLPs and discover two inherent characteristics
benefiting forecasting: (i) global view: the frequency spectrum gives MLPs a
complete view of signals, making global dependencies easier to learn; and (ii)
energy compaction: frequency-domain MLPs concentrate on a smaller, key part of
the frequency components, where signal energy is compacted. We then propose
FreTS, a simple yet effective architecture built upon
Frequency-domain MLPs for Time Series forecasting. FreTS mainly involves two
stages: (i) Domain Conversion, which transforms time-domain signals into
complex numbers in the frequency domain; and (ii) Frequency Learning, which
applies our redesigned MLPs to learn the real and imaginary parts of the
frequency components. Operating these stages at both inter-series and
intra-series scales further contributes to channel-wise and time-wise
dependency learning. Extensive experiments on 13 real-world benchmarks
(including 7 for short-term forecasting and 6 for long-term forecasting)
demonstrate our consistent superiority over state-of-the-art methods.
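A minimal sketch of the two stages, assuming a single layer: a real FFT for Domain Conversion, then a complex-style linear map applied to the real and imaginary parts for Frequency Learning. The weight shapes, function name, and the final inverse FFT are our assumptions, not the paper's exact architecture.

```python
import numpy as np

def freq_mlp_layer(x, w_r, w_i, b_r, b_i):
    """One frequency-domain MLP layer in the spirit of FreTS (a sketch).

    x: real time-domain signal of shape (n,). Weights w_r, w_i are real
    matrices of shape (n//2 + 1, n//2 + 1); biases are vectors of that width.
    """
    # Stage (i) -- Domain Conversion: real FFT to complex frequency coefficients.
    spec = np.fft.rfft(x)                       # shape (n//2 + 1,)
    re, im = spec.real, spec.imag
    # Stage (ii) -- Frequency Learning: complex-style multiplication
    # (re + i*im)(W_r + i*W_i), realized with two real weight matrices so the
    # real and imaginary parts are learned jointly.
    out_re = re @ w_r - im @ w_i + b_r
    out_im = re @ w_i + im @ w_r + b_i
    # Back to the time domain for the downstream forecast.
    return np.fft.irfft(out_re + 1j * out_im, n=len(x))
```

With identity real weights and zero imaginary weights and biases, the layer reduces to rfft followed by irfft and returns the input unchanged, which is a convenient sanity check of the conversion stage.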
Equivariant Contrastive Learning for Sequential Recommendation
Contrastive learning (CL) benefits the training of sequential recommendation
models with informative self-supervision signals. Existing solutions apply
general sequential data augmentation strategies to generate positive pairs and
encourage their representations to be invariant. However, due to the inherent
properties of user behavior sequences, some augmentation strategies, such as
item substitution, can change the underlying user intent. Learning
representations that are indiscriminately invariant to all augmentation
strategies might be suboptimal. Therefore, we propose Equivariant Contrastive
Learning for
Sequential Recommendation (ECL-SR), which endows SR models with great
discriminative power, making the learned user behavior representations
sensitive to invasive augmentations (e.g., item substitution) and insensitive
to mild augmentations (e.g., feature-level dropout masking). In detail, we use
a conditional discriminator to capture differences in behavior due to item
substitution, which encourages the user behavior encoder to be equivariant to
invasive augmentations. Comprehensive experiments on four benchmark datasets
show that the proposed ECL-SR framework achieves competitive performance
compared to state-of-the-art SR models. The source code is available at
https://github.com/Tokkiu/ECL.
Comment: Accepted by RecSys 202
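The training signal described above can be sketched as two loss terms: one pulling a sequence representation toward its mildly augmented view, and one letting a conditional discriminator separate item-substituted views from originals so the encoder stays equivariant to invasive augmentations. The loss forms and all names below are simplified illustrative assumptions, not the ECL-SR implementation.

```python
import numpy as np

def ecl_losses(z_anchor, z_mild, disc_logit_invasive, disc_logit_original):
    """Sketch of the two objectives combined in ECL-SR-style training.

    z_anchor / z_mild: representations of a sequence and its mildly augmented
    view (e.g. feature-level dropout), pulled together by an invariance term.
    disc_logit_*: a conditional discriminator's logits on the item-substituted
    view vs. the original, pushed apart by an equivariance term.
    """
    # Invariance: cosine alignment of the two mild views (a simplified
    # stand-in for the usual batch-wise InfoNCE contrastive loss).
    cos = (z_anchor @ z_mild) / (np.linalg.norm(z_anchor) * np.linalg.norm(z_mild))
    invariance_loss = 1.0 - cos

    # Equivariance: binary cross-entropy asking the discriminator to tell the
    # item-substituted view (label 1) apart from the original (label 0).
    def bce(logit, label):
        p = 1.0 / (1.0 + np.exp(-logit))
        return -(label * np.log(p) + (1.0 - label) * np.log(1.0 - p))

    equivariance_loss = bce(disc_logit_invasive, 1.0) + bce(disc_logit_original, 0.0)
    return invariance_loss, equivariance_loss
```

Both losses vanish exactly when the mild views align and the discriminator confidently separates substituted sequences from originals, matching the intended sensitivity pattern.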
A Survey on Deep Learning based Time Series Analysis with Frequency Transformation
Recently, frequency transformation (FT) has been increasingly incorporated
into deep learning models to significantly enhance state-of-the-art accuracy
and efficiency in time series analysis. The advantages of FT, such as high
efficiency and a global view, have been rapidly explored and exploited in
various time series tasks and applications, demonstrating the promising
potential of FT as a new deep learning paradigm for time series analysis.
Despite the growing attention and the proliferation of research in this
emerging field, there is currently a lack of a systematic review and in-depth
analysis of deep learning-based time series models with FT. It also remains
unclear why FT can enhance time series analysis and what its limitations in the
field are. To address these gaps, we present a comprehensive review that
systematically investigates and summarizes the recent research advancements in
deep learning-based time series analysis with FT. Specifically, we explore the
primary approaches used in current models that incorporate FT, the types of
neural networks that leverage FT, and the representative FT-equipped models in
deep time series analysis. We propose a novel taxonomy to categorize the
existing methods in this field, providing a structured overview of the diverse
approaches employed in incorporating FT into deep learning models for time
series analysis. Finally, we highlight the advantages and limitations of FT for
time series modeling and identify potential future research directions that can
further contribute to the time series analysis community.
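The energy-compaction advantage surveyed above can be made concrete with a small sketch: for smooth seasonal series, a handful of frequency components carries almost all of the spectral energy, whereas for white noise it does not. The helper name is ours, for illustration only.

```python
import numpy as np

def energy_compaction_ratio(x, k):
    """Fraction of a signal's spectral energy in its k largest frequency bins.

    A high ratio for small k means the signal is well compacted in the
    frequency domain, so a model can focus on a few key components.
    """
    spec = np.fft.rfft(x)                 # one-sided spectrum of a real signal
    energy = np.abs(spec) ** 2            # per-bin spectral energy
    topk = np.sort(energy)[::-1][:k]      # the k largest components
    return topk.sum() / energy.sum()
```

A two-tone seasonal signal concentrates essentially all of its energy in two bins, while white noise spreads energy across the whole spectrum.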