Calibration of Time-Series Forecasting Transformers: Detecting and Adapting Context-Driven Distribution Shift
Recent years have witnessed the success of introducing Transformers to time
series forecasting. From a data generation perspective, we illustrate that
existing Transformers are susceptible to distribution shifts driven by temporal
contexts, whether observed or unobserved. Such context-driven distribution
shift (CDS) introduces biases in predictions within specific contexts and poses
challenges for the conventional training paradigm. In this paper, we introduce a
universal calibration methodology for the detection and adaptation of CDS with
a trained Transformer model. To this end, we propose a novel CDS detector,
termed the "residual-based CDS detector" or "Reconditionor", which quantifies
the model's vulnerability to CDS by evaluating the mutual information between
prediction residuals and their corresponding contexts. A high Reconditionor
score indicates a severe susceptibility, thereby necessitating model
adaptation. In this circumstance, we put forth a straightforward yet potent
adapter framework for model calibration, termed the "sample-level
contextualized adapter" or "SOLID". This framework involves the curation of a
contextually similar dataset to the provided test sample and the subsequent
fine-tuning of the model's prediction layer with a limited number of steps. Our
theoretical analysis demonstrates that this adaptation strategy is able to
achieve an optimal equilibrium between bias and variance. Notably, our proposed
Reconditionor and SOLID are model-agnostic and readily adaptable to a wide
range of Transformers. Extensive experiments show that SOLID consistently
enhances the performance of current SOTA Transformers on real-world datasets,
especially in cases with substantial CDS detected by the proposed
Reconditionor, thus validating the effectiveness of the calibration approach.
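As an illustration of the residual-based detection idea, a plug-in estimate of the mutual information between binned prediction residuals and discrete contexts could look like the sketch below. This is a minimal, hedged interpretation of the abstract, not the paper's exact estimator; the function name and binning scheme are assumptions.

```python
import numpy as np

def reconditionor_score(residuals, contexts, n_bins=10):
    """Estimate I(residual; context) via histogram binning.

    A higher score suggests prediction errors depend on the context,
    i.e. the model is susceptible to context-driven distribution shift.
    Illustrative sketch only -- not the paper's exact estimator.
    """
    # Discretize residuals into equal-width bins.
    edges = np.histogram_bin_edges(residuals, bins=n_bins)
    r_bins = np.digitize(residuals, edges)        # indices in 1..n_bins+1
    contexts = np.asarray(contexts)
    uniq = np.unique(contexts)
    ctx_ids = {c: i for i, c in enumerate(uniq)}
    # Empirical joint distribution over (residual bin, context).
    joint = np.zeros((n_bins + 2, len(uniq)))
    for r, c in zip(r_bins, contexts):
        joint[r, ctx_ids[c]] += 1
    p_joint = joint / joint.sum()
    p_r = p_joint.sum(axis=1, keepdims=True)      # marginal over residual bins
    p_c = p_joint.sum(axis=0, keepdims=True)      # marginal over contexts
    nz = p_joint > 0
    # Plug-in mutual information (KL of joint from product of marginals).
    return float(np.sum(p_joint[nz] * np.log(p_joint[nz] / (p_r @ p_c)[nz])))

# Residuals whose bias tracks the context should yield a higher score
# than context-independent residuals.
rng = np.random.default_rng(0)
ctx = rng.integers(0, 2, size=5000)
biased = rng.normal(loc=ctx * 2.0 - 1.0, scale=0.5)  # context-dependent bias
unbiased = rng.normal(size=5000)                     # no dependence on context
print(reconditionor_score(biased, ctx) > reconditionor_score(unbiased, ctx))
```

In this toy setup the context-biased residuals produce a markedly higher score, which would flag the model for adaptation under the abstract's thresholding logic.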
ULTRA-DP: Unifying Graph Pre-training with Multi-task Graph Dual Prompt
Recent research has demonstrated the efficacy of pre-training graph neural
networks (GNNs) to capture the transferable graph semantics and enhance the
performance of various downstream tasks. However, the semantic knowledge
learned from pretext tasks might be unrelated to the downstream task, leading
to a semantic gap that limits the application of graph pre-training. To reduce
this gap, traditional approaches propose hybrid pre-training to combine various
pretext tasks together in a multi-task learning fashion and learn multi-grained
knowledge; however, such approaches cannot distinguish between tasks, so the
transferable task-specific knowledge of each task is distorted by the others. Moreover, most
GNNs cannot distinguish nodes located in different parts of the graph, making
them fail to learn position-specific knowledge, leading to suboptimal
performance. In this work, inspired by prompt-based tuning in natural
language processing, we propose a unified framework for graph hybrid
pre-training which injects the task identification and position identification
into GNNs through a prompt mechanism, namely multi-task graph dual prompt
(ULTRA-DP). Based on this framework, we propose a prompt-based transferability
test to find the most relevant pretext task in order to reduce the semantic
gap. To implement the hybrid pre-training tasks, beyond the classical edge
prediction task (node-node level), we further propose a novel pre-training
paradigm based on a group of k-nearest neighbors (node-group level). Combining
the two across different scales comprehensively expresses more structural
semantics and derives richer multi-grained knowledge. Extensive
experiments show that our proposed ULTRA-DP can significantly enhance the
performance of hybrid pre-training methods and generalizes to other
pre-training tasks and backbone architectures.
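The node-group pretext idea can be sketched as follows: for each node, form a group from its k nearest neighbors in some embedding space and relate the node to that group's aggregate representation, alongside the classical node-node edge prediction. This is a hedged illustration assuming a dense embedding matrix; the function names and the cosine-similarity scoring are not from the paper.

```python
import numpy as np

def knn_group_targets(emb, k=3):
    """Return, for each node, the indices of its k nearest neighbors.

    `emb` is a hypothetical (n, d) node-embedding matrix; the k-NN group
    plays the role of the node-group-level pretext target.
    """
    # Pairwise squared Euclidean distances between all node embeddings.
    d2 = ((emb[:, None, :] - emb[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)          # exclude each node from its own group
    return np.argsort(d2, axis=1)[:, :k]  # (n, k) neighbor indices

def node_group_score(emb, groups):
    """Cosine similarity between each node and its group's mean embedding."""
    centroids = emb[groups].mean(axis=1)  # (n, d) group centroids
    num = (emb * centroids).sum(-1)
    den = np.linalg.norm(emb, axis=-1) * np.linalg.norm(centroids, axis=-1)
    return num / den

emb = np.random.default_rng(1).normal(size=(8, 4))
groups = knn_group_targets(emb, k=3)
scores = node_group_score(emb, groups)    # one node-group score per node
```

A pre-training loss would then push these node-group scores higher for true k-NN groups than for randomly sampled groups, complementing the node-node edge prediction signal at a coarser scale.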
Learning transferrable parameters for long-tailed sequential user behavior modeling
National Research Foundation (NRF) Singapore under its AI Singapore Programme
Compositional coding for collaborative filtering
National Research Foundation (NRF) Singapore under its AI Singapore Programme
Regulatory network of GSK3-like kinases and their role in plant stress response
Glycogen synthase kinase 3 (GSK3) family members are evolutionarily conserved Ser/Thr protein kinases in mammals and plants. In plants, the GSK3s function as signaling hubs to integrate the perception and transduction of diverse signals required for plant development. Beyond their role in the regulation of plant growth and development, emerging research has shed light on their multilayered function in plant stress responses. Here we review recent advances in the regulatory network of GSK3s and the involvement of GSK3s in plant adaptation to various abiotic and biotic stresses. We also discuss the molecular mechanisms underlying how plants cope with environmental stresses through GSK3-hormone crosstalk, a pivotal biochemical pathway in plant stress responses. We believe that our overview of the versatile physiological functions of GSK3s and the underlying molecular mechanisms of GSK3s in plant stress responses will not only open further research on this important topic but also provide opportunities for developing stress-resilient crops through genetic engineering technology.
- …