PDT: Pretrained Dual Transformers for Time-aware Bipartite Graphs
Pre-training large models on the ever-growing body of user-generated content
is prevalent in many machine learning application categories. It is widely
recognized that learning contextual knowledge from datasets depicting
user-content interaction plays a vital role in downstream tasks. Despite
several studies attempting to learn contextual knowledge via pre-training
methods, finding an optimal training objective and strategy for this type of
task remains a challenging problem. In this work, we contend that there are two
distinct aspects of contextual knowledge, namely the user-side and the
content-side, for datasets where user-content interaction can be represented as
a bipartite graph. To learn contextual knowledge, we propose a pre-training
method that learns a bi-directional mapping between the spaces of the user-side
and the content-side. We formulate the training goal as a contrastive learning
task and propose a dual-Transformer architecture to encode the contextual
knowledge. We evaluate the proposed method on the recommendation task.
Empirical studies demonstrate that the proposed method outperforms all the
baselines with significant gains.
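A minimal sketch of the dual-encoder contrastive objective described above, in PyTorch. The module layout, mean pooling, and InfoNCE-style loss are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualTransformer(nn.Module):
    """Two encoders: one for the user side, one for the content side."""
    def __init__(self, d_model=128, nhead=4, num_layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        # nn.TransformerEncoder deep-copies the layer, so the two encoders
        # get independent weights.
        self.user_encoder = nn.TransformerEncoder(layer, num_layers)
        self.content_encoder = nn.TransformerEncoder(layer, num_layers)

    def forward(self, user_seq, content_seq):
        # Mean-pool token representations into one vector per side.
        u = self.user_encoder(user_seq).mean(dim=1)
        c = self.content_encoder(content_seq).mean(dim=1)
        return u, c

def contrastive_loss(u, c, temperature=0.07):
    # InfoNCE: matched (user, content) pairs sit on the diagonal as
    # positives; every other pairing in the batch is a negative.
    u = F.normalize(u, dim=-1)
    c = F.normalize(c, dim=-1)
    logits = u @ c.t() / temperature
    targets = torch.arange(u.size(0), device=u.device)
    # Symmetric loss learns the bi-directional user<->content mapping.
    return (F.cross_entropy(logits, targets)
            + F.cross_entropy(logits.t(), targets)) / 2
```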
Matrix Profile XXVII: A Novel Distance Measure for Comparing Long Time Series
The most useful data mining primitives are distance measures. With an
effective distance measure, it is possible to perform classification,
clustering, anomaly detection, segmentation, etc. For single-event time series,
Euclidean Distance and Dynamic Time Warping distance are known to be extremely
effective. However, for time series containing cyclical behaviors, the semantic
meaningfulness of such comparisons is less clear. For example, on two separate
days the telemetry from an athlete's workout routine might be very similar. On
the second day, the athlete may change the order of push-ups and squats, add
repetitions of pull-ups, or completely omit dumbbell curls. Any of these
minor changes would defeat existing time series distance measures. Some
bag-of-features methods have been proposed to address this problem, but we
argue that in many cases, similarity is intimately tied to the shapes of
subsequences within these longer time series. In such cases, summative features
will lack discrimination ability. In this work we introduce PRCIS, which stands
for Pattern Representation Comparison in Series. PRCIS is a distance measure
for long time series, which exploits recent progress in our ability to
summarize time series with dictionaries. We will demonstrate the utility of our
ideas on diverse tasks and datasets.
Comment: Accepted at IEEE ICKG 2022 (previously entitled IEEE ICBK). Abridged abstract as per arXiv's requirement.
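The core idea, comparing dictionary summaries rather than raw series, can be sketched as follows. The farthest-point selection of representatives and the symmetric nearest-neighbor aggregation are assumptions made for illustration, not the exact PRCIS procedure.

```python
import numpy as np

def build_dictionary(series, window=50, k=8, seed=0):
    # Summarize a long series by k representative z-normalized
    # subsequences, chosen greedily by farthest-point sampling.
    rng = np.random.default_rng(seed)
    subs = np.lib.stride_tricks.sliding_window_view(series, window)
    subs = (subs - subs.mean(axis=1, keepdims=True)) / (
        subs.std(axis=1, keepdims=True) + 1e-8)
    reps = [subs[rng.integers(len(subs))]]
    for _ in range(k - 1):
        # Distance from every subsequence to its nearest representative.
        d = np.min([np.linalg.norm(subs - r, axis=1) for r in reps], axis=0)
        reps.append(subs[np.argmax(d)])
    return np.stack(reps)

def dictionary_distance(a, b, window=50, k=8):
    # Compare the dictionaries, not the raw series, so the measure is
    # insensitive to reordered or repeated behaviors within each series.
    da, db = build_dictionary(a, window, k), build_dictionary(b, window, k)
    cross = np.linalg.norm(da[:, None, :] - db[None, :, :], axis=2)
    return (cross.min(axis=1).mean() + cross.min(axis=0).mean()) / 2
```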
FATA-Trans: Field And Time-Aware Transformer for Sequential Tabular Data
Sequential tabular data is one of the most commonly used data types in
real-world applications. Different from conventional tabular data, where rows
in a table are independent, sequential tabular data contains rich contextual
and sequential information, where some fields are dynamically changing over
time and others are static. Existing transformer-based approaches analyzing
sequential tabular data overlook the differences between dynamic and static
fields by replicating and filling static fields into each transformer input, and
ignore temporal information between rows, which leads to three major
disadvantages: (1) computational overhead, (2) artificially simplified data for
masked language modeling pre-training task that may yield less meaningful
representations, and (3) disregarding the temporal behavioral patterns implied
by time intervals. In this work, we propose FATA-Trans, a model with two field
transformers for modeling sequential tabular data, one processing static fields
and the other dynamic fields. FATA-Trans is field- and time-aware
for sequential tabular data. The field-type embedding in the method enables
FATA-Trans to capture differences between static and dynamic fields. The
time-aware position embedding exploits both order and time interval information
between rows, which helps the model detect underlying temporal behavior in a
sequence. Our experiments on three benchmark datasets demonstrate that the
learned representations from FATA-Trans consistently outperform
state-of-the-art solutions in the downstream tasks. We also present
visualization studies to highlight the insights captured by the learned
representations, enhancing our understanding of the underlying data. Our code
is available at https://github.com/zdy93/FATA-Trans.
Comment: This work is accepted by the ACM International Conference on Information and Knowledge Management (CIKM) 2023.
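A minimal sketch of the two embeddings the abstract highlights: a field-type embedding that separates static from dynamic fields, and a time-aware position embedding built from both row order and time gaps. The log-scale gap bucketing and the additive combination are assumptions; the authors' actual implementation is in the linked repository.

```python
import torch
import torch.nn as nn

class FieldTimeEmbedding(nn.Module):
    def __init__(self, d_model=64, max_len=512, num_gap_buckets=32):
        super().__init__()
        self.field_type = nn.Embedding(2, d_model)              # 0 = static, 1 = dynamic
        self.position = nn.Embedding(max_len, d_model)          # order of the row a token belongs to
        self.time_gap = nn.Embedding(num_gap_buckets, d_model)  # bucketed interval to the previous row

    def forward(self, field_emb, is_dynamic, row_index, timestamps):
        # field_emb:  (batch, tokens, d_model) value embeddings, one token per field
        # is_dynamic: (batch, tokens) long tensor, 1 if the field changes over time
        # row_index:  (batch, tokens) long tensor, which row each token came from
        # timestamps: (batch, tokens) event time (seconds) of that row
        gaps = timestamps.diff(dim=1, prepend=timestamps[:, :1]).clamp(min=0)
        # Log-scale buckets keep both short and long intervals distinguishable.
        buckets = torch.log1p(gaps.float()).long().clamp(
            max=self.time_gap.num_embeddings - 1)
        return (field_emb
                + self.field_type(is_dynamic)
                + self.position(row_index)
                + self.time_gap(buckets))
```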
Multitask Learning for Time Series Data with 2D Convolution
Multitask learning (MTL) aims to develop a unified model that can handle a
set of closely related tasks simultaneously. By optimizing the model across
multiple tasks, MTL generally surpasses its non-MTL counterparts in terms of
generalizability. Although MTL has been extensively researched in various
domains such as computer vision, natural language processing, and
recommendation systems, its application to time series data has received
limited attention. In this paper, we investigate the application of MTL to the
time series classification (TSC) problem. However, when we integrate the
state-of-the-art 1D convolution-based TSC model with MTL, the performance of
the TSC model actually deteriorates. By comparing the 1D convolution-based
models with the Dynamic Time Warping (DTW) distance function, it appears that
the underwhelming results stem from the limited expressive power of the 1D
convolutional layers. To overcome this challenge, we propose a novel design for
a 2D convolution-based model that enhances the model's expressiveness.
Leveraging this advantage, our proposed method outperforms competing approaches
on both the UCR Archive and an industrial transaction TSC dataset.
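One way to picture the added expressiveness of 2D convolution is to lift each 1D series into a two-dimensional representation before convolving. The pairwise-difference lifting below is an illustrative assumption, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class TwoDConvClassifier(nn.Module):
    def __init__(self, num_classes):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, num_classes),
        )

    def forward(self, x):
        # x: (batch, length). Pairwise differences give a (length, length)
        # image whose 2D patterns a kernel can match regardless of offset,
        # which a 1D kernel sliding over raw values cannot express.
        img = (x[:, :, None] - x[:, None, :]).unsqueeze(1)
        return self.net(img)
```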
Toward a Foundation Model for Time Series Data
A foundation model is a machine learning model trained on a large and diverse
set of data, typically using self-supervised learning-based pre-training
techniques, that can be adapted to various downstream tasks. However, current
research on time series pre-training has predominantly focused on models
trained exclusively on data from a single domain. As a result, these models
possess domain-specific knowledge that may not be easily transferable to time
series from other domains. In this paper, we aim to develop an effective time series foundation
model by leveraging unlabeled samples from multiple domains. To achieve this,
we repurposed the publicly available UCR Archive and evaluated four existing
self-supervised learning-based pre-training methods, along with a novel method,
on the datasets. We tested these methods using four popular neural network
architectures for time series to understand how the pre-training methods
interact with different network designs. Our experimental results show that
pre-training improves downstream classification tasks by enhancing the
convergence of the fine-tuning process. Furthermore, we found that the proposed
pre-training method, when combined with the Transformer model, outperforms the
alternatives.
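The pre-train-then-fine-tune recipe evaluated here can be sketched generically. The masked-reconstruction pretext task, the hypothetical `encoder` interface (mapping (batch, length, 1) inputs to (batch, length, d_model) outputs), and all hyperparameters are assumptions for illustration.

```python
import torch
import torch.nn as nn

def pretrain(encoder, series, d_model=64, steps=1000, mask_ratio=0.15):
    # Self-supervised pre-training: hide random time steps and train the
    # encoder to reconstruct them from context.
    head = nn.Linear(d_model, 1)
    params = list(encoder.parameters()) + list(head.parameters())
    opt = torch.optim.Adam(params, lr=1e-3)
    for _ in range(steps):
        x = series[torch.randint(len(series), (32,))]         # (32, length)
        mask = torch.rand_like(x) < mask_ratio
        inp = x.masked_fill(mask, 0.0).unsqueeze(-1)          # zero out masked steps
        z = encoder(inp)                                      # (32, length, d_model)
        loss = ((head(z).squeeze(-1) - x)[mask] ** 2).mean()  # masked steps only
        opt.zero_grad()
        loss.backward()
        opt.step()
    # The pre-trained encoder is then fine-tuned with a classification
    # head on the labeled downstream task.
    return encoder
```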