The Matrix of Lyric Transformation
Pentasyllabic poetry has been a focus of critical study since the appearance of the earliest works of Chinese literary criticism in the Six Dynasties period. Throughout the subsequent dynasties, traditional Chinese critics continued to examine pentasyllabic poetry as a leading poetic type and to compile various comprehensive anthologies of it. The Matrix of Lyric Transformation enriches this tradition, using modern analytical methods to explore issues of self-expression and to trace the early formal, thematic, and generic developments of this poetic form. Beginning with a discussion of the Yüeh-fu and ku-shih genres of the Han period, Cai Zong-qi introduces the analytical framework of modes from Western literary criticism to show how pentasyllabic poetry changed over time. He argues that changing practices of poetic composition effected a shift from a dramatic mode typical of folk compositions to a narrative mode and finally to lyric and symbolic modes developed in literati circles.
Product-based Neural Networks for User Response Prediction
Predicting user responses, such as clicks and conversions, is of great
importance and has found its usage in many Web applications including
recommender systems, web search and online advertising. The data in those
applications is mostly categorical and contains multiple fields; a typical
representation is to transform it into a high-dimensional sparse binary feature
representation via one-hot encoding. Faced with this extreme sparsity,
traditional models are limited to mining shallow patterns from the
data, i.e., low-order feature combinations. Deep models like deep neural
networks, on the other hand, cannot be directly applied for the
high-dimensional input because of the huge feature space. In this paper, we
propose Product-based Neural Networks (PNN) with an embedding layer to learn
a distributed representation of the categorical data, a product layer to
capture interactive patterns between inter-field categories, and further fully
connected layers to explore high-order feature interactions. Our experimental
results on two large-scale real-world ad click datasets demonstrate that PNNs
consistently outperform the state-of-the-art models on various metrics.
Comment: 6 pages, 5 figures, ICDM201
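The abstract above describes a three-stage architecture: an embedding layer for the one-hot categorical fields, a product layer for inter-field interactions, and fully connected layers on top. Below is a minimal sketch of one forward pass in that style; all names, sizes, and the choice of pairwise inner products are illustrative assumptions, not the paper's exact specification.

```python
import numpy as np

# Hypothetical sizes: 3 categorical fields, a vocabulary of 10
# category ids, and 4-dimensional embeddings.
rng = np.random.default_rng(0)
n_fields, vocab, emb_dim = 3, 10, 4

# Embedding layer: each one-hot category id indexes a dense vector,
# giving a distributed representation of the sparse input.
E = rng.normal(size=(vocab, emb_dim))
x = np.array([2, 7, 5])          # one active category id per field
embs = E[x]                      # shape (n_fields, emb_dim)

# Product layer (inner-product variant): pairwise inner products
# between field embeddings capture inter-field interactions.
pairs = [embs[i] @ embs[j]
         for i in range(n_fields) for j in range(i + 1, n_fields)]
z = np.concatenate([embs.reshape(-1), np.array(pairs)])

# Fully connected layer + sigmoid for a click probability.
W = rng.normal(size=z.shape[0])
p = 1.0 / (1.0 + np.exp(-(z @ W)))
print(float(p))
```

The vector fed to the dense layers concatenates the raw embeddings with the interaction terms, so here it has 3 × 4 + 3 = 15 entries; the actual PNN also includes an outer-product variant not shown in this sketch.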
LLMs are Good Action Recognizers
Skeleton-based action recognition has attracted considerable research attention.
Recently, to build an accurate skeleton-based action recognizer, a variety of
works have been proposed. Among them, some works use large model architectures
as backbones of their recognizers to boost the skeleton data representation
capability, while some other works pre-train their recognizers on external data
to enrich their knowledge. In this work, we observe that large language models,
which have been extensively used in various natural language processing tasks,
generally possess both large model architectures and rich implicit knowledge.
Motivated by this, we propose a novel LLM-AR framework, in which we investigate
treating the Large Language Model as an Action Recognizer. In our framework, we
propose a linguistic projection process to project each input action signal
(i.e., each skeleton sequence) into its ``sentence format'' (i.e., an ``action
sentence''). Moreover, we also equip our framework with several designs
to further facilitate this linguistic projection process. Extensive experiments
demonstrate the efficacy of our proposed framework.
Comment: CVPR 202
LMC: Large Model Collaboration with Cross-assessment for Training-Free Open-Set Object Recognition
Open-set object recognition aims to identify if an object is from a class
that has been encountered during training or not. To perform open-set object
recognition accurately, a key challenge is how to reduce the reliance on
spurious-discriminative features. In this paper, motivated by the observation
that different large models pre-trained through different paradigms can possess
rich yet distinct implicit knowledge, we propose a novel framework named Large
Model Collaboration (LMC) to tackle the above challenge by collaborating
different off-the-shelf large models in a training-free manner. Moreover, we
also equip the proposed framework with several novel designs to
effectively extract implicit knowledge from large models. Extensive experiments
demonstrate the efficacy of our proposed framework. Code is available
\href{https://github.com/Harryqu123/LMC}{here}.
Comment: NeurIPS 202