10,664 research outputs found

    Multi-Relational Contrastive Learning for Recommendation

    Personalized recommender systems play a crucial role in capturing users' evolving preferences over time to provide accurate and effective recommendations on various online platforms. However, many recommendation models rely on a single type of behavior learning, which limits their ability to represent the complex relationships between users and items in real-life scenarios. In such situations, users interact with items in multiple ways, including clicking, tagging as favorite, reviewing, and purchasing. To address this issue, we propose the Relation-aware Contrastive Learning (RCL) framework, which effectively models dynamic interaction heterogeneity. The RCL model incorporates a multi-relational graph encoder that captures short-term preference heterogeneity while preserving the dedicated relation semantics for different types of user-item interactions. Moreover, we design a dynamic cross-relational memory network that enables the RCL model to capture users' long-term multi-behavior preferences and the underlying evolving cross-type behavior dependencies over time. To obtain robust and informative user representations with both commonality and diversity across multi-behavior interactions, we introduce a multi-relational contrastive learning paradigm with heterogeneous short- and long-term interest modeling. Our extensive experimental studies on several real-world datasets demonstrate the superiority of the RCL recommender system over various state-of-the-art baselines in terms of recommendation accuracy and effectiveness.
    Comment: This paper has been published as a full paper at RecSys 202
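    The contrastive objective described above can be made concrete with a small sketch. The following is a minimal, illustrative InfoNCE-style loss, not the authors' released code: it treats a user's relation-specific embedding and the same user's fused multi-behavior embedding as a positive pair, and other users in the batch as negatives. All tensor names and shapes are assumptions.

```python
import torch
import torch.nn.functional as F

def multi_relational_contrastive_loss(relation_embs, fused_embs, temperature=0.2):
    """Illustrative InfoNCE over relation-specific vs. fused user embeddings.

    relation_embs: [num_relations, batch, dim]  per-relation user embeddings
    fused_embs:    [batch, dim]                  aggregated cross-relation embeddings
    """
    fused = F.normalize(fused_embs, dim=-1)                        # [B, D]
    loss = 0.0
    for r in range(relation_embs.size(0)):
        rel = F.normalize(relation_embs[r], dim=-1)                # [B, D]
        logits = rel @ fused.t() / temperature                     # [B, B] similarities
        labels = torch.arange(rel.size(0), device=rel.device)      # positives on the diagonal
        loss = loss + F.cross_entropy(logits, labels)
    return loss / relation_embs.size(0)

# Example: 3 behavior types (click/favorite/purchase), 8 users, 64-dim embeddings.
rel = torch.randn(3, 8, 64)
fused = torch.randn(8, 64)
print(multi_relational_contrastive_loss(rel, fused))
```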

    Leveraging Multi-level Dependency of Relational Sequences for Social Spammer Detection

    Much recent research has shed light on relation-dependent but content-independent frameworks for social spammer detection, largely because the relations among users are difficult for spammers to alter when they attempt to conceal their malicious intent. Our study investigates the spammer detection problem in the context of multi-relational social networks and attempts to fully exploit sequences of heterogeneous relations to enhance detection accuracy. Specifically, we present the Multi-level Dependency Model (MDM). MDM exploits the long-term dependencies hidden in users' relational sequences along with their short-term dependencies. Moreover, MDM considers short-term relational sequences from both the individual level and the union level, because short-term sequences come in multiple types. Experimental results on a real-world multi-relational social network demonstrate the effectiveness of the proposed MDM for multi-relational social spammer detection.
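    As a rough illustration of the long- vs. short-term dependency idea, not the MDM implementation, the sketch below encodes a user's full relational sequence with one recurrent encoder and only the most recent relations with another, then classifies the user. Every layer, window size, and variable name is an assumption.

```python
import torch
import torch.nn as nn

class RelationalSequenceClassifier(nn.Module):
    def __init__(self, num_relation_types, dim=32, short_window=5):
        super().__init__()
        self.embed = nn.Embedding(num_relation_types, dim)
        self.long_term = nn.GRU(dim, dim, batch_first=True)   # dependency over the full sequence
        self.short_term = nn.GRU(dim, dim, batch_first=True)  # dependency over recent relations only
        self.short_window = short_window
        self.classifier = nn.Linear(2 * dim, 2)               # spammer vs. legitimate

    def forward(self, relation_ids):                          # [batch, seq_len] of relation-type ids
        x = self.embed(relation_ids)
        _, h_long = self.long_term(x)                          # [1, batch, dim]
        _, h_short = self.short_term(x[:, -self.short_window:])
        h = torch.cat([h_long[0], h_short[0]], dim=-1)
        return self.classifier(h)

model = RelationalSequenceClassifier(num_relation_types=6)
logits = model(torch.randint(0, 6, (4, 20)))  # 4 users, 20 interactions each
print(logits.shape)                           # torch.Size([4, 2])
```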

    A Survey on Knowledge Graphs: Representation, Acquisition and Applications

    Human knowledge provides a formal understanding of the world. Knowledge graphs, which represent structural relations between entities, have become an increasingly popular research direction toward cognition and human-level intelligence. In this survey, we provide a comprehensive review of knowledge graphs covering research topics on 1) knowledge graph representation learning, 2) knowledge acquisition and completion, 3) temporal knowledge graphs, and 4) knowledge-aware applications, and we summarize recent breakthroughs and perspective directions to facilitate future research. We propose a full-view categorization and new taxonomies on these topics. Knowledge graph embedding is organized along four aspects: representation space, scoring function, encoding models, and auxiliary information. For knowledge acquisition, especially knowledge graph completion, embedding methods, path inference, and logical rule reasoning are reviewed. We further explore several emerging topics, including meta relational learning, commonsense reasoning, and temporal knowledge graphs. To facilitate future research on knowledge graphs, we also provide a curated collection of datasets and open-source libraries for different tasks. Finally, we offer a thorough outlook on several promising research directions.
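    As a concrete instance of the "representation space / scoring function" axes the survey uses, the snippet below implements the classic TransE score, which models a relation as a translation in the embedding space so that plausible triples (h, r, t) satisfy h + r ≈ t. It is included purely as a familiar example, not as the survey's own code.

```python
import torch
import torch.nn as nn

class TransE(nn.Module):
    def __init__(self, num_entities, num_relations, dim=50):
        super().__init__()
        self.ent = nn.Embedding(num_entities, dim)
        self.rel = nn.Embedding(num_relations, dim)

    def score(self, head, relation, tail):
        # Higher score = more plausible triple: f(h, r, t) = -||h + r - t||_2
        h, r, t = self.ent(head), self.rel(relation), self.ent(tail)
        return -torch.norm(h + r - t, p=2, dim=-1)

kg = TransE(num_entities=1000, num_relations=20)
print(kg.score(torch.tensor([1]), torch.tensor([3]), torch.tensor([42])))
```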

    Serialized Knowledge Enhanced Multi-objective Person-job Matching Recommendation in a High Mobility Job Market

    In a high-mobility job market, the historical sequence information accumulated from persons and jobs brings both opportunities and challenges to person-job matching recommendation, where latent preferences may significantly determine the success of a match. Moreover, sparse labels further limit the learning performance of recommendation methods. To this end, we propose a novel serialized knowledge-enhanced multi-objective person-job matching recommendation method, SMP-JM. The key idea is to design a serialized multi-objective method spanning "intention-delivery-review", which effectively addresses the sparsity problem through the transmission of information and the serialization constraints between objectives. Specifically, we design various attention modules, including self-attention, cross-attention, and an orthogonal multi-head attention, to identify correlations between diversified features. Furthermore, a multi-granularity convolutional filtering module is designed to extract personal latent preferences from historical sequential behaviors. Finally, experimental results on a real-world dataset validate the performance of SMP-JM over the baseline methods.
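    The sketch below hedges at the kind of attention machinery the abstract mentions, and is not the SMP-JM code: person tokens attend over job tokens via standard multi-head cross-attention, and a simple penalty pushes per-head blocks of the output projection toward orthogonality so different heads capture distinct preference aspects. The head-splitting of the projection matrix and all sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

dim, heads = 64, 4
cross_attn = nn.MultiheadAttention(embed_dim=dim, num_heads=heads, batch_first=True)

person_seq = torch.randn(2, 10, dim)   # 2 persons, 10 historical behavior tokens
job_seq = torch.randn(2, 6, dim)       # 2 jobs, 6 feature tokens

# Person tokens (queries) attend over job tokens (keys/values).
matched, _ = cross_attn(query=person_seq, key=job_seq, value=job_seq)

# Orthogonality penalty: split the output projection into per-head blocks
# (illustrative split) and push each block's Gram matrix toward the identity.
W = cross_attn.out_proj.weight.t().reshape(heads, dim // heads, dim)
gram = torch.einsum('hij,hkj->hik', W, W)              # [heads, head_dim, head_dim]
eye = torch.eye(dim // heads).expand_as(gram)
ortho_penalty = ((gram - eye) ** 2).mean()
print(matched.shape, ortho_penalty.item())
```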

    Methods for Capturing Important Tokens and Designing Sequence Encoders for Token-Level Classification Models

    Ph.D. dissertation -- Seoul National University Graduate School, Department of Electrical and Computer Engineering, August 2022. Advisor: Kyomin Jung.
    With the development of the internet, a great volume of data has accumulated over time, so handling long sequential data has become a core problem in web services. For example, streaming services such as YouTube, Netflix, and TikTok use a user's viewing-history sequence to recommend videos the user may like. Such systems represent each viewed video as an item or token in order to predict which item or token will be viewed next. These tasks are defined as Token-Level Classification (TLC) tasks: given a sequence of tokens, TLC identifies the labels of tokens in the required portion of the sequence. TLC can thus be applied to various recommendation systems. In addition, most Natural Language Processing (NLP) tasks can also be formulated as TLC problems; for example, a sentence and each word within it can be expressed as a token-level sequence, and information extraction in particular can be cast as a TLC task that decides whether a specific word span in a sentence constitutes the target information. TLC datasets are characteristically very sparse and long, so extracting only the important information from a sequence and encoding it properly is a crucial problem. In this thesis, we propose methods that address two research questions of TLC in recommendation systems and information extraction: 1) how to capture important tokens from a token sequence, and 2) how to encode a token sequence into a model. As deep neural networks (DNNs) have shown outstanding performance in various web application tasks, we design RNN- and Transformer-based models for recommendation systems and information extraction. For recommendation systems, we design a BART-based system that captures the important portions of a token sequence through self-attention and considers both bidirectional and left-to-right information. For information extraction, we present relation network-based models that focus on important parts such as opinion targets and neighboring words.
    Contents: 1. Introduction; 2. Token-level Classification in Recommendation Systems (2.1 Overview; 2.2 Hierarchical RNN-based Recommendation Systems; 2.3 Entangled Bidirectional Encoder to Auto-regressive Decoder for Sequential Recommendation); 3. Token-level Classification in Information Extraction (3.1 Overview; 3.2 RABERT: Relation-Aware BERT for Target-Oriented Opinion Words Extraction; 3.3 Gated Relational Target-aware Encoder and Local Context-aware Decoder for Target-oriented Opinion Words Extraction); 4. Conclusion
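    A minimal sketch of the token-level classification setup described above, assuming a generic Transformer encoder with a per-token linear head; the class name, sizes, and label count are illustrative and not the dissertation's actual architectures.

```python
import torch
import torch.nn as nn

class TokenLevelClassifier(nn.Module):
    def __init__(self, vocab_size, num_labels, dim=64, max_len=512):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, dim)
        self.pos_emb = nn.Embedding(max_len, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)  # self-attention over the token sequence
        self.head = nn.Linear(dim, num_labels)                     # one label per token position

    def forward(self, token_ids):                                  # [batch, seq_len]
        pos = torch.arange(token_ids.size(1), device=token_ids.device)
        x = self.tok_emb(token_ids) + self.pos_emb(pos)
        return self.head(self.encoder(x))                          # [batch, seq_len, num_labels]

model = TokenLevelClassifier(vocab_size=10000, num_labels=3)
print(model(torch.randint(0, 10000, (2, 30))).shape)  # torch.Size([2, 30, 3])
```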