5 research outputs found

    Graph Regularized Nonnegative Latent Factor Analysis Model for Temporal Link Prediction in Cryptocurrency Transaction Networks

    Full text link
    With the development of blockchain technology, cryptocurrencies built on it have become increasingly popular, giving rise to huge cryptocurrency transaction networks that have received widespread attention. Link prediction, which learns the structure of a network, helps explain the mechanisms behind it, so it is also widely studied in cryptocurrency networks. However, the dynamics of cryptocurrency transaction networks have been neglected in past research. We use a graph regularized method to link past transaction records with future transactions. Based on this, we propose a single latent factor-dependent, non-negative, multiplicative and graph regularized-incorporated update (SLF-NMGRU) algorithm and, building on it, a graph regularized nonnegative latent factor analysis (GrNLFA) model. Finally, experiments on a real cryptocurrency transaction network show that the proposed method improves both accuracy and computational efficiency.
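    To make the graph-regularized idea concrete, the sketch below shows a generic graph-regularized nonnegative factorization with multiplicative updates (in the spirit of GNMF). It is an illustrative assumption rather than the authors' SLF-NMGRU update, whose exact rule is not given in the abstract; the matrix X, adjacency A, and rank k are hypothetical names.

```python
# Minimal sketch: graph-regularized nonnegative factorization with
# multiplicative updates (GNMF-style). NOT the authors' SLF-NMGRU algorithm.
# X approximates a (senders x receivers) transaction matrix as U @ V.T, and a
# graph built from past transactions regularizes the receiver factors V.
import numpy as np

def gnmf(X, A, k=16, lam=0.1, n_iter=200, eps=1e-9):
    """X: nonnegative (m, n) interaction matrix; A: nonnegative (n, n) adjacency."""
    m, n = X.shape
    rng = np.random.default_rng(0)
    U, V = rng.random((m, k)), rng.random((n, k))
    D = np.diag(A.sum(axis=1))                      # degree matrix of the graph
    for _ in range(n_iter):
        # Multiplicative updates keep U and V nonnegative by construction.
        U *= (X @ V) / (U @ (V.T @ V) + eps)
        V *= (X.T @ U + lam * (A @ V)) / (V @ (U.T @ U) + lam * (D @ V) + eps)
    return U, V

# The score for a candidate future link (i, j) is then simply U[i] @ V[j].
```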

    Directed closure coefficient and its patterns.

    Full text link
    The triangle structure, being a fundamental and significant element, underlies many theories and techniques in studying complex networks. The formation of triangles is typically measured by the clustering coefficient, in which the focal node is the centre-node in an open triad. In contrast, the recently proposed closure coefficient measures triangle formation from an end-node perspective and has been proven to be a useful feature in network analysis. Here, we extend it by proposing the directed closure coefficient that measures the formation of directed triangles. By distinguishing the direction of the closing edge in building triangles, we further introduce the source closure coefficient and the target closure coefficient. Then, by categorising particular types of directed triangles (e.g., head-of-path), we propose four closure patterns. Through multiple experiments on 24 directed networks from six domains, we demonstrate that at the network level, the four closure patterns are distinctive features for classifying network types, while at the node level, adding the source and target closure coefficients leads to significant improvement in the link prediction task in most types of directed networks.
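    For illustration, the sketch below computes the original end-node closure coefficient and one plausible directed variant in which the closing edge leaves the focal node (loosely, a "source" closure). The paper's exact definitions of the source and target closure coefficients and the four closure patterns are not reproduced here; the directed variant is an assumption.

```python
# Minimal sketch of the end-node closure coefficient (Yin et al.) plus an
# assumed directed variant; not the paper's exact source/target definitions.
import networkx as nx

def closure_coefficient(G, u):
    """Undirected: fraction of length-2 paths u - v - w (u as end-node)
    that are closed by an edge between u and w."""
    paths = closed = 0
    for v in G.neighbors(u):
        for w in G.neighbors(v):
            if w == u:
                continue
            paths += 1
            closed += G.has_edge(u, w)
    return closed / paths if paths else 0.0

def source_closure_coefficient(G, u):
    """Assumed directed variant: among directed paths u -> v -> w, the fraction
    closed by an outgoing edge u -> w (u acting as the 'source')."""
    paths = closed = 0
    for v in G.successors(u):
        for w in G.successors(v):
            if w == u:
                continue
            paths += 1
            closed += G.has_edge(u, w)
    return closed / paths if paths else 0.0
```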

    MUFFLE: Multi-Modal Fake News Influence Estimator on Twitter

    Get PDF
    To alleviate the impact of fake news on our society, predicting the popularity of fake news posts on social media is a crucial problem worthy of study. However, most related studies on fake news emphasize detection only. In this paper, we focus on the issue of fake news influence prediction, i.e., inferring how popular a fake news post might become on social platforms. To achieve our goal, we propose a comprehensive framework, MUFFLE, which captures multi-modal dynamics by encoding the representation of news-related social networks, user characteristics, and content in text. The attention mechanism developed in the model can provide explainability for social or psychological analysis. To examine the effectiveness of MUFFLE, we conducted extensive experiments on real-world datasets. The experimental results show that our proposed method outperforms both state-of-the-art methods of popularity prediction and machine-based baselines in top-k NDCG and hit rate. Through the experiments, we also analyze the feature importance for predicting fake news influence via the explainability provided by MUFFLE.
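    The abstract reports top-k NDCG and hit rate; the sketch below shows these two ranking metrics as they are commonly defined for popularity prediction. The exact formulation used in the paper may differ, and the function and argument names are assumptions.

```python
# Minimal sketch of NDCG@k and hit rate@k as commonly defined; the paper's
# exact evaluation protocol may differ.
import numpy as np

def ndcg_at_k(true_relevance_in_predicted_order, k):
    """True relevance values, ordered by the model's predicted ranking."""
    rel = np.asarray(true_relevance_in_predicted_order, dtype=float)
    discounts = np.log2(np.arange(2, min(k, rel.size) + 2))
    dcg = np.sum(rel[:k] / discounts)
    idcg = np.sum(np.sort(rel)[::-1][:k] / discounts)
    return dcg / idcg if idcg > 0 else 0.0

def hit_rate_at_k(predicted_ranking, relevant_items, k):
    """Share of relevant items that appear in the top-k of the predicted ranking."""
    hits = len(set(predicted_ranking[:k]) & set(relevant_items))
    return hits / max(len(relevant_items), 1)
```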

    Capturing Important Tokens and Designing Sequence Encoders for Token-Level Classification Models

    Get PDF
    Thesis (Ph.D.) -- Seoul National University Graduate School: Department of Electrical and Computer Engineering, College of Engineering, August 2022. Advisor: 정교민. With the development of the internet, a great volume of data has accumulated over time, so handling long sequential data has become a core problem in web services. For example, streaming services such as YouTube, Netflix, and TikTok use a user's viewing-history sequence to recommend videos the user may like. Such systems represent each viewed video as an item or token in order to predict which item or token will be viewed next. We define these tasks as Token-Level Classification (TLC) tasks: given a sequence of tokens, TLC identifies the labels of tokens in the required portion of the sequence. As mentioned above, TLC can be applied to various recommendation systems. In addition, most Natural Language Processing (NLP) tasks can also be formulated as TLC problems; for example, a sentence and the words within it can be expressed as a token-level sequence. In particular, information extraction can be cast as a TLC task that decides whether a specific word span in a sentence carries the target information. TLC datasets are characteristically very sparse and long, so extracting only the important information from a sequence and encoding it properly is a critical problem. In this thesis, we address two research questions for TLC in recommendation systems and information extraction: 1) how to capture important tokens from a token sequence, and 2) how to encode a token sequence into a model. As deep neural networks (DNNs) have shown outstanding performance in various web application tasks, we design RNN- and Transformer-based models for recommendation systems and information extraction. In this dissertation, we propose novel models that can extract important tokens for recommendation systems and information extraction systems. For recommendation, we design a BART-based system that captures the important portion of a token sequence through self-attention and considers both bidirectional and left-to-right information. For information extraction, we present relation-network-based models that focus on important parts such as the opinion target and its neighboring words.
    Contents: 1. Introduction; 2. Token-level Classification in Recommendation Systems (2.1 Overview; 2.2 Hierarchical RNN-based Recommendation Systems; 2.3 Entangled Bidirectional Encoder to Auto-regressive Decoder for Sequential Recommendation); 3. Token-level Classification in Information Extraction (3.1 Overview; 3.2 RABERT: Relation-Aware BERT for Target-Oriented Opinion Words Extraction; 3.3 Gated Relational Target-aware Encoder and Local Context-aware Decoder for Target-oriented Opinion Words Extraction); 4. Conclusion.
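    As a minimal illustration of the TLC setup (one label per token), the sketch below wires a small Transformer encoder to a per-token classification head; it is a toy example under assumed dimensions, not the thesis's BART-based or relation-network models.

```python
# Minimal token-level classification (TLC) sketch: every position in the input
# sequence gets its own label. Toy model; not the thesis's proposed architectures.
import torch
import torch.nn as nn

class TokenLevelClassifier(nn.Module):
    def __init__(self, vocab_size, num_labels, d_model=128, nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, num_labels)      # one label per token

    def forward(self, token_ids):                        # (batch, seq_len)
        h = self.encoder(self.embed(token_ids))          # (batch, seq_len, d_model)
        return self.head(h)                              # (batch, seq_len, num_labels)

# Usage: logits = TokenLevelClassifier(30000, 5)(torch.randint(0, 30000, (2, 16)))
```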

    Business location recommendation using machine learning

    Full text link
    Master's in Research and Innovation in Computational Intelligence and Interactive Systems. Selecting a location at which to open a new business is a challenge for the business sector. Traditionally, this selection is made through studies of the area and the investors' own informal knowledge, so analyzing the options and obtaining meaningful results on which to base a decision takes considerable effort. The main difficulty in choosing a good location is that several factors must be taken into account for the store to be profitable; a bad location, by contrast, can force it to close shortly after opening. This Master's thesis proposes an algorithm, based on a recent paper, that analyzes candidate locations through features derived from the store's position and from commercial features related to the types of business nearby, in order to recommend which type of business is most viable at that location. To achieve this, we use matrix factorization to obtain latent factors for locations and business types and, together with other features, predict the rating of the location for the candidate business type. To test the algorithm's effectiveness we used the Foursquare dataset, which provides the positions of different businesses and user check-in records that include when users visit them. To evaluate the model we developed, several classifiers were used as baselines: KNN, SVM, decision trees, and logistic regression. Results were evaluated with the precision, recall, and nDCG metrics. In conclusion, the effectiveness of the developed algorithm is limited by the dataset on which it is trained, with worse results when there is not enough information (which happens in some of the cities included in our experiments). On the other hand, the algorithm is able to obtain latent factors from sparse datasets and achieves results that outperform the baselines in most situations.
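    As a concrete (and simplified) view of the matrix-factorization step, the sketch below learns latent factors for locations and business types from an observed location-by-category score matrix via plain SGD; the thesis's model also combines these factors with geographic and commercial side features, which are omitted here, and its exact training procedure may differ.

```python
# Minimal matrix-factorization sketch: latent factors for locations (P) and
# business types (Q) learned by SGD on observed entries only. The side features
# and exact formulation used in the thesis are omitted.
import numpy as np

def factorize(R, k=10, lr=0.01, reg=0.05, epochs=200):
    """R: (n_locations, n_categories) matrix of observed scores; zeros = unknown."""
    rng = np.random.default_rng(0)
    P = rng.normal(scale=0.1, size=(R.shape[0], k))   # location factors
    Q = rng.normal(scale=0.1, size=(R.shape[1], k))   # business-type factors
    rows, cols = np.nonzero(R)                        # train on observed entries only
    for _ in range(epochs):
        for i, j in zip(rows, cols):
            err = R[i, j] - P[i] @ Q[j]
            p_i = P[i].copy()
            P[i] += lr * (err * Q[j] - reg * P[i])
            Q[j] += lr * (err * p_i - reg * Q[j])
    return P, Q

# Predicted suitability of opening business type j at location i: P[i] @ Q[j].
```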