
    A Computation Method for Betweenness Centrality Using Graph Compression

    This paper proposes a computation method to find the betweenness centrality of each vertex of a graph. The method compresses the original graph by removing vertices whose degree is one, and the betweenness centrality is then calculated from the compressed graph. This avoids the redundant computation that the conventional method, which does not compress the graph, performs on graphs containing degree-one vertices. As a result, the calculation time is reduced.
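    The compression step described in the abstract can be sketched as follows. This is a minimal illustration of the idea only, iteratively stripping degree-one vertices until none remain; the paper's method for recovering the exact centrality values of the removed vertices on the full graph is not reproduced here, and the function name `compress` is our own.

    ```python
    from collections import defaultdict, deque

    def compress(edges):
        """Repeatedly remove degree-1 vertices; return the remaining edges."""
        adj = defaultdict(set)
        for u, v in edges:
            adj[u].add(v)
            adj[v].add(u)
        # Seed the queue with all current degree-1 vertices.
        queue = deque(v for v in adj if len(adj[v]) == 1)
        while queue:
            v = queue.popleft()
            if v not in adj or len(adj[v]) != 1:
                continue  # already removed, or degree changed meanwhile
            (u,) = adj[v]          # the single neighbor of v
            adj[u].discard(v)
            del adj[v]
            if len(adj[u]) == 1:   # u may have become a new leaf
                queue.append(u)
        return sorted((u, v) for u in adj for v in adj[u] if u < v)

    # Example: a pendant path (2-3-4) hanging off a triangle collapses away,
    # leaving only the triangle, on which betweenness would then be computed.
    print(compress([(0, 1), (1, 2), (2, 0), (2, 3), (3, 4)]))
    # → [(0, 1), (0, 2), (1, 2)]
    ```

    Betweenness centrality on the compressed graph can then be computed with a standard routine (e.g. networkx's `betweenness_centrality`); the smaller vertex set is what saves work.
    
    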

    Annotation-Scheme Reconstruction for "Fake News" and Japanese Fake News Dataset

    Fake news provokes many societal problems; therefore, there has been extensive research on fake news detection tasks to counter it. Many fake news datasets were constructed as resources to facilitate this task. Contemporary research focuses almost exclusively on the factuality aspect of the news. However, this aspect alone is insufficient to explain "fake news," which is a complex phenomenon that involves a wide range of issues. To fully understand the nature of each instance of fake news, it is important to observe it from various perspectives, such as the intention of the false news disseminator, the harmfulness of the news to our society, and the target of the news. We propose a novel annotation scheme with fine-grained labeling based on detailed investigations of existing fake news datasets to capture these various aspects of fake news. Using the annotation scheme, we construct and publish the first Japanese fake news dataset. The annotation scheme is expected to provide an in-depth understanding of fake news. We plan to build datasets for both Japanese and other languages using our scheme. Our Japanese dataset is published at https://hkefka385.github.io/dataset/fakenews-japanese/.
    Comment: 13th International Conference on Language Resources and Evaluation (LREC), 202