14 research outputs found

    Random Matrix Theory and Cross-correlations in Global Financial Indices and Local Stock Market Indices

    We analyzed cross-correlations between price fluctuations of global financial indices (20 daily stock indices around the world) and local indices (daily indices of 200 companies in the Korean stock market) using random matrix theory (RMT). We compared the eigenvalues and the components of the largest and second-largest eigenvectors of the cross-correlation matrix before, during, and after the global financial crisis of 2008. We find that the majority of the eigenvalues fall within the RMT bounds [\lambda_-, \lambda_+], where \lambda_- and \lambda_+ are the lower and upper bounds of the eigenvalues of random correlation matrices. The components of the eigenvectors for the largest positive eigenvalues indicate the same financial market mode dominating the global and local indices. In contrast, the components of the eigenvector corresponding to the second-largest eigenvalue alternate between positive and negative values. The components before the crisis change sign during the crisis, and those during the crisis change sign after the crisis. The largest inverse participation ratio (IPR), corresponding to the eigenvector of the smallest eigenvalue, is higher after the crisis than in any other period for both the global and local indices. During the global financial crisis, the correlations among the global indices and among the local stock indices are perturbed significantly; however, the correlations between indices quickly recover their pre-crisis trends.
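
    As a rough illustration of the analysis pipeline described in this abstract, the sketch below builds a cross-correlation matrix from return series, compares its eigenvalues with the standard RMT bounds \lambda_± = (1 ± sqrt(N/T))^2 for random correlation matrices, and computes the inverse participation ratio of each eigenvector. It is a minimal sketch on synthetic data, not the authors' code; the variable names and the synthetic returns are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of the RMT cross-correlation analysis on synthetic data.
# N return series observed over T days (assumed shapes, not the paper's data).
rng = np.random.default_rng(0)
N, T = 20, 1000
returns = rng.standard_normal((N, T))   # placeholder for daily log returns

# Standardize each series and form the empirical cross-correlation matrix.
z = (returns - returns.mean(axis=1, keepdims=True)) / returns.std(axis=1, keepdims=True)
C = (z @ z.T) / T                        # N x N correlation matrix

# Eigen-decomposition (eigenvalues sorted in ascending order).
eigvals, eigvecs = np.linalg.eigh(C)

# RMT bounds for eigenvalues of a random correlation matrix with Q = T / N:
# lambda_{+/-} = (1 +/- sqrt(N / T))^2
lam_minus = (1 - np.sqrt(N / T)) ** 2
lam_plus = (1 + np.sqrt(N / T)) ** 2
deviating = eigvals[(eigvals < lam_minus) | (eigvals > lam_plus)]
print(f"RMT bounds [{lam_minus:.3f}, {lam_plus:.3f}]; "
      f"{deviating.size} of {N} eigenvalues deviate")

# Inverse participation ratio IPR_k = sum_i u_{ik}^4 for each eigenvector;
# a large IPR means the eigenvector is localized on only a few indices.
ipr = np.sum(eigvecs ** 4, axis=0)
print("IPR (smallest eigenvalue):", ipr[0])
print("IPR (largest eigenvalue):", ipr[-1])
```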

    Clustering coefficients for networks with higher order interactions

    We introduce a clustering coefficient for nondirected and directed hypergraphs, which we call the quad clustering coefficient. We determine the average quad clustering coefficient and its distribution in real-world hypergraphs and compare these values with those of random hypergraphs drawn from the configuration model. We find that clustering in real-world hypergraphs differs significantly from that in random hypergraphs. In particular, real-world hypergraphs exhibit a nonnegligible fraction of nodes with the maximal value of the quad clustering coefficient, while we do not find such nodes in random hypergraphs. Interestingly, these highly clustered nodes can have large degrees and can be incident to hyperedges of large cardinality. Moreover, such highly clustered nodes are not observed in an analysis based on the pairwise clustering coefficient of the associated projected graph with binary interactions; hence, higher-order interactions are required to identify nodes with a large quad clustering coefficient.
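
    The paper gives the precise definition of the quad clustering coefficient; the sketch below is only a simplified illustration under the assumption that a quad is a 4-cycle node-hyperedge-node-hyperedge in the bipartite incidence graph, normalized here by a min-cardinality upper bound. The function quad_clustering and the toy hypergraph are hypothetical and not taken from the paper.

```python
from itertools import combinations

def quad_clustering(hyperedges):
    """Simplified per-node quad clustering coefficient (illustrative only).

    A quad at node v is taken to be a 4-cycle v - e1 - u - e2 - v in the
    bipartite node-hyperedge incidence graph: two distinct hyperedges that
    both contain v and share at least one further node u.  The count of
    realized quads is normalized by an upper bound based on hyperedge
    cardinalities; the paper's exact normalization may differ.
    """
    # Map each node to the hyperedges it belongs to.
    membership = {}
    for e in hyperedges:
        fe = frozenset(e)
        for v in fe:
            membership.setdefault(v, []).append(fe)

    coeff = {}
    for v, edges in membership.items():
        quads = possible = 0
        for e1, e2 in combinations(edges, 2):
            quads += len((e1 & e2) - {v})            # shared partners u != v
            possible += min(len(e1), len(e2)) - 1    # upper bound on such u
        coeff[v] = quads / possible if possible else 0.0
    return coeff

# Toy hypergraph: node 1 lies in two strongly overlapping hyperedges and is
# maximally clustered; nodes appearing in a single hyperedge get 0.
H = [{1, 2, 3}, {1, 2, 3, 4}, {2, 5, 6}]
print(quad_clustering(H))
```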

    HyperCLOVA X Technical Report

    We introduce HyperCLOVA X, a family of large language models (LLMs) tailored to the Korean language and culture, with competitive capabilities in English, math, and coding. HyperCLOVA X was trained on a balanced mix of Korean, English, and code data, followed by instruction tuning with high-quality human-annotated datasets under strict safety guidelines that reflect our commitment to responsible AI. The model is evaluated across various benchmarks, including comprehensive reasoning, knowledge, commonsense, factuality, coding, math, chatting, instruction-following, and harmlessness, in both Korean and English. HyperCLOVA X exhibits strong reasoning capabilities in Korean, backed by a deep understanding of the language and its cultural nuances. Further analysis of its inherent bilingual nature and its extension to multilingualism highlights the model's cross-lingual proficiency and strong generalization to untargeted languages, including machine translation between several language pairs and cross-lingual inference tasks. We believe that HyperCLOVA X can provide helpful guidance for regions or countries developing their own sovereign LLMs.