More is simpler: effectively and efficiently assessing node-pair similarities based on hyperlinks
Similarity assessment is one of the core tasks in hyperlink analysis. Recently, with the proliferation of applications, e.g., web search and collaborative filtering, SimRank has become a well-studied measure of similarity between two nodes in a graph. It recursively follows the philosophy that "two nodes are similar if they are referenced (have incoming edges) from similar nodes", which can be viewed as an aggregation of similarities based on incoming paths. Despite its popularity, SimRank has an undesirable property, i.e., "zero-similarity": it only accommodates paths of equal length from a common "center" node, so a large portion of other paths are fully ignored. This paper attempts to remedy this issue. (1) We propose and rigorously justify SimRank*, a revised version of SimRank, which resolves such counter-intuitive "zero-similarity" issues while inheriting the merits of the basic SimRank philosophy. (2) We show that the series form of SimRank* can be reduced to a fairly succinct and elegant closed form, which looks even simpler than SimRank, yet enriches semantics without suffering from increased computational cost. This leads to a fixed-point iterative paradigm of SimRank* in O(Knm) time on a graph of n nodes and m edges for K iterations, which is comparable to SimRank. (3) To further optimize SimRank* computation, we leverage a novel clustering strategy via edge concentration. Due to its NP-hardness, we devise an efficient and effective heuristic to speed up SimRank* computation to O(Knm̃) time, where m̃ is generally much smaller than m. (4) Using real and synthetic data, we empirically verify the rich semantics of SimRank*, and demonstrate its high computational efficiency.
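The basic SimRank recursion that SimRank* revises can be sketched as a naive fixed-point iteration (a minimal sketch of the textbook formulation, assuming a decay factor c and in-degree column normalization; this is not the authors' optimized algorithm):

```python
import numpy as np

def simrank(adj, c=0.8, k=10):
    """Naive SimRank by fixed-point iteration: S = c * W^T S W, with diag(S) = 1."""
    n = adj.shape[0]
    indeg = adj.sum(axis=0)
    W = adj / np.where(indeg == 0, 1, indeg)  # column-normalize by in-degree
    S = np.eye(n)
    for _ in range(k):
        S = c * (W.T @ S @ W)
        np.fill_diagonal(S, 1.0)  # a node is maximally similar to itself
    return S
```

On the path graph 0→1→2 this iteration leaves s(0, 2) at exactly zero, because no pair of equal-length in-link paths meets at a common "center" node: precisely the zero-similarity issue the paper addresses.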
Fast incremental SimRank on link-evolving graphs
SimRank is an appealing measure of node-pair similarity based on hyperlinks. It iteratively follows the concept that two nodes are similar if they are referenced by similar nodes. Real graphs are often large, and links constantly evolve with small changes over time. This paper considers fast incremental computation of SimRank on link-evolving graphs. The prior approach [12] to this issue factorizes the graph via a singular value decomposition (SVD) first, and then incrementally maintains this factorization for link updates at the expense of exactness. Consequently, all node-pair similarities are estimated in O(r⁴n²) time on a graph of n nodes, where r is the target rank of the low-rank approximation, which is not negligibly small in practice. In this paper, we propose a novel fast incremental paradigm. (1) We characterize the SimRank update matrix ΔS, in response to every link update, via a rank-one Sylvester matrix equation. By virtue of this, we devise a fast incremental algorithm computing similarities of n² node-pairs in O(Kn²) time for K iterations. (2) We also propose an effective pruning technique capturing the "affected areas" of ΔS to skip unnecessary computations, without loss of exactness. This can further accelerate incremental SimRank computation to O(K(nd + |AFF|)) time, where d is the average in-degree of the old graph, and |AFF| (≤ n²) is the size of the "affected areas" in ΔS; in practice, |AFF| ≪ n². Our empirical evaluations verify that our algorithm (a) outperforms the best known link-update algorithm [12], and (b) runs much faster than its batch counterpart when link updates are small.
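The observation behind the rank-one characterization can be illustrated directly: inserting a single edge (i, j) perturbs only column j of the column-normalized adjacency matrix, i.e., W′ = W + u·e_jᵀ for some vector u (a minimal sketch with hypothetical helper names, not the paper's algorithm):

```python
import numpy as np

def col_normalize(adj):
    """Column-normalize by in-degree (zero columns stay zero)."""
    indeg = adj.sum(axis=0)
    return adj / np.where(indeg == 0, 1, indeg)

def edge_update_vector(adj, i, j):
    """Return u such that W_new = W_old + u e_j^T after inserting edge i -> j."""
    W_old = col_normalize(adj)
    adj2 = adj.copy()
    adj2[i, j] = 1
    W_new = col_normalize(adj2)
    return W_new[:, j] - W_old[:, j]  # only column j changes
```

Because the transition matrix changes by rank one, the induced change ΔS can be captured by a rank-one Sylvester equation rather than recomputed from scratch.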
What Determines the Dynamic Bank Profitability in a Developing Economy? Evidence from Commercial Banks in China
This study focuses on the main determinants of the profitability of commercial banks in China, measured by three significant variables: return on average assets (ROAA), return on average equity (ROAE) and net interest margin (NIM). Bank-specific and macroeconomic determinants are selected to examine bank profitability. Pooled, fixed-effects and random-effects models are used as preliminary assessment methods; eventually, a system GMM model is built on panel data from 2014 to 2018 for 165 commercial banks in China. From the perspective of bank-specific factors, the results indicate that credit risk, capital adequacy, size and asset liquidity have a vital impact on the profitability of commercial banks in China. Excessive expansion of bank size significantly limits the increase in profitability; higher impaired loans weaken profitability; asset quality and profitability are significantly negatively correlated; liquidity and profitability of commercial banks are positively but insignificantly correlated. The macroeconomic environment overall has a positive but insignificant influence on commercial banks' profitability in China. On this basis, this research proposes corresponding policy recommendations.
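The fixed-effects baseline used in the preliminary assessment amounts to the standard within (entity-demeaning) transformation followed by OLS (a minimal sketch of that textbook estimator, not the authors' system-GMM specification; variable names are illustrative):

```python
import numpy as np

def within_estimator(y, X, ids):
    """Fixed-effects (within) estimator: demean y and X per bank, then run OLS.
    Demeaning removes each bank's time-invariant effect from the regression."""
    y = y.astype(float).copy()
    X = X.astype(float).copy()
    for g in np.unique(ids):
        m = ids == g
        y[m] -= y[m].mean()
        X[m] -= X[m].mean(axis=0)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta
```

System GMM extends this idea with lagged instruments to handle the dynamic (lagged-profitability) term, which the within estimator alone would bias.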
IRWR: Incremental Random Walk with Restart
Random Walk with Restart (RWR) has become an appealing measure of node proximities in emerging applications, e.g., recommender systems and automatic image captioning. In practice, a real graph is typically large, and is frequently updated with small changes. It is often cost-inhibitive to recompute proximities from scratch via batch algorithms when the graph is updated. This paper focuses on the incremental computation of RWR in a dynamic graph whose edges often change over time. The prior attempt at incremental RWR [1] deploys k-dash to find the top-k highest-proximity nodes for a given query, which involves a strategy to incrementally estimate upper proximity bounds. However, because it aims to prune needless computation, such an incremental strategy is approximate, though it takes O(1) time for each node. The main contribution of this paper is an exact and fast incremental algorithm of RWR for edge updates. Our solution, IRWR, can incrementally compute any node proximity in O(1) time for each edge update without loss of exactness. Empirical evaluations show the high efficiency and exactness of IRWR for computing proximities on dynamic networks against its batch counterparts.
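For contrast with the incremental approach, the RWR proximity vector itself satisfies p = c·W·p + (1 − c)·e_q and is classically computed by batch power iteration (a minimal sketch; the restart probability and iteration count are illustrative choices, not part of IRWR):

```python
import numpy as np

def rwr(W, q, c=0.85, iters=100):
    """Random Walk with Restart from query node q.
    W must be column-stochastic; returns the proximity vector p."""
    n = W.shape[0]
    e = np.zeros(n)
    e[q] = 1.0  # restart distribution concentrated on the query node
    p = e.copy()
    for _ in range(iters):
        p = c * (W @ p) + (1 - c) * e
    return p
```

Rerunning this whole iteration after every edge change is what "batch counterpart" refers to; the incremental algorithm avoids it by updating p directly per edge update.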
TQ-Net: Mixed Contrastive Representation Learning For Heterogeneous Test Questions
Recently, more and more people study online for convenient access to massive learning materials (e.g., test questions and notes), so accurately understanding learning materials has become a crucial issue, essential for many educational applications. Previous studies focus on using language models to represent question data. However, test questions (TQ) are usually heterogeneous and multi-modal: some may contain only text, while others also contain images carrying information beyond their literal description. In this context, both supervised and unsupervised methods struggle to learn a fused representation of questions. Meanwhile, this problem cannot be solved by conventional methods such as image captioning, as the images may contain information complementary rather than duplicate to the text. In this paper, we first improve previous text-only representation with a two-stage unsupervised instance-level contrastive pre-training method (MCL: Mixture Unsupervised Contrastive Learning). Then, TQ-Net is proposed to fuse the content of images into the representation of heterogeneous data. Finally, supervised contrastive learning is conducted on relevance-prediction-related downstream tasks, which helps the model learn the representation of questions effectively. We conducted extensive experiments on question-based tasks on large-scale, real-world datasets, which demonstrated the effectiveness of TQ-Net and improved the precision of downstream applications (e.g., similar questions +2.02% and knowledge point prediction +7.20%). Our code will be available, and we will open-source a subset of our data to promote the development of related studies.
Comment: This paper has been accepted for the AAAI2023 AI4Edu Workshop.
Content Popularity Prediction Towards Location-Aware Mobile Edge Caching
Mobile edge caching enables content delivery within the radio access network,
which effectively alleviates the backhaul burden and reduces response time. To
fully exploit edge storage resources, the most popular contents should be
identified and cached. Observing that user demands on certain contents vary
greatly at different locations, this paper devises location-customized caching
schemes to maximize the total content hit rate. Specifically, a linear model is
used to estimate the future content hit rate. For the case where the model
noise is zero-mean, a ridge regression based online algorithm with positive
perturbation is proposed. Regret analysis indicates that the proposed algorithm
asymptotically approaches the optimal caching strategy in the long run. When
the noise structure is unknown, an H∞ filter based online algorithm
is further proposed by taking a prescribed threshold as input, which guarantees
prediction accuracy even under the worst-case noise process. Both online
algorithms require no training phases, and hence are robust to the time-varying
user demands. The underlying causes of estimation errors of both algorithms are
numerically analyzed. Moreover, extensive experiments on a real-world dataset are
conducted to validate the applicability of the proposed algorithms. It is
demonstrated that those algorithms can be applied to scenarios with different
noise features, and are able to make adaptive caching decisions, achieving
content hit rate that is comparable to that via the hindsight optimal strategy.
Comment: to appear in IEEE Trans. Multimedia.
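The ridge-regression step can be maintained fully online: keep the regularized Gram matrix A = λI + Σ x xᵀ and moment vector b = Σ y x, and solve A w = b at prediction time (a minimal sketch; the paper's positive-perturbation exploration term and regret analysis are omitted here):

```python
import numpy as np

class OnlineRidge:
    """Incremental ridge regression, e.g., for per-location hit-rate estimation."""
    def __init__(self, dim, lam=1.0):
        self.A = lam * np.eye(dim)   # regularized Gram matrix
        self.b = np.zeros(dim)       # accumulated target-weighted features
    def update(self, x, y):
        self.A += np.outer(x, x)
        self.b += y * x
    def predict(self, x):
        return float(x @ np.linalg.solve(self.A, self.b))
```

Each update costs O(dim²) with no retraining pass over past observations, which is why such estimators need no training phase and adapt to time-varying demand.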