170 research outputs found
Few-Shot Semantic Relation Prediction across Heterogeneous Graphs
Semantic relation prediction aims to mine the implicit relationships between
objects in heterogeneous graphs, which consist of different types of objects
and different types of links. In real-world scenarios, new semantic relations
constantly emerge and they typically appear with only a few labeled data. Since
a variety of semantic relations exist across multiple heterogeneous graphs,
transferable knowledge can be mined from existing semantic relations to help
predict new semantic relations with only a few labeled examples. This inspires a
novel problem of few-shot semantic relation prediction across heterogeneous
graphs. However, the existing methods cannot solve this problem because they
not only require a large number of labeled samples as input, but also focus on
a single graph with a fixed heterogeneity. Targeting this novel and challenging
problem, in this paper, we propose a Meta-learning based Graph neural network
for Semantic relation prediction, named MetaGS. Firstly, MetaGS decomposes the
graph structure between objects into multiple normalized subgraphs, then adopts
a two-view graph neural network to capture local heterogeneous information and
global structure information of these subgraphs. Secondly, MetaGS aggregates
the information of these subgraphs with a hyper-prototypical network, which can
learn from existing semantic relations and adapt to new semantic relations.
Thirdly, using the well-initialized two-view graph neural network and
hyper-prototypical network, MetaGS can effectively learn new semantic relations
from different graphs while overcoming the limitation of few labeled data.
Extensive experiments on three real-world datasets demonstrate the superior
performance of MetaGS over state-of-the-art methods.
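The hyper-prototypical network builds on the classic prototypical-network idea from few-shot learning: represent each class (here, each semantic relation) by the mean of its support embeddings, and classify queries by nearest prototype. A minimal sketch of that underlying idea, with toy embeddings standing in for the subgraph representations the paper actually uses:

```python
import numpy as np

def prototypes(support_emb, support_labels):
    """Mean embedding per class: the 'prototype' of each relation."""
    classes = np.unique(support_labels)
    return classes, np.stack([support_emb[support_labels == c].mean(axis=0)
                              for c in classes])

def classify(query_emb, classes, protos):
    """Assign each query to the nearest prototype (Euclidean distance)."""
    d = np.linalg.norm(query_emb[:, None, :] - protos[None, :, :], axis=-1)
    return classes[d.argmin(axis=1)]

# Toy 2-way 2-shot episode with 4-dimensional embeddings.
support = np.array([[1.0, 0, 0, 0], [0.9, 0.1, 0, 0],
                    [0, 0, 1.0, 0], [0, 0, 0.8, 0.2]])
labels = np.array([0, 0, 1, 1])
classes, protos = prototypes(support, labels)
queries = np.array([[0.95, 0, 0, 0], [0, 0, 0.9, 0.1]])
print(classify(queries, classes, protos))  # → [0 1]
```

Because prototypes are just means, a new relation with a handful of labeled pairs yields a usable classifier without retraining the embedding network, which is what makes the approach attractive in the few-shot setting.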
When Queueing Meets Coding: Optimal-Latency Data Retrieving Scheme in Storage Clouds
In this paper, we study the problem of reducing the delay of downloading data
from cloud storage systems by leveraging multiple parallel threads, assuming
that the data has been encoded and stored in the clouds using fixed rate
forward error correction (FEC) codes with parameters (n, k). That is, each file
is divided into k equal-sized chunks, which are then expanded into n chunks
such that any k chunks out of the n are sufficient to successfully restore the
original file. The model can be depicted as a multiple-server queue with
arrivals of data retrieving requests and a server corresponding to a thread.
However, this is not a typical queueing model because a server can terminate
its operation, depending on when other servers complete their service (due to
the redundancy that is spread across the threads). Hence, to the best of our
knowledge, the analysis of this queueing model remains quite uncharted.
Recent traces from Amazon S3 show that the time to retrieve a fixed-size
chunk is random and can be approximated as a constant delay plus an i.i.d.
exponentially distributed random variable. For the tractability of the
theoretical analysis, we assume that the chunk downloading time is i.i.d.
exponentially distributed. Under this assumption, we show that any
work-conserving scheme is delay-optimal among all on-line scheduling schemes
when k = 1. When k > 1, we find that a simple greedy scheme, which allocates
all available threads to the head-of-line request, is delay-optimal among all
on-line scheduling schemes. We also provide some numerical results that point
to the limitations of the exponential assumption, and suggest further research
directions. Comment: Originally accepted by IEEE Infocom 2014, 9 pages. Some
statements in the Infocom paper are corrected.
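Under the exponential assumption, the per-request service time has a clean closed form: a request served by n parallel threads completes at the k-th order statistic of n i.i.d. Exp(mu) download times, whose mean is the sum of 1/((n-i)*mu) for i = 0..k-1. A small Monte Carlo sketch of this service-time model (it illustrates only the per-request service time, not the full queueing analysis; the parameter values are arbitrary):

```python
import random

def retrieve_time(n, k, mu, rng):
    """Service time of one request: k of n parallel chunk downloads
    must finish, each download time ~ Exp(mu)."""
    times = sorted(rng.expovariate(mu) for _ in range(n))
    return times[k - 1]  # request completes at the k-th fastest thread

def expected_time(n, k, mu):
    """E[k-th order statistic of n i.i.d. Exp(mu)] = sum 1/((n-i)*mu)."""
    return sum(1.0 / ((n - i) * mu) for i in range(k))

n, k, mu = 6, 4, 1.0
rng = random.Random(42)
trials = 200_000
sim = sum(retrieve_time(n, k, mu, rng) for _ in range(trials)) / trials
print(round(expected_time(n, k, mu), 3))  # → 0.95
```

Adding redundancy (larger n for fixed k) strictly reduces this expected service time, which is exactly what makes the thread-allocation question in the queueing model nontrivial.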
An efficient contradiction separation based automated deduction algorithm for enhancing reasoning capability
Automated theorem proving (ATP) for first-order logic (FOL), as a significant inference engine, is one of the hot research areas in knowledge representation and automated reasoning. E, one of the leading ATP systems, has made a significant contribution to the development of theorem provers for FOL, particularly in equality handling, over more than two decades of development. However, there are still a large number of problems in the TPTP problem library, the benchmark library for ATP systems, that E has yet to solve. The standard contradiction separation (S-CS) rule is a recently introduced inference method that can handle multiple clauses in a synergized way and has several distinctive features that complement E's calculus. Binary clauses, on the other hand, are widely utilized in automated deduction for FOL because they have a minimal number of literals (typically only two), few symbols, and high manipulability. As a result, it is feasible to improve a prover's deduction capability by reusing binary clauses. In this paper, a binary clause reusing algorithm based on the S-CS rule is proposed and incorporated into E with the objective of enhancing its performance, resulting in an extended E prover. According to the experimental findings, the extended E prover not only outperforms E itself in a variety of aspects, but also solves 18 problems with a rating of 1 in the TPTP library, meaning that no existing ATP system had been able to solve them.
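The appeal of binary clauses can be seen in miniature with propositional binary resolution: resolving two two-literal clauses on a complementary literal pair yields a clause with at most two literals, so chains of binary clauses stay cheap to manipulate. A toy sketch of that effect (propositional only; the S-CS rule itself operates on first-order clauses and is considerably more involved):

```python
def resolve(c1, c2):
    """Propositional binary resolution: given two two-literal clauses,
    return the resolvents on each complementary literal pair.
    Literals are nonzero ints; -x denotes the negation of x."""
    out = []
    for lit in c1:
        if -lit in c2:
            rest = [l for l in c1 if l != lit] + [l for l in c2 if l != -lit]
            out.append(tuple(sorted(set(rest))))
    return out

# Chaining binary clauses: (p v q) and (~q v r) resolve on q to give (p v r),
# again a binary clause, so the process can iterate cheaply.
p, q, r = 1, 2, 3
print(resolve((p, q), (-q, r)))  # → [(1, 3)]
```

The resolvent of two binary clauses never grows beyond two literals, which is the closure property that makes large-scale reuse of binary clauses practical inside a saturation prover.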
Frequency Enhanced Hybrid Attention Network for Sequential Recommendation
The self-attention mechanism, with its strong capability for modeling
long-range dependencies, is one of the most widely used techniques in
sequential recommendation. However, many recent studies show that current
self-attention based models are low-pass filters and are inadequate for
capturing high-frequency information. Furthermore, since the items in user
behavior sequences are intertwined with each other, these models are unable
to distinguish the inherent periodicity obscured in the time domain.
In this work, we shift the perspective to the frequency domain, and propose a
novel Frequency Enhanced Hybrid Attention Network for Sequential
Recommendation, named FEARec. In this model, we first improve the original
time-domain self-attention in the frequency domain with a ramp structure, so
that both low-frequency and high-frequency information can be explicitly
learned. Moreover, we additionally design a similar attention
mechanism via auto-correlation in the frequency domain to capture the periodic
characteristics and fuse the time and frequency level attention in a union
model. Finally, both contrastive learning and frequency regularization are
utilized to ensure that multiple views are aligned in both the time domain and
frequency domain. Extensive experiments conducted on four widely used benchmark
datasets demonstrate that the proposed model performs significantly better than
the state-of-the-art approaches. Comment: 11 pages, 7 figures, The 46th
International ACM SIGIR Conference on Research and Development in Information
Retrieval.
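The core frequency-domain idea, separating low- and high-frequency components of a sequence so both can be modeled, can be sketched with a plain FFT band split. The fixed cutoff below is an arbitrary illustration; FEARec's actual ramp structure distributes frequency bands across attention heads and layers:

```python
import numpy as np

def band_split(x, cutoff):
    """Split a sequence of d-dim embeddings (shape [T, d]) into
    low- and high-frequency components along the time axis via rFFT."""
    spec = np.fft.rfft(x, axis=0)
    low, high = spec.copy(), spec.copy()
    low[cutoff:] = 0   # keep only the lowest `cutoff` frequency bins
    high[:cutoff] = 0  # keep the remaining (high-frequency) bins
    T = x.shape[0]
    return np.fft.irfft(low, n=T, axis=0), np.fft.irfft(high, n=T, axis=0)

# The two bands are complementary, so they sum back to the original sequence.
rng = np.random.default_rng(0)
x = rng.standard_normal((16, 4))
lo, hi = band_split(x, cutoff=3)
print(np.allclose(lo + hi, x))  # → True
```

A pure self-attention stack tends to attenuate the `hi` component; processing the two bands with separate attention paths is one way to keep high-frequency (e.g., periodic) signal available to the model.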
Improving probability selection based weights for satisfiability problems
The Boolean satisfiability problem (SAT) plays a prominent role in many domains of computer science and artificial intelligence due to its significant importance in both theory and applications. Algorithms for solving SAT can be categorized into two main classes: complete algorithms and incomplete algorithms (typically stochastic local search (SLS) algorithms). SLS algorithms are among the most effective for solving uniform random SAT problems, while hybrid algorithms have recently achieved great breakthroughs in solving hard random SAT (HRS) problems. However, there is a lack of algorithms that can effectively solve both uniform random SAT and HRS problems. In this paper, a new SLS algorithm named SelectNTS is proposed, aiming to solve both uniform random SAT and HRS problems effectively. SelectNTS is essentially an improved probability-selection-based local search algorithm, the core of which consists of new clause and variable selection heuristics: a new clause weighting scheme and a biased random walk strategy are used to select a clause, while a new probability selection strategy with a variation of the configuration checking strategy is used to select a variable. Extensive experimental results show that SelectNTS outperforms state-of-the-art random SAT algorithms and hybrid algorithms in solving both uniform random SAT and HRS problems.
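Probability-selection-based local search can be sketched with a ProbSAT-style step: pick a random unsatisfied clause, then flip a variable from it chosen with probability decreasing in its break count (the number of clauses the flip would newly falsify). This is a generic illustration of the algorithm family SelectNTS belongs to, not its exact heuristics, which also involve clause weighting and configuration checking:

```python
import random

def break_count(var, clauses, assign):
    """Clauses currently satisfied only by `var`'s literal break if we flip it."""
    cnt = 0
    for cl in clauses:
        sat = [l for l in cl if (l > 0) == assign[abs(l)]]
        if len(sat) == 1 and abs(sat[0]) == var:
            cnt += 1
    return cnt

def pick_variable(unsat_clause, clauses, assign, rng, cb=2.5):
    """Probability selection: flip weight proportional to cb ** (-break)."""
    weights = [cb ** -break_count(abs(l), clauses, assign)
               for l in unsat_clause]
    return abs(rng.choices(unsat_clause, weights=weights)[0])

def solve(clauses, n_vars, max_flips=10_000, seed=1):
    """Minimal SLS loop: literals are ints, -x is the negation of x."""
    rng = random.Random(seed)
    assign = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}
    for _ in range(max_flips):
        unsat = [cl for cl in clauses
                 if not any((l > 0) == assign[abs(l)] for l in cl)]
        if not unsat:
            return assign
        v = pick_variable(rng.choice(unsat), clauses, assign, rng)
        assign[v] = not assign[v]
    return None

clauses = [(1, 2), (-1, 3), (-2, -3), (1, -3)]
a = solve(clauses, n_vars=3)
print(a is not None and
      all(any((l > 0) == a[abs(l)] for l in cl) for cl in clauses))
```

Because every variable in the chosen clause keeps a nonzero flip probability, the walk cannot get permanently stuck in a local minimum, which is the usual argument for probability selection over purely greedy picks.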
Improving Two-Mode Algorithm via Probabilistic Selection for Solving Satisfiability Problem
The satisfiability problem (SAT) is a critically important issue in multiple branches of computer science and artificial intelligence, and its relevance to industrial applications is of particular significance. CCAnr is the current leading stochastic local search (SLS) solver for crafted satisfiable instances. It uses a two-mode strategy: a greedy mode and a diversification mode. In the present work, we employ a probabilistic selection approach to enhance CCAnr, leading to a new algorithm called ProbCCAnr. Experiments are carried out on random SAT benchmarks and structured SAT benchmarks, including instances encoded from mathematical and application problems. The experiments demonstrate that ProbCCAnr significantly improves on the performance of state-of-the-art SLS algorithms, including CCAnr and ProbSAT, among others. Moreover, ProbCCAnr shows better performance than state-of-the-art complete solvers.
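The two-mode control loop, greedy when an improving flip exists and diversifying otherwise, can be sketched as follows. The scoring and the probabilistic pick here are illustrative stand-ins, not CCAnr's or ProbCCAnr's actual heuristics:

```python
import math
import random

def greedy_pick(scores):
    """Greedy mode: flip the best-scoring variable."""
    return max(scores, key=scores.get)

def prob_pick(scores, rng, temp=1.0):
    """Diversification mode with probabilistic selection:
    higher score means higher (but never exclusive) flip probability."""
    vars_, w = zip(*((v, math.exp(s / temp)) for v, s in scores.items()))
    return rng.choices(vars_, weights=w)[0]

def two_mode_step(scores, rng):
    """Two-mode control: greedy when some flip improves the state
    (positive score), probabilistic diversification otherwise."""
    if max(scores.values()) > 0:
        return greedy_pick(scores)
    return prob_pick(scores, rng)

rng = random.Random(0)
print(two_mode_step({1: 2, 2: -1, 3: 0}, rng))  # → 1 (greedy mode fires)
```

Replacing a deterministic diversification move with a score-weighted probabilistic one, as sketched in `prob_pick`, is the general flavor of change the paper applies to CCAnr's mode structure.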