
    The Potential of Restarts for ProbSAT

    This work analyses the potential of restarts for probSAT, a quite successful algorithm for k-SAT, by estimating its runtime distributions on random 3-SAT instances close to the phase transition. We estimate an optimal restart time from empirical data, reaching a potential speedup factor of 1.39. Calculating restart times from fitted probability distributions reduces this factor to a maximum of 1.30. A spin-off result is that the Weibull distribution approximates the runtime distribution well for over 93% of the instances used. A machine learning pipeline is presented to compute a restart time for a fixed-cutoff strategy that exploits this potential. The main components of the pipeline are a random forest for determining the distribution type and a neural network for the distribution's parameters. With the presented approach, probSAT performs statistically significantly better than with Luby's restart strategy and than without restarts. The strategy is particularly advantageous on hard problems. Comment: Eurocast 201
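
    As a rough illustration of the fixed-cutoff idea in this abstract, the sketch below estimates an optimal restart time directly from a sample of empirical runtimes, using the standard identity that a cutoff t yields an expected total runtime of E[min(T, t)] / P(T <= t). This is a minimal sketch of that calculation only, not the paper's machine learning pipeline; the function name and NumPy usage are illustrative choices.

        # Minimal sketch: pick the cutoff minimising E[min(T, t)] / P(T <= t)
        # over the observed runtimes; the speedup is mean runtime / best cost.
        import numpy as np

        def optimal_fixed_cutoff(runtimes):
            t_sorted = np.sort(np.asarray(runtimes, dtype=float))
            n = len(t_sorted)
            best_t, best_cost = None, np.inf
            for i, t in enumerate(t_sorted):
                p_success = (i + 1) / n                                   # empirical P(T <= t)
                e_trunc = (t_sorted[:i + 1].sum() + (n - i - 1) * t) / n  # E[min(T, t)]
                cost = e_trunc / p_success
                if cost < best_cost:
                    best_t, best_cost = t, cost
            return best_t, best_cost, t_sorted.mean() / best_cost         # cutoff, cost, speedup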

    Genetic Algorithm for Restricted Maximum k-Satisfiability in the Hopfield Network

    The restricted Maximum k-Satisfiability (MAX-kSAT) problem is an enhanced Boolean satisfiability counterpart that has attracted a considerable amount of research. The genetic algorithm has been a prominent optimization heuristic for solving constraint optimization problems. The core motivation of this paper is to introduce a Hopfield network incorporated with a genetic algorithm for solving the MAX-kSAT problem. The genetic algorithm is integrated with the Hopfield network as a single network. The proposed method is compared with the conventional Hopfield network. The results demonstrate that the Hopfield network with a genetic algorithm outperforms the conventional Hopfield network. Furthermore, the outcome provides solid evidence of the robustness of the proposed algorithm and its applicability to other satisfiability problems.
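
    To make the objective concrete, the sketch below shows the quantity that both the Hopfield energy function and a genetic algorithm's fitness are driven by: the number of satisfied clauses of a k-SAT formula. It is an illustrative fitness function only, with an assumed DIMACS-style clause encoding, not the paper's hybrid Hopfield/GA network.

        # Fitness for MAX-kSAT: count satisfied clauses. A clause is a list of
        # nonzero ints; literal v is satisfied when variable |v| equals (v > 0).
        def satisfied_clauses(assignment, clauses):
            return sum(
                any((lit > 0) == assignment[abs(lit)] for lit in clause)
                for clause in clauses
            )

        # Tiny hypothetical instance: three clauses over variables 1 and 2.
        print(satisfied_clauses({1: True, 2: True}, [[1, 2], [-1, 2], [1, -2]]))  # -> 3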

    Compositional Vector Space Models for Knowledge Base Completion

    Knowledge base (KB) completion adds new facts to a KB by making inferences from existing facts, for example by inferring nationality(X,Y) with high likelihood from bornIn(X,Y). Most previous methods infer simple one-hop relational synonyms like this, or use as evidence a multi-hop relational path treated as an atomic feature, like bornIn(X,Z) -> containedIn(Z,Y). This paper presents an approach that reasons about conjunctions of multi-hop relations non-atomically, composing the implications of a path using a recursive neural network (RNN) that takes as inputs vector embeddings of the binary relations in the path. Not only does this allow us to generalize to paths unseen at training time, but also, with a single high-capacity RNN, to predict new relation types not seen when the compositional model was trained (zero-shot learning). We assemble a new dataset of over 52M relational triples, and show that our method improves over a traditional classifier by 11%, and over a method leveraging pre-trained embeddings by 7%. Comment: The 53rd Annual Meeting of the Association for Computational Linguistics and The 7th International Joint Conference of the Asian Federation of Natural Language Processing, 201
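
    The sketch below illustrates the path-composition idea: relation embeddings along a multi-hop path are folded together by a recurrent composition, and the result is scored against the embedding of a candidate target relation. The dimension, the tanh composition, and the dot-product scorer are assumptions for illustration; the paper's trained architecture may differ.

        # Compose a relation path into a single vector, then score it against a
        # target relation embedding (higher = stronger evidence for the relation).
        import numpy as np

        d = 50                                       # assumed embedding size
        rng = np.random.default_rng(0)
        W = rng.normal(scale=0.1, size=(d, 2 * d))   # composition matrix (learned in practice)

        def compose_path(relation_vectors):
            h = relation_vectors[0]
            for r in relation_vectors[1:]:
                h = np.tanh(W @ np.concatenate([h, r]))
            return h

        def score(path_vectors, target_vector):
            return float(compose_path(path_vectors) @ target_vector)

        # e.g. score([emb["bornIn"], emb["containedIn"]], emb["nationality"]),
        # where emb is a hypothetical dict of learned relation embeddings.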

    Maximum 2-satisfiability in radial basis function neural network

    Maximum k-Satisfiability (MAX-kSAT) is the logic of determining the maximum number of satisfied clauses. Accordingly, this logic plays a prominent role in numerous applications as a combinatorial optimization logic. MAX2SAT is a special case of MAX-kSAT written in Conjunctive Normal Form (CNF) with two literals in each clause. This paper presents a new paradigm for MAX2SAT by implementing it in a Radial Basis Function Neural Network (RBFNN); hence, we restrict the analysis to MAX2SAT clauses. We use Dev C++ as the platform for training and testing the proposed algorithm. In this study, the effectiveness of RBFNN-MAX2SAT is estimated by evaluating the proposed models on testing data sets. The results obtained are analysed using the ratio of satisfied clauses (RSC), the root mean square error (RMSE), and CPU time. The simulated results suggest that the proposed algorithm is effective for MAX2SAT logic programming, achieving a lower root mean square error, a higher ratio of satisfied clauses, and less CPU time.
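
    For readers unfamiliar with the network family used here, the sketch below shows a plain Gaussian RBF forward pass of the kind an RBFNN-based model builds on: hidden activations computed from squared distances to centres, followed by a linear output layer. Centres, width, and output weights are assumed given; this is an illustrative sketch, not the paper's RBFNN-MAX2SAT implementation.

        # Gaussian RBF network: phi_j(x) = exp(-||x - c_j||^2 / (2 sigma^2)),
        # output = sum_j w_j * phi_j(x).
        import numpy as np

        def rbf_forward(x, centres, sigma, out_weights):
            dist2 = np.sum((centres - x) ** 2, axis=1)   # squared distance to each centre
            phi = np.exp(-dist2 / (2.0 * sigma ** 2))    # hidden-layer activations
            return phi @ out_weights                     # linear output layer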

    Maximum Weight Matching via Max-Product Belief Propagation

    Max-product "belief propagation" is an iterative, local, message-passing algorithm for finding the maximum a posteriori (MAP) assignment of a discrete probability distribution specified by a graphical model. Despite the spectacular success of the algorithm in many application areas such as iterative decoding, computer vision and combinatorial optimization which involve graphs with many cycles, theoretical results about both correctness and convergence of the algorithm are known in few cases (Weiss-Freeman Wainwright, Yeddidia-Weiss-Freeman, Richardson-Urbanke}. In this paper we consider the problem of finding the Maximum Weight Matching (MWM) in a weighted complete bipartite graph. We define a probability distribution on the bipartite graph whose MAP assignment corresponds to the MWM. We use the max-product algorithm for finding the MAP of this distribution or equivalently, the MWM on the bipartite graph. Even though the underlying bipartite graph has many short cycles, we find that surprisingly, the max-product algorithm always converges to the correct MAP assignment as long as the MAP assignment is unique. We provide a bound on the number of iterations required by the algorithm and evaluate the computational cost of the algorithm. We find that for a graph of size nn, the computational cost of the algorithm scales as O(n3)O(n^3), which is the same as the computational cost of the best known algorithm. Finally, we establish the precise relation between the max-product algorithm and the celebrated {\em auction} algorithm proposed by Bertsekas. This suggests possible connections between dual algorithm and max-product algorithm for discrete optimization problems.Comment: In the proceedings of the 2005 IEEE International Symposium on Information Theor

    Extracting Rules from Neural Networks with Partial Interpretations

    We investigate the problem of extracting rules, expressed in Horn logic, from neural network models. Our work is based on the exact learning model, in which a learner interacts with a teacher (the neural network model) via queries in order to learn an abstract target concept, which in our case is a set of Horn rules. We consider partial interpretations to formulate the queries. These can be understood as representations of a world in which part of the knowledge regarding the truth of propositions is unknown. We employ Angluin's algorithm for learning Horn rules via queries and evaluate our strategy empirically.
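
    As a small illustration of the query objects the abstract describes, the sketch below evaluates a single Horn rule under a partial interpretation with three truth values (True, False, and unknown). It is only a hedged sketch of that evaluation, with an assumed encoding of rules and interpretations, not the paper's exact-learning procedure.

        # Three-valued check of 'body -> head' under a partial interpretation:
        # a dict mapping each proposition to True, False, or None (unknown).
        def eval_horn_rule(body, head, interpretation):
            body_vals = [interpretation.get(p) for p in body]
            head_val = interpretation.get(head) if head is not None else False
            if head_val is True or any(v is False for v in body_vals):
                return True              # head holds, or some body atom is false
            if all(v is True for v in body_vals) and head_val is False:
                return False             # body holds but head is false: rule violated
            return None                  # not enough information to decide

        # e.g. eval_horn_rule({"p", "q"}, "r", {"p": True, "q": None, "r": False}) -> None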