Fuzzy Maximum Satisfiability
In this paper, we extend the Maximum Satisfiability (MaxSAT) problem to Łukasiewicz logic. The MaxSAT problem for a set of formulae Φ is the problem of finding an assignment to the variables in Φ that satisfies the maximum number of formulae. Three possible solutions (encodings) to the new problem are proposed: (1) Disjunctive Linear Relations (DLRs), (2) Mixed Integer Linear Programming (MILP), and (3) Weighted Constraint Satisfaction Problem (WCSP). Like its Boolean counterpart, the extended fuzzy MaxSAT will have numerous applications in optimization problems that involve vagueness.
Comment: 10 pages
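A toy illustration of the problem statement above, assuming a clause-level fragment (disjunctions of literals): under Łukasiewicz semantics, negation is 1 - x and strong disjunction is min(1, sum of literal values), and a formula counts as satisfied when it evaluates to 1. The brute-force grid search below is only a sketch for intuition, not one of the three proposed encodings.

```python
import itertools

def clause_value(clause, assignment):
    # clause: list of (var_index, negated) literals; the Łukasiewicz
    # strong disjunction of literal values v1..vk is min(1, v1 + ... + vk),
    # with negation interpreted as 1 - x.
    total = sum((1 - assignment[v]) if neg else assignment[v]
                for v, neg in clause)
    return min(1.0, total)

def fuzzy_maxsat_grid(clauses, n_vars, steps=4):
    # Exhaustive search over a discretised [0, 1]^n grid: a toy
    # stand-in for the DLR / MILP / WCSP encodings.
    grid = [i / steps for i in range(steps + 1)]
    best, best_assign = -1, None
    for point in itertools.product(grid, repeat=n_vars):
        sat = sum(1 for c in clauses if clause_value(c, point) >= 1.0)
        if sat > best:
            best, best_assign = sat, point
    return best, best_assign

# Two clauses over x0, x1: (x0 OR x1) and (NOT x0 OR NOT x1); the fuzzy
# assignment x0 = x1 = 0.5 satisfies both simultaneously.
clauses = [[(0, False), (1, False)], [(0, True), (1, True)]]
best, assign = fuzzy_maxsat_grid(clauses, n_vars=2)
```

Unlike a Boolean search, the fuzzy setting admits intermediate assignments such as 0.5 that make both clauses evaluate to exactly 1 under strong disjunction.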
Learning to Generate Genotypes with Neural Networks
Neural networks and evolutionary computation have a rich intertwined history. They most commonly appear together when an evolutionary algorithm optimises the parameters and topology of a neural network for reinforcement learning problems, or when a neural network is applied as a surrogate fitness function to aid the evolutionary optimisation of expensive fitness functions. In this paper we take a different approach, asking whether a neural network can be used to provide a mutation distribution for an evolutionary algorithm, and what advantages this approach may offer. Two modern neural network models are investigated: a Denoising Autoencoder modified to produce stochastic outputs, and the Neural Autoregressive Distribution Estimator. Results show that the neural network approach to learning genotypes is able to solve many difficult discrete problems, such as MaxSAT and HIFF, and regularly outperforms other evolutionary techniques.
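The loop described above can be sketched with a deliberately simplified stand-in for the neural model: the code below scores genotypes with the HIFF function and draws offspring from per-bit marginals of the elite, playing the role that the trained Denoising Autoencoder or NADE plays in the paper. The population sizes and the marginal model are illustrative assumptions only.

```python
import random

def hiff(bits):
    # HIFF fitness: a block scores its length at every hierarchy level
    # at which all of its bits agree; single bits always score 1.
    if len(bits) == 1:
        return 1
    half = len(bits) // 2
    score = hiff(bits[:half]) + hiff(bits[half:])
    if all(b == bits[0] for b in bits):
        score += len(bits)
    return score

def model_guided_ea(n=16, pop=40, gens=60, seed=0):
    rng = random.Random(seed)
    population = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=hiff, reverse=True)
        elite = population[: pop // 4]
        # Stand-in generative model: per-bit marginals of the elite.
        # In the paper this role is played by a trained DAE or NADE.
        marg = [sum(ind[i] for ind in elite) / len(elite) for i in range(n)]
        offspring = [[1 if rng.random() < p else 0 for p in marg]
                     for _ in range(pop - len(elite))]
        population = elite + offspring
    return max(hiff(ind) for ind in population)

best = model_guided_ea()
```

A univariate marginal model cannot capture the linkage structure that makes HIFF hard, which is precisely the gap the learned neural mutation distributions are meant to close.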
Special issue on logics and artificial intelligence
There is a significant range of ongoing challenges in artificial intelligence (AI) dealing with reasoning, planning, learning, perception and cognition, among others. In this scenario, many-valued logics emerge as a recurring topic in solutions to many of these AI problems. This special issue presents a brief introduction to the relation between logics and AI and collects recent research works on logic-based approaches in AI.
IMLI: An Incremental Framework for MaxSAT-Based Learning of Interpretable Classification Rules
The wide adoption of machine learning in critical domains such as medical diagnosis, law, and education has propelled the need for interpretable techniques, driven by the need for end users to understand the reasoning behind decisions made by learning systems. The computational intractability of interpretable learning has led practitioners to design heuristic techniques, which fail to provide sound handles to trade off accuracy and interpretability.
Motivated by the success of MaxSAT solvers over the past decade, a MaxSAT-based approach called MLIC was recently proposed that reduces the problem of learning interpretable rules expressed in Conjunctive Normal Form (CNF) to a MaxSAT query. While MLIC was shown to achieve accuracy similar to that of other state-of-the-art black-box classifiers while generating small interpretable CNF formulas, its runtime performance lags significantly and renders the approach unusable in practice. In this context, the authors raised the question: is it possible to achieve the best of both worlds, i.e., a sound framework for interpretable learning that can take advantage of MaxSAT solvers while scaling to real-world instances?
In this paper, we take a step towards answering the above question in the affirmative. We propose IMLI: an incremental MaxSAT-based framework that achieves scalable runtime performance via a partition-based training methodology. Extensive experiments on benchmarks from the UCI repository demonstrate that IMLI achieves up to three orders of magnitude runtime improvement without loss of accuracy or interpretability.
Comment: 10 pages, published in the proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (AIES 2019)
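For intuition about the rule language used above, here is a minimal sketch (not IMLI's learner or its output) of how a classification rule expressed in CNF is applied to boolean features: predict positive exactly when every clause contains at least one satisfied literal.

```python
def cnf_predict(rule, features):
    # rule: list of clauses; each clause is a list of
    # (feature_index, negated) literals. Predict positive iff every
    # clause has at least one satisfied literal.
    return all(
        any((not features[i]) if neg else features[i] for i, neg in clause)
        for clause in rule
    )

# Hypothetical two-clause rule over boolean features x0, x1, x2:
# (x0 OR NOT x2) AND x1; the rule itself is made up for illustration.
rule = [[(0, False), (2, True)], [(1, False)]]
pos = cnf_predict(rule, [True, True, False])    # every clause satisfied
neg = cnf_predict(rule, [False, False, True])   # first clause falsified
```

Small rules of this shape are what make the learned classifier directly readable by an end user.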
ASlib: A Benchmark Library for Algorithm Selection
The task of algorithm selection involves choosing an algorithm from a set of
algorithms on a per-instance basis in order to exploit the varying performance
of algorithms over a set of instances. The algorithm selection problem is
attracting increasing attention from researchers and practitioners in AI. Years
of fruitful applications in a number of domains have resulted in a large amount
of data, but the community lacks a standard format or repository for this data.
This situation makes it difficult to share and compare different approaches
effectively, as is done in other, more established fields. It also
unnecessarily hinders new researchers who want to work in this area. To address
this problem, we introduce a standardized format for representing algorithm
selection scenarios and a repository that contains a growing number of data
sets from the literature. Our format has been designed to be able to express a
wide variety of different scenarios. Demonstrating the breadth and power of our
platform, we describe a set of example experiments that build and evaluate
algorithm selection models through a common interface. The results display the
potential of algorithm selection to achieve significant performance
improvements across a broad range of problems and algorithms.
Comment: Accepted to be published in the Artificial Intelligence Journal
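A minimal sketch of per-instance algorithm selection in the spirit of an ASlib scenario: given instance features and past runtimes, pick for a new instance the algorithm that was fastest on the nearest training instance. The features, runtimes, and the 1-nearest-neighbour model are made-up illustrations, not part of the ASlib format itself.

```python
import math

# Toy scenario data: two instance features per training instance, and
# runtimes[i][a] = runtime of algorithm a on training instance i.
train_features = [[0.1, 0.9], [0.8, 0.2], [0.5, 0.5]]
runtimes = [
    [3.0, 1.0],
    [0.5, 4.0],
    [2.0, 2.5],
]

def select_algorithm(features):
    # 1-nearest-neighbour selector: find the closest training instance
    # in feature space and return the index of its fastest algorithm.
    dists = [math.dist(features, f) for f in train_features]
    nearest = dists.index(min(dists))
    times = runtimes[nearest]
    return times.index(min(times))

chosen = select_algorithm([0.9, 0.1])  # nearest is instance 1, so algorithm 0
```

A standardized scenario format lets selectors like this one be trained and compared on the same feature and performance data across domains.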
Momentum-inspired Low-Rank Coordinate Descent for Diagonally Constrained SDPs
We present a novel, practical, and provable approach for solving diagonally
constrained semi-definite programming (SDP) problems at scale using accelerated
non-convex programming. Our algorithm non-trivially combines acceleration
motions from convex optimization with coordinate power iteration and matrix
factorization techniques. The algorithm is extremely simple to implement, and
adds only a single extra hyperparameter -- momentum. We prove that our method
admits local linear convergence in the neighborhood of the optimum and always
converges to a first-order critical point. Experimentally, we showcase the
merits of our method on three major application domains: MaxCut, MaxSAT, and
MIMO signal detection. In all cases, our methodology provides significant
speedups over non-convex and convex SDP solvers -- 5X faster than
state-of-the-art non-convex solvers, and 9 to 10^3 X faster than convex SDP
solvers -- with comparable or improved solution quality.
Comment: 10 pages, 8 figures, preprint under review
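A rough sketch of low-rank coordinate descent for a diagonally constrained SDP, here the MaxCut relaxation: minimise sum_ij W_ij <v_i, v_j> with each row of the factor V on the unit sphere (the diagonal constraint X_ii = 1 under X = V V^T). The heavy-ball momentum term below is a guessed simplification; the paper's exact acceleration scheme may differ.

```python
import numpy as np

def lowrank_sdp_maxcut(W, k=4, iters=200, beta=0.2, seed=0):
    # Cyclically minimise over one row of V at a time; the closed-form
    # coordinate minimiser on the sphere is -g / ||g||, nudged here by
    # a heavy-ball momentum term (an assumed, simplified scheme).
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    V = rng.standard_normal((n, k))
    V /= np.linalg.norm(V, axis=1, keepdims=True)
    V_prev = V.copy()
    for _ in range(iters):
        for i in range(n):
            g = W[i] @ V                           # coordinate gradient for row i
            step = -g + beta * (V[i] - V_prev[i])  # momentum (assumption)
            V_prev[i] = V[i]
            norm = np.linalg.norm(step)
            if norm > 0:
                V[i] = step / norm
    return V

# 4-cycle graph: every edge can be cut, so the SDP optimum equals 4.
W = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
V = lowrank_sdp_maxcut(W)
cut_value = 0.25 * np.sum(W * (1 - V @ V.T))
```

Because each row update has a closed form and touches only one row, the iteration avoids the eigendecompositions that make interior-point SDP solvers expensive at scale.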