A Framework to Synergize Partial Order Reduction with State Interpolation
We address the problem of reasoning about interleavings in safety
verification of concurrent programs. In the literature, there are two prominent
techniques for pruning the search space. First, there are well-investigated
trace-based methods, collectively known as "Partial Order Reduction (POR)",
which operate by weakening the concept of a trace by abstracting the total
order of its transitions into a partial order. Second, there is state-based
interpolation where a collection of formulas can be generalized by taking into
account the property to be verified. Our main contribution is a framework that
synergistically combines POR with state interpolation so that the whole is
more than the sum of its parts.
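As a toy illustration of the trace-equivalence idea behind POR (a hedged sketch, not the paper's algorithm), the following Python snippet models transitions as pure functions on states and checks the commutation condition under which POR may explore a single representative interleaving; all names are illustrative.

```python
def independent(t1, t2, state):
    """Two transitions are independent at `state` if executing them in
    either order reaches the same successor state; POR then needs to
    explore only one of the two interleavings."""
    return t1(t2(state)) == t2(t1(state))

# Two threads incrementing disjoint variables commute, so one
# interleaving can be pruned:
inc_x = lambda s: {**s, "x": s["x"] + 1}
inc_y = lambda s: {**s, "y": s["y"] + 1}
print(independent(inc_x, inc_y, {"x": 0, "y": 0}))        # True

# A write and a read on the same variable generally do not commute,
# so both orders must be explored:
write_x = lambda s: {**s, "x": 5}
read_to_y = lambda s: {**s, "y": s["x"]}
print(independent(write_x, read_to_y, {"x": 0, "y": 0}))  # False
```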
Backdoors to Normality for Disjunctive Logic Programs
Over the last two decades, propositional satisfiability (SAT) has become one
of the most successful and widely applied techniques for the solution of
NP-complete problems. The aim of this paper is to investigate theoretically how
SAT can be utilized for the efficient solution of problems that are harder than
NP or co-NP. In particular, we consider the fundamental reasoning problems in
propositional disjunctive answer set programming (ASP), Brave Reasoning and
Skeptical Reasoning, which ask whether a given atom is contained in at least
one or in all answer sets, respectively. Both problems are located at the
second level of the Polynomial Hierarchy and thus assumed to be harder than NP
or co-NP. One cannot transform these two reasoning problems into SAT in
polynomial time, unless the Polynomial Hierarchy collapses. We show that
certain structural aspects of disjunctive logic programs can be utilized to
break through this complexity barrier, using new techniques from Parameterized
Complexity. In particular, we exhibit transformations from Brave and Skeptical
Reasoning to SAT that run in time O(2^k n^2) where k is a structural parameter
of the instance and n the input size. In other words, the reduction is
fixed-parameter tractable for parameter k. As the parameter k we take the size
of a smallest backdoor with respect to the class of normal (i.e.,
disjunction-free) programs. Such a backdoor is a set of atoms that, when
deleted, makes the program normal. In consequence, the combinatorial explosion, which is
expected when transforming a problem from the second level of the Polynomial
Hierarchy to the first level, can now be confined to the parameter k, while the
running time of the reduction is polynomial in the input size n, where the
order of the polynomial is independent of k.
Comment: A short version will appear in the Proceedings of the 27th AAAI
Conference on Artificial Intelligence (AAAI'13). A preliminary version of the
paper was presented at the workshop Answer Set Programming and Other Computing
Paradigms (ASPOCP 2012), 5th International Workshop, September 4, 2012,
Budapest, Hungary.
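To make the shape of the backdoor-based reduction concrete, here is a hedged Python skeleton of the 2^k enumeration; the program representation and the three callables (`simplify`, `answer_sets_normal`, `is_answer_set`) are illustrative placeholders, not the paper's actual construction.

```python
from itertools import product

def brave_via_backdoor(program, backdoor, query,
                       simplify, answer_sets_normal, is_answer_set):
    """Skeleton of brave reasoning through a normality backdoor.

    backdoor: a set of k atoms whose deletion makes `program` normal.
    simplify(program, assignment): placeholder reducing the disjunctive
        program to a normal one under an assignment to backdoor atoms.
    answer_sets_normal(p): placeholder normal-ASP solver.
    is_answer_set(program, candidate): placeholder check of a candidate
        against the original disjunctive program.

    The outer loop makes 2^k passes, so the combinatorial explosion is
    confined to the parameter k while each pass stays polynomial in the
    input size n, mirroring the O(2^k n^2) bound in the abstract.
    """
    atoms = sorted(backdoor)
    for bits in product([False, True], repeat=len(atoms)):
        assignment = dict(zip(atoms, bits))
        guessed = {a for a, v in assignment.items() if v}
        for answer_set in answer_sets_normal(simplify(program, assignment)):
            candidate = set(answer_set) | guessed
            if is_answer_set(program, candidate) and query in candidate:
                return True   # some answer set contains the query atom
    return False
```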
A Subsampling Line-Search Method with Second-Order Results
In many contemporary optimization problems such as those arising in machine
learning, it can be computationally challenging or even infeasible to evaluate
an entire function or its derivatives. This motivates the use of stochastic
algorithms that sample problem data, which can jeopardize the guarantees
obtained through classical globalization techniques in optimization such as a
trust region or a line search. Using subsampled function values is particularly
challenging for the latter strategy, which relies upon multiple evaluations. On
top of that, there has been increasing interest in nonconvex
formulations of data-related problems, such as training deep learning models.
For such instances, one aims at developing methods that converge to
second-order stationary points quickly, i.e., escape saddle points efficiently.
This is particularly delicate to ensure when one only accesses subsampled
approximations of the objective and its derivatives.
In this paper, we describe a stochastic algorithm based on negative curvature
and Newton-type directions that are computed for a subsampling model of the
objective. A line-search technique is used to enforce suitable decrease for
this model, and for a sufficiently large sample, a similar amount of reduction
holds for the true objective. By using probabilistic reasoning, we can then
obtain worst-case complexity guarantees for our framework, leading us to
discuss appropriate notions of stationarity in a subsampling context. Our
analysis encompasses the deterministic regime, and allows us to identify
sampling requirements for second-order line-search paradigms. As we illustrate
through real data experiments, these worst-case estimates need not be satisfied
for our method to be competitive with first-order strategies in practice.
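A hedged numpy sketch of one step of such a scheme follows; the sampling interface, the eigenvalue threshold, and the simplified Armijo test are illustrative choices, not the paper's exact conditions.

```python
import numpy as np

def subsampled_linesearch_step(x, f_i, grad_i, hess_i, n, sample_size,
                               rng, eps=1e-6, beta=0.5, c=1e-4):
    """One line-search step on a subsampled model (illustrative).

    f_i(x, i), grad_i(x, i), hess_i(x, i) evaluate the i-th data term;
    the same subsample S defines the model throughout the step.
    """
    S = rng.choice(n, size=sample_size, replace=False)
    g = np.mean([grad_i(x, i) for i in S], axis=0)
    H = np.mean([hess_i(x, i) for i in S], axis=0)

    eigvals, eigvecs = np.linalg.eigh(H)   # ascending eigenvalues
    if eigvals[0] < -eps:
        # Negative-curvature direction, oriented for descent, to help
        # escape saddle points of the subsampled model.
        d = eigvecs[:, 0]
        if g @ d > 0:
            d = -d
    else:
        # Regularized Newton direction on the subsampled model.
        d = -np.linalg.solve(H + eps * np.eye(x.size), g)

    # Backtracking line search enforcing decrease of the subsampled
    # objective (the paper's actual conditions also involve curvature).
    f_S = lambda y: np.mean([f_i(y, i) for i in S])
    t, f0 = 1.0, f_S(x)
    while f_S(x + t * d) > f0 + c * t * (g @ d) and t > 1e-10:
        t *= beta
    return x + t * d
```

Repeating this step with fresh samples gives a stochastic iteration of the kind analyzed in the paper; taking sample_size = n recovers the deterministic regime mentioned in the abstract.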
Case-based maintenance: Structuring and incrementing the case base
To avoid performance degradation and maintain the quality of results obtained by case-based reasoning (CBR) systems, maintenance becomes necessary, especially for systems designed to operate over long periods and to handle large numbers of cases. A CBR system cannot be preserved without scanning its case base, so the case base must undergo maintenance operations. Techniques for optimizing the dimension of a case base are the analog of instance-size reduction methodologies in the machine learning community. This study links these techniques by presenting case-based maintenance (CBM) in the framework of instance-based reduction, and provides: first, an overview of CBM studies; second, a novel method for structuring and updating the case base; and finally, an application to an industrial case. The structuring combines a categorization algorithm with a competence measure CM based on competence and performance criteria. Since the case base must grow over time through the addition of new cases, an auto-increment algorithm is installed to dynamically maintain the structuring and the quality of the case base. The proposed method was evaluated on a case base from an industrial plant. In addition, an experimental study of competence and performance was undertaken on reference benchmarks. This study showed that the proposed method gives better results than the best methods currently found in the literature.
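The abstract does not spell out the CM measure; as a hedged stand-in, the following Python sketch uses the classic coverage/reachability notions from the CBM literature (Smyth and Keane's competence model), with `solves` as a domain-specific placeholder predicate.

```python
def coverage(case, case_base, solves):
    """Cases that `case` can solve by adaptation (its coverage set)."""
    return {c for c in case_base if solves(case, c)}

def reachability(case, case_base, solves):
    """Cases that can solve `case` (its reachability set)."""
    return {c for c in case_base if solves(c, case)}

def relative_competence(case, case_base, solves):
    """Toy competence score: each covered case is weighted down by how
    many other cases could also solve it, so redundant cases score low.
    solves(a, b) is a placeholder: can a's solution be adapted to b?"""
    return sum(1.0 / max(1, len(reachability(c, case_base, solves)))
               for c in coverage(case, case_base, solves))

def maintain(case_base, solves, keep_ratio=0.5):
    """Keep the top fraction of cases by competence (toy reduction)."""
    ranked = sorted(case_base,
                    key=lambda c: relative_competence(c, case_base, solves),
                    reverse=True)
    return ranked[:max(1, int(keep_ratio * len(ranked)))]
```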
Anytime Computation of Cautious Consequences in Answer Set Programming
Query answering in Answer Set Programming (ASP) is usually solved by
computing (a subset of) the cautious consequences of a logic program. This task
is computationally very hard, and there are programs for which computing
cautious consequences is not viable in reasonable time. However, current ASP
solvers produce the (whole) set of cautious consequences only at the end of
their computation. This paper reports on strategies for computing cautious
consequences, also introducing anytime algorithms able to produce sound answers
during the computation.
Comment: To appear in Theory and Practice of Logic Programming.
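One standard way to obtain sound answers anytime (a hedged sketch, not necessarily the paper's strategies): certify atoms one at a time as an underestimate of the cautious consequences, while shrinking an overestimate by intersecting newly found answer sets. The `solve` and `report` callables are hypothetical hooks.

```python
def cautious_anytime(program, solve, report):
    """Anytime computation of cautious consequences (illustrative).

    solve(program, assumptions): hypothetical ASP-solver hook returning
        an answer set (a set of atoms) consistent with `assumptions`
        (literals such as ('not', a)), or None if there is none.
    report(atom): callback fired as soon as an atom is *proved*
        cautious; this is what makes the procedure anytime, since sound
        answers are emitted before the computation finishes.
    """
    first = solve(program, assumptions=set())
    if first is None:
        return None                      # incoherent: no answer sets
    overestimate = set(first)            # cautious atoms lie within this
    certified = set()
    for atom in sorted(first):
        if atom not in overestimate or atom in certified:
            continue                     # already refuted or proved
        # Look for a counterexample: an answer set without `atom`.
        witness = solve(program, assumptions={("not", atom)})
        if witness is None:
            certified.add(atom)          # atom is in every answer set
            report(atom)                 # sound answer, emitted early
        else:
            overestimate &= set(witness) # shrink by intersection
    return certified
```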
Constraining Montague Grammar for computational applications
This work develops efficient methods for the implementation of Montague Grammar on
a computer. It covers both the syntactic and the semantic aspects of that task. Using a
simplified but adequate version of Montague Grammar it is shown how to translate from
an English fragment to a purely extensional first-order language which can then be made
amenable to standard automatic theorem-proving techniques.
Translating a sentence of Montague English into the first-order predicate calculus
usually proceeds via an intermediate translation in the typed lambda calculus which is
then simplified by lambda-reduction to obtain a first-order equivalent. If sufficient sortal
structure underlies the type theory for the reduced translation to always be a
first-order one, then perhaps it should be constructed directly during the syntactic analysis of the
sentence so that the lambda-expressions never come into existence and no further
processing is necessary. A method is proposed to achieve this involving the unification
of meta-logical expressions which flesh out the type symbols of Montague's type theory
with first-order schemas.
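As a standard worked example of this pipeline (illustrative, not drawn from the thesis itself): the determiner "every" receives a typed lambda term, and beta-reduction of its application yields a purely first-order sentence.

```latex
% "Every man walks": the Montague-style translation of "every"
% applied to the predicates man' and walk', then beta-reduced.
\[
  \mathit{every}' \;=\; \lambda P\,\lambda Q\,\forall x\,(P(x) \to Q(x))
\]
\[
  \mathit{every}'(\mathit{man}')(\mathit{walk}')
  \;\twoheadrightarrow_{\beta}\;
  \forall x\,(\mathit{man}(x) \to \mathit{walk}(x))
\]
% The reduced form is first-order, so a standard theorem prover can
% handle it with no typed lambda calculus machinery remaining.
```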
It is then shown how to implement Montague Semantics without using a theorem prover
for type theory. Nothing more than a theorem prover for the first-order predicate
calculus is required. The first-order system can be used directly without encoding the
whole of type theory. It is only necessary to encode a part of second-order logic and
this can be done in an efficient, succinct, and readable manner. Furthermore, the
pseudo-second-order terms need never appear in any translations provided by the parser.
They are vital just when higher-order reasoning must be simulated.
The foundation of this approach is its five-sorted theory of Montague Semantics. The
objects in this theory are entities, indices, propositions, properties, and quantities. It is a
theory which can be expressed in the language of first-order logic by means of axiom
schemas and there is a finite second-order axiomatisation which is the basis for the
theorem-proving arrangement. It can be viewed as a very constrained set theory
- …