151 research outputs found
A "Best-of-Breed" approach for designing a fast algorithm for computing fixpoints of Galois Connections
The fixpoints of Galois Connections form patterns in binary relational data, such as object-attribute relations, that are important in a number of data analysis fields, including Formal Concept Analysis (FCA), Boolean factor analysis and frequent itemset mining. However, the large number of such fixpoints present in a typical dataset requires efficient computation to make analysis tractable, particularly since any particular fixpoint may be computed many times. Because they can be computed in a canonical order, testing the canonicity of fixpoints to avoid duplicates has proven to be a key factor in the design of efficient algorithms. The most efficient of these algorithms have been variants of the Close-By-One (CbO) algorithm. In this article, the algorithms CbO, FCbO, In-Close, In-Close2 and a new variant, In-Close3, are presented together for the first time, with In-Close2 and In-Close3 being the results of breeding In-Close with FCbO. To allow them to be easily compared, the algorithms are presented in the same style and notation. The important advances in CbO are described and compared graphically using a simple example. For the first time, the algorithms are implemented using the same structures and techniques to provide a level playing field for evaluation. Their performance is tested and compared using a range of data sets, and the most important features are identified for a CbO "Best-of-Breed". This article also presents, for the first time, the "partial-closure" canonicity test.
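The canonicity test that this family of algorithms shares can be illustrated with a minimal sketch, not taken from the article itself; function names and the set-based representation are illustrative. A closure is accepted only if it introduces no attribute smaller than the one just added, so each concept is generated from exactly one branch of the recursion.

```python
# A minimal sketch of the Close-by-One (CbO) scheme: enumerate all formal
# concepts (extent, intent) of a 0/1 context, using the canonicity test to
# avoid generating the same concept twice. Rows are objects, columns are
# attributes; names and representation are illustrative, not the article's.

def close_by_one(context):
    """Return all formal concepts of a binary context as (extent, intent)."""
    n_objs = len(context)
    n_attrs = len(context[0]) if context else 0
    all_objects = frozenset(range(n_objs))

    def intent_of(extent):
        # Attributes shared by every object in the extent.
        return frozenset(j for j in range(n_attrs)
                         if all(context[g][j] for g in extent))

    def extent_of_attr(j):
        return frozenset(g for g in range(n_objs) if context[g][j])

    concepts = []

    def cbo(extent, intent, y):
        concepts.append((extent, intent))
        for j in range(y, n_attrs):
            if j in intent:
                continue
            new_extent = extent & extent_of_attr(j)
            new_intent = intent_of(new_extent)
            # Canonicity test: the closure may only add attributes >= j;
            # if it adds a smaller one, this concept is generated on
            # another branch and is skipped here.
            if all(k in intent for k in new_intent if k < j):
                cbo(new_extent, new_intent, j + 1)

    cbo(all_objects, intent_of(all_objects), 0)
    return concepts
```

On the 3x3 context where each object lacks exactly one attribute, this yields all eight concepts, each exactly once, without any duplicate-lookup structure.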
Distributed Computation of Generalized One-Sided Concept Lattices on Sparse Data Tables
In this paper we present a study on the usage of the distributed version of the algorithm for generalized one-sided concept lattices (GOSCL), which provides a special case of the fuzzy version of the data analysis approach called formal concept analysis (FCA). Methods of this type create a conceptual model of the input data based on the theory of concept lattices and have been successfully applied in several domains. GOSCL is able to create one-sided concept lattices for data tables with different attribute types processed as fuzzy sets. One of the problems with the creation of FCA-based models is their computational complexity. In order to reduce the computation times, we have designed a distributed version of the GOSCL algorithm. The algorithm works well especially for data where the number of newly generated concepts is reduced, i.e., for sparse input data tables, which are common in domains like text mining and information retrieval. Therefore, we present experimental results on sparse data tables in order to show the applicability of the algorithm on generated data and selected text-mining datasets.
Generalized Strong Preservation by Abstract Interpretation
Standard abstract model checking relies on abstract Kripke structures which approximate concrete models by gluing together indistinguishable states, namely by a partition of the concrete state space. Strong preservation for a specification language L encodes the equivalence of concrete and abstract model checking of formulas in L. We show how abstract interpretation can be used to design abstract models that are more general than abstract Kripke structures. Accordingly, strong preservation is generalized to abstract interpretation-based models and precisely related to the concept of completeness in abstract interpretation. The problem of minimally refining an abstract model in order to make it strongly preserving for some language L can be formulated as a minimal domain refinement in abstract interpretation in order to get completeness w.r.t. the logical/temporal operators of L. It turns out that this refined strongly preserving abstract model always exists and can be characterized as a greatest fixed point. As a consequence, some well-known behavioural equivalences, like bisimulation, simulation and stuttering, and their corresponding partition refinement algorithms can be elegantly characterized in abstract interpretation as completeness properties and refinements.
Contracts of Reactivity
We present a theory of contracts that is centered around reacting to failures and explore it from a general assume-guarantee perspective as well as from a concrete context of automated synthesis from linear temporal logic (LTL) specifications, all of which are compliant with a contract metatheory introduced by Benveniste et al. We also show how to obtain an automated procedure for synthesizing reactive assume-guarantee contracts and implementations that capture ideas like optimality and robustness based on assume-guarantee lattices computed from antitone Galois connection fixpoints. Lastly, we provide an example of a "reactive GR(1)" contract and a simulation of its implementation.
Abstract Fixpoint Computations with Numerical Acceleration Methods
Static analysis by abstract interpretation aims at automatically proving properties of computer programs. To do this, an over-approximation of the program semantics, defined as the least fixpoint of a system of semantic equations, must be computed. To enforce the convergence of this computation, a widening operator is used, but it may lead to coarse results. We propose a new method to accelerate the computation of this fixpoint by using standard techniques of numerical analysis. Our goal is to automatically and dynamically adapt the widening operator in order to maintain precision.
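The precision problem that motivates this work can be seen in a toy example. The sketch below is not the paper's accelerated method but the standard interval-domain analysis it improves on: for the loop `x = 0; while x < 100: x = x + 1`, plain Kleene iteration would need about a hundred steps, while the classical widening converges in two but loses the upper bound entirely.

```python
# Interval-domain analysis with the classical widening operator, applied to
#   x = 0; while x < 100: x = x + 1
# Widening keeps stable bounds and jumps unstable ones to infinity, which
# forces convergence but here yields the coarse invariant x in [0, +inf).

INF = float("inf")

def join(a, b):
    # Least upper bound of two intervals.
    return (min(a[0], b[0]), max(a[1], b[1]))

def widen(old, new):
    # Keep bounds that are stable; push growing bounds to +/- infinity.
    lo = old[0] if new[0] >= old[0] else -INF
    hi = old[1] if new[1] <= old[1] else INF
    return (lo, hi)

def loop_body(x):
    # Abstract transformer: filter by the guard x < 100, then add 1.
    lo, hi = x
    lo, hi = lo, min(hi, 99)        # guard x < 100
    return (lo + 1, hi + 1)         # x = x + 1

def analyze():
    x = (0, 0)                      # entry state: x = 0
    while True:
        nxt = widen(x, join(x, loop_body(x)))
        if nxt == x:                # post-fixpoint reached
            return x
        x = nxt
```

The exact loop-head invariant is [0, 100], but `analyze()` returns the coarser [0, +inf); recovering such lost bounds (e.g., by narrowing or, as proposed above, numerically adapting the widening) is precisely the issue the abstract addresses.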
Decidability and Synthesis of Abstract Inductive Invariants
Decidability and synthesis of inductive invariants ranging in a given domain play an important role in many software and hardware verification systems. We consider here inductive invariants belonging to an abstract domain A as defined in abstract interpretation, namely, ensuring the existence of the best approximation in A of any system property. In this setting, we study the decidability of the existence of abstract inductive invariants in A of transition systems and their corresponding algorithmic synthesis. Our model relies on some general results which relate the existence of abstract inductive invariants with least fixed points of best correct approximations in A of the transfer functions of transition systems and their completeness properties. This approach allows us to derive decidability and synthesis results for abstract inductive invariants, which are applied to the well-known Kildall's constant propagation and Karr's affine equalities abstract domains. Moreover, we show that a recent general algorithm for synthesizing inductive invariants in domains of logical formulae can be systematically derived from our results and generalized to a range of algorithms for computing abstract inductive invariants.
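Kildall's constant-propagation domain, one of the two domains the abstract mentions, can be sketched as follows. This is a generic toy illustration, not the paper's construction: each variable maps to either an integer constant or TOP (unknown), and the least fixpoint of the abstract transfer functions over a control-flow graph is an abstract inductive invariant.

```python
# A toy Kildall-style constant propagation: compute, as a least fixpoint,
# an environment per CFG node mapping variables to a constant or TOP.
# The CFG is given as edges (src, dst, (var, expr)); names are illustrative.

TOP = "top"

def join_env(e1, e2):
    # Pointwise join; None encodes "unreachable" (bottom), and two
    # different constants join to TOP.
    if e1 is None:
        return e2
    if e2 is None:
        return e1
    out = {}
    for v in set(e1) | set(e2):
        a, b = e1.get(v, TOP), e2.get(v, TOP)
        out[v] = a if a == b else TOP
    return out

def eval_expr(expr, env):
    # expr is an int literal, a variable name, or ('+', e1, e2).
    if isinstance(expr, int):
        return expr
    if isinstance(expr, str):
        return env.get(expr, TOP)
    _, a, b = expr
    va, vb = eval_expr(a, env), eval_expr(b, env)
    return TOP if TOP in (va, vb) else va + vb

def constant_propagation(edges, entry, nodes):
    """Least fixpoint of the abstract transfer functions (chaotic iteration)."""
    envs = {n: None for n in nodes}
    envs[entry] = {}
    changed = True
    while changed:
        changed = False
        for src, dst, (var, expr) in edges:
            if envs[src] is None:
                continue            # source node not yet reachable
            out = dict(envs[src])
            out[var] = eval_expr(expr, envs[src])
            new = join_env(envs[dst], out)
            if new != envs[dst]:
                envs[dst] = new
                changed = True
    return envs
```

For a CFG encoding `x = 1; y = 2; loop: z = x + y; w = z`, the fixpoint proves `z = 3` at the node after the addition, while `z` is TOP at the loop head where two incoming values disagree.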
A parallel version of the in-close algorithm
This research paper presents a new parallel algorithm for computing the formal concepts in a formal context. The proposed shared-memory parallel algorithm, Parallel-Task-In-Close3, parallelizes Andrews's In-Close3 serial algorithm. The paper presents the key parallelization strategy used and experimental results obtained with the OpenMP framework.
A new method for inheriting canonicity test failures in Close-by-One type algorithms
Close-by-One type algorithms are efficient algorithms for computing formal concepts. They use a mathematical canonicity test to avoid the repeated computation of the same concept, which is far more efficient than methods based on searching. Nevertheless, the canonicity test is still the most labour-intensive part of Close-by-One algorithms, and various means of avoiding the test have been devised, including the ability to inherit test failures at the next level of recursion. This paper presents a new method for inheriting canonicity test failures in Close-by-One type algorithms. The new method is simpler than the existing method and can be amalgamated with other algorithm features to further improve efficiency. The paper recaps an existing algorithm that does not feature test failure inheritance and an algorithm that features the existing method. The paper then presents the new method and a new algorithm that incorporates it. The three algorithms are implemented on a 'level playing field' with the same level of optimisation. Experiments are carried out on the implemented algorithms, using a representative range of data sets, to compare the number of inherited canonicity test failures and the computation times. It is shown that the new algorithm, incorporating the new method of inheriting canonicity test failures, gives the best performance.
- …