
    Automatic Abstraction for Congruences

    One approach to verifying bit-twiddling algorithms is to derive invariants between the bits that constitute the variables of a program. Such invariants can often be described with systems of congruences where, in each equation $\vec{c} \cdot \vec{x} = d \bmod m$, $m$ is a power of two, $\vec{c}$ is a vector of integer coefficients, and $\vec{x}$ is a vector of propositional variables (bits). Because of the low-level nature of these invariants and the large number of bits that are involved, it is important that the transfer functions can be derived automatically. We address this problem, showing how an analysis for bit-level congruence relationships can be decoupled into two parts: (1) a SAT-based abstraction (compilation) step which can be automated, and (2) an interpretation step that requires no SAT-solving. We exploit triangular matrix forms to derive transfer functions efficiently, even in the presence of large numbers of bits. Finally, we propose program transformations that improve the analysis results.
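    To make the shape of such invariants concrete, here is a minimal Python sketch (an illustration, not the paper's SAT-based derivation) that brute-forces candidate bit-level congruences for one bit-twiddling snippet, the Gray-code step y = x ^ (x >> 1); the word width W and all helper names are assumptions made for the example.

        # Brute-force check of bit-level congruence invariants for
        # y = x ^ (x >> 1) on W-bit words (illustrative only).
        W = 4  # assumed word width

        def bits(v, w=W):
            """Little-endian list of the w bits of v."""
            return [(v >> i) & 1 for i in range(w)]

        def holds(coeffs, d, m):
            """Does coeffs . (x-bits ++ y-bits) = d (mod m) for every input x?"""
            for x in range(1 << W):
                y = x ^ (x >> 1)         # the program fragment being analysed
                vec = bits(x) + bits(y)  # propositional variables x_0..x_3, y_0..y_3
                if sum(c * b for c, b in zip(coeffs, vec)) % m != d % m:
                    return False
            return True

        # Candidate invariants: x_i + x_{i+1} + y_i = 0 (mod 2) for i < W-1,
        # which hold because bit i of y equals x_i XOR x_{i+1}.
        for i in range(W - 1):
            c = [0] * (2 * W)
            c[i] = c[i + 1] = 1   # coefficients of x_i and x_{i+1}
            c[W + i] = 1          # coefficient of y_i
            assert holds(c, 0, 2)
        print("all candidate congruences hold")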

    Computing Tournament Solutions using Relation Algebra and RelView

    We describe a simple computing technique for the tournament choice problem. It rests upon a relational modeling and uses the BDD-based computer system RelView for the evaluation of the relation-algebraic expressions that specify the solutions and for the visualization of the computed results. The Copeland set can immediately be identified using RelView's labeling feature. Relation-algebraic specifications of the Condorcet non-losers, the Schwartz set, the top cycle, the uncovered set, the minimal covering set, the Banks set, and the tournament equilibrium set are delivered. We present an example of a tournament on a small set of alternatives, for which the above choice sets are computed and visualized via RelView. The technique described in this paper is very flexible and especially appropriate for prototyping and experimentation, and as such very instructive for educational purposes. It can easily be applied to other problems of social choice and game theory.
    Keywords: Tournament, relational algebra, RelView, Copeland set, Condorcet non-losers, Schwartz set, top cycle, uncovered set, minimal covering set, Banks set, tournament equilibrium set.
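    As a plain-Python illustration of the simplest of these choice sets, the following sketch computes the Copeland set (the alternatives that defeat a maximal number of others) directly from a dictionary representation of a tournament; the paper itself evaluates relation-algebraic specifications with the BDD-based RelView system rather than code like this, and the example tournament is invented.

        def copeland_set(beats):
            """beats[a] = set of alternatives that a defeats in the tournament."""
            scores = {a: len(opponents) for a, opponents in beats.items()}
            best = max(scores.values())
            return {a for a, score in scores.items() if score == best}

        # Example: a 3-cycle a -> b -> c -> a, with d losing to everyone.
        tournament = {"a": {"b", "d"}, "b": {"c", "d"}, "c": {"a", "d"}, "d": set()}
        print(copeland_set(tournament))  # {'a', 'b', 'c'}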

    Inference as a data management problem

    Inference over OWL ontologies with large A-Boxes has been researched as a data management problem in recent years. This work adopts the strategy of applying a tableaux-based reasoner for complete T-Box classification, and using a rule-based mechanism for scalable A-Box reasoning. Specifically, we establish for the classified T-Box an inference framework, which can be used to compute and materialise inference results. The inference we focus on is type inference in A-Box reasoning, which we define as the process of deriving, for each A-Box instance, its memberships of OWL classes and properties. As our approach materialises the inference results, it in general provides faster query processing than non-materialising techniques, at the expense of a larger space requirement and slower updates. When the A-Box size is suitable for an RDBMS, we compile the inference framework to triggers, which incrementally update the inference materialisation on both data inserts and data deletes, without needing to re-compute the whole inference. More importantly, triggers make inference available as atomic consequences of inserts or deletes, which preserves the ACID properties of transactions; such inference is known as transactional reasoning. When the A-Box size is beyond the capability of an RDBMS, we instead compile the inference framework to Spark programs, which provide scalable inference materialisation in a Big Data system; our evaluation considers reasoning over up to 270 million A-Box facts. Evaluating our work and comparing it with two state-of-the-art reasoners, we empirically verify that our approach is able to perform scalable inference materialisation and to provide faster query processing with comparable completeness of reasoning.
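    The following Python sketch shows, under heavily simplifying assumptions (a toy T-Box of subclass edges and invented names; the paper compiles to SQL triggers and Spark programs), the core idea of materialising type inference incrementally on insert. Incremental deletes are the harder case, since a materialised fact may have several derivations, which is why the trigger compilation in the paper handles inserts and deletes separately.

        from collections import defaultdict

        # Assumed toy T-Box: direct rdfs:subClassOf edges.
        sub_class_of = {"GradStudent": {"Student"}, "Student": {"Person"}}

        def superclasses(cls):
            """All classes entailed for an instance of cls (reflexive-transitive closure)."""
            seen, stack = set(), [cls]
            while stack:
                c = stack.pop()
                if c not in seen:
                    seen.add(c)
                    stack.extend(sub_class_of.get(c, ()))
            return seen

        types = defaultdict(set)  # materialised type assertions: instance -> classes

        def insert_type(instance, cls):
            """Trigger-style incremental insert: add all entailed memberships."""
            types[instance] |= superclasses(cls)

        insert_type("alice", "GradStudent")
        print(types["alice"])  # {'GradStudent', 'Student', 'Person'}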

    Predictability: a way to characterize Complexity

    Different aspects of the predictability problem in dynamical systems are reviewed. The deep relation among Lyapunov exponents, Kolmogorov-Sinai entropy, Shannon entropy and algorithmic complexity is discussed. In particular, we emphasize how a characterization of the unpredictability of a system gives a measure of its complexity. Adopting this point of view, we review some developments in the characterization of the predictability of systems showing different kinds of complexity: from low-dimensional systems to high-dimensional ones with spatio-temporal chaos, and on to fully developed turbulence. Special attention is devoted to finite-time and finite-resolution effects on predictability, which can be accounted for with suitable generalizations of the standard indicators. The problems that arise in systems with intrinsic randomness are discussed, with emphasis on the important problems of distinguishing chaos from noise and of modeling the system. The characterization of irregular behavior in systems with discrete phase space is also considered.
    Comment: 142 LaTeX pages, 41 included EPS figures; submitted to Physics Reports. Related information at http://axtnt2.phys.uniroma1.i
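    As a small concrete instance of the Lyapunov-exponent indicator mentioned above, this Python sketch (an illustration with assumed parameter values, not taken from the review) estimates the exponent of the logistic map x' = r*x*(1-x) as the time average of ln|r*(1-2x)|; a positive value signals exponential divergence of nearby trajectories and hence limited predictability.

        import math

        def lyapunov_logistic(r, x0=0.3, n_transient=1000, n_samples=100_000):
            """Time-average of ln|f'(x)| along an orbit of the logistic map."""
            x = x0
            for _ in range(n_transient):  # discard the transient
                x = r * x * (1 - x)
            total = 0.0
            for _ in range(n_samples):
                x = r * x * (1 - x)
                total += math.log(abs(r * (1 - 2 * x)))
            return total / n_samples

        print(lyapunov_logistic(4.0))  # ~ln 2 = 0.693: chaotic, unpredictable
        print(lyapunov_logistic(3.2))  # negative: periodic, predictable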

    Compositional Mining of Multi-Relational Biological Datasets

    High-throughput biological screens are yielding ever-growing streams of information about multiple aspects of cellular activity. As more and more categories of datasets come online, there is a corresponding multitude of ways in which inferences can be chained across them, motivating the need for compositional data mining algorithms. In this paper, we argue that such compositional data mining can be effectively realized by functionally cascading redescription mining and biclustering algorithms as primitives. Both these primitives mirror shifts of vocabulary that can be composed in arbitrary ways to create rich chains of inferences. Given a relational database and its schema, we show how the schema can be automatically compiled into a compositional data mining program, and how different domains in the schema can be related through logical sequences of biclustering and redescription invocations. This feature allows us to rapidly prototype new data mining applications, yielding greater understanding of scientific datasets. We describe two applications of compositional data mining: (i) matching terms across categories of the Gene Ontology and (ii) understanding the molecular mechanisms underlying stress response in human cells.
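    To make the redescription primitive concrete, here is a toy Python sketch (hypothetical terms and gene sets; not the paper's algorithm, which composes redescriptions with biclustering over a relational schema): a redescription pairs descriptors from two vocabularies whose extents over the same objects nearly coincide, measured here by Jaccard similarity.

        def jaccard(a, b):
            """Overlap of two sets as |intersection| / |union|."""
            return len(a & b) / len(a | b) if a | b else 0.0

        def redescriptions(vocab1, vocab2, threshold=0.7):
            """Yield (term1, term2, score) pairs whose extents nearly coincide."""
            for t1, ext1 in vocab1.items():
                for t2, ext2 in vocab2.items():
                    score = jaccard(ext1, ext2)
                    if score >= threshold:
                        yield t1, t2, score

        # Hypothetical GO-style terms from two categories, with gene extents.
        process = {"stress_response": {"g1", "g2", "g3", "g4"}}
        function = {"chaperone_activity": {"g1", "g2", "g3"}}
        print(list(redescriptions(process, function)))  # one pair, score 0.75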