
    Logic Programming for Finding Models in the Logics of Knowledge and its Applications: A Case Study

    The logics of knowledge are modal logics that have been shown to be effective in representing and reasoning about knowledge in multi-agent domains. Relatively few computational frameworks have been proposed for computing models and useful transformations in logics of knowledge (e.g., to support multi-agent planning with knowledge actions and degrees of visibility). This paper explores the use of logic programming (LP) to encode interesting forms of logics of knowledge and to compute Kripke models. The LP modeling is extended with useful operators on Kripke structures to support multi-agent planning in the presence of both world-altering and knowledge actions. This results in the first implementation of a planner for this type of complex multi-agent domain.
    Comment: 16 pages, 1 figure, International Conference on Logic Programming 201
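    As a rough illustration of the objects involved (not the paper's encoding, which is expressed in logic programming rather than Python; all names below are hypothetical), a Kripke structure and the knowledge operator K_a can be sketched as follows:

```python
# Minimal sketch of a Kripke structure and the knowledge operator K_a.
# Illustrative only: the paper encodes these objects in logic programming,
# not Python; all names here are hypothetical.

class KripkeModel:
    def __init__(self, worlds, accessibility, valuation):
        self.worlds = worlds                  # set of world names
        self.accessibility = accessibility    # agent -> set of (w, w') pairs
        self.valuation = valuation            # world -> set of true propositions

    def holds(self, world, prop):
        return prop in self.valuation[world]

    def knows(self, agent, world, prop):
        # K_agent prop holds at `world` iff prop is true in every world
        # the agent considers possible from `world`.
        return all(self.holds(w2, prop)
                   for (w1, w2) in self.accessibility[agent]
                   if w1 == world)

# Two worlds that agent "a" cannot distinguish: the agent does not know p.
m = KripkeModel(
    worlds={"w1", "w2"},
    accessibility={"a": {("w1", "w1"), ("w1", "w2"), ("w2", "w1"), ("w2", "w2")}},
    valuation={"w1": {"p"}, "w2": set()},
)
print(m.knows("a", "w1", "p"))  # False: p fails in the accessible world w2
```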

    Sacrificing Accuracy for Reduced Computation: Cascaded Inference Based on Softmax Confidence

    We study the tradeoff between computational effort and accuracy in a cascade of deep neural networks. During inference, early termination in the cascade is controlled by confidence levels derived directly from the softmax outputs of intermediate classifiers. The advantage of early termination is that classification is performed using less computation, thus adjusting the computational effort to the complexity of the input. Moreover, dynamic modification of the confidence thresholds allows one to trade accuracy for computational effort without retraining. Basing early termination on softmax classifier outputs is justified by experiments that demonstrate an almost linear relation between the confidence levels of intermediate classifiers and accuracy. Our experiments with ResNet-based architectures obtained the following results: (i) a speedup of 1.5 that sacrifices 1.4% accuracy on the CIFAR-10 test set; (ii) a speedup of 1.19 that sacrifices 0.7% accuracy on the CIFAR-100 test set; (iii) a speedup of 2.16 that sacrifices 1.4% accuracy on the SVHN test set.
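    A minimal sketch of the early-termination rule described above, assuming per-stage classifiers that return logit vectors and hand-picked thresholds (the architectures, thresholds, and speedup figures reported in the paper are not reproduced here):

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def cascaded_predict(x, classifiers, thresholds):
    """Run classifiers in order of increasing cost; exit early once the
    softmax confidence (max class probability) clears the stage threshold.
    `thresholds` are plain inference-time parameters, so they can be tuned
    to trade accuracy for computation without retraining."""
    for clf, tau in zip(classifiers[:-1], thresholds):
        probs = softmax(clf(x))
        if probs.max() >= tau:
            return int(probs.argmax())
    # Fall back to the final (most expensive) classifier.
    return int(softmax(classifiers[-1](x)).argmax())

# Toy usage with stand-in "classifiers" (random linear models).
rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(10, 32)), rng.normal(size=(10, 32))
classifiers = [lambda x: W1 @ x, lambda x: W2 @ x]
x = rng.normal(size=32)
print(cascaded_predict(x, classifiers, thresholds=[0.9]))
```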

    Composition with Target Constraints

    It is known that the composition of schema mappings, each specified by source-to-target tgds (st-tgds), can be specified by a second-order tgd (SO tgd). We consider the question of what happens when target constraints are allowed. Specifically, we consider the question of specifying the composition of standard schema mappings (those specified by st-tgds, target egds, and a weakly acyclic set of target tgds). We show that SO tgds, even with the assistance of arbitrary source constraints and target constraints, cannot in general specify the composition of two standard schema mappings. Therefore, we introduce source-to-target second-order dependencies (st-SO dependencies), which are similar to SO tgds but allow equations in the conclusion. We show that st-SO dependencies (along with target egds and target tgds) are sufficient to express the composition of every finite sequence of standard schema mappings, and further, every st-SO dependency specifies such a composition. In addition to this expressive power, we show that st-SO dependencies enjoy other desirable properties. In particular, they have a polynomial-time chase that generates a universal solution. This universal solution can be used to find the certain answers to unions of conjunctive queries in polynomial time. It is easy to show that the composition of an arbitrary number of standard schema mappings is equivalent to the composition of only two standard schema mappings. We show that, surprisingly, the analogous result also holds for schema mappings specified by just st-tgds (no target constraints). This is proven by showing that every SO tgd is equivalent to an unnested SO tgd (one where there is no nesting of function symbols). Similarly, we prove unnesting results for st-SO dependencies, with the same types of consequences.
    Comment: This paper is an extended version of: M. Arenas, R. Fagin, and A. Nash. Composition with Target Constraints. In 13th International Conference on Database Theory (ICDT), pages 129-142, 201
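    As a standard illustration of why second-order tgds arise in composition (adapted from well-known examples in the SO-tgd literature, not taken from this paper's text), consider two st-tgd mappings whose composition needs a Skolem function:

```latex
% Illustration adapted from well-known examples in the SO-tgd literature,
% not from this paper's text.
\begin{align*}
\Sigma_{12} &= \{\ \mathrm{Emp}(e) \rightarrow \exists m\, \mathrm{Rep}(e,m)\ \} \\
\Sigma_{23} &= \{\ \mathrm{Rep}(e,m) \rightarrow \mathrm{Mgr}(e,m),\quad
                  \mathrm{Rep}(e,e) \rightarrow \mathrm{SelfMgr}(e)\ \}
\end{align*}
% The composition of the two mappings is not definable by st-tgds,
% but is defined by the SO tgd
\[
\exists f\, \Big( \forall e\, \big(\mathrm{Emp}(e) \rightarrow \mathrm{Mgr}(e, f(e))\big)
  \ \wedge\ \forall e\, \big(\mathrm{Emp}(e) \wedge e = f(e) \rightarrow \mathrm{SelfMgr}(e)\big) \Big).
\]
```

    The st-SO dependencies introduced in the paper go further by additionally allowing equations in the conclusion, which is what makes target egds expressible.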

    Performance Evaluation and Optimization of Math-Similarity Search

    Similarity search in math aims to find mathematical expressions that are similar to a user's query. We conceptualized the similarity factors between mathematical expressions, and proposed an approach to math similarity search (MSS) by defining metrics based on those similarity factors [11]. Our preliminary implementation indicated the advantage of MSS over non-similarity-based search. To search for similar math expressions more effectively and efficiently, MSS is further optimized. This paper focuses on the performance evaluation and optimization of MSS. Our results show that the proposed optimization process significantly improved the performance of MSS with respect to both relevance ranking and recall.
    Comment: 15 pages, 8 figure
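    A hypothetical sketch of how per-factor similarities might be combined into a single relevance score. The actual similarity factors and metrics of MSS are those defined in the paper and in [11]; the factor names and weights below are placeholders only:

```python
# Hypothetical sketch: combine per-factor similarities between a query
# expression and a candidate expression into one relevance score.
# Factor names and weights are placeholders, not the metrics defined in [11].

def combined_similarity(factor_scores, weights):
    """factor_scores: factor name -> similarity in [0, 1] for one candidate.
    weights: factor name -> non-negative importance weight."""
    total_weight = sum(weights.values())
    return sum(weights[f] * factor_scores.get(f, 0.0) for f in weights) / total_weight

candidate = {"structure": 0.8, "symbol_overlap": 0.6, "subexpression": 0.5}
weights = {"structure": 0.5, "symbol_overlap": 0.3, "subexpression": 0.2}
print(round(combined_similarity(candidate, weights), 3))  # 0.68
```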

    Probabilistic Algorithmic Knowledge

    The framework of algorithmic knowledge assumes that agents use deterministic knowledge algorithms to compute the facts they explicitly know. We extend the framework to allow for randomized knowledge algorithms. We then characterize the information provided by a randomized knowledge algorithm when its answers have some probability of being incorrect. We formalize this information in terms of evidence; a randomized knowledge algorithm returning "Yes" to a query about a fact φ provides evidence for φ being true. Finally, we discuss the extent to which this evidence can be used as a basis for decisions.
    Comment: 26 pages. A preliminary version appeared in Proc. 9th Conference on Theoretical Aspects of Rationality and Knowledge (TARK'03)
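    One standard way to quantify such evidence is a likelihood-based weight over a set of mutually exclusive hypotheses; the sketch below follows that general pattern and is not necessarily the paper's exact definition:

```latex
% Sketch of a likelihood-based evidence weight (the paper's exact definition
% may differ). For mutually exclusive hypotheses H and an observation ob
% (e.g., the algorithm answering "Yes" to a query about phi):
\[
  w(\mathit{ob}, h) \;=\;
  \frac{\mu_h(\mathit{ob})}{\sum_{h' \in H} \mu_{h'}(\mathit{ob})},
\]
% where mu_h(ob) is the probability of observing ob when h holds;
% w(ob, h) close to 1 means ob strongly supports h.
```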

    On the Evaluation of the Anticorrosive Effectiveness of Atmospheric Corrosion Inhibitors

    Developing and studying the protective anticorrosive properties of atmospheric corrosion inhibitors intended to protect metal covered with thin layers of rust, and determining their mechanism of action, requires field and accelerated corrosion tests. Since in most cases this process is lengthy, a method of accelerated testing has been developed for the rapid assessment of the anticorrosive effectiveness of corrosion inhibitors. The method consists in determining an inhibitor's protective ability by recording polarization curves, in a neutral medium, on metal bearing atmospheric corrosion products and a protective film.
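    The abstract does not give formulas, but a conventional way to quantify inhibitor efficiency from polarization curves is to compare the corrosion current densities measured without and with the inhibitor; the expression below is that standard measure and is an assumption here, not part of the described method:

```latex
% Conventional protection efficiency from polarization data (an assumption,
% not stated in the abstract): i_corr^0 without inhibitor, i_corr with it.
\[
  Z \;=\; \frac{i_{\mathrm{corr}}^{0} - i_{\mathrm{corr}}}{i_{\mathrm{corr}}^{0}} \times 100\%
\]
```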

    A paradigm for restenosis after angioplasty: clues for the development of new preventive therapies

    Restenosis after intravascular intervention is one of the most important unsolved clinical and economic problems in the management of cardiovascular disease. Although neither its pathogenesis nor its prevention is yet defined, the early and late histologic appearance of the angioplasty site is known. Immediately after angioplasty, the atheroma has fissures, and the normal segment of the vessel circumference is stretched. There is substantial evidence of intimal injury. When restenosis develops at 1-4 months, the histologic appearance of the restenotic lesion is intimal hyperplasia. Given this endpoint, we may theorize that the proximate cause of this response is denuding and stretching vascular injury. Since the healing response to tissue injury has been studied extensively, we can hypothesize that the major milestones in the temporal sequence of restenosis are platelet aggregation, inflammatory cell infiltration, release of growth factors, medial smooth muscle cell modulation and proliferation, proteoglycan synthesis, and extracellular matrix remodeling. At each of these steps there are potential inhibitors. The resolution of the problem of restenosis may require both removal of atheroma mass and appropriate timing and effective delivery of inhibitors of intimal hyperplasia to the injury site in adequate concentration.
    Biomedical Reviews 1992; 1: 13-24

    How Many Topics? Stability Analysis for Topic Models

    Topic modeling refers to the task of discovering the underlying thematic structure in a text corpus, where the output is commonly presented as a report of the top terms appearing in each topic. Despite the diversity of topic modeling algorithms that have been proposed, a common challenge in successfully applying these techniques is the selection of an appropriate number of topics for a given corpus. Choosing too few topics will produce results that are overly broad, while choosing too many will result in the "over-clustering" of a corpus into many small, highly-similar topics. In this paper, we propose a term-centric stability analysis strategy to address this issue, the idea being that a model with an appropriate number of topics will be more robust to perturbations in the data. Using a topic modeling approach based on matrix factorization, evaluations performed on a range of corpora show that this strategy can successfully guide the model selection process.
    Comment: Improve readability of plots. Add minor clarification
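    A rough sketch of a term-centric stability check in the spirit described above (a simplification, not the authors' exact agreement measure; the greedy topic matching and parameter choices below are assumptions):

```python
import numpy as np
from sklearn.decomposition import NMF

def top_terms(H, n_top=10):
    """Top-ranked term indices per topic from an NMF topic-term matrix H."""
    return [set(np.argsort(row)[::-1][:n_top]) for row in H]

def match_and_score(reference, sample):
    """Greedily match sample topics to reference topics; average Jaccard."""
    scores, remaining = [], list(sample)
    for ref in reference:
        jac = [len(ref & s) / len(ref | s) for s in remaining]
        best = int(np.argmax(jac))
        scores.append(jac[best])
        remaining.pop(best)
    return float(np.mean(scores))

def stability(X, k, n_samples=10, n_top=10, seed=0):
    """Rough term-centric stability for k topics: fit NMF on the full
    (non-negative) document-term matrix X, refit on bootstrap resamples of
    the documents, and average the agreement of the top-term sets."""
    rng = np.random.default_rng(seed)
    ref = top_terms(NMF(n_components=k, init="nndsvd", max_iter=500).fit(X).components_, n_top)
    agreements = []
    for _ in range(n_samples):
        idx = rng.choice(X.shape[0], size=X.shape[0], replace=True)
        H = NMF(n_components=k, init="nndsvd", max_iter=500).fit(X[idx]).components_
        agreements.append(match_and_score(ref, top_terms(H, n_top)))
    return float(np.mean(agreements))
```

    In practice one would compute such a score for a range of candidate values of k and prefer those where the agreement remains high.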

    Randomisation and Derandomisation in Descriptive Complexity Theory

    We study probabilistic complexity classes and questions of derandomisation from a logical point of view. For each logic L we introduce a new logic BPL, bounded-error probabilistic L, which is obtained from L in the same way that the complexity class BPP, bounded-error probabilistic polynomial time, is obtained from PTIME. Our main focus lies on questions of derandomisation, and we prove that there is a query which is definable in BPFO, the probabilistic version of first-order logic, but not in Cinf, finite-variable infinitary logic with counting. This implies that many of the standard logics of finite model theory, like transitive closure logic and fixed-point logic, both with and without counting, cannot be derandomised. Similarly, we present a query on ordered structures which is definable in BPFO but not in monadic second-order logic, and a query on additive structures which is definable in BPFO but not in FO. The latter of these queries shows that certain uniform variants of AC0 (bounded-depth, polynomial-size circuits) cannot be derandomised. These results are in contrast to the general belief that most standard complexity classes can be derandomised. Finally, we note that BPIFP+C, the probabilistic version of fixed-point logic with counting, captures the complexity class BPP, even on unordered structures.
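    The acceptance condition behind BPL can be sketched as follows (the constants are the usual BPP-style bounds and can be amplified; the paper's formal definition may be phrased differently):

```latex
% Sketch of the BPL acceptance condition. A formula phi over the input
% vocabulary extended by "random" relations, interpreted uniformly at
% random as X, defines a Boolean query Q if for every finite structure A:
\[
  A \in Q \;\Rightarrow\; \Pr_X\big[(A, X) \models \varphi\big] \ge \tfrac{2}{3},
  \qquad
  A \notin Q \;\Rightarrow\; \Pr_X\big[(A, X) \models \varphi\big] \le \tfrac{1}{3}.
\]
```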