10 research outputs found

    The semantics of N-soft sets, their applications, and a coda about three-way decision

    This paper presents the first detailed analysis of the semantics of N-soft sets. The two benchmark semantics associated with soft sets are perfect fits for N-soft sets. We argue that N-soft sets allow for an entirely new interpretation in logical terms, whereby N-soft sets can be interpreted as a generalized form of incomplete soft sets. Applications include aggregation strategies for these settings. Finally, three-way decision models are designed with both a qualitative and a quantitative character. The first is based on the concepts of V-kernel, V-core and V-support. The second uses an extended form of cardinality that is reminiscent of the idea of scalar sigma-count as a proxy for the cardinality of a fuzzy set.
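    The scalar sigma-count mentioned in this abstract is a standard measure from fuzzy set theory: the cardinality of a fuzzy set is taken to be the sum of its membership degrees. A minimal sketch (the function name and example values are illustrative, not taken from the paper):

```python
def sigma_count(memberships):
    """Scalar sigma-count: the cardinality of a fuzzy set is the
    sum of its membership degrees, each a value in [0, 1]."""
    return sum(memberships)

# A fuzzy set over four elements with partial memberships:
fuzzy_set = [1.0, 0.7, 0.3, 0.0]
print(sigma_count(fuzzy_set))  # approximately 2.0
```

    For a crisp set (all memberships 0 or 1) this reduces to the ordinary cardinality, which is why it serves as a natural proxy in the quantitative three-way decision model.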

    Ada Real-Time Performance Benchmarks for Personal Computer Environments

    A set of benchmarks was developed to test the real-time performance of Ada Personal Computer (PC) compilers. The benchmark set measures the overhead associated with various functions, including subprogram calls both from within and outside of packages (including generics), dynamic allocation and deallocation of objects, exceptions, task activation/termination, task rendezvous, various time-related functions, common arithmetic functions, and file I/O. The benchmark set also determines the type of memory deallocation supported, and whether fixed-interval or pre-emptive delay task scheduling is used. The different benchmarks are described along with an explanation of the testing methods for each. Two PC compilers (JANUS/Ada and Meridian AdaVantage) were then tested to demonstrate the benchmark programs, and the results of the tests are discussed. Conclusions concerning the real-time abilities of the two tested compilers are also given.
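    Overhead benchmarks of this kind typically time a loop that performs the operation under test against an otherwise identical empty loop and difference the totals. A sketch of that differencing method in Python (illustrative only; the original benchmarks are Ada programs, and the function names here are invented):

```python
import time

def empty():
    pass

def call_overhead(iterations=1_000_000):
    """Estimate per-call overhead by timing a loop that makes the
    call, timing an otherwise identical empty loop, and dividing
    the difference by the iteration count."""
    start = time.perf_counter()
    for _ in range(iterations):
        empty()
    with_call = time.perf_counter() - start

    start = time.perf_counter()
    for _ in range(iterations):
        pass
    baseline = time.perf_counter() - start

    return (with_call - baseline) / iterations

print(f"~{call_overhead() * 1e9:.0f} ns per call")
```

    The same subtract-the-baseline structure applies to the other measured operations (allocation, exceptions, rendezvous), with the loop body swapped out.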

    Quantifying and Predicting the Influence of Execution Platform on Software Component Performance

    The performance of software components depends on several factors, including the execution platform on which the software components run. To simplify cross-platform performance prediction in relocation and sizing scenarios, this thesis introduces a novel approach that separates the application performance profile from the platform performance profile. The approach is evaluated using transparent instrumentation of Java applications and automated benchmarks for Java Virtual Machines.
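    The separation described in this abstract can be illustrated as a simple linear cost model: the application profile records how often each primitive operation executes (measured once, platform-independently), the platform profile records each operation's cost on a target machine (measured once per platform), and the predicted runtime is their dot product. A toy illustration; the operation names and cost values are invented, not from the thesis:

```python
# Hypothetical application profile: counts of primitive operations,
# obtained once via instrumentation, independent of the platform.
app_profile = {"method_call": 5_000_000,
               "field_access": 20_000_000,
               "alloc": 300_000}

# Hypothetical platform profile: per-operation cost in nanoseconds,
# obtained once per JVM/hardware combination via micro-benchmarks.
platform_a = {"method_call": 4.0, "field_access": 0.8, "alloc": 25.0}

def predict_ms(app, platform):
    """Predicted runtime = sum over operations of count * per-op cost,
    converted from nanoseconds to milliseconds."""
    return sum(n * platform[op] for op, n in app.items()) / 1e6

print(f"{predict_ms(app_profile, platform_a):.1f} ms")
```

    Relocating the application to a new machine then only requires re-measuring the (small) platform profile, not re-profiling the application.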

    Categorical Modelling of Logic Programming: Coalgebra, Functorial Semantics, String Diagrams

    Logic programming (LP) is driven by the idea that logic subsumes computation. Over the past 50 years, along with the emergence of numerous logic systems, LP has grown into a large family whose members are designed to deal with various computation scenarios. Among them, we focus on two of the most influential quantitative variants: probabilistic logic programming (PLP) and weighted logic programming (WLP). In this thesis, we investigate a uniform understanding of logic programming and its quantitative variants from the perspective of category theory. In particular, we explore both a coalgebraic and an algebraic understanding of LP, PLP and WLP. On the coalgebraic side, we propose a goal-directed strategy for calculating the probabilities and weights of atoms in PLP and WLP programs, respectively. We then develop a coalgebraic semantics for PLP and WLP, built on existing coalgebraic semantics for LP. By choosing the appropriate functors representing probabilistic and weighted computation, these coalgebraic semantics characterise exactly the goal-directed behaviour of PLP and WLP programs. On the algebraic side, we define a functorial semantics of LP, PLP and WLP, such that all three share the same syntactic categories of string diagrams and differ in their semantic categories according to their data/computation type. This allows for a uniform diagrammatic expression of certain semantic constructs. Moreover, based on similar approaches to Bayesian networks, this provides a framework to formalise the connection between PLP and Bayesian networks. Furthermore, we prove a sound and complete axiomatisation of the semantic category for LP, in terms of string diagrams. Together with the diagrammatic presentation of the fixed-point semantics, one obtains a decidable calculus for proving the equivalence of propositional definite logic programs.
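    The goal-directed evaluation mentioned in this abstract can be sketched for an acyclic weighted program: each atom has rules of the form (weight, body), and the weight of a goal atom is the sum over its matching rules of the rule weight times the product of its body atoms' weights (a sum-product combination, as in WLP). A toy sketch; the program and numbers are invented, and real PLP/WLP semantics also handle recursive programs, which this naive recursion does not:

```python
# Hypothetical acyclic weighted program: atom -> list of (weight, body).
# A rule with an empty body is a weighted fact.
program = {
    "a": [(0.5, ["b", "c"]), (0.2, [])],
    "b": [(1.0, [])],
    "c": [(0.4, [])],
}

def weight(atom, program):
    """Goal-directed evaluation: resolve the goal atom against each
    matching rule, multiply the rule weight by the weights of the
    body atoms (computed recursively), and sum over the rules."""
    total = 0.0
    for w, body in program[atom]:
        prod = w
        for subgoal in body:
            prod *= weight(subgoal, program)
        total += prod
    return total

print(weight("a", program))  # 0.5 * 1.0 * 0.4 + 0.2 = 0.4
```

    Swapping the sum and product for other semiring operations changes the quantity computed, which is one way to see PLP and WLP as instances of a common scheme.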

    Fregeanism, sententialism, and scope

    Among philosophers, Fregeanism and sententialism are widely considered two of the leading theories of the semantics of attitude reports. Among linguists, these approaches have received little recent sustained discussion. This paper aims to bridge this divide. I present a new formal implementation of Fregeanism and sententialism, with the goal of showing that these theories can be developed in sufficient detail and concreteness to be serious competitors to the theories which are more popular among semanticists. I develop a modern treatment of quantifying in for Fregeanism and sententialism, in the style of Heim and Kratzer [1998], and then show how these theories can – somewhat surprisingly – account for “third readings” (Fodor [1970]) on the model of the “Standard Solution” from possible-worlds semantics (von Fintel and Heim [2002]). The resulting Fregean/sententialist proposal has a distinctive attraction: it treats data related to counterfactual attitudes (Ninan [2008], Yanovich [2011], Maier [2015], Blumberg [2018]) – which have proven challenging to accommodate in the setting of possible-worlds semantics – straightforwardly as third readings.

    Optimizations and Cost Models for multi-core architectures: an approach based on parallel paradigms

    The trend in modern microprocessor architectures is clear: multi-core chips are here to stay, and researchers expect multiprocessors with 128 to 1024 cores on a chip within a few years. Yet the software community has been slow to take the path towards parallel programming: while some tools target multi-cores, these are usually inherited from previous tools for SMP architectures and rarely exploit the specific characteristics of multi-cores. Most importantly, current tools have no facilities to guarantee performance or portability across architectures. Our research group was one of the first to propose the structured parallel programming approach to solve the problem of performance portability and predictability. This was successfully demonstrated years ago for distributed and shared-memory multiprocessors, and we strongly believe that the same approach should be applied to multi-core architectures. The main problem with performance portability is that optimizations are effective only under specific conditions, making them dependent on both the specific program and the target architecture. For this reason, in current parallel programming (in general, but especially with multi-cores), optimization usually follows a try-and-decide approach: each optimization must be implemented and tested on the specific parallel program to understand its benefits. If we want to take a step forward and really achieve some form of performance portability, we require some kind of prediction of the expected performance of a program. The concept of performance modeling is quite old in the world of parallel programming; yet, in recent years, this kind of research has seen only small improvements: cost models describing multi-cores are missing, mainly because of the increasing complexity of microarchitectures and the poor knowledge of specific implementation details of current processors.
In the first part of this thesis we show that performance modeling is still feasible, by studying the Tilera TilePro64. The high number of on-chip cores in this processor (64) required the use of several innovative solutions, such as a complex interconnection network and multiple memory interfaces per chip. Because of these features, the TilePro64 can be considered a preview of what to expect in future multi-core processors. The availability of a cycle-accurate simulator and extensive documentation allowed us to model the architecture, and in particular its memory subsystem, at the accuracy level required to compare optimizations. In the second part, focused on optimizations, we cover one of the most important issues of multi-core architectures: the memory subsystem. In this area multi-cores differ strongly in structure from off-chip parallel architectures, both SMP and NUMA, thus opening new opportunities. In detail, we investigate the problem of data distribution over the memory controllers in several commercial multi-cores, and the efficient use of the cache coherency mechanisms offered by the TilePro64 processor. Finally, by using the performance model, we study different implementations, derived from the previous optimizations, of a simple test-case application. We are able to predict the best version using only profiled data from a sequential execution. The accuracy of the model has been verified by experimentally comparing the implementations on the real architecture, giving results within 1–2% accuracy.
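Predicting the best version from sequential profile data, as described above, amounts to evaluating a cost model for each candidate configuration and taking the minimum. A schematic sketch of that selection step; the cost formulas, parameter names, and numbers here are invented for illustration and are not the thesis's actual TilePro64 model:

```python
# Hypothetical profile gathered from one sequential execution.
profile = {"compute_ns_per_item": 120.0,
           "bytes_per_item": 64,
           "items": 1_000_000}

# Hypothetical platform parameters of a multi-core chip.
MEM_BW_BYTES_PER_NS = 4.0   # aggregate bandwidth per memory controller
CONTROLLERS = 4             # on-chip memory controllers

def predicted_time_ns(workers, profile):
    """Toy cost model: completion time is the max of the compute time
    (divided across workers) and the memory-transfer time (divided
    across the memory controllers actually reachable)."""
    compute = profile["compute_ns_per_item"] * profile["items"] / workers
    used_controllers = min(workers, CONTROLLERS)
    memory = (profile["bytes_per_item"] * profile["items"]
              / (MEM_BW_BYTES_PER_NS * used_controllers))
    return max(compute, memory)

# Pick the parallelism degree the model predicts to be fastest.
candidates = [1, 2, 4, 8, 16]
best = min(candidates, key=lambda w: predicted_time_ns(w, profile))
print(best)
```

The point of the approach is that only `profile` comes from running the program, and only once, sequentially; everything else is a property of the architecture, so candidate implementations can be ranked without implementing and benchmarking each one.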

    Benchmark semantics

    No full text
