
    Specifying Reusable Components

    Reusable software components need expressive specifications. This paper outlines a rigorous foundation for model-based contracts, a method to equip classes with strong contracts that support accurate design, implementation, and formal verification of reusable components. Model-based contracts conservatively extend classic Design by Contract with a notion of model, which underpins the precise definitions of such concepts as abstract equivalence and specification completeness. Experiments applying model-based contracts to libraries of data structures suggest that the method enables accurate specification of practical software.
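    To make the idea concrete, here is a minimal sketch of a model-based contract (not the paper's notation; the Stack class and its runtime checks are hypothetical): the class carries an abstract mathematical model, a sequence represented as a tuple, and pre- and postconditions are stated against that model rather than against the implementation.

```python
# Sketch of a model-based contract: the class exposes an abstract model
# (an immutable tuple) and its contracts are stated over that model,
# not over implementation details such as the backing list.
class Stack:
    def __init__(self):
        self._items = []           # implementation

    @property
    def model(self):
        return tuple(self._items)  # abstract model: a mathematical sequence

    def push(self, x):
        old = self.model
        self._items.append(x)
        # Postcondition over the model: new model = old model extended by x.
        assert self.model == old + (x,)

    def pop(self):
        # Precondition over the model: the sequence is non-empty.
        assert self.model != ()
        old = self.model
        x = self._items.pop()
        # Postcondition: result is the last model element; model shrinks by one.
        assert x == old[-1] and self.model == old[:-1]
        return x

s = Stack()
s.push(1)
s.push(2)
assert s.pop() == 2
```

    Note that these postconditions determine the new model uniquely from the old model and the arguments, which is the intuition behind a specification being complete in the sense the abstract mentions.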

    Poincaré on the Foundation of Geometry in the Understanding

    This paper is about Poincaré’s view of the foundations of geometry. According to the established view, which has been inherited from the logical positivists, Poincaré, like Hilbert, held that axioms in geometry are schemata that provide implicit definitions of geometric terms, a view he expresses by stating that the axioms of geometry are “definitions in disguise.” I argue that this view does not accord well with Poincaré’s core commitment in the philosophy of geometry: the view that geometry is the study of groups of operations. In place of the established view I offer a revised view, according to which Poincaré held that axioms in geometry are in fact assertions about invariants of groups. Groups, as forms of the understanding, are prior in conception to the objects of geometry and afford the proper definition of those objects, according to Poincaré. Poincaré’s view therefore contrasts sharply with Kant’s foundation of geometry in a unique form of sensibility. According to my interpretation, axioms are not definitions in disguise because they themselves implicitly define their terms, but rather because they disguise the definitions which imply them.

    Termination, correctness and relative correctness

    Over the last decade, research in verification and formal methods has seen increased interest, driven by the need for more secure and dependable software. At the heart of software dependability is the concept of a software fault, defined in the literature as the adjudged or hypothesized cause of an error. This definition, which lacks precision, presents at least two challenges with regard to using formal methods: (1) adjudging and hypothesizing are highly subjective human endeavors; (2) the concept of error is itself insufficiently defined, since it depends on a detailed characterization of correct system states at each stage of a computation (which is usually unavailable). In the process of defining what a software fault is, the concept of relative correctness, the property of a program to be more-correct than another with respect to a given specification, is discussed. Accordingly, a feature of a program is a fault (for a given specification) only because there exists an alternative to it that would make the program more-correct with respect to the specification. Furthermore, the implications and applications of relative correctness in various software engineering activities are explored. It is then illustrated that in many situations of software testing, fault removal, and program repair, testing for relative correctness rather than absolute correctness leads to clearer conclusions and better outcomes. In particular, debugging without testing is introduced: a technique whereby a fault can be removed from a program and the new program proven to be more-correct than the original, all without any testing (and its associated uncertainties and imperfections). Given that there are orders of magnitude more incorrect programs than correct programs in use nowadays, this has the potential to expand the scope of proving methods significantly. Another technique, programming without refining, is also introduced. The most important advantage of program derivation by correctness enhancement is that it captures not only program construction from scratch, but also virtually all activities of software evolution. Given that most software nowadays is developed by evolving existing assets rather than producing new assets from scratch, the paradigm of software evolution by correctness enhancement stands to yield significant gains, if we can make it practical.
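    One common way to make relative correctness concrete for deterministic programs (a toy sketch; the programs, specification, and finite domain below are invented for illustration) is to compare competence domains: the set of inputs on which each program's output satisfies the specification. A program is then more-correct than another when its competence domain contains the other's.

```python
# Sketch: relative correctness over a finite input domain.
# The competence domain of a program w.r.t. a specification is the set of
# inputs on which the program's output satisfies the specification; p2 is
# more-correct than p1 when p2's competence domain contains p1's.

DOMAIN = range(-10, 11)

def spec(x, y):
    # Specification: y must be the absolute value of x.
    return y == abs(x)

def p1(x):
    return x                       # satisfies the spec only for x >= 0

def p2(x):
    return x if x >= 0 else -x     # satisfies the spec everywhere

def competence_domain(prog):
    return {x for x in DOMAIN if spec(x, prog(x))}

cd1, cd2 = competence_domain(p1), competence_domain(p2)
assert cd1 <= cd2                  # p2 is more-correct than p1 w.r.t. spec
print(f"|CD(p1)| = {len(cd1)}, |CD(p2)| = {len(cd2)}")
```

    In these terms, replacing p1 by p2 is a fault removal: it strictly enlarges the competence domain, and no test run is needed to establish the improvement once the containment is proven.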

    Extreme Value Analysis of Empirical Frame Coefficients and Implications for Denoising by Soft-Thresholding

    Denoising by frame thresholding is one of the most basic and efficient methods for recovering a discrete signal or image from data that are corrupted by additive Gaussian white noise. The basic idea is to select a frame of analyzing elements that separates the data into a few large coefficients due to the signal and many small coefficients mainly due to the noise $\epsilon_n$. Removing all coefficients whose magnitude lies below a certain threshold yields a reconstruction of the original signal. In order to properly balance the amount of noise to be removed and the relevant signal features to be kept, a precise understanding of the statistical properties of thresholding is important. For that purpose we derive the asymptotic distribution of $\max_{\omega \in \Omega_n} |\langle \epsilon_n, \phi_\omega^n \rangle|$ for a wide class of redundant frames $(\phi_\omega^n : \omega \in \Omega_n)$. Based on our theoretical results we give a rationale for universal extreme value thresholding techniques yielding asymptotically sharp confidence regions and smoothness estimates corresponding to prescribed significance levels. The results cover many frames used in imaging and signal recovery applications, such as redundant wavelet systems, curvelet frames, or unions of bases. We show that ‘generically’ a standard Gumbel law results, as is known from the case of orthonormal wavelet bases. However, for specific highly redundant frames other limiting laws may occur. We indeed verify that the translation invariant wavelet transform shows a different asymptotic behaviour.
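    As a minimal sketch of the thresholding scheme described above (assuming an orthonormal DCT basis, a synthetic signal, and a known noise level, none of which come from the paper), soft-thresholding at the universal threshold $\sigma\sqrt{2\log n}$ removes coefficients of roughly the size that pure noise generically attains, which is exactly the extreme-value quantity the paper analyzes for redundant frames:

```python
import numpy as np
from scipy.fft import dct, idct  # orthonormal DCT-II plays the role of the frame

rng = np.random.default_rng(0)
n, sigma = 1024, 0.5

# Piecewise-smooth test signal corrupted by additive Gaussian white noise.
t = np.linspace(0, 1, n)
signal = np.sin(4 * np.pi * t) + (t > 0.5)
data = signal + sigma * rng.standard_normal(n)

# Analyze: frame coefficients of the noisy data.
coeffs = dct(data, norm="ortho")

# Universal threshold sigma * sqrt(2 log n): the asymptotic size of the
# largest pure-noise coefficient in an orthonormal basis (Gumbel regime).
tau = sigma * np.sqrt(2 * np.log(n))

# Soft-thresholding: shrink every coefficient toward zero by tau.
shrunk = np.sign(coeffs) * np.maximum(np.abs(coeffs) - tau, 0.0)

estimate = idct(shrunk, norm="ortho")
print("RMSE noisy:   ", np.sqrt(np.mean((data - signal) ** 2)))
print("RMSE denoised:", np.sqrt(np.mean((estimate - signal) ** 2)))
```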

    Loop summarization using state and transition invariants

    This paper presents algorithms for program abstraction based on the principle of loop summarization, which, unlike traditional program approximation approaches (e.g., abstract interpretation), does not employ iterative fixpoint computation, but instead computes symbolic abstract transformers with respect to a set of abstract domains. This allows for an effective exploitation of problem-specific abstract domains for summarization and, as a consequence, the precision of an abstract model may be tailored to specific verification needs. Furthermore, we extend the concept of loop summarization to incorporate relational abstract domains to enable the discovery of transition invariants, which are subsequently used to prove termination of programs. Well-foundedness of the discovered transition invariants is ensured either by a separate decision procedure call or by using abstract domains that are well-founded by construction. We experimentally evaluate several abstract domains related to memory operations to detect buffer overflow problems. Our lightweight termination analysis is also demonstrated to be effective on a wide range of benchmarks, including OS device drivers.
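    The transition-invariant idea can be sketched with the Z3 SMT solver (the loop, candidate invariant, and ranking argument below are illustrative choices, not the paper's algorithm): each proof obligation is discharged with a single decision-procedure call, with no fixpoint iteration, and well-foundedness is witnessed by a bounded, strictly decreasing ranking function.

```python
# Sketch: checking a transition invariant for the loop
#   while x > 0: x = x - 1
# with one decision-procedure call per obligation (no fixpoint iteration).
# Requires the z3-solver package.
from z3 import Int, And, Implies, Not, Solver, unsat

x, xp = Int("x"), Int("xp")          # pre- and post-state of one iteration

guard = x > 0
body = xp == x - 1                   # transition relation of the loop body
trans_inv = And(xp < x, xp >= 0)     # candidate transition invariant

def valid(formula):
    s = Solver()
    s.add(Not(formula))              # formula is valid iff its negation is unsat
    return s.check() == unsat

# 1. The candidate over-approximates every single loop iteration.
assert valid(Implies(And(guard, body), trans_inv))

# 2. Well-foundedness: the ranking function r(x) = x is bounded below and
#    strictly decreases on every step the transition invariant allows, so
#    no infinite chain of iterations exists and the loop terminates.
assert valid(Implies(trans_inv, And(x >= 0, xp < x)))

print("transition invariant is inductive and well-founded: loop terminates")
```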

    An Algebraic Theory of Multi-Product Decisions

    The typical firm produces for sale multiple distinct product lines. This paper characterizes the composition of a firm's optimal production vector as a function of cost and revenue function attributes. The approach taken applies mathematical group theory and revealed preference arguments to exploit controlled asymmetries in the production environment. Assuming some symmetry on the cost function, our central result shows that all optimal production vectors must satisfy a dominance relation on permutations of the firm's revenue function. When the revenue function is linear in outputs, the set of admissible output vectors has linear bounds up to transformations. If these transformations are also linear, then convex analysis can be applied to characterize the set of admissible solutions. When the group of symmetries decomposes into a direct product group with index K in N, the characterization problem separates into K problems of smaller dimension. The central result may be strengthened when the cost function is assumed to be quasiconvex.
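    A toy illustration of the dominance relation (the cost and revenue functions below are invented, and the paper's result is far more general): when the cost function is symmetric under permutations of the outputs, permuting an output vector leaves cost unchanged, so an optimal vector must earn at least as much revenue as every permutation of itself.

```python
# Toy check of permutation dominance: with a cost function that is
# symmetric in its arguments, permuting an output vector leaves cost
# unchanged, so a profit-maximizing vector must weakly revenue-dominate
# all of its permutations.
from itertools import permutations

prices = (5.0, 3.0, 1.0)                       # linear revenue: p . q

def revenue(q):
    return sum(p * x for p, x in zip(prices, q))

def cost(q):
    return sum(x ** 2 for x in q)              # symmetric under permutations

def profit(q):
    return revenue(q) - cost(q)

# Brute-force the optimum over a small grid of output vectors.
grid = [(a, b, c) for a in range(6) for b in range(6) for c in range(6)]
best = max(grid, key=profit)

# Dominance: no permutation of the optimum yields strictly more revenue.
assert all(revenue(best) >= revenue(p) for p in permutations(best))
print("optimal vector:", best, "profit:", profit(best))
```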