
    The Theory of Classification Part 20: Modular Checking of Classtypes

    A first-order type system has two things to commend it. Firstly, it is quite simple to implement a type-checker that can check types for exact correspondence, or for subtype compatibility with a given type. The type of the source object can be compared with that of the target variable to see if the former can be converted up to the latter, using subtyping rules like those we discussed in [1]. Secondly, code that has been checked once need never be checked again, or recompiled in new contexts. This is because the type system can never reveal more specific information about an object that is passed into a more general variable (which we have called the “type loss problem”), so the code need only be checked once over the most general type that it can accept.
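
    The “type loss” behaviour described above is easy to observe in any language with subtyping. Below is a minimal TypeScript sketch (the Shape/Circle names are hypothetical, not from the article): once a Circle is passed into the more general Shape parameter, the checker sees only Shape, so the function body needs checking exactly once.

```typescript
// Hypothetical shapes illustrating first-order subtype compatibility:
// Circle is a subtype of Shape, so it converts up to Shape.
interface Shape { area(): number }
interface Circle extends Shape { radius: number }

// Checked once against the most general type it accepts. Inside the body,
// the checker can never recover that `s` was really a Circle: the extra
// `radius` field is lost at the upcast ("type loss").
function describe(s: Shape): string {
  return `area = ${s.area()}`;
}

const c: Circle = { radius: 2, area: () => Math.PI * 2 * 2 };
describe(c); // source type Circle converts up to target type Shape
```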

    Efficient Type Representation in TAL

    Certifying compilers generate proofs for low-level code that guarantee safety properties of the code. Type information is an essential part of safety proofs, but its size remains a concern for certifying compilers in practice. This paper demonstrates type representation techniques in a large-scale compiler that achieve both concise type information and efficient type checking. In our 200,000-line certifying compiler, the size of type information is about 36% of the size of pure code and data for our benchmarks, the best such result of which we are aware. The type checking time is about 2% of the compilation time.
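
    The paper's actual encoding is not reproduced in this abstract; purely as a hypothetical illustration of why sharing keeps type information concise, the TypeScript sketch below interns type terms so each distinct type is serialized once and thereafter referenced by a small integer index.

```typescript
// Hypothetical sketch: hash-cons type terms into a table so repeated types
// cost one small index instead of a full serialized term.
type Ty =
  | { kind: "int" }
  | { kind: "arrow"; from: number; to: number }; // indices into the table

class TypeTable {
  private table: Ty[] = [];
  private seen = new Map<string, number>();

  // Return the index of `t`, inserting it only if it has not been seen.
  intern(t: Ty): number {
    const key = JSON.stringify(t);
    const hit = this.seen.get(key);
    if (hit !== undefined) return hit;
    const i = this.table.push(t) - 1;
    this.seen.set(key, i);
    return i;
  }
}

const tt = new TypeTable();
const int = tt.intern({ kind: "int" });
const intToInt = tt.intern({ kind: "arrow", from: int, to: int });
// Re-interning the same type reuses the shared representation.
console.log(intToInt === tt.intern({ kind: "arrow", from: int, to: int })); // true
```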

    Extension polymorphism

    Any system that models a real world application has to evolve to be consistent with its changing domain. Dealing with evolution in an effective manner is particularly important for those systems that may store large amounts of data, such as databases and persistent languages. In persistent programming systems, one of the important issues in dealing with evolution is the specification of code that will continue to work in a type safe way despite changes to type definitions. Polymorphism is one mechanism which allows code to work over many types. Inclusion polymorphism is often said to be a model of type evolution. However, observing type changes in persistent systems has shown that types most commonly exhibit additive evolution. Even though inclusion captures this pattern in the case of record types, it does not always do so for other type constructors. The confusion of subtyping, inheritance and evolution often leads to unsound or, at best, dynamically typed systems. Existing solutions to this problem do not completely address the requirements of type evolution in persistent systems. The aim of this thesis is to develop a form of polymorphism that is suitable for modelling additive evolution in persistent systems. The proposed strategy is to study patterns of evolution for the most generally used type constructors in persistent languages and to define a new relation, called extension, which models these patterns. This relation is defined independently of any existing relations used for dealing with evolution. A programming language mechanism is then devised to provide polymorphism over this relation. The polymorphism thus defined is called extension polymorphism. This thesis presents work involving the design and definition of extension polymorphism and an implementation of a type checker for this polymorphism. A proof of soundness for a type system which supports extension polymorphism is also presented.
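
    The extension relation itself is the thesis's contribution and is not reproduced here; as a rough TypeScript sketch (the Person example is hypothetical), structural subtyping shows why inclusion captures additive evolution for record types yet fails for other constructors such as function types.

```typescript
// Original persistent record type.
interface Person { name: string }

// Additive evolution: a later version of the type gains a field.
interface PersonV2 extends Person { email: string }

// For records, inclusion polymorphism tolerates the addition: code written
// against Person still type-checks over the evolved type.
function greet(p: Person): string {
  return `hello, ${p.name}`;
}
greet({ name: "Ada", email: "ada@example.org" } as PersonV2); // still safe

// For function types, subtyping is contravariant in the argument, so
// additive evolution no longer yields a subtype: a (PersonV2) => void
// cannot be used where a (Person) => void is expected.
const needsV2 = (p: PersonV2): void => { p.email.toLowerCase(); };
// const handler: (p: Person) => void = needsV2; // rejected by the checker
```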

    Pure subtype systems: a type theory for extensible software

    This thesis presents a novel approach to type theory called “pure subtype systems”, and a core calculus called DEEP which is based on that approach. DEEP is capable of modeling a number of interesting language techniques that have been proposed in the literature, including mixin modules, virtual classes, feature-oriented programming, and partial evaluation. The design of DEEP was motivated by two well-known problems: “the expression problem” and “the tag elimination problem”. The expression problem is concerned with the design of an interpreter that is extensible, and requires an advanced module system. The tag elimination problem is concerned with the design of an interpreter that is efficient, and requires an advanced partial evaluator. We present a solution in DEEP that solves both problems simultaneously, which has never been done before. These two problems serve as an “acid test” for advanced type theories, because they make heavy demands on the static type system. Our solution in DEEP makes use of the following capabilities. (1) Virtual types are type definitions within a module that can be extended by clients of the module. (2) Type definitions may be mutually recursive. (3) Higher-order subtyping and bounded quantification are used to represent partial information about types. (4) Dependent types and singleton types provide increased type precision. The combination of recursive types, virtual types, dependent types, higher-order subtyping, and bounded quantification is highly non-trivial. We introduce “pure subtype systems” as a way of managing this complexity. Pure subtype systems eliminate the distinction between types and objects; every term can behave as either a type or an object depending on context. A subtype relation is defined over all terms, and subtyping, rather than typing, forms the basis of the theory. We show that higher-order subtyping is strong enough to completely subsume the traditional type relation, and we provide practical algorithms for type checking and for finding minimal types. The cost of using pure subtype systems lies in the complexity of the meta-theory. Unfortunately, we are unable to establish some basic meta-theoretic properties, such as type safety and transitivity elimination, although we have made some progress towards these goals. We formulate the subtype relation as an abstract reduction system, and we show that the type theory is sound if the reduction system is confluent. We can prove that reductions are locally confluent, but a proof of global confluence remains elusive. In summary, pure subtype systems represent a new and interesting approach to type theory. This thesis describes the basic properties of pure subtype systems, and provides concrete examples of how they can be applied. The DEEP calculus demonstrates that our approach has a number of real-world practical applications in areas that have proved to be quite difficult for traditional type theories to handle. However, the ultimate soundness of the technique remains an open question.
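
    DEEP's actual solution rests on virtual types, dependent types, and higher-order subtyping, none of which plain TypeScript can express; purely as a hypothetical sketch of what the expression problem demands, the object-algebra encoding below adds a new variant to an interpreter without modifying any existing code.

```typescript
// Object-algebra encoding (hypothetical names): the interpreter's syntax is
// a generic factory interface, so variants and operations can both be added
// without editing existing definitions.
interface ExprAlg<T> {
  lit(n: number): T;
  add(l: T, r: T): T;
}

// One operation: evaluation.
const evalAlg: ExprAlg<number> = {
  lit: n => n,
  add: (l, r) => l + r,
};

// A new variant is added by extending the interface, not by modifying it.
interface MulAlg<T> extends ExprAlg<T> {
  mul(l: T, r: T): T;
}
const evalMul: MulAlg<number> = { ...evalAlg, mul: (l, r) => l * r };

// A term is polymorphic over whichever algebra it is built from.
const term = <T>(a: MulAlg<T>) => a.add(a.lit(2), a.mul(a.lit(3), a.lit(4)));
console.log(term(evalMul)); // 14
```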

    Modeling and Analysis of Software Product Line Variability in Clafer

    Both feature and class modeling are used in Software Product Line (SPL) engineering to model variability. Feature models are used primarily to represent user-visible characteristics (i.e., features) of products, whereas class models are often used to model types of components and connectors in a product-line architecture. Previous work has explored the approach of using a single language to express both configurations of features and components. The goal was to simplify the definition and analysis of feature-to-component mappings and to allow modeling component options as features. A prominent example of this approach is cardinality-based feature modeling, which extends feature models with multiple instantiation and references to express component-like, replicated features. Another example is to support feature modeling in a class modeling language, such as UML or MOF, using their profiling mechanisms and a stylized use of composition. Both examples have notable drawbacks: cardinality-based feature modeling lacks a constraint language and a well-defined semantics; encoding feature models as class models, and evolving the encoding, brings extra complexity. This dissertation presents Clafer (class, feature, reference), a class modeling language with first-class support for feature modeling. Clafer can express rich structural models augmented with complex constraints, i.e., domain, variability, and component models, and meta-models. Clafer supports: (i) class-based meta-models, (ii) object models (with uncertainty, if needed), (iii) feature models with attributes and multiple instantiation, (iv) configurations of feature models, (v) mixtures of meta- and feature models and model templates, and (vi) first-order logic constraints. Clafer also makes it possible to arrange models into multiple specialization and extension layers via constraints and inheritance. At the same time, in designing Clafer we wanted to create a language that builds upon as few concepts as possible and is easy to learn. The language is supported by tools for SPL verification and optimization. We propose to unify basic modeling constructs into a single concept, called clafer; in other words, Clafer is not a hybrid language. We identify several key mechanisms allowing a class modeling language to express feature models concisely. We provide Clafer with a formal semantics built in a novel, structurally explicit way. As Clafer subsumes cardinality-based feature modeling with attributes, references, and constraints, we are the first to precisely define the semantics of such models. We also explore the notion of partial instantiation, which allows for modeling with uncertainty and variability. We show that Object-Oriented Modeling (OOM) languages with no direct support for partial instances can support them via class modeling, using subclassing and strengthening of multiplicity constraints. We make the encoding of partial instances via subclassing precise and general. Clafer uses this encoding and pushes the idea even further: it provides a syntactic unification of types and (partial) instances via subclassing and redefinition. We evaluate Clafer analytically and experimentally. The analytical evaluation shows that Clafer can concisely express feature and meta-models via a uniform syntax and unified semantics. The experimental evaluation shows that 1) Clafer can express a variety of realistic rich structural models with complex constraints, such as variability models, meta-models, model templates, and domain models; and 2) useful analyses can be performed within seconds.
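
    Clafer's concrete syntax and tooling are not reproduced in this abstract; as a loose hypothetical sketch of what a feature model with constraints denotes, the TypeScript below encodes features as booleans and checks a candidate configuration against its constraints.

```typescript
// Hypothetical feature model: a product configuration assigns booleans to
// feature names; constraints are predicates the configuration must satisfy.
type Config = { [feature: string]: boolean };

const constraints: Array<(c: Config) => boolean> = [
  c => !c.gps || c.screen,                 // gps requires a screen
  c => !(c.basicScreen && c.touchScreen),  // screen alternatives are exclusive
];

// A configuration is a valid product iff every constraint holds.
function isValid(c: Config): boolean {
  return constraints.every(check => check(c));
}

console.log(isValid({ gps: true, screen: true, basicScreen: true, touchScreen: false }));   // true
console.log(isValid({ gps: true, screen: false, basicScreen: false, touchScreen: false })); // false
```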

    Proceedings of the Third International Workshop on Proof-Carrying Code and Software Certification

    This NASA conference publication contains the proceedings of the Third International Workshop on Proof-Carrying Code and Software Certification, held as part of LICS in Los Angeles, CA, USA, on August 15, 2009. Software certification demonstrates the reliability, safety, or security of software systems in such a way that it can be checked by an independent authority with minimal trust in the techniques and tools used in the certification process itself. It can build on existing validation and verification (V&V) techniques but introduces the notion of explicit software certificates, which contain all the information necessary for an independent assessment of the demonstrated properties. One such example is proof-carrying code (PCC), which is an important and distinctive approach to enhancing trust in programs. It provides a practical framework for independent assurance of program behavior, especially where source code is not available, or the code author and user are unknown to each other. The workshop will address theoretical foundations of logic-based software certification as well as practical examples and work on alternative application domains. Here "certificate" is construed broadly, to include not just mathematical derivations and proofs but also safety and assurance cases, or any formal evidence that supports the semantic analysis of programs: that is, evidence about an intrinsic property of code and its behaviour that can be independently checked by any user, intermediary, or third party. These guarantees mean that software certificates raise trust in the code itself, distinct from and complementary to any existing trust in the creator of the code, the process used to produce it, or its distributor. In addition to the contributed talks, the workshop featured two invited talks, by Kelly Hayhurst and Andrew Appel. The PCC 2009 website can be found at http://ti.arc.nasa.gov/event/pcc09.
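
    As a toy hypothetical sketch of the division of labour PCC implies (not any real PCC system), the TypeScript below ships a program together with a checkable certificate; the consumer trusts neither producer nor compiler and runs only the small checker.

```typescript
// Toy proof-carrying code: the "program" is a list of non-negative counter
// increments, and the certificate claims an upper bound on the final value.
interface Certified {
  program: number[];     // untrusted low-level "code"
  claimedBound: number;  // safety certificate: final counter <= claimedBound
}

// Independent checker: small, cheap, and the only trusted component.
function check(c: Certified): boolean {
  const total = c.program.reduce((acc, step) => acc + Math.max(step, 0), 0);
  return total <= c.claimedBound;
}

const pkg: Certified = { program: [1, 2, 3], claimedBound: 10 };
console.log(check(pkg) ? "accept: certificate verified" : "reject: bound violated");
```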

    The Essence of Nested Composition

    Calculi with disjoint intersection types support an introduction form for intersections called the merge operator, while retaining a coherent semantics. Disjoint intersection types have great potential to serve as a foundation for powerful, flexible and yet type-safe and easy-to-reason-about OO languages. This paper shows how to significantly increase the expressive power of disjoint intersection types by adding support for nested subtyping and composition, which enables simple forms of family polymorphism to be expressed in the calculus. The extension with nested subtyping and composition is challenging, for two different reasons. Firstly, the subtyping relation that supports these features is non-trivial, especially when it comes to obtaining an algorithmic version. Secondly, the syntactic method used to prove coherence for previous calculi with disjoint intersection types is too inflexible, making it hard to extend those calculi with new features (such as nested subtyping). We show how to address the first problem by adapting and extending the Barendregt, Coppo and Dezani (BCD) subtyping rules for intersections with records and coercions. A sound and complete algorithmic system is obtained by using an approach inspired by Pierce's work. To address the second problem we replace the syntactic method to prove coherence by a semantic proof method based on logical relations. Our work has been fully formalized in Coq, and we have an implementation of our calculus.
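
    The calculus's merge operator and its disjointness conditions are its own constructs; TypeScript's intersection types give a loose hypothetical analogue in which merging two records produces a value of the intersection type (TypeScript does not enforce the disjointness that keeps the calculus coherent).

```typescript
// Hypothetical analogue of the merge operator: combine two values so the
// result inhabits the intersection type A & B.
function merge<A extends object, B extends object>(a: A, b: B): A & B {
  return Object.assign({}, a, b);
}

const withName = { name: "point" };
const withCoords = { x: 1, y: 2 };
const p = merge(withName, withCoords); // type: { name: string } & { x: number; y: number }
console.log(p.name, p.x, p.y);
```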

    Proceedings of Monterey Workshop 2001: Engineering Automation for Software Intensive System Integration

    The 2001 Monterey Workshop on Engineering Automation for Software Intensive System Integration was sponsored by the Office of Naval Research, the Air Force Office of Scientific Research, the Army Research Office and the Defense Advanced Research Projects Agency. It is our pleasure to thank the workshop advisors and sponsors for their vision of a principled engineering solution for software and for their tireless, multi-year effort in supporting a series of workshops to bring everyone together. This workshop is the 8th in a series of international workshops. The workshop was held at the Monterey Beach Hotel, Monterey, California, during June 18-22, 2001. The general theme of the workshop series has been to present and discuss research that aims at increasing the practical impact of formal methods for software and systems engineering. The particular focus of this workshop was "Engineering Automation for Software Intensive System Integration". Previous workshops have focused on issues including "Real-time & Concurrent Systems", "Software Merging and Slicing", "Software Evolution", "Software Architecture", "Requirements Targeting Software" and "Modeling Software System Structures in a Fastly Moving Scenario". Sponsors: Office of Naval Research; Air Force Office of Scientific Research; Army Research Office; Defense Advanced Research Projects Agency. Approved for public release; distribution unlimited.

    SAVCBS 2004 Specification and Verification of Component-Based Systems: Workshop Proceedings

    This is the proceedings of the 2004 SAVCBS workshop. The workshop is concerned with how formal (i.e., mathematical) techniques can be, or should be, used to establish a suitable foundation for the specification and verification of component-based systems. Component-based systems are a growing concern for the software engineering community. Specification and reasoning techniques are urgently needed to permit composition of systems from components. Component-based specification and verification is also vital for scaling advanced verification techniques, such as extended static analysis and model checking, to the size of real systems. The workshop considers formalization of both functional and non-functional behavior, such as performance or reliability.