
    Testing an Intentional Naming Scheme Using Genetic Algorithms

    Abstract. Various attempts have been made to use genetic algorithms (GAs) for software testing, a problem that consumes a large amount of time and effort in software development. We demonstrate the use of GAs in automating the testing of complex data structures and the methods that manipulate them, which to our knowledge has not been successfully demonstrated before on non-trivial software structures. We evaluate the effectiveness of our GA-based test suite generation technique by applying it to test the design and implementation of the Intentional Naming System (INS), a new scheme for resource discovery and service location in a dynamic networked environment. Our analysis using GAs reveals serious problems with both the design of INS and its inventors' implementation.
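
    As a concrete illustration of the technique the abstract describes, the sketch below evolves sequences of method calls against a data structure under test, rewarding sequences that exercise more distinct behaviors. It is a minimal, hypothetical Python sketch: NameTable is a toy stand-in for an INS name-tree, and every identifier in it is invented for illustration, not taken from the authors' implementation.

        import random

        # Toy stand-in for a structure under test; the paper's target, INS
        # name-trees, is far richer. All names here are illustrative.
        class NameTable:
            def __init__(self):
                self.entries = {}

            def add(self, name, record):
                self.entries.setdefault(name, set()).add(record)

            def lookup(self, name):
                return self.entries.get(name, set())

        OPS, NAMES = ("add", "lookup"), ("a", "b", "c")

        def behaviors(chromosome):
            """Execute an op sequence; collect which behaviors it exercised."""
            table, covered = NameTable(), set()
            for i, (op, name) in enumerate(chromosome):
                if op == "add":
                    table.add(name, i)
                    covered.add(("add", len(table.entries[name]) > 1))  # fresh vs. duplicate
                else:
                    covered.add(("lookup", bool(table.lookup(name))))   # hit vs. miss
            return covered

        def fitness(chromosome):
            return len(behaviors(chromosome))  # reward more distinct behaviors

        def mutate(c):
            c = list(c)
            c[random.randrange(len(c))] = (random.choice(OPS), random.choice(NAMES))
            return c

        def crossover(a, b):
            cut = random.randrange(1, len(a))
            return a[:cut] + b[cut:]

        def evolve(pop_size=20, length=8, generations=30):
            gene = lambda: (random.choice(OPS), random.choice(NAMES))
            pop = [[gene() for _ in range(length)] for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=fitness, reverse=True)
                elite = pop[: pop_size // 2]
                pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                               for _ in range(pop_size - len(elite))]
            return max(pop, key=fitness)

        print(evolve())  # a call sequence covering many distinct behaviors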

    A Dual-Engine for Early Analysis of Critical Systems

    This paper presents a framework for modeling, simulating, and checking properties of critical systems based on the Alloy language -- a declarative, first-order, relational logic with a built-in transitive closure operator. The paper introduces a new dual-analysis engine that is capable of providing both counterexamples and proofs. Counterexamples are found fully automatically using an SMT solver, which provides better support for numerical expressions than the existing Alloy Analyzer. Proofs, however, cannot always be found automatically, since the Alloy language is undecidable. Our engine offers an economical approach: it first tries to prove properties using a fully automatic, SMT-based analysis, and switches to an interactive theorem prover only if the first attempt fails. The paper also reports on applying our framework to Microsoft's COM standard and the mark-and-sweep garbage collection algorithm. Comment: Workshop on Dependable Software for Critical Infrastructures (DSCI), Berlin 201
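
    The economical try-automatic-first strategy can be sketched in a few lines. The following is a hypothetical Python illustration that uses the z3 SMT solver's Python bindings as a stand-in for the paper's SMT back end; the Alloy-to-SMT translation and the interactive prover hand-off are elided, and the example properties are invented.

        from z3 import Int, Not, Solver, sat, unsat

        def smt_check(prop):
            """sat on the negation => counterexample; unsat => proof; else unknown."""
            s = Solver()
            s.set("timeout", 5000)  # automation must fail fast to stay economical
            s.add(Not(prop))
            r = s.check()
            if r == sat:
                return "counterexample", s.model()
            if r == unsat:
                return "proved", None
            return "unknown", None

        def interactive_prove(prop):
            # Placeholder: escalate to an interactive theorem prover session here.
            raise NotImplementedError("interactive proof attempt")

        def dual_engine(prop):
            verdict, witness = smt_check(prop)
            if verdict != "unknown":
                return verdict, witness     # fully automatic proof or refutation
            return interactive_prove(prop), None

        x = Int("x")
        print(dual_engine(x + 1 > x))   # valid: ('proved', None)
        print(dual_engine(x > 10))      # invalid: ('counterexample', [x = ...])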

    A Study of the Learnability of Relational Properties: Model Counting Meets Machine Learning (MCML)

    This paper introduces the MCML approach for empirically studying the learnability of relational properties that can be expressed in the well-known software design language Alloy. A key novelty of MCML is quantification of the performance of, and semantic differences among, trained machine learning (ML) models, specifically decision trees, with respect to entire (bounded) input spaces, and not just for given training and test datasets (as is the common practice). MCML reduces the quantification problems to the classic complexity theory problem of model counting, and employs state-of-the-art model counters. The results show that relatively simple ML models can achieve surprisingly high performance (accuracy and F1-score) when evaluated in the common setting of using training and test datasets -- even when the training dataset is much smaller than the test dataset -- indicating the seeming simplicity of learning relational properties. However, MCML metrics based on model counting show that the performance can degrade substantially when tested against the entire (bounded) input space, indicating the high complexity of precisely learning these properties and the usefulness of model counting in quantifying the true performance.
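
    The gap between the two evaluation regimes is easy to reproduce in miniature. The hypothetical Python sketch below trains a decision tree on a toy relational property (reflexivity of a 3x3 Boolean relation, standing in for an Alloy-expressible property) and scores it both on a sampled test set and, via plain enumeration in place of the paper's model counters, over the entire bounded input space.

        from itertools import product
        import random
        from sklearn.tree import DecisionTreeClassifier

        # Bounded input space: all binary relations on 3 atoms, as 9-bit vectors.
        N = 3
        space = [list(bits) for bits in product([0, 1], repeat=N * N)]
        labels = [int(all(v[i * N + i] for i in range(N))) for v in space]  # reflexivity

        random.seed(0)
        train_idx = random.sample(range(len(space)), 64)   # small training sample
        clf = DecisionTreeClassifier(random_state=0)
        clf.fit([space[i] for i in train_idx], [labels[i] for i in train_idx])

        # Conventional evaluation: accuracy on a sampled test set.
        test_idx = random.sample(range(len(space)), 64)
        test_preds = clf.predict([space[i] for i in test_idx])
        test_acc = sum(int(p) == labels[i] for p, i in zip(test_preds, test_idx)) / len(test_idx)

        # MCML-style evaluation: agreement counted over the *entire* bounded space.
        all_preds = clf.predict(space)
        space_acc = sum(int(p) == l for p, l in zip(all_preds, labels)) / len(space)

        print(f"sampled-test accuracy: {test_acc:.3f}  whole-space accuracy: {space_acc:.3f}")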

    Naming and discovery in networks : architecture and economics

    In less than three decades, the Internet was transformed from a research network available to the academic community into an international communication infrastructure. Despite its tremendous success, there is a growing consensus in the research community that the Internet has architectural limitations that need to be addressed in an effort to design a future Internet. Among the main technical limitations are the lack of mobility support and the lack of security and trust. The Internet, and particularly TCP/IP, identifies endpoints using a location/routing identifier, the IP address. Coupling the endpoint identifier to the location identifier hinders mobility and poorly identifies the actual endpoint. On the other hand, the lack of security has been attributed to limitations in both the network and the endpoint. Authentication, for example, is one of the main concerns in the architecture and is hard to implement partly due to the lack of identity support. The general problem that this dissertation is concerned with is that of designing a future Internet. Towards this end, we focus on two specific sub-problems. The first problem is the lack of a framework for thinking about architectures and their design implications. It was obvious after surveying the literature that the majority of the architectural work remains idiosyncratic and descriptions of network architectures are mostly idiomatic. This has led to the overloading of architectural terms, and to the emergence of a large body of network architecture proposals with no clear understanding of their cross similarities, compatibility points, unique properties, and architectural performance and soundness. The second problem concerns the limitations of traditional naming and discovery schemes in terms of service differentiation and economic incentives. One of the recurring themes in the community is the need to separate an entity's identifier from its locator to enhance mobility and security. Separation of identifier and locator is a widely accepted design principle for a future Internet. Separation, however, requires a process to translate from the identifier to the locator when discovering a network path to some identified entity. We refer to this process as identifier-based discovery, or simply discovery, and we recognize two limitations that are inherent in the design of traditional discovery schemes. The first limitation is the homogeneity of the service, where all entities are assumed to have the same discovery performance requirements. The second limitation is the inherent incentive mismatch as it relates to sharing the cost of discovery. This dissertation addresses both sub-problems: the architectural framework as well as the naming and discovery limitations.
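
    The identifier/locator separation and the discovery step it entails can be pictured with a small sketch. The Python below is purely illustrative and not a scheme from the dissertation: identifiers are flat, self-certifying hashes (one common way to add identity support), and a toy registry resolves them to locators that change under mobility.

        import hashlib

        class DiscoveryService:
            def __init__(self):
                self.table = {}                      # identifier -> locator

            def register(self, identifier, locator):
                self.table[identifier] = locator     # (re)bind on attach or move

            def resolve(self, identifier):
                return self.table.get(identifier)    # identifier-based discovery

        def make_identifier(public_key: bytes) -> str:
            # Self-certifying flat identifier: a hash of a key, independent of
            # location, so the endpoint stays identifiable as it moves.
            return hashlib.sha256(public_key).hexdigest()[:16]

        svc = DiscoveryService()
        eid = make_identifier(b"endpoint-key")
        svc.register(eid, "192.0.2.10:443")          # initial attachment
        print(svc.resolve(eid))                      # -> 192.0.2.10:443
        svc.register(eid, "198.51.100.7:443")        # endpoint moved; identifier unchanged
        print(svc.resolve(eid))                      # -> 198.51.100.7:443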

    Declarative symbolic pure-logic model checking

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005. Includes bibliographical references (p. 173-181). Model checking, a technique for finding errors in systems, involves building a formal model that describes possible system behaviors and correctness conditions, and using a tool to search for model behaviors violating correctness properties. Existing model checkers are well-suited for analyzing control-intensive algorithms (e.g. network protocols with simple node state). Many important analyses, however, fall outside the capabilities of existing model checkers. Examples include checking algorithms with complex state, distributed algorithms over all network topologies, and highly declarative models. This thesis addresses the problem of building an efficient model checker that overcomes these limitations. The work builds on Alloy, a relational modeling language. Previous work has defined the language and shown that it can be analyzed by translation to SAT. The primary contributions of this thesis include: a modeling paradigm for describing complex structures in Alloy; significant improvements in scalability of the analyzer; and improvements in usability of the analyzer via the addition of a debugger for overconstraints. Together, these changes make model checking practical for important new classes of analyses. While the work was done in the context of Alloy, some techniques generalize to other verification tools. by Ilya A. Shlyakhter. S.M.
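
    The small-scope, relational style of checking that the thesis builds on can be illustrated without the SAT machinery. In the hypothetical Python sketch below, exhaustive enumeration over a 3-atom scope stands in for the translation to SAT, and a deliberately false claim about transitive closure yields a counterexample; the property and names are invented for illustration.

        from itertools import product

        ATOMS = range(3)                                   # scope: 3 atoms
        PAIRS = [(a, b) for a in ATOMS for b in ATOMS]

        def transitive_closure(rel):
            closure = set(rel)
            changed = True
            while changed:
                changed = False
                for (a, b) in list(closure):
                    for (c, d) in list(closure):
                        if b == c and (a, d) not in closure:
                            closure.add((a, d))
                            changed = True
            return closure

        def irreflexive(rel):
            return not any(a == b for a, b in rel)

        def check(property_holds):
            """Return a counterexample relation within scope, or None."""
            for bits in product([0, 1], repeat=len(PAIRS)):
                rel = {p for p, bit in zip(PAIRS, bits) if bit}
                if not property_holds(rel):
                    return rel
            return None

        # Claim: the transitive closure of an irreflexive relation is irreflexive.
        cex = check(lambda rel: (not irreflexive(rel))
                    or irreflexive(transitive_closure(rel)))
        print(cex)   # {(1, 2), (2, 1)}: irreflexive, yet its closure has (1, 1)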

    Agile Validation of Model Transformations using Compound F-Alloy Specifications

    Model transformations play a key role in model-driven software engineering approaches. Validation of model transformations is crucial for the quality assurance of the software systems to be constructed. The relational-logic-based specification language Alloy and its accompanying tool, the Alloy Analyzer, have been used in the past to validate properties of model transformations. However, Alloy-based analysis of transformations suffers from several limitations. On one hand, it is time consuming and does not scale well. On the other hand, the reliance on Alloy, being a formal method, prevents the effective involvement of domain experts in the validation process, which is crucial for pinpointing domain-pertinent errors. Those limitations are even more severe when it comes to transformations whose input and/or output are themselves transformations (called compound transformations), because they are inherently more complex. To tackle the performance and scalability limitations, in previous work we proposed an Alloy-based Domain Specific Language (DSL), called F-Alloy, that is tailored to model transformation specifications. Instead of pure analysis-based validation, F-Alloy speeds up the validation of model transformations by applying a hybrid strategy that combines analysis with interpretation. In this paper, we formalize the notion of “hybrid analysis” and further extend it to also support efficient validation of compound transformations. To enable the effective involvement of domain experts in the validation process, we propose in this paper a new approach to model transformation validation, called Visualization-Based Validation (VBV). Following VBV, representative instances of a to-be-validated model transformation are automatically generated by hybrid analysis and shown to domain experts for feedback, in a visual notation that they are familiar with. We prescribe a process to guide the application of VBV to model transformations and illustrate it with a benchmark model transformation.
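
    A rough sketch of the hybrid strategy and the VBV feedback loop, with all names invented for illustration: the declarative part (generating well-formed source instances) is handled by search, the transformation itself is interpreted as an ordinary function rather than solved for, and each source/target pair is rendered for a domain expert to inspect.

        from itertools import combinations

        NODES = ["a", "b", "c"]

        def source_instances():
            """Stand-in for Alloy-style generation: all simple graphs on NODES."""
            edges = list(combinations(NODES, 2))
            for r in range(len(edges) + 1):
                for chosen in combinations(edges, r):
                    yield set(chosen)

        def transform(graph):
            """Interpreted transformation (illustrative): keep edges touching 'a'."""
            return {e for e in graph if "a" in e}

        def visualize(graph):
            """Crude textual rendering, standing in for VBV's visual notation."""
            return ", ".join(f"{u}--{v}" for u, v in sorted(graph)) or "(empty)"

        # Show representative source/target pairs to a domain expert for feedback.
        for src in list(source_instances())[:4]:
            print(f"{visualize(src):<20} => {visualize(transform(src))}")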

    CLiFF Notes: Research in the Language Information and Computation Laboratory of The University of Pennsylvania

    This report takes its name from the Computational Linguistics Feedback Forum (CLiFF), an informal discussion group for students and faculty. However, the scope of the research covered in this report is broader than the title might suggest; this is the yearly report of the LINC Lab, the Language, Information and Computation Laboratory of the University of Pennsylvania. It may at first be hard to see the threads that bind together the work presented here, work by faculty, graduate students, and postdocs in the Computer Science, Psychology, and Linguistics Departments, and the Institute for Research in Cognitive Science. It includes prototypical natural language fields such as Combinatory Categorial Grammars, Tree Adjoining Grammars, syntactic parsing, and the syntax-semantics interface; but it extends to statistical methods, plan inference, instruction understanding, intonation, causal reasoning, free word order languages, geometric reasoning, medical informatics, connectionism, and language acquisition. With 48 individual contributors and six projects represented, this is the largest LINC Lab collection to date, and the most diverse.