
    An experience of modularity through design

    We aim to utilise the experiences of a marine industry-based design team to determine the need for research into a modular design methodology in an industrial environment. To achieve this, we couple the outcome of a current design project with the findings of a recent literature survey, with two objectives: first, to clarify why a methodology is required and, second, to define the key elements which the methodology would have to realise or address. The potential benefits of modularity have long been recognised in the shipbuilding industry. Many shipbuilders adopt a 'module' approach to ship construction whereby the ship structure is separated into a number of large structural 'blocks' to ease manufacture and manoeuvrability during construction. However, as understanding of the capabilities of modularity as a design tool develops, there is increased interest in capitalising on its differing life-phase benefits, such as reduced design cost and time, increased ease of maintenance, upgrade, re-use and redesign, and standardisation across individual products and product families. This is especially pertinent in naval shipbuilding, where maintaining a class of ship requires that all previously designed ships in that class have similar outfitting and can interface with the new ship in terms of propulsion, weapons, communications and electronics, and thus often require some form of retrofit. Therefore, many shipbuilders are moving from viewing modularity as a purely 'manufacturing' principle to viewing it as a design-centred principle. However, as noted by Chang and Ward, 'none of the design theories or tools in the mechanical world serves as an articulate procedure for designers to follow in practising modular design'. Thus, despite the identified need to introduce modular principles at an earlier stage than detail design and construction, there is little aid in the form of tools, techniques and methodologies for designers in practice.

    Lightweight Multilingual Software Analysis

    Developer preferences, language capabilities and the persistence of older languages contribute to the trend that large software codebases are often multilingual, that is, written in more than one computer language. While developers can leverage monolingual software development tools to build software components, companies are faced with the problem of managing the resulting large, multilingual codebases to address issues with security, efficiency, and quality metrics. The key challenge is the opaque nature of the language interoperability interface: one language calling procedures in a second (which may call a third, or even call back to the first), resulting in a potentially tangled, inefficient and insecure codebase. An architecture is proposed for lightweight static analysis of large multilingual codebases: the MLSA architecture. Its modular and table-oriented structure addresses the open-ended nature of multiple languages and language interoperability APIs. As an application, we focus here on the construction of call-graphs that capture both inter-language and intra-language calls. The algorithms for extracting multilingual call-graphs from codebases are presented, and several examples of multilingual software engineering analysis are discussed. The state of the implementation and testing of MLSA is presented, and the implications for future work are discussed. Comment: 15 pages.
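    The table-oriented idea can be illustrated with a minimal sketch (hypothetical names and data layout, not the MLSA API): per-language front ends emit rows of call facts into tables, and a combiner merges them into a single call graph in which inter-language and intra-language edges coexist.

```python
from collections import defaultdict

def build_call_graph(rows):
    """Merge per-language call tables into one multilingual call graph.

    rows: iterable of (caller_lang, caller, callee_lang, callee).
    Nodes are qualified as 'lang::function' so that cross-language edges
    (e.g. Python calling into C) are explicit in the same graph.
    """
    graph = defaultdict(set)
    for caller_lang, caller, callee_lang, callee in rows:
        graph[f"{caller_lang}::{caller}"].add(f"{callee_lang}::{callee}")
    return graph

# Toy call facts: one inter-language call and one intra-language call.
rows = [
    ("Python", "main", "C", "compress"),        # inter-language call
    ("C", "compress", "C", "deflate_block"),    # intra-language call
]
for caller, callees in sorted(build_call_graph(rows).items()):
    print(caller, "->", sorted(callees))
```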

    Procedure-modular specification and verification of temporal safety properties

    This paper describes ProMoVer, a tool for fully automated procedure-modular verification of Java programs equipped with method-local and global assertions that specify safety properties of sequences of method invocations. Modularity at the procedure level is a natural instantiation of the modular verification paradigm, where correctness of global properties is relativized to the local properties of the methods rather than to their implementations. Here, it is based on the construction of maximal models for a program model that abstracts away from program data. This approach allows global properties to be verified in the presence of code evolution, multiple method implementations (as arising from software product lines), or even unknown method implementations (as in mobile code for open platforms). ProMoVer automates a typical verification scenario for a previously developed tool set for compositional verification of control flow safety properties, and provides appropriate pre- and post-processing. Both linear-time temporal logic and finite automata are supported as formalisms for expressing local and global safety properties, allowing the user to choose a suitable format for the property at hand. Modularity is exploited by a mechanism for proof reuse that detects and minimizes the verification tasks resulting from changes in the code and the specifications. The verification task is relatively light-weight due to support for abstraction from private methods and automatic extraction of candidate specifications from method implementations. We evaluate the tool on a number of applications from the domains of Java Card and web-based applications.
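    As a toy illustration of the kind of property ProMoVer targets (not its actual input format or verification algorithm), the sketch below checks a sequence of method invocations against a finite automaton encoding a global safety property.

```python
# Hypothetical safety property: "commit may only be called after begin".
# States and transitions of a small finite automaton over method names.
SAFETY_AUTOMATON = {
    ("idle", "begin"): "open",
    ("open", "write"): "open",
    ("open", "commit"): "idle",
}

def satisfies(call_sequence, start="idle"):
    """Return True iff every call in the sequence has an enabled transition."""
    state = start
    for method in call_sequence:
        key = (state, method)
        if key not in SAFETY_AUTOMATON:
            return False          # violation: call not allowed in this state
        state = SAFETY_AUTOMATON[key]
    return True

print(satisfies(["begin", "write", "commit"]))   # True
print(satisfies(["commit"]))                     # False: commit before begin
```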

    Modular Composition of Language Features through Extensions of Semantic Language Models

    Today, programming or specification languages are often extended in order to customize them for a particular application domain or to refine the language definition. The extension of a semantic model is often at the centre of such a language extension. We will present a framework for linking basic and extended models. The example which we are going to use is the RSL concurrency model. The RAISE specification language RSL is a formal wide-spectrum specification language which integrates different features, such as state-basedness, concurrency and modules. The concurrency features of RSL are based on a refinement of a classical denotational model for process algebras. A modification was necessary to integrate state-based features into the basic model in order to meet requirements in the design of RSL. We will investigate this integration, formalising the relationship between the basic model and the adapted version in a rigorous way. The result will be a modular composition of the basic process model and new language features, such as state-based features or input/output. We will show general mechanisms for integrating new features into a language by extending language models in a structured, modular way. In particular, we will concentrate on the preservation of properties of the basic model in these extensions.
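    A schematic sketch of the model-extension idea (a deliberately simplified analogy, not the actual RSL denotational semantics): a basic model that denotes processes by sets of traces is embedded into a state-extended model by threading the state through unchanged, which is the sense in which properties of the basic model can be preserved under the extension.

```python
def basic_denotation(process):
    """Basic model (toy): a process denotes a set of event traces."""
    return {tuple(process)}                      # e.g. {('a', 'b')}

def embed(basic_meaning):
    """Lift a basic denotation into a state-extended model: the state is
    threaded through unchanged, so basic-model behaviour is kept intact."""
    def extended(initial_state):
        return {(trace, initial_state) for trace in basic_meaning}
    return extended

stateless = basic_denotation(["a", "b"])
stateful = embed(stateless)
print(stateful(0))                               # {(('a', 'b'), 0)}
```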

    A New Constructivist AI: From Manual Methods to Self-Constructive Systems

    The development of artificial intelligence (AI) systems has to date been largely one of manual labor. This constructionist approach to AI has resulted in systems with limited-domain application and severe performance brittleness. No AI architecture to date incorporates, in a single system, the many features that make natural intelligence general-purpose, including system-wide attention, analogy-making, system-wide learning, and various other complex transversal functions. Going beyond current AI systems will require significantly more complex system architecture than attempted to date. The heavy reliance on direct human specification and intervention in constructionist AI brings severe theoretical and practical limitations to any system built that way. One way to address the challenge of artificial general intelligence (AGI) is replacing a top-down architectural design approach with methods that allow the system to manage its own growth. This calls for a fundamental shift from hand-crafting to self-organizing architectures and self-generated code – what we call a constructivist AI approach, in reference to the self-constructive principles on which it must be based. Methodologies employed for constructivist AI will be very different from today's software development methods; instead of relying on direct design of mental functions and their implementation in a cognitive architecture, they must address the principles – the "seeds" – from which a cognitive architecture can automatically grow. In this paper I describe the argument in detail and examine some of the implications of this impending paradigm shift.

    Technology and skills in the construction industry


    Supporting the automated generation of modular product line safety cases

    The effective reuse of design assets in safety-critical Software Product Lines (SPL) would require the reuse of safety analyses of those assets in the variant contexts of certification of products derived from the SPL. This in turn requires the traceability of SPL variation across design, including variation in safety analysis and safety cases. In this paper, we propose a method and tool to support the automatic generation of modular SPL safety case architectures from the information provided by SPL feature modeling and model-based safety analysis. The Goal Structuring Notation (GSN), a safety case modeling notation, and its modular extensions supported by the D-Case Editor were used to implement the method as automated tool support. The tool was used to generate a modular safety case for an automotive Hybrid Braking System SPL.
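    The generation step can be sketched roughly as follows (hypothetical data model and module names, not the D-Case Editor's GSN format): each selected SPL feature contributes a safety-case module whose subgoals come from the model-based safety analysis, and the product safety case is assembled from those modules.

```python
import json

# Hypothetical feature-to-hazard mapping derived from model-based safety analysis.
FEATURE_SAFETY_MODEL = {
    "regenerative_braking": ["unintended deceleration"],
    "hydraulic_fallback": ["loss of braking on electrical failure"],
}

def generate_safety_case(selected_features):
    """Return a modular safety-case skeleton for one product variant."""
    modules = []
    for feature in selected_features:
        hazards = FEATURE_SAFETY_MODEL.get(feature, [])
        modules.append({
            "module": f"SC_{feature}",
            "top_goal": f"Feature '{feature}' is acceptably safe",
            "subgoals": [f"Hazard '{h}' is mitigated" for h in hazards],
        })
    return {"product_goal": "The braking system is acceptably safe",
            "modules": modules}

print(json.dumps(generate_safety_case(["regenerative_braking",
                                       "hydraulic_fallback"]), indent=2))
```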

    An object-oriented approach to application generation

    The TUBA system consists of a set of integrated tools for the generation of business-oriented applications. Tools and applications have a modular structure, represented by class objects. The article describes the architecture of the environments for file processing, screen handling and report writing.
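    A minimal sketch of the class-object idea (hypothetical class names, not TUBA's actual interfaces): generator tools share a common interface, and an application is assembled from the artifacts each tool produces.

```python
class Tool:
    """Common interface for generator tools."""
    def generate(self, spec):
        raise NotImplementedError

class ScreenHandler(Tool):
    def generate(self, spec):
        return f"-- input screen for {spec['entity']} --"

class ReportWriter(Tool):
    def generate(self, spec):
        return f"-- report layout for {spec['entity']} --"

def generate_application(spec, tools):
    """Compose an application from the artifacts produced by each tool."""
    return [tool.generate(spec) for tool in tools]

print(generate_application({"entity": "Invoice"},
                           [ScreenHandler(), ReportWriter()]))
```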

    Incremental and Modular Context-sensitive Analysis

    Context-sensitive global analysis of large code bases can be expensive, which can make its use impractical during software development. However, there are many situations in which modifications are small and isolated within a few components, and it is desirable to reuse as much of the previous analysis results as possible. This has been achieved to date through incremental global analysis fixpoint algorithms that achieve cost reductions at fine levels of granularity, such as changes in program lines. However, these fine-grained techniques are not directly applicable to modular programs, nor are they designed to take advantage of modular structures. This paper describes, implements, and evaluates an algorithm that performs efficient context-sensitive analysis incrementally on modular partitions of programs. The experimental results show that the proposed modular algorithm achieves significant improvements, in both time and memory consumption, when compared to existing non-modular, fine-grained incremental analysis techniques. Furthermore, thanks to the proposed inter-modular propagation of analysis information, our algorithm also outperforms traditional modular analysis even when analyzing from scratch. Comment: 56 pages, 27 figures. To be published in Theory and Practice of Logic Programming (TPLP). v3 corresponds to the extended version of the ICLP2018 Technical Communication, v4 is the revised version submitted to TPLP, and v5 (this one) is the final author version to be published in TPLP.
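    The module-level reuse idea can be sketched as follows (a simplification, not the algorithm evaluated in the paper): cached per-module summaries are kept, and a change triggers reanalysis only of the modified modules and of any module whose imported summaries actually changed.

```python
def incremental_analysis(modules, deps, analyze, summaries, changed):
    """modules: module names; deps: module -> set of imported modules;
    analyze(m, imported_summaries) -> summary; summaries: cached results;
    changed: set of modules edited since the last run."""
    worklist = set(changed)
    while worklist:
        m = worklist.pop()
        imported = {d: summaries.get(d) for d in deps.get(m, set())}
        new_summary = analyze(m, imported)
        if summaries.get(m) != new_summary:       # propagate only on change
            summaries[m] = new_summary
            worklist |= {n for n in modules if m in deps.get(n, set())}
    return summaries

def reachable_imports(module, imported):
    """Toy domain: a module's summary is the set of modules it can reach."""
    return frozenset(imported) | frozenset(
        x for s in imported.values() if s for x in s)

deps = {"app": {"lib"}, "lib": {"util"}, "util": set()}
print(incremental_analysis(deps.keys(), deps, reachable_imports, {},
                           changed={"util"}))
```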