1,414 research outputs found

    Standardized development of computer software. Part 1: Methods

    This work is a two-volume set on standards for modern software engineering methodology. This volume presents a tutorial and practical guide to the efficient development of reliable computer software, a unified and coordinated discipline for design, coding, testing, documentation, and project organization and management. The aim of the monograph is to provide formal disciplines for increasing the probability of securing software that is characterized by high degrees of initial correctness, readability, and maintainability, and to promote practices which aid in the consistent and orderly development of a total software system within schedule and budgetary constraints. These disciplines are set forth as a set of rules to be applied during software development to drastically reduce the time traditionally spent in debugging, to increase documentation quality, to foster understandability among those who must come in contact with it, and to facilitate operations and alterations of the program as requirements on the program environment change.

    Sequential category aggregation and partitioning approaches for multi-way contingency tables based on survey and census data

    Large contingency tables arise in many contexts but especially in the collection of survey and census data by government statistical agencies. Because the vast majority of the variables in this context have a large number of categories, agencies and users need a systematic way of constructing tables which are summaries of such contingency tables. We propose such an approach in this paper by finding members of a class of restricted log-linear models which maximize the likelihood of the data and use this to find a parsimonious means of representing the table. In contrast with more standard approaches for model search in hierarchical log-linear models (HLLM), our procedure systematically reduces the number of categories of the variables. Through a series of examples, we illustrate the extent to which it can preserve the interaction structure found with HLLMs and be used as a data simplification procedure prior to HLL modeling. A feature of the procedure is that it can easily be applied to many tables with millions of cells, providing a new way of summarizing large data sets in many disciplines. The focus is on information and description rather than statistical testing. The procedure may treat each variable in the table in different ways, preserving full detail, treating it as fully nominal, or preserving ordinality. Comment: Published at http://dx.doi.org/10.1214/08-AOAS175 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org).
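
    To make the idea of category aggregation concrete, the sketch below greedily pools the pair of row categories of a two-way table whose merge loses the least mutual information. This is only an illustrative criterion under simplifying assumptions (a single two-way table, a greedy search, a toy information-loss measure); the paper's procedure searches a class of restricted log-linear models and targets multi-way tables with millions of cells.

```python
import numpy as np

def mutual_information(table):
    """Mutual information (in nats) between the row and column variables
    of a two-way contingency table of counts."""
    p = table / table.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(p > 0, p * np.log(p / (px * py)), 0.0)
    return terms.sum()

def merge_rows_greedily(table, target_rows):
    """Repeatedly pool the pair of row categories whose merge loses the
    least mutual information, until only target_rows rows remain."""
    table = np.asarray(table, dtype=float)
    groups = [[i] for i in range(table.shape[0])]  # original categories per pooled row
    while table.shape[0] > target_rows:
        best = None
        for i in range(table.shape[0]):
            for j in range(i + 1, table.shape[0]):
                merged = np.delete(table, j, axis=0)
                merged[i] = table[i] + table[j]
                loss = mutual_information(table) - mutual_information(merged)
                if best is None or loss < best[0]:
                    best = (loss, i, j, merged)
        _, i, j, table = best
        groups[i] += groups.pop(j)
    return table, groups

# Toy 4x3 table: rows 0/1 and rows 2/3 have similar profiles.
counts = np.array([[30, 10, 5],
                   [28, 12, 6],
                   [5, 20, 40],
                   [6, 18, 42]])
collapsed, groups = merge_rows_greedily(counts, target_rows=2)
print(groups)      # expected pooling: [[0, 1], [2, 3]]
print(collapsed)
```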

    A lightweight, graph-theoretic model of class-based similarity to support object-oriented code reuse.

    The work presented in this thesis is principally concerned with the development of a method and set of tools designed to support the identification of class-based similarity in collections of object-oriented code. Attention is focused on enhancing the potential for software reuse in situations where a reuse process is either absent or informal, and the characteristics of the organisation are unsuitable, or resources unavailable, to promote and sustain a systematic approach to reuse. The approach builds on the definition of a formal, attributed, relational model that captures the inherent structure of class-based, object-oriented code. Based on code-level analysis, it relies solely on the structural characteristics of the code and the peculiarly object-oriented features of the class as an organising principle: classes, those entities comprising a class, and the intra- and inter-class relationships existing between them, are significant factors in defining a two-phase similarity measure as a basis for the comparison process. Established graph-theoretic techniques are adapted and applied via this model to the problem of determining similarity between classes. This thesis illustrates a successful transfer of techniques from the domains of molecular chemistry and computer vision. Both domains provide an existing template for the analysis and comparison of structures as graphs. The inspiration for representing classes as attributed relational graphs, and the application of graph-theoretic techniques and algorithms to their comparison, arose out of a well-founded intuition that a common basis in graph theory was sufficient to enable a reasonable transfer of these techniques to the problem of determining similarity in object-oriented code. The practical application of this work relates to the identification and indexing of instances of recurring, class-based, common structure present in established and evolving collections of object-oriented code. A classification so generated additionally provides a framework for class-based matching over an existing code-base, both from the perspective of newly introduced classes, and search "templates" provided by those incomplete, iteratively constructed and refined classes associated with current and on-going development. The tools and techniques developed here provide support for enabling and improving shared awareness of reuse opportunity, based on analysing structural similarity in past and ongoing development, tools and techniques that can in turn be seen as part of a process of domain analysis, capable of stimulating the evolution of a systematic reuse ethic.
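
    As a rough illustration of the attributed relational graph idea, the sketch below (hypothetical class and member names throughout) encodes a class, its methods, and its attributes as labelled nodes, containment and call relationships as labelled edges, and scores similarity with networkx's graph edit distance matched on node and edge kinds only. It is not the thesis's two-phase measure, and exact graph edit distance is only practical for small graphs.

```python
import networkx as nx

def class_graph(name, methods, attributes, calls):
    """Build a toy attributed relational graph for one class: one node per
    class/method/attribute, edges labelled 'has' and 'calls'."""
    g = nx.DiGraph()
    g.add_node(name, kind="class")
    for m in methods:
        g.add_node(m, kind="method")
        g.add_edge(name, m, rel="has")
    for a in attributes:
        g.add_node(a, kind="attribute")
        g.add_edge(name, a, rel="has")
    for src, dst in calls:
        g.add_edge(src, dst, rel="calls")
    return g

def structural_similarity(g1, g2):
    """Similarity in [0, 1] derived from graph edit distance, matching nodes
    and edges only on their 'kind'/'rel' attributes (names are ignored)."""
    ged = nx.graph_edit_distance(
        g1, g2,
        node_match=lambda a, b: a["kind"] == b["kind"],
        edge_match=lambda a, b: a["rel"] == b["rel"],
    )
    worst = (g1.number_of_nodes() + g1.number_of_edges()
             + g2.number_of_nodes() + g2.number_of_edges())
    return 1.0 - ged / worst

stack = class_graph("Stack", ["push", "pop"], ["items"],
                    [("push", "items"), ("pop", "items")])
queue = class_graph("Queue", ["enqueue", "dequeue"], ["items"],
                    [("enqueue", "items"), ("dequeue", "items")])
print(structural_similarity(stack, queue))  # 1.0: structurally alike despite different names
```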

    Standardized development of computer software. Part 2: Standards

    This monograph contains standards for software development and engineering. The book sets forth rules for design, specification, coding, testing, documentation, and quality assurance audits of software; it also contains detailed outlines for the documentation to be produced.

    An extensive English language bibliography on graph theory and its applications

    Bibliography on graph theory and its applications.

    Decision-making and problem-solving methods in automation technology

    The state of the art in the automation of decision making and problem solving is reviewed. The information upon which the report is based was derived from literature searches, visits to university and government laboratories performing basic research in the area, and a 1980 Langley Research Center-sponsored conference on the subject. It is the contention of the authors that the technology in this area is being generated by research primarily in the three disciplines of Artificial Intelligence, Control Theory, and Operations Research. Under the assumption that the state of the art in decision making and problem solving is reflected in the problems being solved, specific problems and methods of their solution are often discussed to elucidate particular aspects of the subject. Synopses of the following major topic areas comprise most of the report: (1) detection and recognition; (2) planning and scheduling; (3) learning; (4) theorem proving; (5) distributed systems; (6) knowledge bases; (7) search; (8) heuristics; and (9) evolutionary programming.

    Inductively coupled plasma-optical emission spectrometry for forensic analysis

    The fundamental characteristics and applications of inductively coupled plasma-optical emission spectrometry (ICP-OES) for forensic science purposes have been evaluated. Optimisation of ICP-OES for single elements using simplex techniques identified an ICP torch fitted with a wide-bore injector tube as most suitable for multielement analysis because of a compact analytical region in the plasma. A suitable objective function has been proposed for multielement simplex optimisation of ICP-OES and its effectiveness has been demonstrated. The effects of easily ionisable element (EIE) interferences have been studied and an interference minimisation simplex optimisation shown to be appropriate for the location of an interference-free zone. Routine, interference-free determinations (<2% for 0.5% Na) have been shown to be critically dependent on the stability of the injector gas flowrate and nebuliser-derived pressure pulses. Discrete nebulisation has been investigated for the analysis of small fragments of a variety of metal alloys which could be encountered in casework investigations. External contamination together with alloy inhomogeneity has been shown to present some problems in the interpretation of the data. A compact, corrosion-resistant recirculating nebuliser has been constructed and evaluated for the analysis of small fragments of shotgun steels. The stable aerosol production from this nebuliser allowed a set of element lines, free from iron interferences, to be monitored with a scanning monochromator. The analysis, classification and discrimination of casework-sized fragments of brasses and sheet glasses have been performed and a method has been proposed for the analysis of white household gloss paints. The determination of metal traces on hands following the handling of a variety of metal alloys has been reported. The significance of the results from these evidential materials has been assessed for forensic science purposes.
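
    The flavour of a multielement simplex optimisation can be sketched with an off-the-shelf Nelder-Mead search over two operating parameters. The response surfaces, the parameter values, and the max-min compromise objective below are placeholders, not the thesis's proposed objective function; in practice the objective would be computed from measured line intensities.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical response surfaces: net signal-to-background ratio (SBR) for
# three emission lines as a function of RF power (kW) and injector gas
# flow (L/min).  In a real experiment these values come from the spectrometer.
def sbr(power, flow, optimum):
    p0, f0 = optimum
    return np.exp(-((power - p0) ** 2 + 4 * (flow - f0) ** 2))

LINE_OPTIMA = {  # per-line (power, flow) optima, purely illustrative
    "Pb 220.35": (1.3, 0.7),
    "Ba 455.40": (1.1, 0.9),
    "Sb 206.83": (1.4, 0.6),
}

def multielement_objective(x):
    """Objective to minimise: the negative of the worst (smallest) SBR, so
    the simplex seeks a compromise where no line falls too low."""
    power, flow = x
    return -min(sbr(power, flow, opt) for opt in LINE_OPTIMA.values())

result = minimize(multielement_objective, x0=[1.0, 1.0], method="Nelder-Mead")
print(result.x)  # compromise power/flow settings for all three lines
```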

    Generalization of the pinball contact/impact model for use with mesh adaptivity and element erosion in EUROPLEXUS

    This report presents the generalization of the pinball-based contact-impact model (PINB) of the EUROPLEXUS code for use in conjunction with mesh adaptivity, i.e. with adaptive mesh refinement and un-refinement. The interaction of the pinball model with the mechanism of element failure and erosion is also revised. JRC.G.4: European Laboratory for Structural Assessment.
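
    The basic pinball idea can be pictured as follows: each element carries a bounding sphere (its "pinball"), and a contact candidate is any pair of spheres that interpenetrate. The sketch below is a minimal illustration under those assumptions, not the EUROPLEXUS implementation; with adaptive refinement the child elements would simply contribute smaller pinballs, and eroded elements would be dropped from the candidate lists.

```python
import numpy as np

def pinball(element_nodes):
    """Centre and radius of an element's bounding sphere ('pinball'):
    the centroid of its nodes and the largest node-to-centroid distance."""
    nodes = np.asarray(element_nodes, dtype=float)
    centre = nodes.mean(axis=0)
    radius = np.linalg.norm(nodes - centre, axis=1).max()
    return centre, radius

def contact_pairs(elements_a, elements_b):
    """Pairs (i, j) whose pinballs interpenetrate, i.e. whose centre
    distance is below the sum of their radii."""
    balls_a = [pinball(e) for e in elements_a]
    balls_b = [pinball(e) for e in elements_b]
    pairs = []
    for i, (ca, ra) in enumerate(balls_a):
        for j, (cb, rb) in enumerate(balls_b):
            if np.linalg.norm(ca - cb) < ra + rb:
                pairs.append((i, j))
    return pairs

# Two quadrilateral elements in 2D, slightly overlapping.
elem1 = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
elem2 = [(0.9, 0.0), (1.9, 0.0), (1.9, 1.0), (0.9, 1.0)]
print(contact_pairs([elem1], [elem2]))  # [(0, 0)]
```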

    H2, fixed architecture, control design for large scale systems

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 1990. Includes bibliographical references (p. 227-234). By Mathieu Mercadal.

    Locally Consistent Parsing for Text Indexing in Small Space

    We consider two closely related problems of text indexing in a sub-linear working space. The first problem is the Sparse Suffix Tree (SST) construction of a set of suffixes $B$ using only $O(|B|)$ words of space. The second problem is the Longest Common Extension (LCE) problem, where for some parameter $1\le\tau\le n$, the goal is to construct a data structure that uses $O(\frac{n}{\tau})$ words of space and can compute the longest common prefix length of any pair of suffixes. We show how to use ideas based on the Locally Consistent Parsing technique, which was introduced by Sahinalp and Vishkin [STOC '94], in some non-trivial ways in order to improve the known results for the above problems. We introduce new Las-Vegas and deterministic algorithms for both problems. We introduce the first Las-Vegas SST construction algorithm that takes $O(n)$ time. This is an improvement over the previous result of Gawrychowski and Kociumaka [SODA '17], who obtained $O(n)$ time for a Monte-Carlo algorithm and $O(n\sqrt{\log |B|})$ time for a Las-Vegas algorithm. In addition, we introduce a randomized Las-Vegas construction for an LCE data structure that can be constructed in linear time and answers queries in $O(\tau)$ time. For the deterministic algorithms, we introduce an SST construction algorithm that takes $O(n\log\frac{n}{|B|})$ time (for $|B|=\Omega(\log n)$). This is the first almost linear time, $O(n\cdot\mathrm{polylog}\,n)$, deterministic SST construction algorithm, where all previous algorithms take at least $\Omega\left(\min\{n|B|,\frac{n^2}{|B|}\}\right)$ time. For the LCE problem, we introduce a data structure that answers LCE queries in $O(\tau\sqrt{\log^* n})$ time, with $O(n\log\tau)$ construction time (for $\tau=O(\frac{n}{\log n})$). This data structure improves both query time and construction time upon the results of Tanimura et al. [CPM '16]. Comment: Extended abstract to appear in SODA 2020.
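
    To make the two problem statements concrete, here is a naive reference implementation of a sparse suffix ordering over a set $B$ and of an LCE query. It has none of the paper's space or time guarantees (it compares full suffixes and scans in $O(n)$ per query); it only pins down what the outputs mean.

```python
def sparse_suffix_array(text, B):
    """Suffix start positions in B, sorted by the suffixes they start:
    a naive stand-in for a sparse suffix tree over the set B."""
    return sorted(B, key=lambda i: text[i:])

def lce(text, i, j):
    """Longest common extension: length of the longest common prefix of
    the suffixes starting at positions i and j (naive linear scan)."""
    k = 0
    while i + k < len(text) and j + k < len(text) and text[i + k] == text[j + k]:
        k += 1
    return k

text = "abracadabra"
B = [0, 3, 7]                          # sparse set of suffix start positions
print(sparse_suffix_array(text, B))    # [7, 0, 3]: 'abra' < 'abracadabra' < 'acadabra'
print(lce(text, 0, 7))                 # 4: both suffixes begin with 'abra'
```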