
    What accuracy statistics really measure

    Provides the software estimation research community with a better understanding of the meaning of, and the relationship between, two statistics that are often used to assess the accuracy of predictive models: the mean magnitude of relative error (MMRE) and the number of predictions within 25% of the actual, pred(25). It is demonstrated that MMRE and pred(25) are, respectively, measures of the spread and the kurtosis of the variable z, where z = estimate/actual. Thus z is considered to be a measure of accuracy, and statistics such as MMRE and pred(25) are measures of properties of the distribution of z. It is suggested that measures of the central location and skewness of z, as well as measures of spread and kurtosis, are necessary. Furthermore, since the distribution of z is non-normal, non-parametric measures of these properties may be needed. For this reason, box-plots of z are useful alternatives to simple summary metrics. It is also noted that the simple residuals are better behaved than the z variable, and could also be used as the basis for comparing prediction systems.
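Both statistics follow directly from the definition z = estimate/actual; a minimal Python sketch, with made-up estimate/actual pairs for illustration:

```python
# Illustrative sketch (data are invented, not from the paper).
def mmre(estimates, actuals):
    """Mean magnitude of relative error: mean of |actual - estimate| / actual."""
    return sum(abs(a - e) / a for e, a in zip(estimates, actuals)) / len(actuals)

def pred(estimates, actuals, level=0.25):
    """Fraction of predictions whose relative error is within `level` of the actual."""
    hits = sum(1 for e, a in zip(estimates, actuals) if abs(a - e) / a <= level)
    return hits / len(actuals)

estimates = [90, 110, 150, 40]
actuals = [100, 100, 100, 100]
print(mmre(estimates, actuals))  # 0.325 -- spread of z = estimate/actual around 1
print(pred(estimates, actuals))  # 0.5   -- how concentrated z is near 1
```

Note that, as the abstract argues, two samples can share the same MMRE while differing sharply in central location or skewness of z, which is why these two numbers alone do not fully characterize accuracy.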

    GraphCrunch: A tool for large network analyses

    Background: The recent explosion in biological and other real-world network data has created the need for improved tools for large network analyses. In addition to well-established global network properties, several new mathematical techniques for analyzing local structural properties of large networks have been developed. Small over-represented subgraphs, called network motifs, have been introduced to identify simple building blocks of complex networks. Small induced subgraphs, called graphlets, have been used to develop "network signatures" that summarize network topologies. Based on these network signatures, two new highly sensitive measures of local structural similarity between networks were designed: the relative graphlet frequency distance (RGF-distance) and the graphlet degree distribution agreement (GDD-agreement). Finding adequate null models for biological networks is important in many research domains, and network properties are used to assess the fit of network models to the data. Various network models have been proposed; to date, however, no software tool measures the above-mentioned local network properties, and none of the existing tools compares real-world networks against a series of network models with respect to these local properties as well as a multitude of global network properties.
    Results: We therefore introduce GraphCrunch, a software tool that finds well-fitting network models by comparing large real-world networks against random graph models according to various network structural similarity measures. It is unique in its ability to compute the computationally expensive RGF-distance and GDD-agreement measures. In addition, it computes several standard global network measures and thus supports the largest variety of network measures thus far. It is also the first software tool that compares real-world networks against a series of network models, and it has built-in parallel computing capabilities allowing a user-specified list of machines on which to perform compute-intensive searches for local network properties. Furthermore, GraphCrunch is easily extendible to include additional network measures and models.
    Conclusion: GraphCrunch is a software tool that implements the latest research on biological network models and properties: it compares real-world networks against a series of random graph models with respect to a multitude of local and global network properties. We present GraphCrunch as a comprehensive, parallelizable, and easily extendible software tool for analyzing and modeling large biological networks. The software is open-source and freely available at http://www.ics.uci.edu/~bio-nets/graphcrunch/. It runs under Linux, MacOS, and Windows (Cygwin). In addition, it has an easy-to-use online web user interface available from the above web page.
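The kind of comparison GraphCrunch automates can be sketched in miniature: compute one structural property (here the average clustering coefficient, a standard global measure) for a "real" network and for an Erdős–Rényi null model. This is an illustrative toy in pure Python, not GraphCrunch's implementation or API.

```python
import random

def clustering(adj):
    """Average local clustering coefficient of an undirected graph
    given as {node: set_of_neighbors}."""
    total = 0.0
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue
        # count edges among the neighbors of v
        links = sum(1 for u in nbrs for w in nbrs if u < w and w in adj[u])
        total += 2.0 * links / (k * (k - 1))
    return total / len(adj)

def erdos_renyi(n, p, rng):
    """Erdos-Renyi G(n, p) null model."""
    adj = {v: set() for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].add(v)
                adj[v].add(u)
    return adj

# A triangle with a pendant node: clustered relative to a sparse random graph.
real = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
model = erdos_renyi(4, 0.5, random.Random(0))
print(clustering(real), clustering(model))
```

A real tool averages the model statistic over many random instances before comparing; RGF-distance and GDD-agreement replace the single scalar here with graphlet-based vectors.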

    Phases, many-body entropy measures and coherence of interacting bosons in optical lattices

    Already a few bosons with contact interparticle interactions in small optical lattices feature a variety of quantum phases: superfluid, Mott-insulator and fermionized Tonks gases can be probed in such systems. To detect these phases -- pivotal for both experiment and theory -- as well as their many-body properties, we analyze several distinct measures for the one-body and many-body Shannon information entropies. We exemplify the connection of these entropies with spatial correlations in the many-body state by contrasting them with the Glauber normalized correlation functions. To obtain the ground state for lattices with commensurate filling (i.e., an integer number of particles per site) for the full range of repulsive interparticle interactions, we utilize the multiconfigurational time-dependent Hartree method for bosons (MCTDHB) to solve the many-boson Schrödinger equation. We demonstrate that all emergent phases -- the superfluid, the Mott insulator, and the fermionized gas -- can be characterized equivalently by our many-body entropy measures and by Glauber's normalized correlation functions. In contrast to our many-body entropy measures, the single-particle entropy cannot capture these transitions.
    Comment: 11 pages, 7 figures, software available at http://ultracold.or
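As a toy illustration of the entropy idea (not MCTDHB itself): the Shannon entropy of an on-site particle-number distribution is zero when the state is perfectly number-squeezed, as in a deep Mott insulator, and grows when number fluctuations appear, as in the superfluid. The distributions below are invented for illustration.

```python
import math

def shannon(p):
    """Shannon information entropy -sum(p_i * ln p_i) of a probability vector."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

mott = [0.0, 1.0, 0.0]          # exactly one boson per site: no fluctuations
superfluid = [0.25, 0.5, 0.25]  # on-site number fluctuations

print(shannon(mott))        # 0.0
print(shannon(superfluid))  # ~1.04 (= 1.5 ln 2)
```

The paper's many-body entropies are built from the full wavefunction rather than a single-site distribution, but the qualitative signature is the same: entropy growth tracks the loss of coherence across the transition.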

    Multi-response optimization of CO2 laser welding process of austenitic stainless steel

    Recently, laser welding of austenitic stainless steel has received great attention in industry, due to its widespread application in petroleum refinement stations, power plants, the pharmaceutical industry and households. Therefore, mechanical properties should be controlled to obtain good welded joints, and the welding process should be optimized using proper mathematical models. In this research, the tensile strength and impact strength, along with the joint operating cost, of laser-welded butt joints made of AISI304 were investigated. Design-Expert software was used to establish the design matrix and to analyze the experimental data. The relationships between the laser welding parameters (laser power, welding speed and focal point position) and the three responses (tensile strength, impact strength and joint operating cost) were established. The optimization capabilities of Design-Expert were also used to optimize the welding process. The developed mathematical models were tested for adequacy using analysis of variance and other adequacy measures. In this investigation the optimal welding conditions were identified in order to increase productivity and minimize the total operating cost. Overlay graphs were plotted by superimposing the contours of the various response surfaces. The effects of the process parameters were determined and the optimal welding combinations were tabulated.
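The multi-response step can be sketched with a Derringer-style desirability function: each response is mapped to [0, 1], the geometric mean combines them, and the parameter grid is searched for the best compromise. All response models, coefficients and limits below are invented for illustration; they are not the paper's fitted models.

```python
def desirability(value, lo, hi, maximize=True):
    """Linear desirability in [0, 1]; 1 is best, values are clipped."""
    d = (value - lo) / (hi - lo)
    if not maximize:
        d = 1.0 - d
    return min(1.0, max(0.0, d))

def overall(tensile, impact, cost):
    """Geometric mean of the three individual desirabilities."""
    ds = [desirability(tensile, 500, 700),            # MPa, maximize
          desirability(impact, 50, 90),               # J, maximize
          desirability(cost, 10, 30, maximize=False)] # cost units, minimize
    return (ds[0] * ds[1] * ds[2]) ** (1 / 3)

def responses(P, S):
    """Hypothetical fitted models in laser power P (kW) and speed S (m/min)."""
    tensile = 400 + 60 * P - 20 * S
    impact = 30 + 10 * P + 5 * S
    cost = 5 + 4 * P + 1 * S
    return tensile, impact, cost

# Grid search over the (made-up) feasible region for the best compromise.
best = max(((P / 10, S / 10) for P in range(10, 51) for S in range(5, 31)),
           key=lambda ps: overall(*responses(*ps)))
print(best)
```

Design-Expert performs essentially this combination internally (with more flexible desirability shapes) when optimizing several responses at once.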

    Some conservative stopping rules for the operational testing of safety-critical software

    Operational testing, which aims to generate sequences of test cases with the same statistical properties as those that would be experienced in real operational use, can be used to obtain quantitative measures of the reliability of software. In the case of safety-critical software it is common to demand that all known faults are removed. This means that if there is a failure during the operational testing, the offending fault must be identified and removed. Thus an operational test for safety-critical software takes the form of a specified number of test cases (or a specified period of working) that must be executed failure-free. This paper addresses the problem of specifying the number of test cases (or time periods) required for a test, when the previous test has terminated as a result of a failure. It has been proposed that, after the obligatory fix of the offending fault, the software should be treated as if it were completely novel, and be required to pass exactly the same test as originally specified. The reasoning here claims to be conservative, inasmuch as no credit is given for any failure-free operation prior to the failure that terminated the test. We show that, in fact, this is not a conservative approach in all cases, and propose instead some new Bayesian stopping rules. We show that the degree of conservatism in stopping rules depends upon the precise way in which the reliability requirement is expressed. We define a particular form of conservatism that seems desirable on intuitive grounds, and show that the stopping rules that exhibit this conservatism are also precisely the ones that seem preferable on other grounds.
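A minimal sketch of the Bayesian ingredient such rules build on (this is the standard Beta-Binomial calculation, not the paper's specific rules): with a uniform Beta(1, 1) prior on the per-demand failure probability p, observing n consecutive failure-free demands gives posterior P(p <= p0) = 1 - (1 - p0)^(n+1), and a stopping rule can pick the smallest n that reaches a target confidence.

```python
import math

def demands_required(p0, confidence):
    """Smallest number n of consecutive failure-free demands such that,
    under a uniform Beta(1, 1) prior on the failure probability p,
    the posterior probability that p <= p0 is at least `confidence`."""
    n = math.ceil(math.log(1 - confidence) / math.log(1 - p0) - 1)
    return max(n, 0)

# e.g. to claim p <= 1e-3 with 99% posterior confidence:
print(demands_required(1e-3, 0.99))  # 4602
```

The paper's point is what happens to such a requirement after a failure and fix: simply restarting the same test does not always preserve this posterior guarantee, hence the need for explicitly conservative rules.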

    Implementing structural measures over i* diagrams

    Measuring is a key issue in any software-related activity. In the context of the i* framework, we are implementing Measufier, a prototype for measuring i* diagrams in terms of properties that may be derived from their structure (structural measures). The prototype works over i* diagrams represented in the iStarML interchange format, and provides facilities for managing catalogues of measures, customizing measures to the analyst's needs, and computing a measure over particular diagrams.

    Design-level Cohesion Measures: Derivation, Comparison, and Applications

    Cohesion was first introduced as a software attribute that could be used to predict properties of implementations that would be created from a given design. Unfortunately, cohesion, as originally defined, could not be objectively assessed, while more recently developed objective cohesion measures depend on code-level information. We show that association-based and slice-based approaches can be used to measure cohesion using only design-level information. Our design-level cohesion measures are formally defined, can be readily implemented, and can support software design, maintenance, and restructuring.
    Index terms: cohesion, software measurement and metrics, software design, software maintenance, software restructuring and re-engineering, software visualization, software reuse.
    Module cohesion was defined by Yourdon and Constantine as "how tightly bound or related its internal elements are to one another" [10, p. 106]. They describe cohesion as an attribute of design.
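The slice-based family of measures can be sketched once slices are available as sets of statement (or design-element) identifiers: tightness is the fraction of the module common to every slice, and coverage is the average fraction each slice spans. The slices and sizes below are hypothetical, and real definitions (e.g. Ott and Thuss's) include further measures such as overlap.

```python
def tightness(slices, module_size):
    """Fraction of the module shared by every output slice."""
    common = set.intersection(*slices)
    return len(common) / module_size

def coverage(slices, module_size):
    """Average fraction of the module covered by each slice."""
    return sum(len(s) for s in slices) / (len(slices) * module_size)

# Hypothetical module of 6 design elements with one slice per output.
slices = [{1, 2, 3, 4}, {1, 2, 5, 6}]
print(tightness(slices, 6))  # 2/6: only elements 1 and 2 serve both outputs
print(coverage(slices, 6))   # 4/6: each slice touches four of six elements
```

A highly cohesive module has slices that largely coincide (both values near 1); disjoint slices suggest the module bundles unrelated responsibilities.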