13 research outputs found

    Predicting Software Suitability Using a Bayesian Belief Network

    The ability to reliably predict the end quality of software under development presents a significant advantage for a development team. It provides an opportunity to address high-risk components earlier in the development life cycle, when their impact can be minimized. This research proposes a model that captures the evolution of the quality of a software product and provides reliable forecasts of the end quality of the software being developed in terms of product suitability. Development team skill, software process maturity, and software problem complexity are hypothesized as driving factors of software product quality. The cause-effect relationships between these factors and the elements of software suitability are modeled using Bayesian Belief Networks, a machine learning method. This research presents a Bayesian Network for software quality, and the techniques used to quantify the factors that influence and represent software quality. The developed model is found to be effective in predicting the end product quality of small-scale software development efforts.
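
    As an illustration of the kind of model this abstract describes, the following is a minimal, hand-rolled sketch of a discrete Bayesian network in Python. The variable names, state spaces, and probability values are hypothetical placeholders rather than figures from the study, and inference is done by brute-force enumeration instead of a dedicated library.

        from itertools import product

        # Hypothetical discrete states: 0 = low, 1 = high, for every variable.
        # Priors over the three hypothesized driving factors (placeholder values).
        P_skill      = {0: 0.4, 1: 0.6}   # development team skill
        P_maturity   = {0: 0.5, 1: 0.5}   # software process maturity
        P_complexity = {0: 0.7, 1: 0.3}   # software problem complexity

        # Conditional probability table: P(suitability = high | skill, maturity, complexity).
        # Keys are (skill, maturity, complexity); values are illustrative only.
        P_suit_high = {
            (0, 0, 0): 0.30, (0, 0, 1): 0.15, (0, 1, 0): 0.45, (0, 1, 1): 0.25,
            (1, 0, 0): 0.55, (1, 0, 1): 0.35, (1, 1, 0): 0.85, (1, 1, 1): 0.60,
        }

        def p_suitability_high(evidence):
            """P(suitability = high | evidence), by enumerating the joint distribution.

            `evidence` maps a subset of {'skill', 'maturity', 'complexity'} to 0/1.
            """
            num = 0.0   # joint mass in which suitability is high
            den = 0.0   # joint mass consistent with the evidence
            for s, m, c in product((0, 1), repeat=3):
                assign = {'skill': s, 'maturity': m, 'complexity': c}
                if any(assign[k] != v for k, v in evidence.items()):
                    continue
                weight = P_skill[s] * P_maturity[m] * P_complexity[c]
                den += weight
                num += weight * P_suit_high[(s, m, c)]
            return num / den

        # Example query: forecast suitability for a skilled team on a complex problem.
        print(p_suitability_high({'skill': 1, 'complexity': 1}))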

    Calibration and Validation of the COCOMO II.1997.0 Cost/Schedule Estimating Model to the Space and Missile Systems Center Database

    The goal of this study was to determine the accuracy of COCOMO II.1997.0, a software cost and schedule estimating model, using Magnitude of Relative Error, Mean Magnitude of Relative Error, Relative Root Mean Square, and a 25 percent Prediction Level. Effort estimates were completed using the model in default and in calibrated mode. Calibration was accomplished by dividing four stratified data sets into two random validation and calibration data sets using five-times resampling. The accuracy results were poor, the best having an accuracy of only .3332 within 40 percent of the time in calibrated mode. It was found that homogeneous data is the key to producing the best results, and that the model typically underestimates. The second part of this thesis was to try to improve upon the default mode estimates. This was accomplished by regressing the model estimates to the actual effort. Each original regression equation was transformed and tested for normality, equal variance, and significance. Overall, the results were promising; regression improved the accuracy in three of the four cases, the best having an accuracy of .2059 within 75 percent of the time.
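
    For context, the accuracy measures named above are conventionally computed as in the sketch below. These are the widely used textbook formulations (definitions vary slightly across the literature); the code and the example figures are not taken from the thesis.

        import math

        def accuracy_metrics(actuals, estimates, pred_level=0.25):
            """Common cost-model accuracy measures: MRE, MMRE, RRMS and PRED(L)."""
            # Magnitude of Relative Error for each project.
            mre = [abs(a - e) / a for a, e in zip(actuals, estimates)]
            mmre = sum(mre) / len(mre)                       # Mean Magnitude of Relative Error
            rms = math.sqrt(sum((a - e) ** 2 for a, e in zip(actuals, estimates)) / len(actuals))
            rrms = rms / (sum(actuals) / len(actuals))       # RMS relative to mean actual effort
            pred = sum(1 for r in mre if r <= pred_level) / len(mre)  # share of estimates within L
            return {"MMRE": mmre, "RRMS": rrms, f"PRED({pred_level})": pred}

        # Hypothetical effort data (person-months), for illustration only.
        print(accuracy_metrics([100, 250, 80, 400], [120, 190, 85, 300]))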

    Developing Secure Systems: A Comparative Study of Existing Methodologies

    With the increasing demand for high-quality and more reliable systems, developing trustworthy computer software is a challenging process. In this paper, we review various approaches to producing more secure systems. This includes established general principles for designing secure systems. It also provides an introduction to general software quality measurements, including existing software security metrics. This paper also includes a comparison of the various security metrics for developing secure systems (i.e., architectural, design, and code-level metrics). Lastly, the paper examines the approach of refactoring, illustrates its objectives, and shows how refactoring is generally used for enhancing the quality of existing programs from the perspective of information security. At the end of this paper, we provide a discussion of these three approaches and how they can be used to provide guidance for future secure software development processes.
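
    As a toy example of what a code-level security metric can look like (this is an illustrative sketch, not a metric taken from the paper), the snippet below counts calls to classically unsafe C string functions in a source tree and reports them per thousand lines of code. The function list, directory name, and normalisation are assumptions made here for illustration.

        import re
        from pathlib import Path

        # Classically risky C string/IO calls; an illustrative list, not exhaustive.
        UNSAFE_CALLS = re.compile(r"\b(strcpy|strcat|sprintf|gets|scanf)\s*\(")

        def unsafe_call_density(root):
            """Return unsafe-call occurrences per 1000 lines of C source under `root`."""
            calls = 0
            lines = 0
            for path in Path(root).rglob("*.c"):
                text = path.read_text(errors="ignore")
                calls += len(UNSAFE_CALLS.findall(text))
                lines += text.count("\n") + 1
            return 1000.0 * calls / lines if lines else 0.0

        print(unsafe_call_density("src"))  # "src" is a hypothetical source directory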

    The Second ICASE/LaRC Industry Roundtable: Session Proceedings

    The second ICASE/LaRC Industry Roundtable was held October 7-9, 1996 at the Williamsburg Hospitality House, Williamsburg, Virginia. Like the first roundtable in 1994, this meeting had two objectives: (1) to expose ICASE and LaRC scientists to industrial research agendas; and (2) to acquaint industry with the capabilities and technology available at ICASE, LaRC and academic partners of ICASE. Nineteen sessions were held in three parallel tracks. Of the 170 participants, over one third were affiliated with various industries. Proceedings from the different sessions are summarized in this report.

    The global vulnerability discovery and disclosure system: a thematic system dynamics approach

    Vulnerabilities within software are the fundamental issue that provides both the means and the opportunity for malicious threat actors to compromise critical IT systems (Younis et al., 2016). Consequently, the reduction of vulnerabilities within software should be of paramount importance; however, it is argued that software development practitioners have historically failed to reduce the risks associated with software vulnerabilities. This failure is illustrated in, and by, the growth of software vulnerabilities over the past 20 years. This increase, which is both unprecedented and unwelcome, has led to an acknowledgement that novel and radical approaches are needed both to understand the vulnerability discovery and disclosure system (VDDS) and to mitigate software vulnerability centred risk (Bradbury, 2015; Marconato et al., 2012). The findings from this research show that whilst technological mitigations are vital, the social and economic features of the VDDS are of critical importance. For example, hitherto unknown systemic themes identified by this research include: Perception of Punishment; Vendor Interactions; Disclosure Stance; Ethical Considerations; Economic Factors for Discovery and Disclosure; and the Emergence of New Vulnerability Markets. Each theme uniquely impacts the system and, ultimately, the scale of vulnerability-based risks. Within the research, each theme within the VDDS is represented by several key variables which interact and shape the system, specifically: Vendor Sentiment; Vulnerability Removal Rate; Time to Fix; Market Share; Participants within the VDDS; Full and Coordinated Disclosure Ratio; and Participant Activity. Each variable is quantified and explored, defining both the parameter space and its progression over time. These variables are utilised within a system dynamics model to simulate differing policy strategies and assess the impact of these policies upon the VDDS. Three simulated vulnerability disclosure futures are hypothesised and presented, characterised as depletion, steady and exponential, with each scenario dependent upon the parameter space of the key variables.
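
    To give a flavour of what a stock-and-flow simulation of such a system can look like, here is a minimal sketch. The structure (one stock of undisclosed vulnerabilities, an inflow from discovery, an outflow from disclosure and removal) and all parameter values are simplified assumptions for illustration; they are not the thesis's model, variables, or data.

        def simulate_vdds(steps=240, discovery_rate=12.0, removal_fraction=0.05,
                          discovery_growth=0.0):
            """Toy stock-and-flow model of undisclosed vulnerabilities over time.

            stock[t+1] = stock[t] + discovery_inflow - removal_outflow
            The discovery inflow grows (or shrinks) by `discovery_growth` per step;
            the removal outflow drains a fixed fraction of the current stock.
            """
            stock = 100.0          # initial pool of undisclosed vulnerabilities (arbitrary)
            history = [stock]
            inflow = discovery_rate
            for _ in range(steps):
                outflow = removal_fraction * stock
                stock = max(stock + inflow - outflow, 0.0)
                inflow *= (1.0 + discovery_growth)
                history.append(stock)
            return history

        # Three illustrative policy futures, loosely echoing the depletion / steady /
        # exponential scenarios described above (parameter choices are hypothetical).
        depletion   = simulate_vdds(discovery_growth=-0.02)   # discovery dries up
        steady      = simulate_vdds(discovery_growth=0.00)    # inflow and removal balance
        exponential = simulate_vdds(discovery_growth=0.03)    # discovery outpaces removal

        print(round(depletion[-1]), round(steady[-1]), round(exponential[-1]))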

    A technique for the evaluation of free and open source e-learning systems

    Evaluating software is a universal and complex problem. The question is: how should software be selected and adopted, or rather, which of the software packages is the most suitable for a specific environment? Extensive research on the evaluation of software has been done, but only a few researchers have considered the evaluation of e-learning systems based on three software quality characteristics (i.e. usability, maintainability and deployability) for implementation in third world countries. This thesis considers how mixed research methods can be used for the evaluation of free and open source e-learning systems in a developing country. The scope of this investigation is the evaluation of two free and open source e-learning systems at the Open University of Tanzania using 33 stakeholders (some with more and others with less computer expertise).
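
    Purely as an illustration (not the thesis's actual evaluation instrument), one simple way to combine stakeholder ratings across the three quality characteristics is a weighted-score comparison, sketched below with hypothetical weights and ratings.

        # Hypothetical weights for the three quality characteristics (sum to 1.0).
        WEIGHTS = {"usability": 0.5, "maintainability": 0.3, "deployability": 0.2}

        # Hypothetical mean stakeholder ratings on a 1-5 scale for two candidate systems.
        RATINGS = {
            "System A": {"usability": 4.1, "maintainability": 3.2, "deployability": 3.8},
            "System B": {"usability": 3.6, "maintainability": 4.0, "deployability": 4.2},
        }

        def weighted_score(ratings):
            """Combine per-characteristic ratings into a single weighted score."""
            return sum(WEIGHTS[c] * r for c, r in ratings.items())

        for name, ratings in RATINGS.items():
            print(f"{name}: {weighted_score(ratings):.2f}")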

    Developments in Dataflow Programming

    Dataflow has historically been motivated either by parallelism or programmability, or some combination of the two. This work, rather than being directed primarily at parallelism or programmability, is instead aimed at maximising the overall utility to the programmer of the system at large. This means that it aims to result in a system in which it is easy to create well-constructed, flexible programs that comply with the principles of software engineering and architecture, but also that the proposed system should be capable of performing practical real-life tasks and should be as widely applicable as can be achieved. With those aims in mind, this project has four goals:
    * to argue for a unified global dataflow coordination system, extensible to be able to accommodate components of any form that may exist now or in the future;
    * to establish a link between the design of such a system and the principles of software engineering and architecture;
    * to design a dataflow coordination system based on those principles, aiming where possible to embed them in the design so that they become easy or unthinking for programmers to apply; and
    * to implement and test components of the proposed system, using it to build a set of three sample algorithms.
    Taking the best ideas that have been proposed in dataflow programming in the past (those that most effectively embed the principles of software engineering) and extending them with new proposals where necessary, a collection of interactions and functionalities is proposed, including a novel way of using partial evaluation of functions and data dimensionality to represent iteration in an acyclic graph. The proposed design was implemented as far as necessary to construct three test algorithms: calculating a factorial, generating terms of the Fibonacci sequence and performing a merge-sort. The implementation was successful in representing iteration in acyclic dataflow, and the test algorithms generated correct results, limited only by the numerical representation capabilities of the underlying language. Testing and working with the implemented system revealed the importance to usability of the system being visual, interactive and, in a distributed environment, always-available. Proposed further work falls into three categories: writing a full specification (in particular, defining the interfaces by which components will interact); developing new features to extend the functionality; and further developing the test implementation. The conclusion summarises the vision of a unified global dataflow coordination system and makes an appeal for cooperation on its development as an open, non-profit dataflow system run for the good of its community, rather than allowing a proliferation of competing systems run for commercial gain.
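
    To illustrate the general idea of expressing an iterative computation in an acyclic dataflow graph (a toy interpretation of the technique, not the coordination system proposed in the work itself), the sketch below builds an acyclic chain of nodes for a factorial, where each node is a partially evaluated multiplication applied to the output of the previous node.

        from functools import partial

        class Node:
            """A dataflow node: a function applied to the outputs of its input nodes."""
            def __init__(self, fn, *inputs):
                self.fn = fn
                self.inputs = inputs

            def evaluate(self):
                return self.fn(*(node.evaluate() for node in self.inputs))

        def constant(value):
            return Node(lambda: value)

        def factorial_graph(n):
            """Unrolls factorial into an acyclic chain: 1 -> (*1) -> (*2) -> ... -> (*n)."""
            node = constant(1)
            for k in range(1, n + 1):
                # Partial evaluation fixes the loop index into the node's function,
                # so the graph needs no cycle to express the iteration.
                node = Node(partial(lambda k, acc: acc * k, k), node)
            return node

        print(factorial_graph(5).evaluate())  # 120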