
    Predicting Software Reliability Using Ant Colony Optimization Technique with Travelling Salesman Problem for Software Process – A Literature Survey

    Computer software has become an essential foundation in many domains, including medicine and engineering. With such widespread application, there is a need to ensure software reliability and quality. Ordinarily, measuring reliability and quality requires waiting until the software has been implemented, tested, and put into use for a certain period. Several software metrics have been proposed in the literature to avoid this lengthy and costly process, and they have proved to be a good means of estimating software reliability; software reliability prediction models are built for this purpose. Software reliability is one of the most important software quality attributes: it is defined as the probability that the software will operate without failure for a specified period of time in a specified environment. Estimating reliability in the early phases of the software development life cycle saves considerable money and time, as it avoids spending large amounts on fixing defects after the software has been deployed to the client. However, reliability prediction in the early phases of the life cycle is very challenging. Software reliability estimation has thus become an important research area, as every organization aims to produce reliable, high-quality, defect-free software. Many software reliability growth models are used to assess or predict the reliability of software, and they help in developing robust and fault-tolerant systems. Although many reliability models have been proposed in recent years, developing accurate prediction models is difficult because data in the software engineering domain change frequently. As a result, models built on one dataset show a significant decrease in accuracy when used with new data. The main aim of this paper is to introduce a new approach that optimizes the accuracy of software reliability prediction models when used with raw data. The Ant Colony Optimization Technique (ACOT) is proposed to predict software reliability based on data collected from the literature. An ant colony system combined with the Travelling Salesman Problem (TSP) algorithm is used, modified with different algorithms and extra functionality, in an attempt to achieve better reliability results on new data for the software process. The intelligent behavior of the ant colony framework, realized by a colony of cooperating artificial ants, yields very promising results. Keywords: Software Reliability, Reliability Prediction Models, Bio-inspired Computing, Ant Colony Optimization Technique, Ant Colony
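
    The paper itself does not publish an implementation, so the following is only a minimal sketch of the underlying idea: a basic ant colony optimization loop applied to a small TSP instance. All parameters (alpha, beta, rho, q) and the toy distance matrix are illustrative assumptions, not the authors' configuration.

        # Minimal ant colony optimization for a small symmetric TSP instance.
        # Parameters and the 4-city distance matrix are illustrative only.
        import random

        def aco_tsp(dist, n_ants=10, n_iters=100, alpha=1.0, beta=3.0, rho=0.5, q=1.0):
            n = len(dist)
            pheromone = [[1.0] * n for _ in range(n)]
            best_tour, best_len = None, float("inf")
            for _ in range(n_iters):
                tours = []
                for _ in range(n_ants):
                    start = random.randrange(n)
                    tour, visited = [start], {start}
                    while len(tour) < n:
                        i = tour[-1]
                        cand = [j for j in range(n) if j not in visited]
                        # Transition rule: pheromone^alpha * (1/distance)^beta.
                        weights = [pheromone[i][j] ** alpha * (1.0 / dist[i][j]) ** beta
                                   for j in cand]
                        j = random.choices(cand, weights=weights)[0]
                        tour.append(j)
                        visited.add(j)
                    length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
                    tours.append((tour, length))
                    if length < best_len:
                        best_tour, best_len = tour, length
                # Evaporate, then deposit pheromone proportional to tour quality.
                for i in range(n):
                    for j in range(n):
                        pheromone[i][j] *= (1.0 - rho)
                for tour, length in tours:
                    for k in range(n):
                        i, j = tour[k], tour[(k + 1) % n]
                        pheromone[i][j] += q / length
                        pheromone[j][i] += q / length
            return best_tour, best_len

        # Toy 4-city instance.
        dist = [[0, 2, 9, 10],
                [2, 0, 6, 4],
                [9, 6, 0, 3],
                [10, 4, 3, 0]]
        print(aco_tsp(dist))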

    Discrimination Analysis for Predicting Defect-Prone Software Modules

    Software defect prediction studies usually build models without analyzing the data used in the procedure. As a result, the same approach performs differently on different data sets. In this paper, we introduce discrimination analysis as a method for gaining insight into the inherent properties of software data. Based on this analysis, we find that the data sets used in this field are nonlinearly separable and class-imbalanced. Unlike prior work, we exploit the kernel method to nonlinearly map the data into a high-dimensional feature space. By addressing these two problems, we propose an algorithm based on kernel discrimination analysis, called KDC, to build more effective prediction models. Experimental results on data sets from different organizations indicate that KDC is more accurate in terms of F-measure than state-of-the-art methods. We are optimistic that our discrimination analysis method can guide further studies of data structure, which may derive useful knowledge from data science for building more accurate prediction models.
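
    KDC's implementation is not reproduced in the abstract, so the following is a hedged sketch of the general recipe it describes: a nonlinear kernel map into a feature space followed by a linear discriminator, with class weighting to counter the defect/non-defect imbalance. The scikit-learn stand-ins (KernelPCA plus a balanced logistic regression), the synthetic data, and all parameters are assumptions, not the authors' algorithm.

        # Stand-in for the kernel-discrimination recipe: map the data
        # nonlinearly (RBF kernel), then discriminate linearly, weighting
        # classes against imbalance. Not the authors' KDC implementation.
        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.decomposition import KernelPCA
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        # Synthetic, imbalanced, nonlinearly separable toy data
        # (real studies would use PROMISE/NASA-style module metrics).
        X = rng.normal(size=(500, 20))
        y = (np.sum(X[:, :5] ** 2, axis=1) > 7.0).astype(int)  # rare "defective" class

        model = make_pipeline(
            StandardScaler(),
            KernelPCA(n_components=10, kernel="rbf", gamma=0.05),  # nonlinear map
            LogisticRegression(class_weight="balanced", max_iter=1000),
        )
        # F-measure is the metric the paper reports.
        scores = cross_val_score(model, X, y, cv=5, scoring="f1")
        print("F1: %.3f +/- %.3f" % (scores.mean(), scores.std()))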

    Architectural level risk assessment

    Many companies develop and maintain large-scale software systems for public and financial institutions. Should a failure occur in one of these systems, the impact would be enormous. It is therefore essential, in maintaining a system's quality, to identify defects early in the development process in order to prevent failures. However, testing all modules of these systems to identify defects can be very expensive, so there is a need for methodologies and tools that help software engineers identify defect-prone and complex software components early in the development process.

    Risk assessment is an essential process for ensuring high-quality software products. By performing risk assessment during the early software development phases, we can identify complex modules, which enables better resource allocation decisions.

    To assess the risk of software systems early in the software life cycle, we propose an architectural-level risk assessment methodology. It uses UML specifications of software systems, which are available early in the life cycle. It combines the probability of software failures and the severity associated with those failures to estimate risk factors for software architectural elements (components and connectors), scenarios, use cases, and systems. As a result, remedial actions can be taken to control and improve the quality of the software product.

    We build a risk assessment model that enables us to distinguish complex from non-complex software components, to estimate programming and service effort, and to estimate testing effort. The model also identifies components with high risk factors, which would require the development of effective fault-tolerance mechanisms.

    To estimate the probability of software failure, we introduce a set of dynamic metrics that measure the dynamic behavior of software architectural elements from static UML models. To estimate the severity of software failure, we propose a UML-based severity methodology. We also propose a validation process for both the risk and severity methodologies, and prototype tool support to automate the risk assessment methodology.
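
    As a rough illustration of the core formula, the sketch below computes a per-component risk factor as the product of a heuristic failure probability (normalized dynamic complexity) and a severity weight. The component names, the complexity-to-probability mapping, and the severity scale are illustrative assumptions rather than the dissertation's exact metrics.

        # Risk factor = (estimated failure probability) x (severity).
        # The mapping from complexity to probability is a made-up heuristic.
        from dataclasses import dataclass

        @dataclass
        class Component:
            name: str
            dynamic_complexity: float  # e.g., derived from UML sequence diagrams
            severity: float            # 0..1, e.g., from an FMEA-style analysis

        def risk_factor(c: Component, max_complexity: float) -> float:
            # Normalize complexity into a heuristic failure probability.
            p_failure = c.dynamic_complexity / max_complexity
            return p_failure * c.severity

        components = [
            Component("PaymentGateway", dynamic_complexity=42.0, severity=0.95),
            Component("ReportViewer", dynamic_complexity=17.0, severity=0.30),
        ]
        max_c = max(c.dynamic_complexity for c in components)
        for c in sorted(components, key=lambda c: -risk_factor(c, max_c)):
            print(f"{c.name}: risk={risk_factor(c, max_c):.2f}")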

    Analysis of costs and delivery intervals for multiple-release software

    Project managers of large software projects, particularly those associated with Internet Business-to-Business (B2B) or Business-to-Customer (B2C) applications, are under pressure to capture market share by delivering reliable software under cost and timing constraints. An earlier delivery time may help e-commerce software capture a larger market share; however, early delivery sometimes means lower quality. In addition, the scale of the software is often so large that incremental, multiple releases are warranted. A multiple-release methodology has been developed to optimize the most efficient and effective delivery intervals for the various releases of software products, taking into account software costs and reliability. The methodology extends existing software cost and reliability models, meets the needs of large software development firms, and gives software industry managers a navigation guide. The main decision factors for the multiple releases are the delivery interval of each release, the market value of the features in the release, and the software costs associated with testing and error penalties. The influence of these factors was assessed using Design of Experiments (DOE). The costs included in the research are based on a salary survey of software staff at companies in the New Jersey area and on the budgets of software development teams. The Activity-Based Costing (ABC) method was used to determine costs on the basis of the job functions associated with developing the software. The error data are assumed to follow a Non-Homogeneous Poisson Process (NHPP).
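
    To make the cost trade-off concrete, here is a minimal sketch assuming the common Goel-Okumoto form of the NHPP, m(t) = a(1 - e^(-bt)): testing longer costs money and delays market entry, while releasing earlier leaves more faults in the field. All parameter values are invented for illustration; the dissertation calibrates its model with salary-survey and ABC cost data.

        # Single-release cost trade-off under a Goel-Okumoto NHPP:
        # m(t) = a * (1 - exp(-b t)) faults detected by testing time t.
        # All parameter values below are illustrative assumptions.
        import math

        a, b = 120.0, 0.05     # total expected faults, detection rate
        c_test = 2.0           # testing cost per day
        c_penalty = 40.0       # penalty per fault escaping to the field
        c_market = 1.5         # market value lost per day of delay

        def total_cost(T: float) -> float:
            remaining = a * math.exp(-b * T)  # faults not yet found at delivery
            return c_test * T + c_penalty * remaining + c_market * T

        # Brute-force search for the cost-minimizing delivery interval.
        best_T = min((t / 10 for t in range(1, 2000)), key=total_cost)
        print(f"optimal delivery interval ~ {best_T:.1f} days, "
              f"cost ~ {total_cost(best_T):.1f}")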

    Improving Software Defect Prediction Using Machine Learning Algorithms (Makine öğrenme algoritmaları kullanılarak yazılım hata kestiriminin iyileştirilmesi)

    Pursuant to the "Law Amending the Higher Education Law and Certain Laws and Decree Laws" published in the Official Gazette No. 30352 of 06.03.2018, and the Directive of 18.06.2018 "On the Collection, Organization, and Opening to Access of Graduate Theses in Electronic Form", the full text has been made openly accessible. It is not available in the YÖK thesis catalog.

    A Technique for Evaluating the Health Status of a Software Module Using Process Metrics

    Identifying error-prone files in large software systems is an area that has received significant attention over the years. In this thesis, we propose a process-metrics-based method for predicting the health status of a file based on its commit profile in its GitHub repository. Specifically, for each file and each bug-fixing commit in which the file participates, we compute a dependency score between the committed file and its co-committed files. The score is decayed appropriately if the file does not participate in subsequent bug-fixing commits. By examining the trend of each file's dependency score over time, we attempt to deduce whether the file will participate in a bug-fixing commit in the immediately following commits. The approach has been evaluated on 21 medium-to-large open-source systems by correlating the dependency-metric trend against known outcomes obtained from a data set we use as a gold standard.
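
    A minimal sketch of the scoring idea, under stated assumptions: each bug-fixing commit a file appears in raises its score by the number of co-committed files, and the score decays by a fixed factor whenever the file sits out a bug-fixing commit. The decay factor and the exact scoring rule are illustrative, not the thesis's formulation.

        # Decayed co-commit dependency score; DECAY and the scoring rule
        # are illustrative assumptions, not the thesis's exact formula.
        DECAY = 0.9

        def dependency_trend(commits, target):
            """commits: chronological list of file sets in bug-fixing commits."""
            score, trend = 0.0, []
            for files in commits:
                if target in files:
                    score += len(files) - 1  # dependency on co-committed files
                else:
                    score *= DECAY           # decay while the file stays healthy
                trend.append(score)
            return trend

        history = [{"a.c", "b.c"}, {"b.c", "c.c", "d.c"}, {"a.c"}, {"c.c", "b.c"}]
        print(dependency_trend(history, "b.c"))  # rising trend -> likely unhealthy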

    Essays on exploitation and exploration in software development

    Software development includes two types of activities: software improvement by correcting faults and software enhancement by adding new features. Drawing on organizational theory, we propose that these activities can be classified as implementation-oriented (exploitation) and innovation-oriented (exploration). In the context of open source software (OSS) development, developing a patch is an example of an exploitation activity, while requesting a new software feature is an example of an exploration activity. This dissertation consists of three essays examining exploitation and exploration in software development.

    The first essay analyzes software patch development (exploitation) in the context of software vulnerabilities that could be exploited by hackers, where software vendors need to release patches for their products in a timely manner. We develop a survival analysis model of the patch release behavior of software vendors, based on a cost-based framework, and test it using a data set compiled from the National Vulnerability Database (NVD), the United States Computer Emergency Readiness Team (US-CERT), and vendor web sites. Our results indicate that vulnerabilities with high confidentiality impact or high integrity impact are patched faster than vulnerabilities with high availability impact. We also find interesting differences in patch release behavior based on software type (new release vs. update) and vendor type (open source vs. proprietary).

    The second essay studies exploitation and exploration in the context of OSS development. We empirically examine the differences between exploitation (patch development) and exploration (feature request) networks of developers in OSS projects in terms of their social network structure, using a data set collected from the SourceForge database. We identify a new category of developers in OSS projects, ambidextrous developers, who contribute to patch development as well as feature request activities. Our results indicate that a patch development network has greater internal cohesion and network centrality than a feature request network, whereas a feature request network has greater external connectivity.

    The third essay explores ambidexterity and ambidextrous developers in the context of OSS project performance. Recent research on OSS development has studied the social network structure of software developers as a determinant of project success, but this stream of research has focused on the project level and has not recognized that software projects consist of different types of activities, each of which may require different expertise and network structures. We develop a theoretical construct for ambidexterity based on the concept of ambidextrous developers and empirically examine the effects of ambidexterity and network characteristics on OSS project performance. Our results indicate that a moderate level of ambidexterity, external cohesion, and technological diversity are desirable for project success; project success is also positively related to internal cohesion and network centrality. We illustrate the roles of ambidextrous developers in project performance and their differences from other developers.
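
    The first essay's survival analysis could be sketched, for illustration only, as a Cox proportional-hazards model of time-to-patch; the snippet below uses the lifelines library with toy data. The column names and covariates are assumptions; the essay's actual model is fit on NVD, US-CERT, and vendor data.

        # Toy Cox proportional-hazards model of time-to-patch.
        # Columns and data are invented; only the modeling style matches.
        import pandas as pd
        from lifelines import CoxPHFitter

        df = pd.DataFrame({
            "days_to_patch":        [12, 45, 7, 90, 30, 60, 15, 120, 25, 75],
            "patched":              [1, 1, 1, 0, 1, 1, 1, 0, 1, 1],  # 0 = censored
            "high_confidentiality": [1, 0, 1, 0, 1, 0, 1, 0, 0, 1],
            "open_source_vendor":   [1, 1, 0, 0, 1, 0, 0, 1, 1, 0],
        })

        cph = CoxPHFitter()
        cph.fit(df, duration_col="days_to_patch", event_col="patched")
        cph.print_summary()  # positive coefficient -> higher hazard -> faster patch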