11 research outputs found

    Early component-based reliability assessment using UML based software models

    In the last decade, software has grown in complexity and size, while development timelines have diminished. As a result, component-based software engineering is becoming routine. Component-based software reliability assessment combines the architecture of the system with the reliability of the components to obtain the system reliability. This allows developers to produce a reliable system and testers to focus on the vulnerable areas. This thesis discusses a tool developed to implement the methodology previously created for early reliability assessment of component-based systems. The tool, Early Component-based Reliability Assessment (ECRA), uses Rational Rose Unified Modeling Language (UML) diagrams to predict the reliability of component-based software. ECRA provides the user with an easy interface to annotate the UML diagrams and uses a Bayesian algorithm to predict the system reliability. This thesis presents the methodology behind ECRA, the steps taken to develop it, and its applications.
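    The abstract does not spell out ECRA's computation, but the general idea of architecture-based reliability composition can be sketched: given per-component reliability estimates and a usage profile over the scenarios annotated on the UML diagrams, the expected system reliability is the usage-weighted product of the reliabilities of the components each scenario exercises. The component names, probabilities, and the scenario_reliability helper below are illustrative assumptions, not ECRA's actual Bayesian algorithm.

```python
# Illustrative sketch of architecture-based reliability composition
# (not ECRA's actual Bayesian algorithm): system reliability is the
# usage-weighted product of the reliabilities of the components each
# scenario exercises. All names and numbers below are assumptions.

component_reliability = {          # hypothetical per-component estimates
    "Parser": 0.999,
    "Planner": 0.995,
    "Executor": 0.990,
}

# Scenarios annotated on UML sequence diagrams: (probability of use,
# components visited). Probabilities must sum to 1.
scenarios = [
    (0.7, ["Parser", "Executor"]),
    (0.3, ["Parser", "Planner", "Executor"]),
]

def scenario_reliability(components):
    """Reliability of one scenario: product of its components' reliabilities."""
    r = 1.0
    for c in components:
        r *= component_reliability[c]
    return r

system_reliability = sum(p * scenario_reliability(cs) for p, cs in scenarios)
print(f"Estimated system reliability: {system_reliability:.4f}")
```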

    APPROACH FOR ENHANCING THE RELIABILITY OF SOFTWARE

    Reliability is always important in all systems, but sometimes it is more important than other quality attributes, especially in mission-critical systems where the severity of the consequences resulting from failure is very high. Software reliability engineering is focused on comprehensive techniques for developing reliable software and for the proper assessment and improvement of reliability. Reliability metrics, models and measurements form an essential part of the software reliability engineering process. Appropriate metrics, models and measurement techniques should be applied to produce reliable software. Hence, the intention is to develop approaches to enhance the reliability of software through analysis of the structure of the software, the execution scenario for various inputs, and the operational profile. Keywords: Software Reliability, Rate of Occurrence of Failure (ROCOF), Mean Time to Failure (MTTF), Hardware Reliability, Mean Time Between Failures (MTBF)
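    The metrics named in the keywords can be computed directly from a failure log; the sketch below uses invented timestamps and the textbook definitions: ROCOF as failures per unit of observation time, and MTTF/MTBF as the mean of the inter-failure times (the two coincide here because repair time is not tracked).

```python
# Hypothetical failure log: cumulative times (in hours) at which failures
# were observed during a 500-hour observation window.
failure_times = [12.0, 48.5, 130.0, 220.0, 410.0]
observation_hours = 500.0

# ROCOF: observed failures per unit time over the window.
rocof = len(failure_times) / observation_hours

# Inter-failure times (first interval measured from t = 0).
intervals = [t2 - t1 for t1, t2 in zip([0.0] + failure_times, failure_times)]

# MTTF / MTBF: for a simple log like this both reduce to the mean
# inter-failure time; they differ only when repair time is tracked.
mttf = sum(intervals) / len(intervals)

print(f"ROCOF = {rocof:.4f} failures/hour, MTTF = {mttf:.1f} hours")
```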

    Reliability in open source software

    Open Source Software (OSS) is a component or an application whose source code is freely accessible and changeable by the users, subject to constraints expressed in a number of licensing modes. It implies a global alliance for developing quality software with quick bug fixing along with quick evolution of the software features. In recent years the tendency toward adoption of OSS in industrial projects has increased swiftly. Many commercial products use OSS in various fields such as embedded systems, web management systems, and mobile software. In addition, many OSS components are modified and adopted in software products. According to the Netcraft survey, more than 58% of web servers use an open source web server, Apache. The swift increase in the adoption of open source technology is due to its availability and affordability. Recent empirical research published by Forrester highlighted that although many European software companies have a clear OSS adoption strategy, there are fears and questions about the adoption. All these fears and concerns can be traced back to the quality and reliability of OSS. Reliability is one of the more important characteristics of software quality when considered for commercial use. It is defined as the probability of failure-free operation of software for a specified period of time in a specified environment (IEEE Std. 1633-2008). While open source projects routinely provide information about community activity, the number of developers, and the number of users or downloads, this is not enough to convey information about reliability. Software reliability growth models (SRGM) are frequently used in the literature for the characterization of reliability in industrial software. These models assume that reliability grows after a defect has been detected and fixed. SRGM are a prominent class of software reliability models (SRM). An SRM is a mathematical expression that specifies the general form of the software failure process as a function of factors such as fault introduction, fault removal, and the operational environment. Due to defect identification and removal, the failure rate (failures per unit of time) of a software system generally decreases over time. Software reliability modeling is done to estimate the form of the curve of the failure rate by statistically estimating the parameters associated with the selected model. The purpose of this measure is twofold: 1) to estimate the extra test time required to meet a specified reliability objective and 2) to identify the expected reliability of the software after release (IEEE Std. 1633-2008). SRGM can be applied to guide the test board in its decision of whether to stop or continue testing. These models are grouped into concave and S-shaped models on the basis of assumptions about the cumulative failure occurrence pattern. The S-shaped models assume that the occurrence pattern of the cumulative number of failures is S-shaped: initially the testers are not familiar with the product, then they become more familiar, and hence there is a slow increase in fault removal. As the testers' skills improve, the rate of uncovering defects increases quickly and then levels off as the residual errors become more difficult to remove. In the concave-shaped models the failure intensity reaches a peak before a decrease in the failure pattern is observed; therefore the concave models indicate that the failure intensity is expected to decrease exponentially after the peak is reached.
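    The concave versus S-shaped distinction can be made concrete with the standard mean value functions of two representative models, Goel-Okumoto (concave) and delayed S-shaped; the parameter values in the sketch below are chosen purely for illustration.

```python
import math

# Standard NHPP mean value functions for two representative SRGM:
#   concave  (Goel-Okumoto):      m(t) = a * (1 - exp(-b*t))
#   S-shaped (delayed S-shaped):  m(t) = a * (1 - (1 + b*t) * exp(-b*t))
# Their failure intensities (derivatives of m) show the contrast directly.

def goel_okumoto_intensity(t, a, b):
    """Concave model: intensity decays exponentially from its early peak."""
    return a * b * math.exp(-b * t)

def delayed_s_shaped_intensity(t, a, b):
    """S-shaped model: intensity rises to a peak at t = 1/b, then decays,
    matching the 'slow start, then ramp-up' pattern in the abstract."""
    return a * b * b * t * math.exp(-b * t)

a, b = 100.0, 0.05  # assumed: a = expected total faults, b = detection rate
for t in (1, 10, 20, 40, 80, 160):
    print(t,
          round(goel_okumoto_intensity(t, a, b), 2),
          round(delayed_s_shaped_intensity(t, a, b), 2))
```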
From an exhaustive study of the literature I identified three research gaps: SRGM have been widely used for reliability characterization of closed source software (CSS), but 1) there is no universally applicable model that can be applied in all cases, 2) the applicability of SRGM to OSS is unclear, and 3) there is no agreement on how to select the best model among several alternative models, and no specific empirical methodologies have been proposed, especially for OSS. My PhD work mainly focuses on these three research gaps. In the first step, focusing on the first research gap, I comparatively analyzed eight SRGM, including Musa-Okumoto, Inflection S-shaped, Goel-Okumoto, Delayed S-shaped, Logistic, Gompertz and Generalized Goel, in terms of their fitting and prediction capabilities. These models were selected because of their widespread use and because they are the most representative of their categories. For this study 38 failure datasets of 38 projects were used. Among the 38 projects, 6 were OSS and 32 were CSS. Of the 32 CSS datasets, 22 were from the testing phase and the remaining 10 were from the operational phase (i.e., the field). The outcomes show that Musa-Okumoto remains the best for CSS projects, while Inflection S-shaped and Gompertz remain the best for OSS projects. Apart from that, we observe that concave models perform better for CSS projects and S-shaped models perform better for OSS projects. In the second step, focusing on the second research gap, the reliability growth of OSS projects was compared with that of CSS projects. For this purpose 25 OSS and 22 CSS projects were selected with related defect data. Eight SRGM were fitted to the defect data of the selected projects and the reliability growth was analyzed with respect to the fitted models. I found that all the selected models fitted the defect data of OSS projects in the same manner as that of CSS projects, which confirms that the reliability of OSS projects grows similarly to that of CSS projects. However, I observed that S-shaped models perform better for OSS and concave-shaped models perform better for CSS. To address the third research gap I proposed a method that selects the best SRGM among several alternative models for predicting the residual defects of an OSS. The method helps practitioners decide whether or not to adopt an OSS component in a project. We tested the method empirically by applying it to twenty-one different releases of seven OSS projects. The validation results show that the method selects the best model 17 times out of 21; in the remaining four cases it selects the second-best model.
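    Model comparison of the kind described in the first and third steps typically amounts to fitting candidate mean value functions to cumulative defect counts and ranking them by goodness of fit. The sketch below uses invented weekly defect counts and a plain sum-of-squared-errors criterion rather than the thesis's actual selection method, and fits only two of the eight models.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical cumulative defect counts per week for one release.
weeks = np.arange(1, 16, dtype=float)
defects = np.array([5, 11, 20, 31, 44, 55, 63, 70, 75, 79, 82, 84, 85, 86, 87],
                   dtype=float)

def goel_okumoto(t, a, b):            # concave mean value function
    return a * (1 - np.exp(-b * t))

def delayed_s_shaped(t, a, b):        # S-shaped mean value function
    return a * (1 - (1 + b * t) * np.exp(-b * t))

candidates = {"Goel-Okumoto": goel_okumoto, "Delayed S-shaped": delayed_s_shaped}
results = {}
for name, model in candidates.items():
    # Fit each candidate to the defect data and record its squared error.
    popt, _ = curve_fit(model, weeks, defects, p0=(defects[-1], 0.1), maxfev=10000)
    sse = float(np.sum((model(weeks, *popt) - defects) ** 2))
    results[name] = sse
    print(f"{name}: a={popt[0]:.1f}, b={popt[1]:.3f}, SSE={sse:.1f}")

print("Best-fitting model by SSE:", min(results, key=results.get))
```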

    Empirical analysis of the relationship between CC and SLOC in a large corpus of Java methods and C functions

    Measuring the internal quality of source code is one of the traditional goals of making software development into an engineering discipline. Cyclomatic Complexity (CC) is an often-used source code quality metric, next to Source Lines of Code (SLOC). However, the use of the CC metric is challenged by the repeated claim that CC is redundant with respect to SLOC due to strong linear correlation. We conducted an extensive literature study of the CC/SLOC correlation results. Next, we tested correlation on large Java (17.6 M methods) and C (6.3 M functions) corpora. Our results show that the linear correlation between SLOC and CC is only moderate, owing to increasingly high variance. We further observe that aggregating CC and SLOC, as well as performing a power transform, improves the correlation. Our conclusion is that the observed linear correlation between CC and SLOC of Java methods or C functions is not strong enough to conclude that CC is redundant with SLOC. This conclusion contradicts earlier claims from the literature, but concurs with the widely accepted practice of measuring CC next to SLOC.
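    The core of such a study is a per-method correlation check; a minimal sketch of that kind of analysis is shown below, using synthesized (SLOC, CC) pairs rather than the paper's corpora, and a log transform as one simple instance of a power transform.

```python
import math
import random

# Hypothetical per-method metric pairs (SLOC, CC); in the study these come
# from parsed Java methods / C functions, here they are synthesized.
random.seed(0)
methods = []
for _ in range(10_000):
    sloc = random.randint(1, 400)
    cc = max(1, int(sloc * random.uniform(0.05, 0.35)))  # rough, noisy link
    methods.append((sloc, cc))

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

sloc, cc = zip(*methods)
r_raw = pearson(sloc, cc)
r_log = pearson([math.log(s) for s in sloc], [math.log(c) for c in cc])
print(f"Pearson r (raw) = {r_raw:.2f}, after log transform = {r_log:.2f}")
```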

    Design and implementation of a World Wide Web conference information system

    The Asilomar Conference on Signals, Systems and Computers is a technical conference dealing in signal and image processing, communications, sensor systems, and computer hardware and software. Sponsored by the Naval Postgraduate School and San Jose State University, in cooperation with the IEEE Signal Processing Society, the conference is held annually at the Asilomar Conference Facility in Pacific Grove, California. Although the Asilomar Conference is oriented toward computers and new technology, it has yet to exploit the full capabilities of the Internet. The purpose of this thesis is to: (a) analyze the processes involved in the Asilomar Conference on Signals, Systems, & Computers, (b) improve the article submission and review process, (c) outline a target information system, and (d) implement a portion of the target system. Two major portions of the target system are implemented using an IBM-compatible PC: (1) the ability for authors to submit abstracts and summaries via the Internet, and (2) the ability for conference administrators to manage the database via the Internet. Dynamic World Wide Web pages are created using Borland Delphi as the programming base, O'Reilly's WebSite as the web server, and two Common Gateway Interface elements for Delphi recently developed by Ann Lynnworth of HREF Tools Corp. The portions implemented lay the foundation for a system that could revolutionize the way conferences are conducted by unleashing the power of the Internet. (http://archive.org/details/designimplementa00chal) U.S. Navy (U.S.N.) authors. Lieutenant, United States Navy. Approved for public release; distribution is unlimited.
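    As a rough modern stand-in for the submission portion described above (Python and SQLite instead of Delphi, O'Reilly's WebSite, and CGI; the endpoint and field names are invented), the flow is: an author POSTs an abstract over HTTP and the server stores it for later administration.

```python
# Minimal stand-in for the abstract-submission portion of the conference
# system: authors POST an abstract, which is stored for later review.
# Technology and names here are assumptions, not the thesis's implementation.
import json
import sqlite3
from http.server import BaseHTTPRequestHandler, HTTPServer

db = sqlite3.connect("abstracts.db", check_same_thread=False)
db.execute("CREATE TABLE IF NOT EXISTS abstracts (author TEXT, title TEXT, body TEXT)")

class SubmissionHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/submit":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        data = json.loads(self.rfile.read(length))
        db.execute("INSERT INTO abstracts VALUES (?, ?, ?)",
                   (data["author"], data["title"], data["body"]))
        db.commit()
        self.send_response(201)
        self.end_headers()
        self.wfile.write(b"Abstract received\n")

if __name__ == "__main__":
    HTTPServer(("", 8000), SubmissionHandler).serve_forever()
```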

    Testing driven by operational profiles


    A Software Reliability Model Combining Representative and Directed Testing

    Traditionally, software reliability models have required that failure data be gathered using only representative testing methods. Over time, however, representative testing becomes inherently less effective as a means of improving the actual quality of the software under test. Additionally, the use of failure data based on observations made during representative testing has been criticized because of the statistical noise inherent in this type of data. In this dissertation, a testing method is proposed to make reliability testing more efficient and accurate. Representative testing is used early, when the rate of fault revelation is high. Directed testing is used later in testing to take advantage of its faster rate of fault detection. To make use of the test data from this mixed-method approach to testing, a software reliability model is developed that permits reliability estimates to be made regardless of the testing method used to gather failure data. The key to being able to combine data from both representative testing and directed testing is shifting the random variable used by the model from observed interfailure times to a postmortem analysis of the debugged faults and using order statistics to combine the observed failure rates of faults no matter how those faults were detected. This shift from interfailure times removes the statistical noise associated with the use of this measure, which should allow models to provide more accurate estimates and predictions. Several experiments were conducted during the course of this research. The results from these experiments show that using the mixed-method approach to testing with the new model provides reliability estimates that are at least as good as estimates from existing models under representative testing, while requiring fewer test cases. The results of this work also show that the high level of noise present in failure data based on observed failure times makes it very difficult for models that use this type of data to make accurate reliability estimates. These findings support the suggested move to the use of more stable quantities for reliability estimation and prediction.
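    The dissertation's model itself is not given in the abstract, but the shift it describes, from interfailure times to per-fault failure rates combined via order statistics, can be illustrated with a much simpler calculation. The sketch below assumes hypothetical postmortem rate estimates for each debugged fault (however it was detected), treats faults as independent with exponential times to failure, and shows how mission reliability grows as faults are removed in decreasing-rate order; it is an illustration of the idea, not the model from the dissertation.

```python
# Simplified illustration: estimate reliability from per-fault failure
# rates (obtained postmortem, regardless of testing method) instead of
# from observed interfailure times. All rates below are assumptions.
import math

# Hypothetical per-fault operational failure rates (failures per hour),
# estimated after each fault was debugged.
fault_rates = [0.020, 0.011, 0.0045, 0.0030, 0.0008, 0.0002]

mission_time = 10.0  # hours

# Order statistics of the rates: removing faults in decreasing-rate order
# shows how reliability over the mission time grows as debugging proceeds.
for k in range(len(fault_rates) + 1):
    remaining = sorted(fault_rates, reverse=True)[k:]
    reliability = math.exp(-mission_time * sum(remaining))
    print(f"after removing {k} fault(s): R({mission_time:.0f}h) = {reliability:.3f}")
```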

    Software reliability engineering process
