    Simulation of Reliability of Software Component

    Component-Based Software Engineering (CBSE) is increasingly accepted worldwide for software development across most industries. Software reliability is defined as the probability that a software system operates without failure for a specified time under specified operating conditions. Component reliability and failure intensity are two important parameters for estimating the reliability of a system after its components are integrated, and estimating reliability can save time, cost, and even lives. In this paper, software reliability is estimated by analyzing failure data. An imperfect-debugging Software Reliability Growth Model (SRGM) is used to simulate software reliability by estimating the number of remaining faults and the parameters of the fault content rate function. We aim to simulate software reliability by combining imperfect debugging with the Goel-Okumoto model. The resulting reliability estimate indicates when to stop testing a component, that is, when the software component can be released.
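
    The abstract names the Goel-Okumoto model and an imperfect-debugging SRGM but gives no equations. As a point of reference, the sketch below fits the standard Goel-Okumoto mean value function m(t) = a(1 - e^(-bt)) to cumulative failure data and derives a release time from the conditional reliability R(x|t) = exp(-[m(t+x) - m(t)]); the failure counts and the 0.95 reliability target are invented for illustration, not taken from the paper.

```python
# Minimal sketch: fitting a Goel-Okumoto SRGM to cumulative failure data
# and estimating a release time from the conditional reliability R(x | t).
# The failure data and the reliability target below are illustrative
# placeholders, not values from the paper.
import numpy as np
from scipy.optimize import curve_fit

def m_go(t, a, b):
    """Goel-Okumoto mean value function: expected cumulative failures by time t."""
    return a * (1.0 - np.exp(-b * t))

# Hypothetical failure data: test time (hours) vs. cumulative failures observed.
t_obs = np.array([10, 20, 30, 40, 50, 60, 70, 80], dtype=float)
n_obs = np.array([8, 14, 19, 22, 25, 26, 27, 28], dtype=float)

(a_hat, b_hat), _ = curve_fit(m_go, t_obs, n_obs, p0=(30.0, 0.05))

def reliability(x, t, a, b):
    """P(no failure in (t, t+x]) = exp(-(m(t+x) - m(t)))."""
    return np.exp(-(m_go(t + x, a, b) - m_go(t, a, b)))

# Remaining faults and a simple release criterion: keep testing until the
# probability of surviving a 10-hour mission exceeds 0.95.
remaining = a_hat - m_go(t_obs[-1], a_hat, b_hat)
t = t_obs[-1]
while reliability(10.0, t, a_hat, b_hat) < 0.95:
    t += 1.0
print(f"a={a_hat:.1f}, b={b_hat:.4f}, "
      f"remaining faults={remaining:.1f}, release at t={t:.0f}h")
```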

    Towards Constructive Cost Analysis for demand based Reusable Domain Specific Components

    The software development methodology prevailing in a majority of organisations is characterised by its agility. Under the significant pressure of delivering a product within specified time and budget constraints, normal analysis and design procedures are often neglected. This can result in a dearth of software of superior quality while simultaneously impeding the constructive reuse of components. In the majority of component approaches, the demand for domain-specific software components arises during the later stages. In this paper, components are identified as demand-based reusable domain-specific software components, which can also be reused in subsequent increments. The strategy for extracting components and the procedure for reusing existing components are described, and a sample case realizing them is presented. Still, there is a pressing need to identify demand-based domain-specific software components early and to perform constructive cost analysis for the reusable domain-specific components, and the issues related to estimating reuse cost measures remain challenging. This paper presents a constructive cost analysis for demand-based reusable domain-specific software components and proposes reuse measures, with quantized values, for a family of applications. By analyzing these cost measures, the budget and effort of development can be reduced. The results are estimated from an HR Portal domain-specific software application as a case study, and the respective scenario is explored.
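
    The paper's own cost-reuse measures are not reproduced in the abstract. As one plausible illustration of constructive cost analysis for reused components, the sketch below applies the well-known COCOMO II reuse model, which converts adapted code into equivalent new SLOC; the HR-portal component names, sizes, and modification percentages are hypothetical.

```python
# Illustrative sketch only: the paper's own reuse measures are not shown in
# the abstract. As one widely used point of reference, the COCOMO II reuse
# model converts adapted (reused) code into "equivalent" new SLOC, which can
# then feed a constructive cost estimate. All component data is hypothetical.

def equivalent_sloc(adapted_sloc, dm, cm, im, su=30.0, unfm=0.4, aa=4.0):
    """COCOMO II reuse model (simplified, AAF <= 50 branch).

    dm/cm/im: % of design/code/integration modified; su: software
    understanding penalty (0-50); unfm: unfamiliarity (0-1);
    aa: assessment-and-assimilation increment.
    """
    aaf = 0.4 * dm + 0.3 * cm + 0.3 * im
    aam = (aa + aaf * (1.0 + 0.02 * su * unfm)) / 100.0
    return adapted_sloc * aam

# Hypothetical HR-portal components reused in a later increment.
components = {
    "employee_records": (4000, 10, 15, 20),   # (SLOC, DM%, CM%, IM%)
    "leave_workflow":   (2500,  5, 10, 30),
    "payroll_reports":  (3200, 20, 25, 40),
}

total_new = sum(sloc for sloc, *_ in components.values())
total_eq = sum(equivalent_sloc(sloc, dm, cm, im)
               for sloc, dm, cm, im in components.values())
print(f"Developing from scratch: {total_new} SLOC")
print(f"Equivalent size with reuse: {total_eq:.0f} SLOC "
      f"({100 * (1 - total_eq / total_new):.0f}% size reduction)")
```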

    Improving Loss Estimation for Woodframe Buildings. Volume 2: Appendices

    This report documents Tasks 4.1 and 4.5 of the CUREE-Caltech Woodframe Project. It presents a theoretical and empirical methodology for creating probabilistic relationships between seismic shaking severity and physical damage and loss for buildings in general, and for woodframe buildings in particular. The methodology, called assembly-based vulnerability (ABV), is illustrated for 19 specific woodframe buildings of varying age, size, configuration, quality of construction, and retrofit and redesign condition. The study employs variations on four basic floorplans, called index buildings: a small house, a large house, a townhouse, and an apartment building. The resulting seismic vulnerability functions give the probability distribution of repair cost as a function of instrumental ground-motion severity. These vulnerability functions are useful by themselves and are also transformed into seismic fragility functions compatible with the HAZUS software. The methods and data employed here use well-accepted structural engineering techniques, laboratory test data and computer programs produced by Element 1 of the CUREE-Caltech Woodframe Project, other recently published research, and standard construction cost-estimating methods. While based on such well-established principles, this report represents a substantially new contribution to the field of earthquake loss estimation. Its methodology is notable in that it calculates detailed structural response using nonlinear time-history structural analysis, as opposed to the simplifying assumptions required by nonlinear pushover methods. It models physical damage at the level of individual building assemblies, such as individual windows and segments of wall, for which detailed laboratory testing is available, as opposed to two or three broad component categories that cannot be directly tested. And it explicitly models uncertainty in ground motion, structural response, component damageability, and contractor costs. Consequently, a very detailed, verifiable, probabilistic picture of physical performance and repair cost is produced, capable of informing a variety of decisions regarding seismic retrofit, code development, code enforcement, performance-based design for above-code applications, and insurance practices.
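
    A rough sketch of the Monte Carlo core of an assembly-based vulnerability calculation may help fix ideas. Here a lognormal peak-drift distribution stands in for the nonlinear time-history results, each assembly's damage state is sampled from lognormal fragility curves, and unit repair costs are summed; all assembly inventories, fragility parameters, and costs below are invented for illustration and are not the report's values.

```python
# Minimal ABV-style Monte Carlo sketch with hypothetical numbers. A lognormal
# peak-drift distribution stands in for nonlinear time-history results; each
# assembly unit's damage state comes from lognormal fragility curves, and
# per-unit repair costs are summed to a building repair cost distribution.
import numpy as np

rng = np.random.default_rng(0)

# Each assembly type: (unit count, fragility medians per damage state
# (peak drift ratio), lognormal std beta, repair cost per unit per state, $).
assemblies = {
    "window":       (19, [0.004, 0.010], 0.4, [180, 600]),
    "wall_segment": (40, [0.003, 0.008], 0.5, [350, 1400]),
}

def simulate_loss(n_sims=10_000, drift_median=0.005, drift_beta=0.6):
    losses = np.zeros(n_sims)
    drifts = drift_median * np.exp(drift_beta * rng.standard_normal(n_sims))
    for count, medians, beta, costs in assemblies.values():
        cost = np.array([0.0] + costs)            # state 0 = undamaged
        for _ in range(count):
            # One lognormal draw per unit scales all damage-state capacities,
            # so the states stay ordered (DS2 capacity > DS1 capacity).
            z = rng.standard_normal(n_sims)
            caps = np.array(medians)[:, None] * np.exp(beta * z)
            state = (drifts >= caps).sum(axis=0)  # number of states exceeded
            losses += cost[state]
    return losses

loss = simulate_loss()
print(f"mean repair cost ${loss.mean():,.0f}, "
      f"90th percentile ${np.percentile(loss, 90):,.0f}")
```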

    A generic model for software size estimation based on component partitioning : a dissertation presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Software Engineering

    Software size estimation is a central but under-researched area of software engineering economics. Most current cost estimation models use an estimated end-product size, in lines of code, as one of their most important input parameters. Software size, in a different sense, is also important for comparative productivity studies, often using a derived size measure such as function points. The research reported in this thesis is an investigation into software size estimation and the calibration of derived software size measures with each other and with product size measures. A critical review of current software size metrics is presented, together with a classification of these metrics into textual metrics, object counts, vector metrics, and composite metrics. Within a review of current approaches to software size estimation, which includes a detailed analysis of Function Point Analysis-like approaches, a new classification of software size estimation methods is presented, based on the type of structural partitioning of a specification or design that must be completed before the method can be used. This classification clearly reveals a number of fundamental concepts inherent in current size estimation methods. Traditional classifications of size estimation approaches are also discussed in relation to the new classification. A generic decomposition and summation model for software sizing is presented. Systems are classified into different categories and, within each category, into appropriate component type partitions. Each component type has a different size estimation algorithm based on size drivers appropriate to that particular type. Component size estimates are summed to produce partial or total system size estimates, as required. The model can be regarded as a generalization of a number of Function Point Analysis-like methods in current use. Provision is made both for comparative productivity studies using derived size measures, such as function points, and for end-product size estimates using primitive size measures, such as lines of code. The nature and importance of calibrating derived measures for comparative studies are developed. System adjustment factors are also examined, and a model for their analysis and application is presented. The model overcomes most of the recent criticisms that have been levelled at Function Point Analysis-like methods. A model instance derived from the generic sizing model is applied to a major case study of a system of administrative applications, in which a new Function Point Analysis-type metric suited to a particular software development technology is derived, calibrated, and compared with Function Point Analysis. The comparison reveals much of the anatomy of Function Point Analysis and its many deficiencies when applied to this case study. The model instance is at least partially validated by application to a sample of components from later incremental developments within the same software development technology. The performance of the model instance for this technology is very good in its own right and very much better than that of Function Point Analysis. The model is also applied to three other business software development technologies using the IFIP (International Federation for Information Processing) standard inventory control and purchasing reference system; the purpose of this study is to demonstrate the applicability of the generic model to several quite different software technologies. Again, the three derived model instances show an excellent fit to the available data. This research shows that a software size estimation model which takes explicit advantage of the particular characteristics of the software technology used can give better size estimates than methods that do not take into account the component partitions characteristic of that technology.
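
    The decomposition-and-summation idea is straightforward to express in code: partition the system into component types, give each type its own size algorithm driven by type-appropriate size drivers, and sum. The component types, drivers, and coefficients below are invented placeholders; in the thesis such models are calibrated per software development technology.

```python
# Minimal sketch of a decomposition-and-summation sizing model. The component
# types, size drivers, and coefficients are hypothetical stand-ins for the
# technology-specific models the thesis derives and calibrates.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Component:
    ctype: str
    drivers: Dict[str, int]   # e.g. {"fields": 12, "relations": 3}

# Per-type size algorithms (hypothetical linear models in the drivers).
SIZE_MODELS: Dict[str, Callable[[Dict[str, int]], float]] = {
    "report": lambda d: 40 + 6 * d["fields"] + 15 * d["sorts"],
    "screen": lambda d: 60 + 8 * d["fields"] + 25 * d["relations"],
    "update": lambda d: 80 + 10 * d["entities"],
}

def system_size(components: List[Component]) -> float:
    """Size each component with its type's algorithm, then sum."""
    return sum(SIZE_MODELS[c.ctype](c.drivers) for c in components)

# A toy administrative system partitioned into typed components.
system = [
    Component("screen", {"fields": 12, "relations": 3}),
    Component("report", {"fields": 20, "sorts": 2}),
    Component("update", {"entities": 4}),
]
print(f"Estimated size: {system_size(system):.0f} size units")
```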

    Technology Benefit Estimator (T/BEST): User's Manual

    The Technology Benefit Estimator (T/BEST) system is a formal method for assessing advanced technologies and quantifying their benefit contributions for prioritization. T/BEST may be used to provide guidelines for identifying and prioritizing high-payoff research areas, to help manage research and limited resources, to show the link between advanced concepts and the bottom line, i.e., accrued benefit and value, and to communicate the benefits of research credibly. The T/BEST computer program is specifically designed to estimate the benefits, and benefit sensitivities, of introducing new technologies into existing propulsion systems. Key engine cycle, structural, fluid, mission, and cost analysis modules provide a framework for interfacing with advanced technologies. An open-ended, modular approach allows modification and addition of both key and advanced technology modules. T/BEST has a hierarchical framework that yields varying levels of benefit-estimation accuracy depending on the degree of input detail available. This hierarchical feature permits rapid estimation of technology benefits even when the technology is at the conceptual stage; as knowledge of the technology details increases, the accuracy of the benefit analysis increases. T/BEST's framework includes correlations developed from a statistical database that is relied upon when insufficient information is given in a particular area, e.g., fuel capacity or aircraft landing weight; statistical predictions are not required if these data are specified in the mission requirements. The engine cycle, structural, fluid, cost, noise, and emissions analyses interact with the default or user material and component libraries to yield estimates of specific global benefits: range, speed, thrust, capacity, component life, noise, emissions, specific fuel consumption, component and engine weights, pre-certification testing, mission performance, engine cost, direct operating cost, life-cycle cost, manufacturing cost, development cost, risk, and development time. Currently, T/BEST operates on stand-alone or networked workstations and uses a UNIX shell or script to control the operation of interfaced FORTRAN-based analyses; the interface structure works equally well with non-FORTRAN or mixed software analyses. This interface structure is designed to maintain the integrity of the experts' analyses by interfacing with their existing input and output files. Parameter input and output data (e.g., number of blades, hub diameters) are passed via T/BEST's neutral file, while copious data (e.g., finite element models, profiles) are passed via file pointers to the experts' analysis output files. To keep communication between the neutral file and attached analysis codes simple, only two software commands, PUT and GET, are required; this simplicity permits easy access to all input and output variables contained in the neutral file. Both public-domain and proprietary analysis codes may be attached with a minimal amount of effort while maintaining full data and analysis integrity and security. T/BEST's software framework, status, beginner-to-expert operation, interface architecture, analysis-module addition, and key analysis modules are discussed, and representative examples of T/BEST benefit analyses are shown.
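
    To illustrate the neutral-file pattern only (this is not T/BEST's actual command syntax, which the manual defines), the sketch below shows a hypothetical PUT/GET exchange in which scalar parameters live in a shared neutral file while bulk data stays in the expert code's own files, referenced by path.

```python
# Hypothetical sketch of a neutral-file parameter exchange in the PUT/GET
# style the manual describes. This is NOT T/BEST's actual command syntax,
# only an illustration of the pattern: scalar parameters live in the neutral
# file; copious data stays in the expert code's own files, passed by pointer.
import json
import pathlib

NEUTRAL = pathlib.Path("neutral_file.json")

def put(name, value):
    """Write one named parameter (or a file pointer) into the neutral file."""
    data = json.loads(NEUTRAL.read_text()) if NEUTRAL.exists() else {}
    data[name] = value
    NEUTRAL.write_text(json.dumps(data, indent=2))

def get(name):
    """Read one named parameter back out of the neutral file."""
    return json.loads(NEUTRAL.read_text())[name]

# An engine-cycle module publishes results; a cost module later consumes them.
put("number_of_blades", 24)
put("hub_diameter_m", 0.55)
put("fem_model_path", "results/fan_stage.fem")   # pointer to copious data
print(get("number_of_blades"), get("fem_model_path"))
```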