    Towards making functional size measurement easily usable in practice

    Functional Size Measurement methods – like the IFPUG Function Point Analysis and COSMIC methods – are widely used to quantify the size of applications. However, the measurement process is often too long or too expensive, or it requires more knowledge than is available when development effort estimates are due. To overcome these problems, simplified measurement methods have been proposed. This research explores easily usable functional size measurement methods, aiming to improve efficiency, reduce difficulty and cost, and make functional size measurement widely adopted in practice. The first stage of the research involved the study of functional size measurement methods (in particular Function Point Analysis and COSMIC), simplified methods, and measurement based on measurement-oriented models. Then, we modeled a set of applications in a measurement-oriented way and obtained UML models suitable for functional size measurement. From these UML models we derived both functional size measures and object-oriented measures. Using these measures it was possible to: 1) evaluate existing simplified functional size measurement methods and derive our own simplified model; 2) explore whether simplified methods can be used in various stages of modeling and evaluate their accuracy; 3) analyze the relationship between functional size measures and object-oriented measures. In addition, the conversion between FPA and COSMIC was studied as an alternative simplified functional size measurement process. Our research revealed that: 1) In general, it is possible to size software via simplified measurement processes with acceptable accuracy. In particular, the simplification of the measurement process allows the measurer to skip the function weighting phases, which are usually expensive, since they require a thorough analysis of the details of both data and operations. The models obtained from our dataset yielded results similar to those reported in the literature. All simplified measurement methods that use predefined weights for all the transaction and data types identified in Function Point Analysis provided similar results, characterized by acceptable accuracy. On the contrary, methods that rely on just one of the elements that contribute to functional size tend to be quite inaccurate. In general, different methods showed different accuracy for Real-Time and non-Real-Time applications. 2) It is possible to write progressively more detailed and complete UML models of user requirements that provide the data required by the simplified COSMIC methods. These models yield progressively more accurate measures of the modeled software. Initial measures are based on simple models and are obtained quickly and with little effort. As the UML models grow in completeness and detail, the measures increase in accuracy. Developers who use UML for requirements modeling can obtain early estimates of the applications' sizes at the beginning of the development process, when only very simple UML models have been built, and can obtain increasingly accurate size estimates as knowledge of the products increases and the UML models are refined accordingly. 3) Both Function Point Analysis and COSMIC functional size measures appear correlated with object-oriented measures. In particular, associations with basic object-oriented measures were found: Function Points appear associated with the number of classes, the number of attributes, and the number of methods; CFP appear associated with the number of attributes. This result suggests that even a very basic UML model, like a class diagram, can support size measures that appear equivalent to functional size measures (which are much harder to obtain). Moreover, object-oriented measures can be obtained automatically from models, thus dramatically decreasing the measurement effort in comparison with functional size measurement. In addition, we proposed a conversion method between Function Points and COSMIC based on analytical criteria. Our research has expanded the knowledge on how to simplify the methods for measuring the functional size of software, i.e., the measure of functional user requirements. Besides providing information immediately usable by developers, the research also presents examples of analyses that can be replicated by other researchers, to increase the reliability and generality of the results.
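    The association between object-oriented counts and functional size suggests a simple estimation shortcut. The sketch below is illustrative only: the coefficients and project data are invented, not taken from the thesis. It shows how a linear model fitted on counts of classes, attributes, and methods, all of which a class diagram already provides, could yield early Function Point estimates.

```python
# A minimal sketch (not the thesis's actual model) of predicting Function
# Points from class-diagram counts via ordinary least squares.
# The calibration data below are invented for illustration only.
import numpy as np

# Hypothetical calibration set: [classes, attributes, methods] per project,
# with the measured IFPUG Function Points as the target.
X = np.array([
    [12,  80,  95],
    [25, 160, 210],
    [ 7,  40,  55],
    [40, 300, 380],
    [18, 120, 150],
], dtype=float)
fp = np.array([210.0, 450.0, 120.0, 800.0, 330.0])

# Ordinary least squares with an intercept term.
A = np.hstack([np.ones((X.shape[0], 1)), X])
coef, *_ = np.linalg.lstsq(A, fp, rcond=None)

def estimate_fp(classes: int, attributes: int, methods: int) -> float:
    """Early FP estimate from counts a class diagram already provides."""
    return float(coef @ np.array([1.0, classes, attributes, methods]))

print(estimate_fp(15, 100, 130))  # rough size estimate for a new project
```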

    Software Development and Detector Characterization of the EUCLID Near-Infrared Spectro-Photometer

    The Euclid space mission, approved by the European Space Agency, is planned to perform an extensive survey over a six-year period, beginning at the end of 2020. The satellite will be equipped with two instruments, a visible imager and a near-infrared spectro-photometer (NISP). These instruments will make it possible to measure the shape and redshift of galaxies over a large fraction of the extragalactic sky, in order to study the evolution of cosmic structures, the accelerated expansion of the Universe, and the nature of dark matter. This thesis was carried out in the context of the INFN team participating in Euclid. I contributed to the development of software simulating the Euclid spacecraft's commanding of, and responses to, the NISP Instrument Control Unit. This simulator makes it possible to test and validate the functionalities of the Control Unit Application Software. My PhD activity abroad (six months) took place at the CPPM laboratory in Marseille, collaborating with the local group in charge of the characterization of the NISP infrared detectors. I took part in data acquisition shifts during calibration campaigns and carried out an analysis of the infrared detectors' dark current dependence on temperature. This analysis showed that the dark current of the infrared detectors is compliant with Euclid requirements and that its behaviour in the range of Euclid operating temperatures is well understood.
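    As a hedged illustration of the kind of temperature-dependence analysis mentioned above (the functional form and all data points here are assumptions, not results from the thesis), detector dark current is commonly modeled with an Arrhenius-like law, I_dark(T) = I_0 * exp(-E_a / (k_B * T)), which becomes a straight line in ln(I_dark) versus 1/T:

```python
# A minimal sketch, not the actual analysis: fit an Arrhenius-like law to
# dark current measured at several temperatures. Data points are invented.
import numpy as np

K_B = 8.617e-5  # Boltzmann constant in eV/K

# Hypothetical measurements: temperature (K) and dark current (e-/s/pixel).
T = np.array([80.0, 85.0, 90.0, 95.0, 100.0])
i_dark = np.array([0.002, 0.008, 0.028, 0.090, 0.260])

# Linearize: ln(I) = ln(I0) - E_a / (k_B * T); fit a line in 1/T.
slope, intercept = np.polyfit(1.0 / T, np.log(i_dark), 1)
e_a = -slope * K_B        # activation energy in eV
i0 = np.exp(intercept)    # extrapolated prefactor

print(f"activation energy ~ {e_a:.2f} eV, prefactor ~ {i0:.3g} e-/s")
```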

    Estimation model for software testing

    Testing of software applications and assurance of compliance have become an essential part of the Information Technology (IT) governance of organizations. Over the years, software testing has evolved into a specialization with its own practices and body of knowledge. Test estimation consists of estimating the effort and working out the cost for a particular level of testing, using various methods, tools, and techniques. An incorrect estimation often leads to an inadequate amount of testing, which, in turn, can lead to failures of software systems when they are deployed in organizations. This research work first established the state of the art of software test estimation and then proposed a Unified Framework for Software Test Estimation. Using this framework, a number of detailed estimation models were designed for functional testing. The ISBSG database was used to investigate the estimation of software testing. The analysis of the ISBSG data revealed three test productivity patterns representing economies and diseconomies of scale, based on which the characteristics of the corresponding projects were investigated. The three project groups related to the three productivity patterns were found to be statistically significant, and characterised by application domain, team size, elapsed time, and rigour of verification and validation throughout development. Within each project group, the variations in test effort could be explained by the activities carried out during development and the processes adopted for testing, in addition to functional size. Two new independent variables, the quality of the development processes (DevQ) and the quality of the testing processes (TestQ), were identified as influential in the estimation models. Portfolios of estimation models were built for different data sets using combinations of the three independent variables. At estimation time, an estimator can select the project group whose attributes best match the characteristics of the project to be estimated, and then use the model closest to it. The quality of each model was evaluated using established criteria such as R2, adjusted R2, MRE, MedMRE, and Mallows's Cp. Models were compared on their predictive performance, adopting new criteria proposed in this research work. Test estimation models using functional size measured in COSMIC Function Points exhibited better quality and resulted in more accurate estimates than models using functional size measured in IFPUG Function Points. A prototype tool has been developed using the R statistical programming language, incorporating the portfolios of estimation models. This test estimation tool can be used by industry and academia for estimating test effort.
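    As a sketch of the kind of model described above (the power-law form is a common way to capture economies and diseconomies of scale; the data and coefficients here are invented, and this is not the thesis's calibrated portfolio), test effort can be regressed on functional size in log-log space:

```python
# A minimal sketch under assumed data: effort = a * size^b, fitted per
# productivity pattern. An exponent b > 1 expresses diseconomies of scale,
# b < 1 economies of scale.
import numpy as np

# Hypothetical projects from one productivity pattern:
# functional size in COSMIC Function Points, test effort in person-hours.
size = np.array([100.0, 250.0, 400.0, 800.0, 1500.0])
effort = np.array([350.0, 1000.0, 1800.0, 4200.0, 9500.0])

# Fit log(effort) = log(a) + b * log(size) by ordinary least squares.
b, log_a = np.polyfit(np.log(size), np.log(effort), 1)
a = np.exp(log_a)

def estimate_test_effort(cfp: float) -> float:
    """Test effort estimate (person-hours) for a project of `cfp` CFP."""
    return a * cfp ** b

print(f"b = {b:.2f} ({'dis' if b > 1 else ''}economies of scale)")
print(estimate_test_effort(600.0))
```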

    NASA Tech Briefs, December 1994

    Topics: Test and Measurement; Electronic Components and Circuits; Electronic Systems; Physical Sciences; Materials; Computer Programs; Mechanics; Machinery; Fabrication; Mathematics and Information Sciences; Life Sciences; Books and Reports.

    NASA SBIR abstracts of 1991 phase 1 projects

    The objectives of 301 projects placed under contract by the Small Business Innovation Research (SBIR) program of the National Aeronautics and Space Administration (NASA) are described. These projects were selected competitively from among proposals submitted to NASA in response to the 1991 SBIR Program Solicitation. The basic document consists of edited, non-proprietary abstracts of the winning proposals submitted by small businesses. The abstracts are presented under the 15 technical topics within which Phase 1 proposals were solicited. Each project was assigned a sequential identifying number from 001 to 301, in order of its appearance in the body of the report. Appendixes are included to provide additional information about the SBIR program and to permit cross-referencing of the 1991 Phase 1 projects by company name, location by state, principal investigator, NASA Field Center responsible for management of each project, and NASA contract number.

    Operating, testing and evaluating hybridized silicon P-I-N arrays

    Use of CCD detector arrays as visible imagers in space telescopes has been problematic: charge-coupled devices rapidly deteriorate due to damage from the high-radiation environment of space. CMOS-based imagers, which do not transfer charge, offer an alternative technology that is more tolerant of a high-radiation environment. This dissertation evaluates the performance of four pathfinder 1K by 1K hybridized silicon P-I-N detector arrays, made by Raytheon under subcontract to RIT, as candidates for use in a space telescope application. Silicon P-I-N arrays have photon capture properties similar to back-thinned CCDs and should be far more robust than CCDs in the high-radiation environment of space. The first two devices, 180 µm thick prototypes, demonstrate crisp imaging with lateral diffusion of 5 microns at 35 Kelvin. The nodal capacitance is estimated to be 41 fF, and the quantum efficiency is remarkably good (typically > 0.75) over a spectral range from 410 to 940 nm. A second pair of devices, fabricated with detectors thinned to 40 µm, exhibits similar performance but with blue-enhanced spectral response from an improved anti-reflective coating. Operating, testing, and evaluating imaging devices similar to the ones tested here is also problematic: precise, low-noise, flexible control systems are required to operate the devices, and interpretation of the data is not always straightforward. In the process of evaluating these pathfinder devices, this dissertation surveys and advances systems engineering and analysis (i.e., the application of linear and stochastic system theory) generally useful for operating and evaluating similar hybridized staring focal plane arrays. Most significantly, a previously unaccounted-for effect causing significant errors in the measurement of quantum efficiency, inter-pixel capacitive coupling, is discovered, described, measured, and compensated for in the P-I-N devices. This coupling is also shown to be measurably present in hybridized indium antimonide arrays. Simulations of inter-pixel coupling are also performed and predict the coupling actually observed in the P-I-N devices. Additional analysis tools for characterizing these devices are developed. An optimal estimator of the signal on a multiply-sampled integrating detector in the presence of both photon and read noise is derived, modeling a pixel as a simple linear system, and is shown to agree with known limiting cases. Theories of charge diffusion in detectors are surveyed, and a system model based on the steady-state diffusion equation, infinite carrier lifetime, and contiguous pixels is derived and compared to other models. Simulations validate this theory and show the effects of finite mean free path, finite lifetime, and non-contiguous pixels upon it. A simple method for modeling and evaluating MTF from edge spread is developed and used. A model that separately measures system and device noise in multichannel systems is developed and shown to agree with measurements taken with the same device in both a quiet and a somewhat noisy system. Hardware and software systems that operate these devices are also surveyed, and 'agile' technologies and development methodologies appropriate for detector research are employed to build a simple and flexible array control system, primarily from open-source components. This system is used to collect much of the experimental data.
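    The inter-pixel capacitive coupling effect described above can be illustrated with a small simulation (a minimal sketch under simplifying assumptions: the 3x3 kernel form and the coupling fraction alpha are hypothetical, not measured figures for these devices). Because coupling correlates neighboring pixels, it suppresses the pixel-to-pixel variance of a shot-noise-limited flat field while conserving the mean, which biases variance-based gain and hence quantum-efficiency measurements:

```python
# A minimal sketch, not the dissertation's analysis: model inter-pixel
# capacitive coupling as convolution with a 3x3 kernel that shares a
# fraction `alpha` of each pixel's signal with its four nearest neighbors.
# `alpha` is an assumed value, not a measured device figure.
import numpy as np
from scipy.ndimage import convolve

rng = np.random.default_rng(0)
alpha = 0.02  # assumed nearest-neighbor coupling fraction

# Shot-noise-limited flat field: mean 10,000 e-, Poisson statistics.
flat = rng.poisson(10_000, size=(512, 512)).astype(float)

# Coupling kernel sums to 1, so the mean signal is conserved.
kernel = np.array([[0.0,   alpha,         0.0],
                   [alpha, 1 - 4 * alpha, alpha],
                   [0.0,   alpha,         0.0]])
coupled = convolve(flat, kernel, mode="wrap")

# For white noise, convolution scales the variance by sum(kernel**2), so
# the coupled image's variance is suppressed. A photon-transfer analysis
# that ignores this infers too high a gain and thus an inflated QE.
suppression = coupled.var() / flat.var()
print(f"variance suppression ~ {suppression:.3f}"
      f" (theory: {np.square(kernel).sum():.3f})")
```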