2,094 research outputs found

    Approximate COSMIC functional size - guideline for approximate COSMIC functional size measurement

    Get PDF
    The COSMIC method provides a standardized way of measuring the functional size of software from the functional domains commonly referred to as 'business application' or 'Management Information Systems' (MIS) software and 'real-time' software, and hybrids of these. In practice it is often sufficient to measure a functional size approximately. Typical situations where such a need arises are early in the life of a project, before the functional user requirements ('FUR') have been specified down to the level of detail at which precise size measurement is possible, or when a measurement is needed but there is insufficient time, or no need, to measure the required size using the standard method. The guideline describes the current state of the art in approximate COSMIC functional size measurement. All proposed COSMIC approximation methods rely on determining some average of the size(s) and/or number(s) of functional processes. The fact that the size of a single functional process has no finite upper limit is probably the reason why multiple COSMIC approximation methods have been developed for different types of software. The guideline therefore describes a number of approximation methods with their pros and cons, their recommended areas of application, and their validity, rather than documenting a single COSMIC approximation method.
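    Most of the approximation methods surveyed reduce to multiplying a count of expected functional processes by an average process size calibrated on past measurements. A minimal Python sketch of that average-size idea, using illustrative calibration numbers (none of the values below come from the guideline):

        import statistics

        def approximate_cosmic_size(process_count, avg_process_size_cfp):
            # Average-size approximation: estimated size in CFP equals the
            # number of functional processes times a calibrated average size.
            return process_count * avg_process_size_cfp

        # Calibrate the average on locally measured processes (hypothetical data).
        measured_process_sizes_cfp = [6, 8, 11, 7, 9]
        avg_size = statistics.mean(measured_process_sizes_cfp)  # 8.2 CFP

        # Early estimate for a new project expecting roughly 40 functional processes.
        print(approximate_cosmic_size(40, avg_size))  # 328.0 CFP

    In practice both the calibration set and the process count come from the organization's own history, which is why the guideline stresses each method's validity and recommended area of application.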

    Towards making functional size measurement easily usable in practice

    Get PDF
    Functional Size Measurement methods – like the IFPUG Function Point Analysis and COSMIC methods – are widely used to quantify the size of applications. However, the measurement process is often too long or too expensive, or it requires more knowledge than is available when development effort estimates are due. To overcome these problems, simplified measurement methods have been proposed. This research explores easily usable functional size measurement methods, aiming to improve efficiency, reduce difficulty and cost, and make functional size measurement widely adopted in practice. The first stage of the research involved the study of functional size measurement methods (in particular Function Point Analysis and COSMIC), simplified methods, and measurement based on measurement-oriented models. Then, we modeled a set of applications in a measurement-oriented way and obtained UML models suitable for functional size measurement. From these UML models we derived both functional size measures and object-oriented measures. Using these measures it was possible to: 1) Evaluate existing simplified functional size measurement methods and derive our own simplified model. 2) Explore whether simplified methods can be used in various stages of modeling and evaluate their accuracy. 3) Analyze the relationship between functional size measures and object-oriented measures. In addition, the conversion between FPA and COSMIC was studied as an alternative simplified functional size measurement process. Our research revealed that: 1) In general it is possible to size software via simplified measurement processes with acceptable accuracy. In particular, the simplification of the measurement process allows the measurer to skip the function weighting phases, which are usually expensive, since they require a thorough analysis of the details of both data and operations. The models obtained from our dataset yielded results that are similar to those reported in the literature. All simplified measurement methods that use predefined weights for all the transaction and data types identified in Function Point Analysis provided similar results, characterized by acceptable accuracy. On the contrary, methods that rely on just one of the elements that contribute to functional size tend to be quite inaccurate. In general, different methods showed different accuracy for Real-Time and non-Real-Time applications. 2) It is possible to write progressively more detailed and complete UML models of user requirements that provide the data required by the simplified COSMIC methods. These models yield progressively more accurate measures of the modeled software. Initial measures are based on simple models and are obtained quickly and with little effort. As the models grow in completeness and detail, the measures increase in accuracy. Developers who use UML for requirements modeling can obtain early estimates of the applications' sizes at the beginning of the development process, when only very simple UML models have been built, and can obtain increasingly accurate size estimates as knowledge of the products increases and the UML models are refined accordingly. 3) Both Function Point Analysis and COSMIC functional size measures appear correlated to object-oriented measures. In particular, associations with basic object-oriented measures were found: Function Points appear associated with the number of classes, the number of attributes, and the number of methods; CFP appear associated with the number of attributes. This result suggests that even a very basic UML model, like a class diagram, can support size measures that appear equivalent to functional size measures (which are much harder to obtain). Moreover, object-oriented measures can be obtained automatically from models, thus dramatically decreasing the measurement effort in comparison with functional size measurement. In addition, we proposed a conversion method between Function Points and COSMIC based on analytical criteria. Our research has expanded the knowledge on how to simplify the measurement of the functional size of software, i.e., the measure of functional user requirements. Besides providing information immediately usable by developers, the research also presents examples of analysis that can be replicated by other researchers, to increase the reliability and generality of the results.
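    A common family of simplified methods drops the per-function weighting step and applies one fixed average weight per transaction and data type. A minimal sketch with illustrative weights (placeholders, not the weights derived in this research):

        # Simplified Function Point estimate: count the five IFPUG element types,
        # then apply a fixed average weight per type instead of assessing the
        # complexity of every individual function.
        AVG_WEIGHTS = {      # illustrative averages, not calibrated values
            "EI": 4.0,   # external inputs
            "EO": 5.0,   # external outputs
            "EQ": 4.0,   # external inquiries
            "ILF": 7.0,  # internal logical files
            "EIF": 5.0,  # external interface files
        }

        def simplified_ufp(counts):
            # Unadjusted FP from element counts, e.g. {"EI": 10, "ILF": 4, ...}.
            return sum(AVG_WEIGHTS[t] * n for t, n in counts.items())

        print(simplified_ufp({"EI": 10, "EO": 6, "EQ": 5, "ILF": 4, "EIF": 2}))  # 128.0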

    Improve software defect estimation with Six Sigma defect measures: empirical studies of imputation techniques on the ISBSG data repository with a high ratio of missing data

    Get PDF
    This research reports on a set of empirical studies tackling the issue of improving software defect estimation models with Sigma defect measures (e.g., Sigma levels), using the ISBSG data repository, which has a high ratio of missing data. Three imputation techniques were selected for this work: single imputation, regression imputation, and stochastic regression imputation. These techniques were used to impute the missing data in the variable 'Total Number of Defects' and were first compared with each other using common verification criteria. A further verification strategy was developed to compare and assess the performance of the selected imputation techniques by verifying the predictive accuracy of the software defect estimation models obtained from the imputed datasets. A Sigma-based classification was then carried out on the dataset imputed with the best-performing technique. This classification was used to determine at which Sigma levels software projects can best be used to build software defect estimation models, resulting in Sigma-based datasets with Sigma ranges (e.g., a dataset of software projects ranging from 3 Sigma to 4 Sigma). Finally, software defect estimation models were built on the Sigma-based datasets.
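    The three techniques differ in what fills a missing value: a constant such as the observed mean, a regression prediction, or a regression prediction plus random noise that restores the residual variance plain regression imputation suppresses. A minimal numeric sketch, assuming a single illustrative predictor for the 'Total Number of Defects' variable (the data and predictor choice are hypothetical):

        import numpy as np

        rng = np.random.default_rng(0)

        # Illustrative data: x = project size, y = total number of defects (NaN = missing).
        x = np.array([100, 250, 400, 550, 700, 850], dtype=float)
        y = np.array([12, 30, np.nan, 65, np.nan, 101], dtype=float)
        obs = ~np.isnan(y)

        # 1) Single (mean) imputation: every missing value becomes the observed mean.
        y_mean = np.where(obs, y, np.nanmean(y))

        # 2) Regression imputation: fit y ~ x on complete cases, predict the gaps.
        slope, intercept = np.polyfit(x[obs], y[obs], 1)
        y_reg = np.where(obs, y, slope * x + intercept)

        # 3) Stochastic regression imputation: the regression prediction plus
        #    residual-scale noise, preserving variance in the imputed variable.
        resid_sd = np.std(y[obs] - (slope * x[obs] + intercept), ddof=2)
        y_sreg = np.where(obs, y, slope * x + intercept + rng.normal(0.0, resid_sd, x.size))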

    Development of a scaling factors framework to improve the approximation of software functional size with COSMIC - ISO 19761

    Get PDF
    Many software development organizations strive to deliver high-quality products while balancing customer satisfaction, schedule, and budget. Estimating the development effort of software projects is one of the major challenges these organizations face, and it is usually encountered in the earliest phases of the development life cycle. To meet this challenge, software development organizations use early estimation techniques to obtain effort estimates up front (i.e., a priori estimates) to help project managers and technical leads plan and manage projects. One approach to a priori effort estimation is based on approximating the expected functionality of the software. This requires a measurement method to quantify that functionality: the literature refers to measuring the functional size of software products, including business applications. Several international standards have been adopted for measuring the functional size of software, such as ISO 19761: COSMIC. However, in the early phases of the software development life cycle, and more specifically in the process of estimating the functional size of the software, complete and detailed specifications of the software requirements are typically absent, which raises several challenges. For example: the level of granularity (i.e., the level of detail) of the specification of the software's functional requirements is identified subjectively, using intuition, experience, and/or the opinions of domain experts; scaling factors are not assigned; and there is no standardized notation defining a standard set of scaling factors that requirements engineers can assign to the functional requirements specifications of new software development projects in order to identify their level of granularity. These challenges affect the estimation of the functional size of new software development projects, since the functional size estimate is one of the main inputs to the effort estimation process, and they prevent software project managers from building realistic effort estimation models for new development projects. The motivation of this research project is to help software development organizations, and in particular project managers and technical leads, build more accurate effort estimation models by improving one of the inputs to the effort estimation process, so as to improve the planning, management, and development of software in the early phases of the development life cycle. The goal of this research project is to improve one of the inputs to the effort estimation process, and in particular the quality of the approximation of the functional size of new software development projects. The main research objective is to design a framework to be used by requirements engineers to assign scaling factors to early versions of the software functional requirements specification in order to identify their level of granularity, a task that typically takes place after the feasibility study stage of new software development projects. To achieve this objective, the main phases of the research methodology are: • the exploratory research phase, which studies the impact of the research problem on the approximation of functional size; • the framework design phase, which designs the framework that assigns scaling factors to functional requirements specifications to identify their granularity levels; and • the framework verification phase, which verifies the usability of the framework with groups of participants having different experience profiles, and verifies its applicability with a variety of case studies representing different software systems. The main outcome of this research project is a framework consisting of: • a meta-model identifying the concepts, and the relationships among them, that requirements engineers must collect to reach the full functional specification of the software requirements; and • criteria for identifying the granularity level of a software requirements specification and for assigning scaling factors that classify those granularity levels. The usability of the framework was verified with the same case study by three groups of participants from the software engineering industry, while its applicability was verified with four case studies.
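    The framework pairs a meta-model of requirements concepts with criteria that map how completely those concepts are documented onto a granularity level. A minimal sketch of that pairing, with hypothetical concept names and level criteria (the thesis's actual meta-model and criteria are not reproduced in the abstract):

        from dataclasses import dataclass, field

        # Hypothetical meta-model concepts a FUR specification may document.
        CONCEPTS = {"functional_users", "triggering_events",
                    "functional_processes", "data_groups", "data_movements"}

        @dataclass
        class FurSpecification:
            name: str
            specified: set = field(default_factory=set)  # concepts documented so far

            def scaling_factor(self):
                # Illustrative criteria: the more meta-model concepts are
                # documented, the finer the granularity level assigned.
                coverage = len(self.specified & CONCEPTS) / len(CONCEPTS)
                if coverage == 1.0:
                    return "level 3: measurable at the data-movement level"
                if coverage >= 0.6:
                    return "level 2: functional processes identified"
                return "level 1: feasibility-study outline"

        spec = FurSpecification("billing module",
                                {"functional_users", "triggering_events",
                                 "functional_processes"})
        print(spec.scaling_factor())  # level 2: functional processes identified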

    Universal upper limit on inflation energy scale from cosmic magnetic field

    Full text link
    Recently, observational lower bounds on the strength of cosmic magnetic fields were reported, based on the gamma-ray flux from distant blazars. If inflation is responsible for the generation of such magnetic fields, then the inflation energy scale is bounded from above as $\rho_{\rm inf}^{1/4} < 2.5\times10^{-7}\,M_{\rm Pl}\times(B_{\rm obs}/10^{-15}\,{\rm G})^{-2}$ in a wide class of inflationary magnetogenesis models, where $B_{\rm obs}$ is the observed strength of cosmic magnetic fields. The tensor-to-scalar ratio is correspondingly constrained as $r < 10^{-19}\times(B_{\rm obs}/10^{-15}\,{\rm G})^{-8}$. Therefore, if the reported strength $B_{\rm obs}\geq 10^{-15}\,{\rm G}$ is confirmed and if any signatures of gravitational waves from inflation are detected in the near future, then our result indicates some tension between inflationary magnetogenesis and observations. Comment: 12 pages, v2: several discussions and references added, version accepted for publication by JCAP
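    As a worked check of the quoted scaling, evaluated at the reported lower bound (a hypothetical input chosen only to exercise the formulas):

        % At B_obs = 10^{-15} G the ratio factor is 1, so:
        \rho_{\rm inf}^{1/4} < 2.5\times10^{-7}\,M_{\rm Pl}\times 1^{-2}
                             = 2.5\times10^{-7}\,M_{\rm Pl},
        \qquad r < 10^{-19}\times 1^{-8} = 10^{-19}.
        % A ten-times stronger field, B_obs = 10^{-14} G, tightens the bounds by
        % factors of 10^{-2} and 10^{-8} respectively, giving r < 10^{-27}.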

    Joint analysis constraints on the physics of the first galaxies with low frequency radio astronomy data

    Full text link
    Observations of the first billion years of cosmic history are currently limited. We demonstrate, using a novel machine learning technique, the synergy between observations of the sky-averaged 21-cm signal from neutral hydrogen and interferometric measurements of the corresponding spatial fluctuations. By jointly analysing data from SARAS3 (redshift $z\approx 15$-$25$) and limits from HERA ($z\approx 8$ and $10$), we show that such a synergetic analysis provides tighter constraints on the astrophysics of galaxies 200 million years after the Big Bang than can be achieved with the individual data sets. Although our constraints are weak, this is the first time data from a sky-averaged 21-cm experiment and a power spectrum experiment have been analysed together. In synergy, the two experiments leave only $64.9^{+0.3}_{-0.1}\%$ of the explored broad theoretical parameter space consistent with the joint data set, in comparison to $92.3^{+0.3}_{-0.1}\%$ for SARAS3 and $79.0^{+0.5}_{-0.2}\%$ for HERA alone. We use the joint analysis to constrain the star formation efficiency, minimum halo mass for star formation, X-ray luminosity of early emitters, and the radio luminosity of early galaxies. The joint analysis disfavours at 68% confidence a combination of galaxies with X-ray emission that is $\lesssim 33$ and radio emission that is $\gtrsim 32$ times as efficient as that of present-day galaxies. We disfavour at 95% confidence scenarios in which power spectra are $\geq 126\,{\rm mK}^2$ at $z=25$ and the sky-averaged signals are $\leq -277$ mK. Comment: Submitted
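    The statistical core of such a joint analysis is multiplying the two experiments' likelihoods over a shared astrophysical parameter space (the paper does this with machine-learning emulators; the sketch below substitutes toy Gaussian likelihoods and hypothetical parameters):

        import numpy as np

        rng = np.random.default_rng(1)

        # Toy shared parameter space: (star formation efficiency, log10 halo mass).
        theta = rng.uniform([0.0, 7.0], [0.5, 10.0], size=(100_000, 2))

        def log_like_saras3(t):  # toy stand-in for the SARAS3 emulated likelihood
            return -0.5 * ((t[:, 0] - 0.1) / 0.1) ** 2

        def log_like_hera(t):    # toy stand-in for the HERA emulated likelihood
            return -0.5 * ((t[:, 1] - 8.5) / 0.6) ** 2

        # Independent data sets, shared parameters: joint log-likelihood is the sum.
        log_joint = log_like_saras3(theta) + log_like_hera(theta)

        def surviving_fraction(log_l, cut=-2.0):
            # Toy criterion: fraction of the explored space not disfavoured.
            return np.mean(log_l - log_l.max() > cut)

        for name, ll in [("SARAS3", log_like_saras3(theta)),
                         ("HERA", log_like_hera(theta)),
                         ("joint", log_joint)]:
            print(name, surviving_fraction(ll))
        # The joint fraction is the smallest, mirroring the abstract's
        # 64.9% (joint) versus 92.3% (SARAS3) and 79.0% (HERA).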

    Thermal Control System to Easily Cool the GAPS Balloon-borne Instrument on the Ground

    Full text link
    This study developed a novel thermal control system to cool the detectors of the General AntiParticle Spectrometer (GAPS) before its flights. GAPS is a balloon-borne cosmic-ray observation experiment. In its payload, GAPS contains over 1000 silicon detectors that must be cooled below -40 °C. All detectors are thermally coupled to a unique heat-pipe system (HPS) that transfers heat from the detectors to a radiator. The radiator is designed to be cooled below -50 °C during the flight by exposure to space. The pre-flight state of the detectors is checked on the ground at 1 atm and ambient room temperature, but the radiator cannot be similarly cooled. The authors have developed a ground cooling system (GCS) to chill the detectors for ground testing. The GCS consists of a cold plate, a chiller, and insulating foam. The cold plate is designed to be attached to the radiator and cooled by a coolant pumped by the chiller. The payload configuration, including the HPS, can be the same as that of the flight. The GCS design was validated by thermal tests using a scale model. The GCS design is simple and provides a practical guideline, including a simple estimation of appropriate thermal insulation thickness, which can be easily adapted to other applications. Comment: 8 pages, 14 figures, 3 tables
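    The insulation-thickness estimate the abstract mentions can be illustrated with a one-dimensional conduction balance: the foam must keep the heat leaking from the room into the cold plate within the chiller's spare cooling power. A sketch with hypothetical numbers (the actual GAPS values are not given in the abstract):

        # One-dimensional conduction estimate: Q = k * A * dT / t, solved for the
        # foam thickness t that keeps the conductive leak below the chiller margin.
        def insulation_thickness(k, area, dt, max_leak):
            # Minimum thickness in metres for a leak below max_leak watts.
            return k * area * dt / max_leak

        k_foam = 0.03           # W/(m K), typical closed-cell foam (assumed)
        area = 2.0              # m^2 of cold surface exposed to ambient (assumed)
        dt = 75.0               # K, ambient ~25 C versus cold plate ~-50 C
        chiller_margin = 150.0  # W of cooling power reserved for leaks (assumed)

        print(insulation_thickness(k_foam, area, dt, chiller_margin))  # 0.03 m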