Business goals, user needs, and requirements: A problem frame-based view
Background: It is well known that the analysis of requirements involves several stakeholders and perspectives. Very often, several points of view at different abstraction levels have to be taken into account: all these features make requirements analysis a complex task. Such intrinsic complexity makes it difficult to understand several of the basic concepts that underlie requirements engineering. In fact, there is some confusion, especially in industry, about what a user requirement really is, how user requirements differ from user needs, and how both relate to business processes.
Objective: The paper aims at clarifying the aforementioned issues, by providing a systematic and clear method for establishing requirements hierarchies.
Method: The problem of describing requirements hierarchies is tackled using the problem frames concepts and notation. A case study is used throughout the paper to illustrate the proposed approach.
Results: The description of requirements at different levels of abstraction and the construction of requirements hierarchies are illustrated. The resulting models are coherent with the reference model for requirements specifications and with problem frames. An analysis process that is aware of the differences between user needs and requirements is also provided, to illustrate how high-level goals are refined into requirements that can be satisfied by a hardware/software machine.
Conclusions: The proposed method appears promising for modeling, studying, and evaluating the relationships between business processes and the strategies for achieving business goals through the use of information technology.
Using Functional Complexity Measures in Software Development Effort Estimation
Several definitions of measures that aim at representing the size of software requirements are currently available. These measures have gained a relevant role, since they are among the few objective measures upon which effort estimation can be based. However, traditional Functional Size Measures do not take into account the amount and complexity of the elaboration required, concentrating instead on the amount of data accessed or moved. This is a problem, since the amount and complexity of the required data elaboration affect the implementation effort but are not adequately represented by current size measures, including the standardized ones. Recently, researchers have proposed a few approaches to measuring aspects of user requirements that are believed to be related to functional complexity and/or data elaboration. In this paper, we consider some of these proposed measures and compare them with respect to their ability to predict development effort, especially when used in combination with measures of functional size. A few methods for estimating software development effort, based both on model building and on analogy, are experimented with, using different types of functional size and elaboration complexity measures. All the most significant models obtained were based on a notion of computation density derived from the number of computation flows in functional processes. When using estimation by analogy, considering functional complexity in the selection of analogue projects improved accuracy in all the evaluated cases. In conclusion, functional complexity appears to be a factor that affects development effort; accordingly, whatever method is used for effort estimation, it is advisable to take functional complexity into due consideration.
An Empirical Evaluation of Simplified Function Point Measurement Processes
Function Point Analysis is widely used, especially to quantify the size of applications in the early stages of development, when effort estimates are needed. However, the measurement process is often too long or too expensive, or it requires more knowledge than is available when development effort estimates are due. To overcome these problems, simplified methods have been proposed to measure Function Points. We used simplified methods for sizing both “traditional” and Real-Time applications, with the aim of evaluating the accuracy of the sizing with respect to full-fledged Function Point Analysis. To this end, a set of projects that had already been measured by means of Function Point Analysis was measured using a few simplified processes, including those proposed by NESMA, the Early&Quick Function Points, the ISBSG average weights, and others; the resulting size measures were then compared. We also derived simplified size models by analyzing the dataset used for the experimentation. In general, all the methods that provide predefined weights for all the transaction and data types identified in Function Point Analysis yielded similar results, characterized by acceptable accuracy. On the contrary, methods that rely on just one of the elements that contribute to size tend to be quite inaccurate. In general, different methods show different accuracy for Real-Time and non-Real-Time applications. The results of the analysis reported here show that it is generally possible to size software via simplified measurement processes with acceptable accuracy. In particular, the simplification of the measurement process allows the measurer to skip the function weighting phases, which are usually expensive, since they require a thorough analysis of the details of both data and operations. Deriving our own models from the project datasets proved possible, and yielded results similar to those obtained via the methods proposed in the literature.
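The simplification described above, replacing the function weighting phases with a predefined average weight per transaction and data type, can be sketched as follows. The weight values used here are illustrative assumptions for the sketch, not the official ISBSG averages:

```python
# Sketch of a simplified Function Point sizing process: instead of analyzing
# each function to assign it a low/average/high weight, every Base Functional
# Component of a given type gets one fixed average weight.
# NOTE: these weight values are illustrative assumptions, not official figures.
AVG_WEIGHTS = {
    "EI": 4.2,   # External Inputs
    "EO": 5.0,   # External Outputs
    "EQ": 3.8,   # External Inquiries
    "ILF": 7.4,  # Internal Logical Files
    "EIF": 5.4,  # External Interface Files
}

def simplified_ufp(counts):
    """counts: dict mapping BFC type -> number of functions of that type.
    Returns the unadjusted Function Point size under the fixed weights."""
    return sum(AVG_WEIGHTS[t] * n for t, n in counts.items())

# Example: 10 inputs, 5 outputs, 4 queries, 6 internal and 2 external files.
size = simplified_ufp({"EI": 10, "EO": 5, "EQ": 4, "ILF": 6, "EIF": 2})
```

Only the counts of each function type are needed, which is why such processes can skip the expensive analysis of the internals of data and operations.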
Measuring the Functional Size of Real-Time and Embedded Software: a Comparison of Function Point Analysis and COSMIC
The most widely used methods and tools for estimating the cost of software development require that the functional size of the program to be developed be measured, either in “traditional” Function Points or in COSMIC Function Points. The latter were proposed to solve some shortcomings of the former, including not being well suited for representing the functionality of real-time and embedded software. However, little evidence exists to support the claim that COSMIC Function Points are better suited than traditional Function Points for the measurement of real-time and embedded applications. Our goal is to compare how well the two methods can be used for the functional measurement of real-time and embedded systems. We applied both measurement methods to a number of situations that occur quite often in real-time and embedded software. Our results seem to indicate that, overall, COSMIC Function Points are better suited than traditional Function Points for measuring characteristic features of real-time and embedded systems. Our results also provide practitioners with useful indications about the pros and cons of functional size measurement methods when confronted with specific features of real-time and embedded software.
Towards a simplified definition of Function Points
Background. COSMIC Function Points and traditional Function Points (i.e., IFPUG Function Points and more recent variations of Function Points, such as NESMA and FISMA) are probably the best known and most widely used Functional Size Measurement methods. The relationship between the two kinds of Function Points still needs to be investigated. If traditional Function Points could be accurately converted into COSMIC Function Points and vice versa, then, by measuring one kind of Function Points, one would be able to obtain the other kind, and one might measure one or the other kind interchangeably. Several studies have been performed to evaluate whether a correlation or a conversion function between the two measures exists. Specifically, it has been suggested that the relationship between traditional Function Points and COSMIC Function Points may not be linear, i.e., the value of COSMIC Function Points seems to increase more than proportionally to an increase of traditional Function Points.
Objective. This paper aims at verifying this hypothesis using available datasets that collect both FP and CFP size measures.
Method. Rigorous statistical analysis techniques are used, specifically Piecewise Linear Regression, whose applicability conditions are systematically checked. The Piecewise Linear Regression curve is a series of interconnected segments. In this paper, we focused on Piecewise Linear Regression curves composed of two segments. We also used Linear and Parabolic Regressions, to check if and to what extent Piecewise Linear Regression may provide an advantage over other regression techniques. We used two categories of regression techniques: Ordinary Least Squares regression is based on the usual minimization of the sum of squares of the residuals, or, equivalently, on the minimization of the average squared residual; Least Median of Squares regression is a robust regression technique that is based on the minimization of the median squared residual. Using a robust regression technique helps filter out the excessive influence of outliers.
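The difference between the two regression categories can be illustrated with a small sketch: OLS minimizes the mean squared residual and is therefore pulled by outliers, while Least Median of Squares minimizes the median squared residual and largely ignores them. The synthetic data, grid search, and parameter ranges below are assumptions for illustration only, not the paper's actual analysis:

```python
import numpy as np

# Synthetic FP and CFP sizes with a roughly linear relation plus a few outliers
rng = np.random.default_rng(0)
fp = rng.uniform(50, 500, 40)
cfp = 1.2 * fp + rng.normal(0, 20, 40)
cfp[:3] += 400  # gross outliers that distort a least-squares fit

# Ordinary Least Squares: minimizes the sum (equivalently, mean) of squared residuals
ols_slope, ols_intercept = np.polyfit(fp, cfp, 1)

# Least Median of Squares: minimizes the MEDIAN squared residual,
# here via a brute-force grid search over candidate lines (illustrative only)
best = (np.inf, None, None)
for a in np.linspace(0.5, 2.0, 151):
    for b in np.linspace(-100, 100, 81):
        med = np.median((cfp - (a * fp + b)) ** 2)
        if med < best[0]:
            best = (med, a, b)
_, lms_slope, lms_intercept = best

# The robust LMS fit stays close to the true slope (1.2) despite the outliers
```

Because the median is unaffected by a minority of extreme residuals, the LMS line tracks the bulk of the data, which is the sense in which robust regression "filters out the excessive influence of outliers."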
Results. It appears that the analysis of the relationship between traditional Function Points and COSMIC Function Points based on the aforementioned data analysis techniques yields valid, statistically significant models. However, the various available datasets yield different results. In practice, we obtained statistically valid linear, piecewise linear, and non-linear conversion formulas for several datasets. In general, none of these is better than the others in a statistically significant manner.
Conclusions. Practitioners interested in the conversion of FP measures into CFP (or vice versa) cannot just pick a conversion model and be sure that it will yield the best results. All the regression models we tested provide good results with some datasets. In practice, all the models described in the paper, in particular both linear and non-linear ones, should be evaluated in order to identify the ones best suited for the specific dataset at hand.
An Empirical Evaluation of Effort Prediction Models Based on Functional Size Measures
Software development effort estimation is among the most interesting issues for project managers, since reliable estimates are at the base of good planning and project control. Several different techniques have been proposed for effort estimation, and practitioners need evidence, based on which they can choose accurate estimation methods.
The work reported here aims at evaluating the accuracy of software development effort estimates that can be obtained via popular techniques, such as those using regression models and those based on analogy.
The functional size and the development effort of twenty software development projects were measured, and the resulting dataset was used to derive effort estimation models and evaluate their accuracy.
Our data analysis shows that estimation based on the closest analogues provides better results for most models, but very bad estimates in a few cases. To mitigate this behavior, correcting for regression toward the mean proved effective.
According to the results of our analysis, it is advisable to apply a regression-toward-the-mean correction when estimates are based on the closest analogues. Once corrected, the accuracy of analogy-based estimation is not substantially different from that of regression-based models.
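One common form of the regression-toward-the-mean correction pulls the closest analogue's productivity toward the dataset mean before deriving effort from it. The formula and figures below are a hypothetical sketch of this general idea, not necessarily the exact correction used in the paper:

```python
def rtm_corrected_estimate(size_new, analogue_productivity, mean_productivity, r):
    """Analogy-based effort estimate with a regression-toward-the-mean correction.

    The analogue's productivity is pulled toward the dataset mean in proportion
    to r (e.g. the estimated correlation between past and future productivity).
    NOTE: this specific formula is an assumption made for illustration.
    """
    adjusted = mean_productivity + r * (analogue_productivity - mean_productivity)
    return size_new / adjusted  # effort = size / productivity

# Hypothetical example: new project of 200 FP; the closest analogue delivered
# 0.8 FP/person-hour, the portfolio mean is 0.5 FP/person-hour, r = 0.6.
effort = rtm_corrected_estimate(200, 0.8, 0.5, 0.6)
```

With r = 1 the analogue is trusted completely; with r = 0 the estimate falls back to the mean productivity. Intermediate r values damp the extreme (very good or very bad) analogues that otherwise cause the occasional very bad estimates.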
A Report on Using Simplified Function Point Measurement Processes
Background: Function Point Analysis is widely used, especially to quantify the size of applications in the early stages of development, when effort estimates are needed. However, the measurement process is often too long or too expensive, or requires more knowledge than is available when development effort estimates are due. To overcome these problems, simplified methods have been proposed to measure Function Points. Objectives: The work reported here concerns the experimentation of simplified functional size measurement methods in the sizing of both “traditional” and real-time applications. The goal is to evaluate the accuracy of the sizing with respect to full-fledged Function Point Analysis. Method: A set of projects that had already been measured by means of Function Point Analysis was measured using the NESMA and Early&Quick Function Points simplified processes; the resulting size measures were compared. Results: While the NESMA indicative method appears to considerably overestimate the size of the considered applications, the other methods provide much more accurate estimates of functional size. EQFP methods proved more accurate in estimating the size of non-Real-Time applications, while the NESMA estimated method proved fairly accurate in estimating both Real-Time and non-Real-Time applications. Conclusions: The results of the experiment reported here show that it is generally possible to size software via simplified measurement processes with acceptable accuracy. In particular, the simplification of the measurement process allows the measurer to skip the function weighting phases, which are usually expensive, since they require a thorough analysis of the internals of both data and operations.
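For reference, the NESMA indicative method mentioned above sizes an application from its logical data files alone, with a fixed contribution per file and no transaction analysis at all. The sketch below assumes the commonly cited contributions of 35 FP per internal and 15 FP per external logical file:

```python
def nesma_indicative_fp(n_ilf, n_eif):
    """NESMA 'indicative' size: counts only logical data files, with a fixed
    contribution per file (assumed here: 35 for each Internal Logical File,
    15 for each External Interface File), skipping both the identification
    of transactions and the weighting phase entirely."""
    return 35 * n_ilf + 15 * n_eif

# An application with 6 internal and 2 external logical files:
size = nesma_indicative_fp(6, 2)  # 240 unadjusted FP
```

Because every data file is assumed to drag along an average amount of transactional functionality, the method is extremely cheap to apply but, as the results above indicate, tends to overestimate when that assumption does not hold.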
Diverse reductive dehalogenases are associated with Clostridiales-enriched microcosms dechlorinating 1,2-dichloroethane
The achievement of successful biostimulation of active microbiomes for the cleanup of a polluted site is strictly dependent on knowledge of the key microorganisms equipped with the relevant catabolic genes responsible for the degradation process. In this work, we present the characterization of the bacterial community that developed in anaerobic microcosms of groundwater polluted with 1,2-dichloroethane (1,2-DCA) after biostimulation with lactate as the electron donor. Through a multilevel analysis, we have assessed (i) the structure of the bacterial community; (ii) the identification of putative dehalorespiring bacteria; (iii) the characterization of functional genes encoding putative 1,2-DCA reductive dehalogenases (RDs). Following the biostimulation treatment, the structure of the bacterial community underwent a notable change in the main phylotypes, with the enrichment of representatives of the order Clostridiales. Through PCR targeting conserved regions within known RD genes, four novel variants of RDs previously associated with the reductive dechlorination of 1,2-DCA were identified in the metagenome of the Clostridiales-dominated bacterial community.
Field and experimental data indicate that the eastern cottontail (Sylvilagus floridanus) is susceptible to infection with European brown hare syndrome (EBHS) virus and not with rabbit haemorrhagic disease (RHD) virus
The eastern cottontail (Sylvilagus floridanus) is an American lagomorph. In 1966, it was introduced to Italy, where it is currently widespread. Its ecological niche is similar to those of native rabbits and hares, and the increasing overlap in distribution brings these species into ever closer contact. Therefore, cottontails are at risk of infection with the two lagoviruses endemically present in Italy: Rabbit Haemorrhagic Disease Virus (RHDV) and European Brown Hare Syndrome Virus (EBHSV). To verify the susceptibility of Sylvilagus to these viruses, we analyzed 471 sera and 108 individuals from cottontail populations in 9 provinces of north-central Italy from 1999 to 2012. In total, 15-20% of the cottontails tested seropositive for EBHSV; most titres were low, but some were as high as 1/1280. All the cottontails virologically tested for RHDV and EBHSV were negative, with the exception of one individual found dead with hares during a natural EBHS outbreak in December 2009. The cottontail and the hares showed typical EBHS lesions, and the EBHSV strain identified was the same in both species (99.9% identity). To experimentally confirm the diagnosis, we performed two trials in which we infected cottontails with both EBHSV and RHDV. One out of four cottontails infected with EBHSV died of an EBHS-like disease, and the three surviving animals developed high EBHSV antibody titres. In contrast, neither mortality nor seroconversion was detected after infection with RHDV. Taken together, these results suggest that Sylvilagus is susceptible to EBHSV infection, which occasionally evolves into EBHS-like disease; the eastern cottontail could therefore be considered a “spill over” or “dead end” host for EBHSV unless further evidence is found to confirm that it plays an active role in the epidemiology of EBHSV.
