5 research outputs found

    A COMPARISON OF ALBRECHT'S FUNCTION POINT AND SYMONS' MARK II METRICS

    Software system size provides a basis for software cost estimation and management during software development. The most widely used product size metric at the specification level is Albrecht's Function Point Analysis (FPA). Symons has suggested an alternative to this metric, called the Mark II metric. This metric is simpler, more easily scalable, and takes better account of the complexity of internal processing. Moreover, it yields different size values in cases where the measured systems differ in terms of system interfaces. One problem in using these metrics has been that there are no tools that can be used to calculate them during the specification phase. To alleviate this, we demonstrate how these metrics can be calculated automatically from Structured Analysis descriptions. Another problem has been that there are no reliable comparisons of these metrics based on sufficient statistical samples of system size measures. In this paper we address this problem by carrying out preliminary comparisons of these metrics. The analysis is based on a randomly generated statistical sample of dataflow diagrams. These diagrams are analyzed automatically by our prototype measurement system using both FPA and the Mark II metric. The statistical analysis of the results shows that Mark II correlates reasonably well with Function Points if some adjustments are made to the Mark II metric. In line with Symons' discussion, our analysis shows that the strength of the correlation depends on the type of system measured. Our results also show that useful size metrics can be derived for higher-level specifications and that these metrics can be easily automated in CASE tools. Because the results obtained are based on simulation, they must in the future be corroborated with real-life industrial data.
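    To make the two metrics being compared concrete, the sketch below computes an unadjusted FPA size and a Mark II size from simple counts. It is a minimal illustration only: the weights are the commonly cited defaults (the IFPUG complexity table for unadjusted FPA, Symons' industry-average calibration of 0.58/1.66/0.26 for Mark II), and the function names and simplified inputs are ours, not the prototype measurement system described in the paper.

```python
# Illustrative sketch of the two size metrics compared in the paper.
# Weights are the commonly cited defaults; inputs are simplified.

FPA_WEIGHTS = {                       # (simple, average, complex)
    "EI":  (3, 4, 6),                 # external inputs
    "EO":  (4, 5, 7),                 # external outputs
    "EQ":  (3, 4, 6),                 # external inquiries
    "ILF": (7, 10, 15),               # internal logical files
    "EIF": (5, 7, 10),                # external interface files
}

def unadjusted_function_points(counts):
    """counts: {component_type: {"simple": n, "average": n, "complex": n}}"""
    levels = ("simple", "average", "complex")
    return sum(
        counts.get(ctype, {}).get(level, 0) * weight
        for ctype, weights in FPA_WEIGHTS.items()
        for level, weight in zip(levels, weights)
    )

def mark_ii_size(input_elements, entity_references, output_elements):
    """Symons' Mark II size over all logical transactions,
    using the industry-average weights 0.58 / 1.66 / 0.26."""
    return 0.58 * input_elements + 1.66 * entity_references + 0.26 * output_elements

# Example: a small system measured both ways.
fpa = unadjusted_function_points({
    "EI": {"simple": 4, "average": 2},
    "EO": {"average": 3},
    "ILF": {"simple": 2},
})
mk2 = mark_ii_size(input_elements=40, entity_references=15, output_elements=55)
print(f"Unadjusted FPA: {fpa}, Mark II: {mk2:.1f}")
```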

    An Investigation Into an Effective Method of Automatically Analysing Oracle Applications to Count Function Points

    Function Point Analysis (FPA) is a synthetic software estimation metric used for computing the size and complexity of applications. It was first introduced by Allan J. Albrecht in the mid-seventies, as the result of lengthy research based on applications developed using the COBOL and PL/1 programming languages. The purpose of this research is to investigate the possibility, and the most effective method, of automatically performing a Function Point Analysis on Oracle applications that consist of Oracle Forms and Oracle Reports. The research revealed an apparent lack of other research on this topic. As FPA was invented a few years prior to the birth of Oracle, and consequently to that of fourth-generation languages, it had to be tailored to suit the fourth-generation-language Oracle tools used to develop the Oracle applications. This experiment provided a proof of concept and resulted in software that achieved its objective of automatically counting function points for Oracle applications, consisting of Oracle Forms and Oracle Reports, in an a posteriori manner.
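    A rough sketch of the counting idea follows. It is purely illustrative: it assumes a hypothetical, already-parsed representation of an Oracle Forms module (the actual tool would analyse the Forms and Reports definitions themselves), and the mapping of blocks to FPA components is a simplified stand-in for the tailoring described in the dissertation.

```python
# Hypothetical sketch: map a parsed Oracle Forms module to rough FPA
# component counts. The input structure and the mapping rules are
# illustrative assumptions, not the dissertation's actual method.

def count_form_components(form):
    """form: {"blocks": [{"base_table": str | None,
                          "insert_allowed": bool,
                          "query_allowed": bool,
                          "items": [...]}]}"""
    counts = {"EI": 0, "EQ": 0, "ILF": 0}
    for block in form.get("blocks", []):
        if block.get("base_table"):
            counts["ILF"] += 1        # base table treated as a logical file
        if block.get("insert_allowed"):
            counts["EI"] += 1         # data-entry block contributes an input
        if block.get("query_allowed"):
            counts["EQ"] += 1         # queryable block contributes an inquiry
    return counts

example_form = {
    "blocks": [
        {"base_table": "ORDERS", "insert_allowed": True,
         "query_allowed": True, "items": ["ORDER_ID", "CUSTOMER_ID"]},
        {"base_table": None, "insert_allowed": False,
         "query_allowed": True, "items": ["SEARCH_TEXT"]},
    ]
}
print(count_form_components(example_form))   # {'EI': 1, 'EQ': 2, 'ILF': 1}
```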

    An Empirical investigation into metrics for object-oriented software

    Object-oriented methods have increased in popularity over the last decade, and are now the norm for software development in many application areas. Many claims were made for the superiority of object-oriented methods over more traditional methods, and these claims have largely been accepted, or at least not questioned, by the software community. Such was the motivation for this thesis. One way of capturing information about software is the use of software metrics. However, if we are to have faith in the information, we must be satisfied that these metrics do indeed tell us what we need to know. This is not easy when the software characteristics we are interested in are intangible and cannot be precisely defined. This thesis considers the attempts made over the last three decades to measure software and to make predictions regarding maintainability and effort. It examines traditional software metrics and considers their failings in the light of calls for better standards of validation in terms of measurement theory and empirical study. From this, five lessons were derived. The relatively new area of metrics for object-oriented systems is examined to determine whether suggestions for improvement have been widely heeded. The thesis uses an industrial case study and an experiment to examine one feature of object-orientation, inheritance, and its effect on aspects of maintainability, namely the number of defects and the time to implement a change. The case study is also used to demonstrate that it is possible to obtain early, simple and useful local prediction systems for important attributes such as system size and defects, using readily available measures rather than attempting predefined and possibly time-consuming metrics which may suffer from poor definition, invalidity, or an inability to predict or capture anything of real use. The thesis concludes that there is empirical evidence suggesting a hypothesis linking inheritance with an increased incidence of defects and increased maintenance effort, and that more empirical studies are needed to test this hypothesis. This suggests that we should treat claims regarding the benefits of object-orientation for maintenance with some caution. The thesis also concludes that, with the ability to produce accurate local metrics with little effort, we have an acceptable substitute for the large predefined metrics suites with their attendant problems.

    Formal and quantitative approach to non-functional requirements modeling and assessment in software engineering

    In the software marketplace, in which functionally equivalent products compete for the same customers, Non-Functional Requirements (NFRs) become more important in distinguishing between the competing products. However, in practice, NFRs receive little attention relative to Functional Requirements (FRs). This is mainly because of the nature of these requirements, which makes it challenging to address them early in software development. NFRs are subjective and relative, and they become scattered among multiple modules when they are mapped from the requirements domain to the solution space. Furthermore, NFRs can often interact, in the sense that attempts to achieve one NFR can help or hinder the achievement of other NFRs for particular software functionality. Such interaction creates an extensive network of interdependencies and trade-offs among NFRs which is not easy to trace or estimate. This thesis contributes towards achieving the goal of managing the attainable scope and the changes of NFRs. The thesis proposes and empirically evaluates a formal and quantitative approach to modeling and assessing NFRs. Central to this approach is the implementation of the proposed NFRs Ontology for capturing and structuring knowledge about the software requirements (FRs and NFRs), their refinements, and their interdependencies. In this thesis, we also propose a change management mechanism for tracing the impact of NFRs on the other constructs in the ontology and vice versa. We provide a traceability mechanism using Datalog expressions to implement queries on the relational model-based representation of the ontology. An alternative implementation view using XML and XQuery is provided as well. In addition, we propose a novel approach for early requirements-based effort estimation, based on the NFRs Ontology. The effort estimation approach complementarily uses one standard functional size measurement model, namely COSMIC, and a linear regression technique.
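    The regression-on-functional-size idea behind such early effort estimation can be sketched as below. This is a minimal illustration under our own assumptions: the project data are made up, the model is a single-predictor ordinary-least-squares fit of effort against a COSMIC-style size (CFP), and it is not the thesis's actual model or dataset.

```python
import numpy as np

# Illustrative sketch: fit effort = a + b * size on historical projects,
# where size is a COSMIC-style functional size (CFP). Data are invented.
cfp    = np.array([12, 25, 40, 55, 80, 110], dtype=float)   # functional size (CFP)
effort = np.array([30, 55, 95, 120, 180, 240], dtype=float) # effort (person-days)

# Ordinary least squares via the design matrix [1, cfp].
X = np.column_stack([np.ones_like(cfp), cfp])
(intercept, slope), *_ = np.linalg.lstsq(X, effort, rcond=None)

new_cfp = 60.0
print(f"effort = {intercept:.1f} + {slope:.2f} * CFP  (fitted)")
print(f"predicted effort for {new_cfp:.0f} CFP: "
      f"{intercept + slope * new_cfp:.0f} person-days")
```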

    Management of Innovations at the Stages of the Software Life Cycle

    Since the second half of the 20th century there has been intensive scientific and technological progress, driven to a large extent by the emergence and wide adoption of computers. Computer hardware has been improving up to the present day and continues to improve at a high rate; in particular, computer performance doubles approximately every one and a half to two years. The productivity of software development and the performance indicators of software (SW), by contrast, grow at substantially slower rates than the corresponding hardware indicators. Despite significant progress in software engineering, software currently is, and will likely remain in the near future, the product of human intellectual labour, and therefore depends to a large extent on people's ability to create quality software within a limited time, an ability that develops comparatively slowly. According to various statistical estimates, no more than a third of software development projects can be considered fully successful; the rest end in complete failure or substantially exceed the established budget and time constraints. When citing this document, use the link http://essuir.sumdu.edu.ua/handle/123456789/1589