
    Functional Size Measurement and Model Verification for Software Model-Driven Developments: A COSMIC-based Approach

    Historically, software production methods and tools have had a single goal: to produce high-quality software. Since the goal of Model-Driven Development (MDD) methods is no different, MDD methods have emerged to take advantage of the benefits of using conceptual models to produce high-quality software. In such MDD contexts, conceptual models are used as input to automatically generate the final applications. Thus, we advocate that there is a relation between the quality of the final software product and the quality of the models used to generate it. The quality of conceptual models can be influenced by many factors. In this thesis, we focus on the accuracy of the techniques used to predict the characteristics of the development process and the generated products. In terms of prediction techniques for software development processes, it is widely accepted that knowing the functional size of applications is essential in order to successfully apply effort and budget models. In order to evaluate the quality of generated applications, defect detection is considered to be the most suitable technique. The research goal of this thesis is to provide an accurate measurement procedure based on COSMIC for the automatic sizing of object-oriented OO-Method MDD applications. To achieve this research goal, it is necessary to accurately measure the conceptual models used in the generation of object-oriented applications. It is also very important for these models not to have defects, so that the applications to be measured are correctly represented. In this thesis, we present the OOmCFP (OO-Method COSMIC Function Points) measurement procedure. This procedure makes a twofold contribution: the accurate measurement of object-oriented applications generated in MDD environments from the conceptual models involved, and the verification of those conceptual models to allow the complete generation of correct final applications. The OOmCFP procedure has been systematically designed, applied, and automated. The measurement procedure has been validated for conformance to the ISO 14143 standard and the metrology concepts defined in the ISO VIM, and the accuracy of the measurements obtained has been validated according to ISO 5725. The procedure has also been validated through empirical studies, whose results demonstrate that OOmCFP can obtain accurate measures of the functional size of applications generated in MDD environments from the corresponding conceptual models.
    Marín Campusano, BM. (2011). Functional Size Measurement and Model Verification for Software Model-Driven Developments: A COSMIC-based Approach [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/11237
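    As a rough illustration of the COSMIC principle underlying OOmCFP (a minimal sketch with hypothetical class and process names, not the actual OOmCFP tool), the snippet below sizes a functional process by counting its Entry, Exit, Read and Write data movements, one CFP each, and sums the sizes over all processes of an application.

        # Minimal sketch of COSMIC sizing: 1 CFP per data movement, summed over processes.
        from dataclasses import dataclass, field
        from enum import Enum

        class Movement(Enum):
            ENTRY = "E"   # data group entering the software from a functional user
            EXIT = "X"    # data group leaving the software towards a functional user
            READ = "R"    # data group retrieved from persistent storage
            WRITE = "W"   # data group sent to persistent storage

        @dataclass
        class FunctionalProcess:
            name: str
            movements: list = field(default_factory=list)

            def size_cfp(self) -> int:
                # Each identified data movement contributes exactly 1 CFP.
                return len(self.movements)

        def application_size(processes) -> int:
            return sum(p.size_cfp() for p in processes)

        # Hypothetical "Create rental" process derived from a conceptual model.
        create_rental = FunctionalProcess(
            "Create rental",
            [Movement.ENTRY, Movement.READ, Movement.WRITE, Movement.EXIT],
        )
        print(application_size([create_rental]))  # -> 4 CFP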

    Towards a simplified definition of Function Points

    Background. COSMIC Function Points and traditional Function Points (i.e., IFPUG Function Points and more recent variations of Function Points, such as NESMA and FISMA) are probably the best known and most widely used Functional Size Measurement methods. The relationship between the two kinds of Function Points still needs to be investigated. If traditional Function Points could be accurately converted into COSMIC Function Points and vice versa, then, by measuring one kind of Function Points, one would be able to obtain the other kind, and one might measure either kind interchangeably. Several studies have been performed to evaluate whether a correlation or a conversion function between the two measures exists. Specifically, it has been suggested that the relationship between traditional Function Points and COSMIC Function Points may not be linear, i.e., the value of COSMIC Function Points seems to increase more than proportionally to an increase of traditional Function Points. Objective. This paper aims at verifying this hypothesis using available datasets that collect both FP and CFP size measures. Method. Rigorous statistical analysis techniques are used, specifically Piecewise Linear Regression, whose applicability conditions are systematically checked. A Piecewise Linear Regression curve is a series of interconnected segments; in this paper, we focus on Piecewise Linear Regression curves composed of two segments. We also used Linear and Parabolic Regression, to check if and to what extent Piecewise Linear Regression may provide an advantage over other regression techniques. We used two categories of regression techniques: Ordinary Least Squares regression is based on the usual minimization of the sum of squared residuals or, equivalently, on the minimization of the average squared residual; Least Median of Squares regression is a robust regression technique based on the minimization of the median squared residual. Using a robust regression technique helps filter out the excessive influence of outliers. Results. It appears that the analysis of the relationship between traditional Function Points and COSMIC Function Points based on the aforementioned data analysis techniques yields valid significant models. However, different results are achieved for the various available datasets. In practice, we obtained statistically valid linear, piecewise linear, and non-linear conversion formulas for several datasets. In general, none of these is better than the others in a statistically significant manner. Conclusions. Practitioners interested in the conversion of FP measures into CFP (or vice versa) cannot just pick a conversion model and be sure that it will yield the best results. All the regression models we tested provide good results with some datasets. In practice, all the models described in the paper, in particular both linear and non-linear ones, should be evaluated in order to identify the ones that are best suited for the specific dataset at hand.
    Lavazza, L.; Morasca, S.; Robiolo, G.
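    The sketch below illustrates, on synthetic data rather than any of the datasets analyzed in the paper, the kind of comparison described above: fitting linear, parabolic and two-segment piecewise linear models of CFP as a function of FP, and reporting both the mean squared residual (the OLS criterion) and the median squared residual (the LMS criterion).

        # Synthetic illustration of comparing regression models for FP -> CFP conversion.
        import numpy as np
        from scipy.optimize import curve_fit

        rng = np.random.default_rng(0)
        fp = np.sort(rng.uniform(50, 800, 60))                          # traditional FP
        cfp = 0.8 * fp + 0.0006 * fp**2 + rng.normal(0, 20, fp.size)    # COSMIC FP

        def linear(x, a, b):
            return a * x + b

        def parabolic(x, a, b, c):
            return a * x**2 + b * x + c

        def piecewise(x, x0, y0, k1, k2):
            # Two connected segments with a breakpoint at (x0, y0).
            return np.where(x < x0, y0 + k1 * (x - x0), y0 + k2 * (x - x0))

        for name, model, p0 in [
            ("linear", linear, (1.0, 0.0)),
            ("parabolic", parabolic, (0.0, 1.0, 0.0)),
            ("piecewise", piecewise, (400.0, 400.0, 1.0, 1.5)),
        ]:
            params, _ = curve_fit(model, fp, cfp, p0=p0, maxfev=10000)
            res = cfp - model(fp, *params)
            print(name, "mean sq. residual:", np.mean(res**2),
                  "median sq. residual:", np.median(res**2))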

    An Update on Effort Estimation in Agile Software Development: A Systematic Literature Review

    Software developers require effective effort estimation models to facilitate project planning. Although Usman et al. systematically reviewed and synthesized the effort estimation models and practices for Agile Software Development (ASD) in 2014, new evidence may provide new perspectives for researchers and practitioners. This article presents a systematic literature review that updates the Usman et al. study from 2014 to 2020 by analyzing the data extracted from 73 new papers. This analysis allowed us to identify six agile methods: Scrum, Extreme Programming, and four others, in all of which expert-based estimation methods continue to play an important role. This is particularly the case for Planning Poker, which is very closely related to the most frequently used size metric (story points) and to the way in which software requirements are specified in ASD. There is also a remarkable trend toward studying techniques based on the intensive use of data. In this respect, although most of the data originate from single-company datasets, there is a significant increase in the use of cross-company data. With regard to cost factors, we applied the thematic analysis method. The use of team and project factors appears to be more frequent than the consideration of more technical factors, in accordance with agile principles. Finally, although accuracy is still a challenge, we identified that improvements have been made. On the one hand, an increasing number of papers showed acceptable accuracy values, although many continued to report inadequate results. On the other, almost 29% of the papers that reported the accuracy metric used also reflected aspects concerning the validation of the models, and 18% reported the effect size when comparing models. This work was supported by the Spanish Ministry of Science, Innovation and Universities through the Adapt@Cloud Project under Grant TIN2017-84550-R.
    Fernández-Diego, M.; Méndez, ER.; González-Ladrón-De-Guevara, F.; Abrahao Gonzales, SM.; Insfran, E. (2020). An Update on Effort Estimation in Agile Software Development: A Systematic Literature Review. IEEE Access, 8:166768-166800. https://doi.org/10.1109/ACCESS.2020.3021664
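    For readers unfamiliar with how estimation accuracy is typically quantified in this literature, the sketch below computes three common indicators (MMRE, MdMRE and Pred(25)) on made-up actual and estimated effort values; the reviewed papers may report these or other metrics.

        # Common effort estimation accuracy indicators, computed on invented example data.
        import numpy as np

        actual = np.array([120.0, 80.0, 200.0, 45.0, 310.0])      # person-hours
        estimated = np.array([100.0, 95.0, 170.0, 50.0, 360.0])

        mre = np.abs(actual - estimated) / actual   # magnitude of relative error per project
        mmre = mre.mean()                           # mean MRE
        mdmre = np.median(mre)                      # median MRE
        pred25 = np.mean(mre <= 0.25)               # fraction of estimates within 25% of actual

        print(f"MMRE={mmre:.2f}  MdMRE={mdmre:.2f}  Pred(25)={pred25:.2f}")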

    Representation of business processes at multiple levels of abstraction (strategic, tactical and operational) during the requirements elicitation stage of a software project, and the measurement of their functional size with ISO 19761

    This thesis aims at helping software engineers and business analysts to better model business processes when those models are meant to be used for software requirements specification and for functional size measurement purposes. The research goal of this thesis is to contribute to the representation of business processes for their use during the requirements elicitation stage of a software project. To achieve this goal, two research objectives are clearly defined: 1. To propose a novel modeling approach that generates business process models intended to be used in a software requirements elicitation activity. The modeling approach should not significantly increase the complexity of the modeling notations used to represent the business processes, and it must allow the active participation of the various stakeholders involved in a typical software project in order to represent, in a consistent and structured way, their needs and constraints. 2. To develop a procedure to measure the functional size of a software application from the business process models representing it. This measurement procedure should be compatible with the COSMIC ISO 19761 standard, and it should be usable independently of the modeling notation used to represent the business process. To achieve the first objective, this thesis proposes a novel modeling approach (coined BPM+) that models business processes at three levels of abstraction: strategic, tactical and operational. An a priori version of BPM+ was designed based on the findings of the literature review. This a priori version was iteratively refined through a pilot case study in industry, a series of ontological analyses, and a survey of experts. As a result, a revised version of BPM+ was proposed, which was evaluated through a second case study in industry. The design of BPM+ has therefore been based on a triangulation of evidence obtained from various sources. To achieve the second objective, the measurement procedure was developed from an analytical comparison between the specifications of COSMIC and those of the modeling notations selected for this research (i.e., BPMN and Qualigram). This analytical comparison helped to define a set of modeling guidelines for the business application software domain, as well as a set of mapping rules between the modeling notations' constructs and the COSMIC concepts. In addition, the modeling guidelines were adapted for application to the real-time software domain. The measurement procedure was evaluated by comparing its measurement results to those obtained in COSMIC reference case studies. The research results demonstrate that: 1. BPM+ allows generating business process models that represent in a consistent and structured way the needs of various stakeholders. 2. The Qualigram notation is better suited to BPM+'s design; in addition, Qualigram is the preferred notation for non-IT stakeholders, while BPMN is preferred for IT stakeholders. 3. The measurement procedure was successfully applied using two different notations, Qualigram and BPMN, and in two different software domains, the business application domain and the real-time domain. 4. The accuracy of the measurement procedure is in conformity with all the rules of the ISO 19761 standard.
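    A minimal sketch of what construct-to-COSMIC mapping rules can look like is shown below; the rule names and the example process are hypothetical and are not the thesis's actual rule set for Qualigram or BPMN.

        # Hypothetical mapping from business process model constructs to COSMIC movements.
        MAPPING = {
            "incoming_message": "Entry",   # message flow entering the software boundary
            "outgoing_message": "Exit",    # message flow leaving the software boundary
            "data_store_input": "Read",    # data association from a data store
            "data_store_output": "Write",  # data association to a data store
        }

        def measure_process(constructs):
            """constructs: (construct_kind, data_group) pairs of one functional process."""
            movements = set()
            for kind, data_group in constructs:
                movement = MAPPING.get(kind)
                if movement:
                    # COSMIC counts one movement per (movement type, data group) pair.
                    movements.add((movement, data_group))
            return len(movements)

        # Hypothetical "Register order" process from an operational-level model.
        register_order = [
            ("incoming_message", "order"),
            ("data_store_input", "customer"),
            ("data_store_output", "order"),
            ("outgoing_message", "confirmation"),
        ]
        print(measure_process(register_order))  # -> 4 CFP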

    Development of a framework for the education of software measurement in software engineering undergraduate programs

    Software measurement programs are hardly adopted in organizations and there is a lack of attention to software measurement in higher education. This research work aims at creating the basis for the enhancement of software measurement education in universities, specifically in software engineering programs at the undergraduate level. The ultimate goal of this work is to facilitate the adoption of software measurement programs in software-related organizations. This research project tackles the issue by identifying the software measurement topics that should be prioritized for undergraduate students and developing an educational framework, on the basis of the constructivist approach and Bloom's taxonomy, to provide guidelines to university teachers. By doing so, university teachers will be provided with tools and approaches to pursue the achievement of learning outcomes by students being introduced to software measurement tasks. This research project required a number of investigations: a comprehensive literature review and a web survey to identify current practices in the teaching of software measurement; a Delphi study to identify priorities in software measurement education for undergraduate students; and an evaluation of the proposed educational framework by university teachers to determine the extent to which it can be adopted. The key results are:
    • Experts in the field agreed in identifying five essential software measurement topics (priorities) that should be taught to undergraduate students: basic concepts of software measurement; the measurement process; software measurement techniques; software management measures; and measures for the requirements phase. For each of these topics, the participating experts also identified the levels of learning expected to be reached by students, according to Bloom's taxonomy. Moreover, they suggested the need to instill in students the development of four important skills during their university studies, including critical thinking, oral and written communication, and teamwork. These skills are aimed at complementing the students' knowledge and practice of software measurement.
    • The design of an educational framework for the teaching of software measurement.
    • University teachers evaluating the proposed framework gave favorable opinions regarding its usefulness for teaching software measurement and for facilitating the achievement of learning outcomes by undergraduate students.
    • A website designed to promote education on software measurement: http://software-measurement-education.espol.edu.ec

    Measures related to social and human factors that influence productivity in software development teams

    Software companies need to measure their productivity. Measures are useful indicators to evaluate processes, projects, products, and the people who are part of software development teams. The results of these measurements are used to make decisions, manage projects, and improve software development and project management processes. This research is based on selecting a set of measures related to social and human factors (SHF) that influence productivity in software development teams and therefore in project management. This research was performed in three steps. In the first step, a tertiary literature review was performed to identify measures related to productivity. Then, the identified measures were submitted for evaluation to project management experts, and finally, the measures selected by the experts were mapped to the SHF. A set of 13 measures was identified and defined as a key input for designing improvement strategies. The measures have been compared to SHF to evaluate the development team's performance from a more human context and to establish indicators in productivity improvement strategies of software projects. Although the number of productivity measures related to SHF is limited, it was possible to identify the measures used in both traditional and agile contexts.

    Linguistic Approaches for Early Measurement of Functional Size from Software Requirements

    The importance of early effort estimation, resource allocation and overall quality control in a software project has led the industry to formulate several functional size measurement (FSM) methods that are based on the knowledge gathered from software requirements documents. The main objective of this research is to develop a comprehensive methodology to facilitate and automate early measurement of a software's functional size from its requirements document written in unrestricted natural language. For the purpose of this research, we have chosen to use the FSM method developed by the Common Software Measurement International Consortium (COSMIC) and adopted as an international standard by the International Organization for Standardization (ISO). This thesis presents a methodology to measure the COSMIC size objectively from various textual forms of functional requirements and also builds conceptual measurement models to establish traceability links between the output measurements and the input requirements. Our research investigates the feasibility of automating every major phase of this methodology with natural language processing and machine learning approaches. The thesis provides a step-by-step validation and demonstration of the implementation of this innovative methodology. It describes the details of empirical experiments conducted to validate the methodology with practical samples of textual requirements collected from both industry and academia. Analysis of the results shows that each phase of our methodology can successfully be automated and, in most cases, leads to an accurate measurement of functional size.
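    As a toy illustration only, the snippet below tags candidate data movements in a requirement sentence using a fixed verb list; the thesis's methodology relies on natural language processing and machine learning models rather than such a keyword heuristic.

        # Crude keyword heuristic for spotting candidate COSMIC data movements in text.
        import re

        VERB_TO_MOVEMENT = {
            "enter": "Entry", "submit": "Entry", "provide": "Entry",
            "display": "Exit", "send": "Exit", "notify": "Exit",
            "retrieve": "Read", "read": "Read", "look up": "Read",
            "store": "Write", "save": "Write", "record": "Write",
        }

        def movements_in(requirement: str):
            text = requirement.lower()
            # Prefix match so that "enters", "stores", etc. are also caught.
            return [m for verb, m in VERB_TO_MOVEMENT.items()
                    if re.search(r"\b" + verb, text)]

        req = ("The clerk enters the loan request; the system retrieves the customer "
               "data, stores the loan, and displays a confirmation.")
        found = movements_in(req)
        print(found, "->", len(found), "CFP")   # ['Entry', 'Exit', 'Read', 'Write'] -> 4 CFP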

    Formal and quantitative approach to non-functional requirements modeling and assessment in software engineering

    In the software marketplace, in which functionally equivalent products compete for the same customers, Non-Functional Requirements (NFRs) become more important in distinguishing between the competing products. However, in practice, NFRs receive little attention relative to Functional Requirements (FRs). This is mainly because of the nature of these requirements, which makes it challenging to address them early in software development. NFRs are subjective and relative, and they become scattered among multiple modules when they are mapped from the requirements domain to the solution space. Furthermore, NFRs can often interact, in the sense that attempts to achieve one NFR can help or hinder the achievement of other NFRs for a particular piece of software functionality. Such interaction creates an extensive network of interdependencies and tradeoffs among NFRs which is not easy to trace or estimate. This thesis contributes towards achieving the goal of managing the attainable scope and the changes of NFRs. The thesis proposes and empirically evaluates a formal and quantitative approach to modeling and assessing NFRs. Central to such an approach is the implementation of the proposed NFRs Ontology for capturing and structuring the knowledge on the software requirements (FRs and NFRs), their refinements, and their interdependencies. In this thesis, we also propose a change management mechanism for tracing the impact of NFRs on the other constructs in the ontology and vice versa. We provide a traceability mechanism using Datalog expressions to implement queries on the relational model-based representation of the ontology. An alternative implementation view using XML and XQuery is provided as well. In addition, we propose a novel approach to early requirements-based effort estimation, based on the NFRs Ontology. The effort estimation approach complementarily uses one standard functional size measurement model, namely COSMIC, and a linear regression technique.
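    The size-based effort estimation mentioned in the last sentence can be sketched as a simple linear regression of effort on COSMIC functional size; the data points below are invented for illustration and are not taken from the thesis.

        # Fitting effort = a * CFP + b by least squares on invented historical projects.
        import numpy as np

        cfp = np.array([35, 60, 82, 120, 150, 210], dtype=float)          # COSMIC FP
        effort = np.array([280, 430, 600, 910, 1100, 1580], dtype=float)  # person-hours

        slope, intercept = np.polyfit(cfp, effort, deg=1)
        predicted = slope * 95 + intercept    # estimate for a new 95-CFP project
        print(f"effort ~= {slope:.1f} * CFP + {intercept:.1f}; 95 CFP -> {predicted:.0f} person-hours")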

    New Fault Detection, Mitigation and Injection Strategies for Current and Forthcoming Challenges of HW Embedded Designs

    The relevance of electronics to the safety of everyday devices has only been growing, as an ever larger share of their functionality is assigned to electronic components. This, of course, comes along with a constant need for higher performance to fulfill such functional requirements while keeping power and cost low. In this scenario, industry is struggling to provide a technology that meets all the performance, power and price specifications, at the cost of an increased vulnerability to several types of known faults or the appearance of new ones. To provide a solution for the new and growing faults in these systems, designers have been using traditional techniques from safety-critical applications, which in general offer suboptimal results. In fact, modern embedded architectures offer the possibility of optimizing the dependability properties by enabling the interaction of the hardware, firmware and software levels in the process. However, this potential has not yet been fully realized. Advances at every level in that direction are much needed if flexible, robust, resilient and cost-effective fault tolerance is to be achieved. The work presented here focuses on the hardware level, with the background consideration of a potential integration into a holistic approach. The efforts in this thesis have focused on several issues: (i) to introduce additional fault models as required for adequate representativity of the physical effects emerging in modern manufacturing technologies, (ii) to provide tools and methods to efficiently inject both the proposed fault models and classical ones, (iii) to analyze the optimum method for assessing the robustness of systems by using extensive fault injection and later correlation with higher-level layers in an effort to cut development time and cost, (iv) to provide new detection methodologies to cope with the challenges modeled by the proposed fault models, (v) to propose mitigation strategies aimed at tackling such new threat scenarios, and (vi) to devise an automated methodology for deploying many fault tolerance mechanisms in a systematic, robust way. The outcomes of the thesis constitute a suite of tools and methods to help designers of critical systems develop robust, validated, on-time designs tailored to their applications.
    Espinosa García, J. (2016). New Fault Detection, Mitigation and Injection Strategies for Current and Forthcoming Challenges of HW Embedded Designs [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/73146
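    As a software-level toy analogue of a fault-injection campaign (the thesis targets hardware-level fault models, injection tooling and mitigation, which this sketch does not reproduce), the snippet below flips single bits in a data block and checks whether a golden-reference comparison detects the corruption.

        # Toy bit-flip injection campaign with detection by comparison against a golden result.
        import random

        def bit_flip(value: int, bit: int, width: int = 32) -> int:
            """Inject an SEU-like fault by flipping one bit of a stored word."""
            return (value ^ (1 << bit)) & ((1 << width) - 1)

        def checksum(data):
            acc = 0
            for word in data:
                acc = (acc + word) & 0xFFFFFFFF
            return acc

        random.seed(1)
        data = [random.getrandbits(32) for _ in range(64)]
        golden = checksum(data)          # fault-free reference result

        detected = undetected = 0
        for _ in range(1000):            # 1000 injection runs, one bit flip each
            corrupted = list(data)
            idx, bit = random.randrange(len(data)), random.randrange(32)
            corrupted[idx] = bit_flip(corrupted[idx], bit)
            # A single-bit data fault always changes this additive checksum,
            # so the golden-reference comparison detects every injection here.
            if checksum(corrupted) != golden:
                detected += 1
            else:
                undetected += 1
        print(f"detected={detected}  undetected={undetected}")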