29 research outputs found

    Risk based analogy for e-business estimation

    Development of a prototype for multidimensional performance management in software engineering

    Managing performance is an important and difficult task, and organizations need tools to support it. One of the main consequences of not having a Performance Management Framework (PMF) in place is the difficulty of distinguishing organizational success from failure over time. PMFs help organizations plan, monitor, control, and improve their decisions: they show how an organization is performing and whether it is moving in the right direction to achieve its objectives. Over the years, several frameworks have been developed to manage organizational assets, both tangible and intangible, but performance measurement has mostly focused on the economic viewpoint. The framework developed by Kaplan and Norton adds three other viewpoints, a significant improvement to PMFs. Even so, the PMFs currently proposed do not meet the analytical requirements of software engineering management when several viewpoints must be considered concurrently. This difficulty is compounded by the fact that the underlying quantitative data are multidimensional, so the usual two- and three-dimensional visualization approaches are generally insufficient to represent such models. Organizations also vary considerably in the viewpoints that influence their performance; each organization has its own viewpoints to manage, and these must be represented in a consolidated manner. The purpose of this thesis is to develop a prototype for managing multidimensional performance in software engineering. The thesis first defines the key concepts used in the research (software, performance, management, model, multidimensional, development, engineering, and prototype) and their associations. This is followed by a review of the multidimensional PMFs specific to software engineering and of the generic multidimensional performance models available to management. A framework for managing performance in software engineering is then presented in four phases: design, implementation, use of the framework, and performance improvement. Based on this framework, a prototype tool is developed. The prototype notably includes visual analytical tools to manage, interpret, and understand the results in a consolidated manner, while keeping track of the values of the individual dimensions of performance. The repository of software project data made available by the International Software Benchmarking Standards Group (ISBSG) is integrated into and used by the prototype as well.
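    The consolidated multidimensional view described above can be pictured, for example, with a radar chart that shows several performance dimensions at once while keeping each dimension's value visible. The sketch below is purely illustrative and is not the thesis prototype; the dimension names and scores are hypothetical.

```python
# Illustrative sketch only: a radar chart consolidating several
# performance dimensions while keeping individual values visible.
# Dimension names and scores are hypothetical.
import numpy as np
import matplotlib.pyplot as plt

dimensions = ["Financial", "Customer", "Internal process", "Learning & growth", "Quality"]
scores = [0.72, 0.55, 0.80, 0.63, 0.68]   # normalized performance values in [0, 1]

angles = np.linspace(0, 2 * np.pi, len(dimensions), endpoint=False).tolist()
angles += angles[:1]                      # close the polygon
values = scores + scores[:1]

ax = plt.subplot(polar=True)
ax.plot(angles, values, marker="o")
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(dimensions)
ax.set_ylim(0, 1)
ax.set_title("Consolidated multidimensional performance view")
plt.show()
```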

    Rethinking Productivity in Software Engineering

    Get the most out of this foundational reference and improve the productivity of your software teams. This open access book collects the wisdom of the 2017 Dagstuhl seminar on productivity in software engineering, a meeting of community leaders who came together with the goal of rethinking traditional definitions and measures of productivity. The results of their work, Rethinking Productivity in Software Engineering, include chapters covering definitions and core concepts related to productivity, guidelines for measuring productivity in specific contexts, best practices and pitfalls, and theories and open questions on productivity. You'll benefit from the many short chapters, each offering a focused discussion on one aspect of productivity in software engineering. Readers in many fields and industries will benefit from the collected work. Developers wanting to improve their personal productivity will learn effective strategies for overcoming common issues that interfere with progress. Organizations thinking about building internal programs for measuring the productivity of programmers and teams will learn best practices from industry and researchers. And researchers can leverage the conceptual frameworks and rich body of literature in the book to effectively pursue new research directions. What You'll Learn: review the definitions and dimensions of software productivity; see how time management can have the opposite of the intended effect; develop valuable dashboards; understand the impact of sensors on productivity; avoid software development waste; work with human-centered methods to measure productivity; look at the intersection of neuroscience and productivity; manage interruptions and context switching. Who This Book Is For: industry developers and those responsible for seminar-style courses that include a segment on software developer productivity. Chapters are written for a generalist audience, without excessive use of technical terminology. The book collects the wisdom of software engineering thought leaders in a form digestible for any developer, shares hard-won best practices and pitfalls to avoid, and offers an up-to-date look at current practices in software engineering productivity.

    A New Methodology for Quantifying the Impact of Non-Functional Requirements on Software Effort Estimation

    The effort estimation techniques used in the software industry often ignore the impact of non-functional requirements (NFRs) on effort and reuse standard effort estimation models without local calibration. Moreover, effort estimation models are calibrated using data from previous projects that may belong to problem domains different from that of the project being estimated. This thesis proposes a novel effort estimation methodology that can be used in the early stages of software development projects. The proposed methodology first clusters the historical data from previous projects into different problem domains and then generates domain-specific effort estimation models, each incorporating the impact of NFRs on effort through sets of objectively measured nominal features. The complexity of these models is reduced using a feature subset selection algorithm. The thesis discusses the approach in detail and presents the results of experiments using different supervised machine learning algorithms. The results show that the approach performs well, increasing the correlation coefficient and decreasing the error rate of the generated effort estimation models and achieving more accurate effort estimates for new projects.
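    As a rough illustration of the kind of pipeline described above (clustering historical projects into problem domains, then fitting a domain-specific effort model that includes NFR features and a feature-subset selection step), here is a minimal sketch using scikit-learn. The data, column names, and parameter choices are hypothetical and are not taken from the thesis.

```python
# Minimal sketch, under hypothetical data and settings: cluster historical
# projects into domains, then fit one effort model per domain with a simple
# feature-subset selection step.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

# Hypothetical historical project data: functional size, NFR ratings, effort.
rng = np.random.default_rng(0)
n = 90
projects = pd.DataFrame({
    "functional_size": rng.uniform(50, 1000, n),
    "nfr_security": rng.integers(1, 5, n),
    "nfr_performance": rng.integers(1, 5, n),
    "nfr_usability": rng.integers(1, 5, n),
})
projects["effort"] = (5 * projects["functional_size"]
                      + 100 * projects["nfr_security"]
                      + rng.normal(0, 200, n))

features = ["functional_size", "nfr_security", "nfr_performance", "nfr_usability"]

# 1. Cluster the historical projects into problem domains.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
projects["domain"] = kmeans.fit_predict(projects[features])

# 2. Build one effort model per domain, reducing the feature set first.
models = {}
for domain, group in projects.groupby("domain"):
    model = make_pipeline(SelectKBest(f_regression, k=2), LinearRegression())
    model.fit(group[features], group["effort"])
    models[domain] = model

# 3. Estimate a new (hypothetical) project: assign it to a domain,
#    then apply that domain's model.
new_project = pd.DataFrame([{"functional_size": 300, "nfr_security": 4,
                             "nfr_performance": 2, "nfr_usability": 3}])
d = kmeans.predict(new_project[features])[0]
print("estimated effort:", models[d].predict(new_project[features])[0])
```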

    Estimation model for software testing

    Testing of software applications and assurance of compliance have become an essential part of the Information Technology (IT) governance of organizations. Over the years, software testing has evolved into a specialization with its own practices and body of knowledge. Test estimation consists of estimating the effort and working out the cost of a particular level of testing, using various methods, tools, and techniques. An incorrect estimate often leads to an inadequate amount of testing which, in turn, can lead to failures of software systems when they are deployed in organizations. This research work first established the state of the art of software test estimation, then proposed a Unified Framework for Software Test Estimation. Using this framework, a number of detailed estimation models were designed for functional testing. The ISBSG database was used to investigate the estimation of software testing. The analysis of the ISBSG data revealed three test productivity patterns representing economies and diseconomies of scale, based on which the characteristics of the corresponding projects were investigated. The three project groups related to the three productivity patterns were found to be statistically significant, and were characterised by application domain, team size, elapsed time, and rigour of verification and validation throughout development. Within each project group, the variations in test effort could be explained by the activities carried out during development and the processes adopted for testing, in addition to functional size. Two new independent variables, the quality of the development processes (DevQ) and the quality of the testing processes (TestQ), were identified as influential in the estimation models. Portfolios of estimation models were built for different data sets using combinations of the three independent variables. At estimation time, an estimator can choose the project group by mapping the characteristics of the project to be estimated to the attributes of the project groups, in order to choose the closest model. The quality of each model was evaluated using established criteria such as R2, adjusted R2, MRE, MedMRE, and Mallows' Cp. Models were compared on their predictive performance, adopting new criteria proposed in this research work. Test estimation models using functional size measured in COSMIC Function Points exhibited better quality and resulted in more accurate estimates than those using functional size measured in IFPUG Function Points. A prototype tool incorporating the portfolios of estimation models was developed in the R statistical programming language. This test estimation tool can be used by industry and academia for estimating test effort.
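    For readers unfamiliar with the relative-error criteria mentioned above, the short sketch below computes MRE per project, its mean (MMRE), and MedMRE for a handful of hypothetical actual-versus-estimated test effort values. It is illustrative only and unrelated to the thesis prototype.

```python
# Hypothetical example of the MRE-based accuracy criteria named above.
import numpy as np

actual = np.array([120.0, 340.0, 95.0, 410.0])     # hypothetical actual test efforts (hours)
estimated = np.array([100.0, 360.0, 120.0, 380.0]) # hypothetical model estimates

mre = np.abs(actual - estimated) / actual           # Magnitude of Relative Error per project
print("MMRE  :", mre.mean())                        # mean MRE
print("MedMRE:", np.median(mre))                    # median MRE, more robust to outliers
```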

    Evaluating an automated procedure of machine learning parameter tuning for software effort estimation

    Software effort estimation requires accurate prediction models. Machine learning algorithms have been used to create more accurate estimation models, but these algorithms are sensitive to factors such as the choice of hyper-parameters. To reduce this sensitivity, automated approaches for hyper-parameter tuning have recently been investigated, and further research is needed on their effectiveness in the context of software effort estimation. Such evaluations could help understand which hyper-parameter settings can be adjusted to improve model accuracy, and in which specific contexts tuning benefits model performance. The goal of this work is to develop an automated procedure for machine learning hyper-parameter tuning in the context of software effort estimation. The automated procedure builds and evaluates software effort estimation models to determine the most accurate evaluation schemes. The methodology consists of first performing a systematic mapping study to characterize existing hyper-parameter tuning approaches in software effort estimation, then developing the procedure to automate the evaluation of hyper-parameter tuning, and finally conducting controlled quasi-experiments to evaluate the automated procedure. The systematic literature mapping revealed that the effort estimation literature has favored grid search. The results of the quasi-experiments demonstrated that fast, less exhaustive tuners are viable alternatives to grid search: randomly evaluating 60 hyper-parameter settings can be as good as grid search, and multiple state-of-the-art tuners were more effective than this random search in only 6% of the evaluated dataset-model combinations. We endorse random search, genetic algorithms, FLASH, differential evolution, and tabu and harmony search as effective tuners.
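    To make the grid-search-versus-random-search comparison concrete, here is a minimal, hypothetical sketch using scikit-learn: a random search that evaluates 60 hyper-parameter settings alongside an exhaustive search over a small grid, both tuning a regressor on synthetic data. This is not the procedure developed in the work above; the model, parameter ranges, and data are illustrative assumptions.

```python
# Hedged sketch: random search over 60 settings vs. exhaustive grid search,
# both tuning a regressor on synthetic data (all choices hypothetical).
from scipy.stats import randint
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV

X, y = make_regression(n_samples=200, n_features=10, noise=10.0, random_state=0)

grid = GridSearchCV(
    RandomForestRegressor(random_state=0),
    param_grid={"n_estimators": [50, 100, 200], "max_depth": [3, 5, 10, None]},
    scoring="neg_mean_absolute_error", cv=5,
)

random_search = RandomizedSearchCV(
    RandomForestRegressor(random_state=0),
    param_distributions={"n_estimators": randint(10, 300), "max_depth": randint(2, 20)},
    n_iter=60,                       # the "60 randomly evaluated settings" mentioned above
    scoring="neg_mean_absolute_error", cv=5, random_state=0,
)

grid.fit(X, y)
random_search.fit(X, y)
print("grid best MAE  :", -grid.best_score_)
print("random best MAE:", -random_search.best_score_)
```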

    Framework for a service-oriented measurement infrastructure

    Magdeburg, Univ., Fak. für Informatik, Diss., 2009. Martin Kun

    Development of a framework for the education of software measurement in software engineering undergraduate programs

    Software measurement programs are rarely adopted in organizations, and software measurement receives little attention in higher education. This research work aims at creating the basis for enhancing software measurement education in universities, specifically in software engineering programs at the undergraduate level. The ultimate goal of this work is to facilitate the adoption of software measurement programs in software-related organizations. The research project tackles this issue by identifying the software measurement topics that should be prioritized for undergraduate students and by developing an educational framework, based on the constructivist approach and Bloom's taxonomy, that provides guidelines to university teachers. University teachers are thereby given tools and approaches to pursue the achievement of learning outcomes by students being introduced to software measurement tasks. The research project required a number of investigations: a comprehensive literature review and a web survey to identify current practices in the teaching of software measurement; a Delphi study to identify priorities in software measurement education for undergraduate students; and an evaluation of the proposed educational framework by university teachers to determine the extent to which it can be adopted. The key results are:
    • Experts in the field agreed on five essential software measurement topics (priorities) that should be taught to undergraduate students: basic concepts of software measurement; the measurement process; software measurement techniques; software management measures; and measures for the requirements phase. For each of these topics, the participating experts also identified the levels of learning expected to be reached by students, according to Bloom's taxonomy. Moreover, they suggested the need to instill in students four important skills during their university studies, including critical thinking, oral and written communication, and team work; these skills are aimed at complementing the students' knowledge and practice of software measurement.
    • The design of an educational framework for the teaching of software measurement.
    • Favorable opinions from the university teachers who evaluated the proposed framework regarding its usefulness for teaching software measurement and for facilitating the achievement of learning outcomes by undergraduate students.
    • A website designed to promote education on software measurement: http://software-measurement-education.espol.edu.ec