137 research outputs found

    Analyzing impact of experience curve on ROI in the software product line adoption process

    Context: The experience curve is a well-known concept in management and education science that explains the phenomenon of increased worker efficiency with repetitive production of a good or service. Objective: We aim to analyze the impact of the experience curve effect on the Return on Investment (ROI) in the software product line engineering (SPLE) process. Method: We first present the results of a systematic literature review (SLR) to explicitly depict the studies that have considered the impact of the experience curve effect on software development in general. Subsequently, based on the results of the SLR, the experience curve effect models in the literature, and the SPLE cost models, we define an approach for extending the cost models with the experience curve effect. Finally, we discuss the application of the refined cost models in a real industrial context. Results: The SLR resulted in 15 primary studies, which confirm the impact of the experience curve effect on software development in general but show that the effect has received less attention in the adoption of SPLE. The analytical discussion of the cost models and the application of the refined SPLE cost models in the industrial context showed a clear impact of the experience curve effect on time-to-market, cost of development, and ROI in the SPLE adoption process. Conclusions: The proposed analysis with the newly defined cost models for SPLE adoption provides a more precise analysis tool for management and thus helps support better decision making. © 2014 Elsevier B.V. All rights reserved.
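The experience curve effect discussed above is commonly modeled by Wright's law, under which each doubling of cumulative output multiplies unit cost by a fixed progress ratio. A minimal sketch of that relationship (the 80% progress ratio and the cost figure are illustrative assumptions, not values from the study):

```python
import math

def unit_cost(first_unit_cost, n, progress_ratio=0.8):
    """Wright's-law experience curve: each doubling of cumulative
    output multiplies unit cost by the progress ratio."""
    b = math.log2(progress_ratio)  # learning exponent (negative)
    return first_unit_cost * n ** b

# Cost of the 4th unit at an 80% progress ratio:
# two doublings from unit 1, so 100 * 0.8 * 0.8 = 64.0
cost4 = unit_cost(100.0, 4)
```

The same curve, applied to per-task development effort, is what the refined SPLE cost models would fold into their ROI terms.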

    Maximum Entropy/Optimal Projection (MEOP) control design synthesis: Optimal quantification of the major design tradeoffs

    The underlying philosophy and motivation of the optimal projection/maximum entropy (OP/ME) stochastic modeling and reduced-order control design methodology for high-order systems with parameter uncertainties are discussed. The OP/ME design equations for reduced-order dynamic compensation, including the effect of parameter uncertainties, are reviewed. The application of the methodology to several Large Space Structures (LSS) problems of representative complexity is illustrated.

    Capturing the Benefits of Worker Specialization: Effects of Managerial and Organizational Task Experience

    Learning by doing is a fundamental driver of productivity among knowledge workers. As workers accumulate experience working on certain types of tasks (i.e., they become specialized), they also develop proficiency in executing these tasks. However, previous research suggests that organizations may struggle to leverage the knowledge workers accrue through specialization because specialized workers tend to lose interest and reduce effort during task execution. This study investigates how organizations can improve specialized workers' performance by mitigating the dysfunctional effects of specialization. In particular, we study how additional sources of task experience, from the worker's immediate manager as well as from the organization itself, help manage the relationship between worker specialization and performance. We do so by analyzing a proprietary dataset that comprises 39,162 software service tasks that 310 employees in a Fortune 100 organization executed under the supervision of 92 managers. Results suggest that manager role experience (i.e., the manager's experience supervising workers) is instrumental in mitigating the potential negative effect of worker specialization on performance, measured as task execution time. Such influence, however, is contingent on cases in which organizational task experience (i.e., the organization's experience in executing tasks of the same substantive content as the focal task) is limited. Taken together, our research contributes to multiple streams of research and unearths important insights on how multiple sources of experience beyond the workers themselves can help capture the elusive benefits of worker specialization.

    The Role of Computers in Research and Development at Langley Research Center

    This document is a compilation of presentations given at a workshop on the role of computers in research and development at the Langley Research Center. The objective of the workshop was to inform the Langley Research Center community of the current software systems and software practices in use at Langley. The workshop was organized in 10 sessions: Software Engineering; Software Engineering Standards, Methods, and CASE Tools; Solutions of Equations; Automatic Differentiation; Mosaic and the World Wide Web; Graphics and Image Processing; System Design Integration; CAE Tools; Languages; and Advanced Topics.

    System design and the cost of architectural complexity

    Thesis (Ph.D.), Massachusetts Institute of Technology, Engineering Systems Division, 2013. Includes bibliographical references (p. 159-166). Many modern systems are so large that no one truly understands how they work. It is well known in the engineering community that architectural patterns (including hierarchies, modules, and abstraction layers) should be used in design because they play an important role in controlling complexity. These patterns make a system easier to evolve and keep its separate portions within the bounds of human understanding so that distributed teams can operate independently while jointly fashioning a coherent whole. This study set out to measure the link between architectural complexity (the complexity that arises within a system due to a lack or breakdown of hierarchy or modularity) and a variety of costs incurred by a development organization. A study was conducted within a successful software firm. Measures of architectural complexity were taken from eight versions of its product using techniques recently developed by MacCormack, Baldwin, and Rusnak. Significant cost drivers, including defect density, developer productivity, and staff turnover, were measured as well. The link between cost and complexity was explored using a variety of statistical techniques. Within this research setting, we found that differences in architectural complexity could account for 50% drops in productivity, three-fold increases in defect density, and order-of-magnitude increases in staff turnover. Using the techniques developed in this thesis, it should be possible for firms to estimate the financial cost of their complexity by assigning a monetary value to the decreased productivity, increased defect density, and increased turnover it causes. As a result, it should be possible for firms to more accurately estimate the potential dollar value of refactoring efforts aimed at improving architecture. By Daniel J. Sturtevant.
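The complexity measures referenced above come from the MacCormack, Baldwin, and Rusnak design-structure-matrix technique, whose central quantity is propagation cost: the density of the transitive closure of the dependency graph, i.e., the average fraction of the system potentially affected by a change to one element. A rough sketch, using the common convention that each element can reach itself (the three-component chain is a made-up example, not data from the thesis):

```python
def propagation_cost(deps, n):
    """Propagation cost: density of the transitive closure of a
    dependency graph over n components.

    deps: dict mapping component index -> set of components it
    depends on directly.
    """
    # Start each row of the closure with direct deps plus self.
    reach = [set(deps[i]) | {i} for i in range(n)]
    # Expand rows until a fixed point (simple Warshall-style closure).
    changed = True
    while changed:
        changed = False
        for i in range(n):
            extra = set()
            for j in reach[i]:
                extra |= reach[j]
            if not extra <= reach[i]:
                reach[i] |= extra
                changed = True
    return sum(len(r) for r in reach) / (n * n)

# Chain 0 -> 1 -> 2: closure rows {0,1,2}, {1,2}, {2}, so 6/9
pc = propagation_cost({0: {1}, 1: {2}, 2: set()}, 3)
```

A higher propagation cost signals more indirect coupling, which is the quantity the thesis correlates with defect density, productivity, and turnover.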

    Theoretical and methodological advances in semi-supervised learning and the class-imbalance problem.

    This work focuses on the theoretical and practical generalization of two challenging and well-known situations from the field of machine learning to classification problems in which the assumption of having a single binary class does not hold. Semi-supervised learning is a technique that uses large amounts of unlabeled data to improve the performance of supervised learning when the set of labeled data is very limited. Specifically, this work contributes powerful and computationally efficient methodologies for learning, in a semi-supervised way, classifiers for multiple class variables. The fundamental limits of semi-supervised learning in multi-class problems are also investigated theoretically. The class-imbalance problem appears when the target variables present a probability distribution imbalanced enough to distort the solutions proposed by traditional supervised learning algorithms. In this project, a theoretical framework is proposed to separate the distortion produced by class imbalance from other factors that affect classifier accuracy. This framework is mainly used to recommend classifier evaluation metrics for this situation. Finally, a measure of the degree of class imbalance in a dataset, correlated with the resulting loss of accuracy, is also proposed. Intelligent Systems Group.

    Theoretical and Methodological Advances in Semi-supervised Learning and the Class-Imbalance Problem

    This paper focuses on the theoretical and practical generalization of two known and challenging situations from the field of machine learning to classification problems in which the assumption of having a single binary class is not fulfilled. Semi-supervised learning is a technique that uses large amounts of unlabeled data to improve the performance of supervised learning when the labeled data set is very limited. Specifically, this work contributes powerful and computationally efficient methodologies to learn, in a semi-supervised way, classifiers for multiple class variables. The fundamental limits of semi-supervised learning in multi-class problems are also investigated theoretically. The class-imbalance problem appears when the target variables present a probability distribution imbalanced enough to distort the solutions proposed by traditional supervised learning algorithms. In this project, a theoretical framework is proposed to separate the distortion produced by class imbalance from other factors that affect the accuracy of classifiers. This framework is mainly used to recommend classifier assessment metrics for this situation. Finally, a measure of the degree of class imbalance in a data set, correlated with the loss of accuracy it causes, is also proposed.
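The abstract does not specify the imbalance measure the paper proposes. As a generic illustration only, the widely used imbalance ratio (majority-to-minority class frequency, a hypothetical stand-in rather than the paper's measure) can be computed as:

```python
from collections import Counter

def imbalance_ratio(labels):
    """Majority-to-minority class frequency ratio; 1.0 means the
    data set is perfectly balanced. A simple generic indicator,
    not the measure proposed in the paper above."""
    counts = Counter(labels)
    return max(counts.values()) / min(counts.values())

# Four examples of class 0 and one of class 1: ratio 4 / 1 = 4.0
r = imbalance_ratio([0, 0, 0, 0, 1])
```

Unlike this ratio, the measure described in the abstract is constructed to correlate with the accuracy loss that imbalance causes.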

    Sampling high-dimensional design spaces for analysis and optimization
