    Redocumentation through design pattern recovery: an investigation and an implementation

    In this thesis, two methods are developed as an aid to help users capture valuable design information and knowledge and reuse them: the design pattern recovery (DPR) method and the pattern-based redocumentation (PBR) method. The DPR method matches pattern metrics against the patterns themselves in order to capture valuable design information; patterns are used as containers for storing that information. Two new metrics, the p-value and the s-value, are introduced; they are obtained by analysing product metrics statistically. Once patterns have been detected in a system, the system can be redocumented using these patterns. Some existing XML (Extensible Markup Language) technologies are utilised in order to realise the PBR method. Next, a case study is carried out to validate the soundness and usefulness of the DPR method. Finally, some conclusions drawn from this research are summarised, and further work is suggested for researchers in software engineering.
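
    As a rough illustration of the metric-matching idea behind the DPR method, the sketch below matches a class's product metrics against per-pattern metric signatures. The metric names, signatures, and thresholds here are hypothetical; the thesis's actual p-value and s-value statistics are not reproduced.

```python
# Hypothetical sketch of design pattern recovery (DPR) via metric matching.
# Pattern "signatures" and bounds are illustrative, not the thesis's actual
# p-value/s-value computations.

PATTERN_SIGNATURES = {
    # pattern name -> {metric name: (inclusive lower bound, inclusive upper bound)}
    "Composite": {"num_children": (2, None), "coupling": (1, 5)},
    "Singleton": {"num_children": (0, 0), "num_methods": (1, 5)},
}

def in_range(value, bounds):
    """Check a metric value against a (lower, upper) bound pair; None = unbounded."""
    lo, hi = bounds
    return (lo is None or value >= lo) and (hi is None or value <= hi)

def match_patterns(class_metrics):
    """Return the patterns whose metric signature the class satisfies."""
    matches = []
    for pattern, signature in PATTERN_SIGNATURES.items():
        if all(in_range(class_metrics.get(metric, 0), bounds)
               for metric, bounds in signature.items()):
            matches.append(pattern)
    return matches

print(match_patterns({"num_methods": 3, "num_children": 0, "coupling": 2}))
# -> ['Singleton']
```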

    Proceedings of the Ninth Annual Software Engineering Workshop

    Experiences in the measurement, utilization, and evaluation of software methodologies, models, and tools are discussed. NASA's involvement in ever larger and more complex systems, such as the space station project, motivates its support of software engineering research and the exchange of ideas in forums such as this one. The topics of current SEL research are software error studies, experiments with software development, and software tools.

    Software Engineering Laboratory Series: Proceedings of the Twentieth Annual Software Engineering Workshop

    The Software Engineering Laboratory (SEL) is an organization sponsored by NASA/GSFC and created to investigate the effectiveness of software engineering technologies when applied to the development of application software. The activities, findings, and recommendations of the SEL are recorded in the Software Engineering Laboratory Series, a continuing series of reports that includes this document.

    Proceedings of the 19th Annual Software Engineering Workshop

    The Software Engineering Laboratory (SEL) is an organization sponsored by NASA/GSFC and created to investigate the effectiveness of software engineering technologies when applied to the development of applications software. The goals of the SEL are: (1) to understand the software development process in the GSFC environment; (2) to measure the effects of various methodologies, tools, and models on this process; and (3) to identify and then to apply successful development practices. The activities, findings, and recommendations of the SEL are recorded in the Software Engineering Laboratory Series, a continuing series of reports that includes this document.

    Software engineering risk management: a method, improvement framework, and empirical evaluation

    This dissertation presents a method for software risk management, its improvement framework, and results from its empirical evaluations. More specifically, our objectives were to: (1) develop a comprehensive, theoretically sound, and practical method for software engineering risk management; (2) develop a framework and supporting software tools for the continuous improvement of software engineering risk management and for improving knowledge about risks; and (3) evaluate the method in practice to provide information on its feasibility, effectiveness, advantages, and disadvantages, and to improve it. Although risk management has been considered an important issue in software development and significant contributions to it have been made over the past decade, risk management is rarely applied actively and explicitly in practice. Furthermore, most risk management approaches in software engineering are simplistic and fail to account for the biases common in risk perception. We have developed a method, called Riskit, that complements existing risk management approaches by supporting qualitative and structured analysis of risks through a graphical modeling formalism. The method supports multiple stakeholders' views of risk by considering their potential utility losses. The Riskit method is comprehensive, i.e., it supports all aspects of risk analysis and risk management planning in a software development project. We propose that our method has a sound theoretical foundation, avoids common biases in risk evaluations, and results in a more thorough understanding of the risks than traditional approaches. Associated with the method, we have also developed a risk management improvement framework that supports continuous, systematic improvement of the risk management process. The improvement framework is based on the Quality Improvement Paradigm and is supported by the eRiskit application, which manages risks while simultaneously acting as a risk management repository that captures risk management data for improvement purposes. The eRiskit application also acted as a proof of concept for the correctness of the underlying concepts in the Riskit method. We have validated the feasibility and effectiveness of the Riskit method in a series of empirical studies, designed to provide characterization information and feedback on the method as well as to act as its initial validation. The empirical evaluations showed that the method is feasible in an industrial context and seemed to improve participants' confidence in risk management results. In addition, our research indicates that industry needs sound, systematic, yet cost-effective methods for risk management; a common and customized approach to improve communication within an organization; and support and enforcement of that common approach.
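
    As a loose illustration of the kind of structured, qualitative risk model that Riskit's graphical formalism captures, the sketch below encodes one risk scenario as a factor → event → outcome chain with per-stakeholder utility losses. The element names and values are hypothetical, not taken from the dissertation.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a Riskit-style risk scenario: a qualitative chain
# from contributing factor to event to outcome, with per-stakeholder utility
# losses rather than a single probability-times-impact number.

@dataclass
class RiskScenario:
    factor: str    # condition that makes the event more likely
    event: str     # the risk event itself
    outcome: str   # what happens to the project if the event occurs
    utility_loss: dict = field(default_factory=dict)  # stakeholder -> qualitative loss

scenario = RiskScenario(
    factor="key developer is the only one who knows the legacy module",
    event="key developer leaves mid-project",
    outcome="schedule slips while a replacement ramps up",
    utility_loss={"customer": "high", "project manager": "high", "team": "medium"},
)

# Qualitative review: rank scenarios stakeholder by stakeholder instead of
# collapsing everything into one numeric exposure value.
for stakeholder, loss in scenario.utility_loss.items():
    print(f"{stakeholder}: {loss}")
```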

    Improving enterprise decision-making: the benefits of metric commonality

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2010. Cataloged from PDF version of thesis. Includes bibliographical references (p. 94-97). The objective of this research is to identify a new approach to managing, and making internal program-level decisions from, externally tracked performance metrics. Industry observations indicate the increasing challenge for program managers and internal development teams to identify performance improvement opportunities for products, services, organizations, etc., in an effective and efficient manner based on performance metrics tracked by external customers. Literature on metrics; performance measurement selection, systems, and frameworks; the concept of commonality; and designing across a life cycle is assessed and helps generate a new concept of commonalizing metrics across an operating life cycle to address this issue. It is hypothesized that, despite the uniqueness of each external stakeholder, the tracking of a small set of common performance metrics at different operating life cycle phases across all external stakeholders would result in more accurate decision-making in identifying the most value-added performance improvement opportunities, increased enterprise-level communication, and lower incurred costs. A detailed case study of a technical product with multiple customers, whose external data drives internal program decisions, is presented to address (1) whether metric commonality is plausible, (2) what the expected benefits of implementing this new decision-making tool are, and (3) how these common metrics would change over the course of the product's operating life cycle. A historical data analysis and initial customer interviews established the architecture of the program's current state. Internal development team expert interviews and a second round of customer interviews were performed to identify an optimal set of common metrics the external stakeholders could track for this program. Also identified were the adoption attributes that would need to be considered not only to drive this new decision-making tool through the enterprise but also to address some of the barriers that shaped the program's current state. The triangulation of the historical, developer, and customer data sets produced a list of fewer than a dozen common, value-added metrics for this program, with most of these metrics consistently measured throughout the operating life cycle, supporting the plausibility of this new decision-making tool. Having all stakeholders record the same metrics also improves the efficiency and effectiveness of making the right product improvement decisions and increases communication within the product community. The study also provides insight into the importance of the voice of the customer, the relationship between metrics and strategic planning, the connection to lean thinking, and a new performance measurement framework, and is considered an excellent starting point for future detailed studies in this area. By Alissa H. Friedman. S.M.

    DEVELOPMENT OF A QUALITY MANAGEMENT ASSESSMENT TOOL TO EVALUATE SOFTWARE USING SOFTWARE QUALITY MANAGEMENT BEST PRACTICES

    Organizations are constantly in search of competitive advantages in today's complex global marketplace through improved quality, better affordability, and quicker delivery of products and services. This is especially true of software as a product and a service. Other things being equal, the quality of software will impact consumers, organizations, and nations. The quality and efficiency of the process used to create and deploy software can result in cost and schedule overruns, cancelled projects, loss of revenue, loss of market share, and loss of consumer confidence. Hence, it behooves us to constantly explore quality management strategies to deliver high-quality software quickly at an affordable price. This research identifies software quality management best practices derived from scholarly literature using bibliometric techniques in conjunction with a literature review, synthesizes these best practices into an assessment tool for industrial practitioners, refines the assessment tool based on academic expert review, further refines it based on a pilot test with industry experts, and undertakes industry expert validation. Key elements of this software quality assessment tool include issues dealing with people, organizational environment, process, and technology best practices. Additionally, weights were assigned to people, organizational environment, process, and technology best practices based on their relative importance, to calculate an overall weighted score that organizations can use to evaluate where they stand with respect to their peers in the business of producing quality software. This research indicates that people best practices carry 40% of the overall weight, organizational environment best practices 30%, process best practices 15%, and technology best practices 15%. The assessment tool will be valuable to organizations that seek to take advantage of rapid innovations in pursuing higher software quality; these organizations can use it to implement best practices based on the latest management strategies, leading to improved software quality and other competitive advantages in the global marketplace. This research contributed to the academic literature on software quality by presenting a quality assessment tool based on software quality management best practices, contributed to the body of knowledge on software quality management, and expanded the knowledge base on quality management practices. It also contributed to professional practice by incorporating software quality management best practices into a quality management assessment tool for evaluating software.
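
    A minimal sketch of the weighted scoring described above, using the reported category weights; the per-category sub-scores (on a 0-100 scale) are hypothetical inputs.

```python
# Overall weighted score per the reported category weights:
# people 40%, organizational environment 30%, process 15%, technology 15%.
WEIGHTS = {"people": 0.40, "organizational": 0.30, "process": 0.15, "technology": 0.15}

def overall_score(subscores):
    """Combine per-category assessment scores (0-100) into one weighted score."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # weights must sum to 100%
    return sum(WEIGHTS[cat] * subscores[cat] for cat in WEIGHTS)

# Hypothetical organization: strong people practices, weaker process maturity.
print(overall_score({"people": 85, "organizational": 70, "process": 55, "technology": 60}))
# -> 0.40*85 + 0.30*70 + 0.15*55 + 0.15*60 = 72.25
```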

    How Decision Makers Learn to Choose Organizational Performance Measures

    This study, framed by decision-making theory, program theory, and performance measurement theory, explored the knowledge and experience that enable decision makers to identify organizational performance measures. It used a mixed-method, exploratory sequential research design to discover the experience, knowledge, and skills (EKS) that senior decision makers felt were important in learning to choose organizational performance measures. From the analyzed interviews, a survey was designed to measure the importance of the EKS characteristics. Qualitative analysis identified 55 characteristics of life, work, or educational experience, knowledge, or skill, and 23 characteristics of effective measures. Regression analysis and principal component analysis (PCA) were used to extract six components. One-way ANOVA found no significant differences in these factors between gender groups, age groups, and process complexity levels, but found differences by decision-making tenure. MANOVA found no significant differences along the same dimensions. The limited sample size and high number of variables confounded component extraction; further research with a suitable sample size is required before the findings can be generalized.