
    Framework for continuous improvement of production processes

    This research introduces a new approach using the Six Sigma DMAIC (Define, Measure, Analyse, Improve, Control) methodology. The approach integrates various tools and methods into a single framework consisting of five steps. In the Define step, problems and the main Key Performance Indicators (KPIs) are identified. In the Measure step, the modified Failure Classifier (FC), i.e. DOE-NE-STD-1004-92, is applied, which makes it possible to specify the types of failures for each operation in the production process. Failure Mode and Effect Analysis (FMEA) is also used to weight failures by calculating the Risk Priority Number (RPN) value. To indicate the quality level of the process/product, the Process/Product Sigma Performance Level (PSPL) is calculated from the FMEA results. Using the RPN values from the FMEA, the variability of the process is observed across failures, operations and work centres. In addition, component costs are calculated, which makes it possible to measure the impact of failures on the final product cost. A new method of analysis is introduced, in which the various charts created in the Measure step are compared. The Analyse step facilitates the subsequent Improve and Control steps, where appropriate changes in the manufacturing process are implemented and sustained. The objective of the new framework is to perform continuous improvement of production processes in a way that enables engineers to discover the critical problems that have a financial impact on the final product. The framework provides new ways of monitoring and eliminating failures for the continuous improvement of production processes by focusing on the KPIs that matter for business success. In this paper, the background and key concepts of Six Sigma are described and the proposed Six Sigma DMAIC framework is explained. The implementation of the framework is verified by a computational experiment, followed by a conclusion section.
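    The abstract does not give the exact PSPL formula, so the sketch below only illustrates the two standard quantities the framework builds on: the FMEA Risk Priority Number (severity × occurrence × detection) and a process sigma level derived from defects per million opportunities using the conventional 1.5-sigma shift. The ratings and defect counts are made-up examples, not data from the paper.

```python
# Illustrative sketch only: standard FMEA RPN and a common DPMO-to-sigma
# conversion, used here as stand-ins for the paper's RPN and PSPL measures.
from statistics import NormalDist


def risk_priority_number(severity: int, occurrence: int, detection: int) -> int:
    """Standard FMEA RPN: each factor is typically rated on a 1-10 scale."""
    return severity * occurrence * detection


def sigma_level(defects: int, opportunities: int) -> float:
    """Approximate process sigma level from defects per million opportunities."""
    dpmo = defects / opportunities * 1_000_000
    # Conventional Six Sigma shorthand: normal quantile of the yield plus a 1.5-sigma shift.
    return NormalDist().inv_cdf(1 - dpmo / 1_000_000) + 1.5


if __name__ == "__main__":
    print(risk_priority_number(severity=7, occurrence=4, detection=5))  # 140
    print(round(sigma_level(defects=350, opportunities=100_000), 2))    # ~4.2
```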

    Higher-Order Process Modeling: Product-Lining, Variability Modeling and Beyond

    We present a graphical and dynamic framework for binding and execution of (business) process models. It is tailored to integrate 1) ad hoc processes modeled graphically, 2) third-party services discovered on the (Inter)net, and 3) (dynamically) synthesized process chains that solve situation-specific tasks, with the synthesis taking place not only at design time but also at runtime. Key to our approach is the introduction of type-safe stacked second-order execution contexts that allow for higher-order process modeling. Tamed by our underlying strict service-oriented notion of abstraction, this approach is also tailored to be used by application experts with little technical knowledge: users can select, modify, construct and then pass (component) processes during process execution as if they were data. We illustrate the impact and essence of our framework along a concrete, realistic (business) process modeling scenario: the development of Springer's browser-based Online Conference Service (OCS). The most advanced feature of our new framework allows one to combine online synthesis with the integration of the synthesized process into the running application. This ability leads to a particularly flexible way of implementing self-adaptation, and to a particularly concise and powerful way of achieving variability not only at design time, but also at runtime. Comment: In Proceedings Festschrift for Dave Schmidt, arXiv:1309.455
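    As a rough intuition for what passing (component) processes "as if they were data" means, the following sketch treats a process as a plain function over an execution context and lets a second-order process receive and run other processes chosen at runtime. It is not the authors' framework and is far simpler than their type-safe stacked execution contexts; all names are illustrative.

```python
# Minimal illustration of higher-order processes: (sub)processes are passed
# around like data and bound to the pipeline only at runtime.
from typing import Callable

Process = Callable[[dict], dict]  # a process maps an execution context to an updated context


def review_step(ctx: dict) -> dict:
    # A first-order process: mark the submission as reviewed.
    return {**ctx, "reviewed": True}


def notify_step(ctx: dict) -> dict:
    # Another first-order process: notify only if the review already happened.
    return {**ctx, "notified": ctx.get("reviewed", False)}


def run_pipeline(steps: list[Process], ctx: dict) -> dict:
    """A second-order process: it takes other processes as arguments and executes them."""
    for step in steps:
        ctx = step(ctx)
    return ctx


if __name__ == "__main__":
    # The step list could equally be selected, modified or synthesized at runtime.
    print(run_pipeline([review_step, notify_step], {"paper": "OCS submission"}))
```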

    On the structure of problem variability: From feature diagrams to problem frames

    Requirements for product families are expressed in terms of commonality and variability. This distinction allows early identification of an appropriate software architecture and of opportunities for software reuse. Feature diagrams provide intuitive notations and techniques for representing requirements in product line development. In this paper, we observe that feature diagrams tend to obfuscate three important descriptions: requirements, domain properties and specifications. As a result, feature diagrams do not adequately capture the problem structures that underlie variability, nor do they inform the solution structures of their complexity. With its emphasis on the separation of the three descriptions, the problem frames approach provides a conceptual framework for a more detailed analysis of variability and its structure. With illustrations from an example, we demonstrate how problem frames analysis of variability can augment feature diagrams.
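    To make the commonality/variability vocabulary concrete, here is a toy sketch (not taken from the paper) that encodes a feature diagram as a data structure with mandatory features (commonality) and optional features (variability points), plus a check of whether a selected product configuration is valid.

```python
# Toy feature-model check; the feature names are hypothetical examples.
from dataclasses import dataclass, field


@dataclass
class Feature:
    name: str
    mandatory: bool = True                                      # commonality
    children: list["Feature"] = field(default_factory=list)    # variability points


def valid_product(feature: Feature, selection: set[str]) -> bool:
    """A selection is valid below `feature` if the feature itself is selected and
    every mandatory (or explicitly selected) child is recursively valid."""
    if feature.name not in selection:
        return False
    return all(valid_product(child, selection)
               for child in feature.children
               if child.mandatory or child.name in selection)


if __name__ == "__main__":
    model = Feature("Phone", children=[Feature("Calls"),
                                       Feature("Camera", mandatory=False)])
    print(valid_product(model, {"Phone", "Calls"}))    # True
    print(valid_product(model, {"Phone", "Camera"}))   # False: mandatory Calls missing
```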

    An Empirical Study on Decision making for Quality Requirements

    [Context] Quality requirements are important for product success yet often handled poorly. The problems with scope decisions lead to delayed handling and an unbalanced scope. [Objective] This study characterizes the scope decision process to understand the influencing factors and properties affecting the scope decision of quality requirements. [Method] We studied one company's scope decision process over a period of five years. We analyzed the decision artifacts and interviewed experienced engineers involved in the scope decision process. [Results] Features addressing quality aspects explicitly are a minor part (4.41%) of all features handled. The phase of the product line seems to influence the prevalence and acceptance rate of quality features. Lastly, relying on external stakeholders and upfront analysis seems to lead to long lead times and an insufficient quality requirements scope. [Conclusions] There is a need to make quality more explicit in the scope decision process. We propose a scope decision process at a strategic level and a tactical level: the former to address long-term planning, the latter to cater for a speedy process. Furthermore, we believe it is key to balance stakeholder input with feedback from usage and the market in a more direct way than through a long plan-driven process.

    A make/buy/reuse feature development framework for product line evolution


    Boundary Objects and their Use in Agile Systems Engineering

    Agile methods are increasingly introduced in automotive companies in an attempt to become more efficient and flexible in system development. The adoption of agile practices influences communication between stakeholders, but also makes companies rethink the management of artifacts and documentation such as requirements, safety compliance documents, and architecture models. Practitioners aim to reduce irrelevant documentation, but face a lack of guidance in determining which artifacts are needed and how they should be managed. This paper presents artifacts, challenges, guidelines, and practices for the continuous management of systems engineering artifacts in automotive, based on a theoretical and empirical understanding of the topic. In collaboration with 53 practitioners from six automotive companies, we conducted a design-science study involving interviews, a questionnaire, focus groups, and practical data analysis of a systems engineering tool. The guidelines suggest distinguishing between artifacts that are shared among different actors in a company (boundary objects) and those that are used within a team (locally relevant artifacts). We propose an analysis approach to identify boundary objects and three practices to manage systems engineering artifacts in industry.

    Towards guidelines for building a business case and gathering evidence of software reference architectures in industry

    Background: Software reference architectures are becoming widely adopted by organizations that need to support the design and maintenance of software applications of a shared domain. For organizations that plan to adopt this architecture-centric approach, it becomes fundamental to know the return on investment and to understand how software reference architectures are designed, maintained, and used. Unfortunately, there is little evidence-based support to help organizations with these challenges. Methods: We have conducted action research in an industry-academia collaboration between the GESSI research group and everis, a multinational IT consulting firm based in Spain. Results: The results from this collaboration are being packaged in order to create guidelines that could be used in contexts similar to that of everis. The main result of this paper is the construction of empirically grounded guidelines that support organizations in deciding on the adoption of software reference architectures and in gathering evidence to improve RA-related practices. Conclusions: The created guidelines could be used by other organizations outside of our industry-academia collaboration. With this goal in mind, we describe the guidelines in detail for their use.