Managing design variety, process variety and engineering change: a case study of two capital good firms
Many capital good firms deliver products that are not strictly one-off, but instead share a certain degree of similarity with other deliveries. In delivering the product, they aim to balance stability and variety in their product design and processes. The issue of engineering change plays an important role in how they manage to do so. Our aim is to gain more understanding of how capital good firms manage engineering change, design variety and process variety, and of the role of the product delivery strategies they use. Product delivery strategies are defined as the type of engineering work that is done independent of an order and the specification freedom the customer has in the remaining part of the design. Based on within-case and cross-case analysis of two capital good firms, several mechanisms for managing engineering change, design variety and process variety are distilled. It was found that there exist different ways of (1) managing generic design information, (2) isolating large engineering changes, (3) managing process variety, and (4) designing and executing engineering change processes. Together with different product delivery strategies, these mechanisms can be placed within an archetypes framework of engineering change management. On one side of the spectrum, capital good firms operate according to open product delivery strategies, have some practices in place to investigate design reuse potential, isolate discontinuous engineering changes into the first deliveries of the product, employ ‘probe and learn’ process management principles so that evolving insights can be accurately executed, and have informal engineering change processes. On the other side of the spectrum, capital good firms operate according to a closed product delivery strategy, focus on prevention of engineering changes based on design standards, need no isolation mechanisms for discontinuous engineering changes, have formal process management practices in place, and make use of closed and formal engineering change procedures. The framework should help managers to (1) analyze existing configurations of product delivery strategies, product and process designs and engineering change management, and (2) reconfigure any of these elements according to a ‘misfit’ derived from the framework. Since this is one of the few in-depth empirical studies of engineering change management in the capital good sector, our work adds to the understanding of the various ways in which engineering change can be dealt with.
A characterization of attribute evaluation in passes
This paper describes the evaluation of semantic attributes in a bounded number of passes from left-to-right and/or from right-to-left over the derivation tree of a program. Evaluation strategies where different instances of the same attribute in any derivation tree are restricted to be evaluated in one pass, with the same pass number for every derivation tree, are referred to as simple multi-pass, whereas the unrestricted pass-oriented strategies are referred to as pure multi-pass.

A graph-theoretic characterization is given, showing in which cases an attribute grammar meets the simple multi-pass requirements and what the minimal pass numbers of its attributes are for a given sequence of pass directions. For the special cases where only left-to-right passes are made or where left-to-right and right-to-left passes strictly alternate, new algorithms are developed that associate minimal pass numbers with attributes and, in case of failure, indicate the attributes that cause the rejection of the grammar. Mixing a simple multi-pass strategy with other evaluation strategies, when the grammar is not simple multi-pass, is also discussed.
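To make the pass-number idea concrete, the following is a minimal Python sketch (not the paper's algorithm) that assigns left-to-right pass numbers from a coarse, attribute-level dependency graph; the attribute names and the `same_pass_ok` labelling of edges are illustrative assumptions.

```python
# Hedged sketch: assign minimal left-to-right pass numbers to attributes,
# assuming a coarse attribute-level dependency graph in which each edge is
# labelled True if the dependency can be satisfied within the same
# left-to-right pass and False if it forces a later pass.
# Assumes the grammar is simple multi-pass (no cycle keeps forcing extra passes).

def minimal_pass_numbers(deps):
    """deps: dict attr -> list of (source_attr, same_pass_ok: bool)."""
    passes = {a: 1 for a in deps}
    changed = True
    while changed:
        changed = False
        for attr, sources in deps.items():
            for src, same_pass_ok in sources:
                required = passes[src] if same_pass_ok else passes[src] + 1
                if required > passes[attr]:
                    passes[attr] = required
                    changed = True
    return passes

# Illustrative example: 'env' flows left-to-right within one pass, but
# 'code' needs 'type', which only becomes available after the pass that
# computes it.
deps = {
    "env":  [],
    "type": [("env", True)],
    "code": [("type", False)],
}
print(minimal_pass_numbers(deps))  # {'env': 1, 'type': 1, 'code': 2}
```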
Concurrent incremental attribute evaluation
The design of a concurrent incremental combined static/dynamic attribute evaluator is presented. The static part is an incremental version of the ordered attribute evaluation scheme. The dynamic part is an incremental version of the dynamic evaluation scheme. To remove the restriction that every transformation of an attributed syntax tree should immediately be followed by a reevaluation of the tree, criteria have been formulated which permit a delay in calling the reevaluator. These criteria allow multiple asynchronous tree transformations and multiple asynchronous reevaluations. Transformation and reevaluation processes are distributed over regions of the tree. Each region is either in its transformation phase or in its reevaluation phase, and different regions can be in different phases at the same time.
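As a rough illustration of the region/phase discipline described above, here is a hypothetical Python sketch; the `Region` class and its method names are invented for illustration and are not the evaluator presented in the paper.

```python
# Schematic sketch of the region/phase idea (hypothetical names): each
# region of the attributed tree is either accepting transformations or
# being re-evaluated, and different regions may be in different phases
# concurrently.
import threading

class Region:
    def __init__(self, name):
        self.name = name
        self.phase = "transformation"       # or "reevaluation"
        self.pending = []                   # queued tree transformations
        self.lock = threading.Lock()

    def apply_transformation(self, transform):
        with self.lock:
            assert self.phase == "transformation"
            self.pending.append(transform)  # reevaluation is delayed

    def reevaluate(self, evaluator):
        with self.lock:
            self.phase = "reevaluation"
            for transform in self.pending:  # flush the delayed transformations
                evaluator(transform)
            self.pending.clear()
            self.phase = "transformation"
```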
One-pass transformations of attributed program trees
The classical attribute grammar framework can be extended by allowing the specification of tree transformation rules. A tree transformation rule consists of an input template, an output template, enabling conditions which are predicates on attribute instances of the input template, and re-evaluation rules which define the values of attribute instances of the output template. A tree transformation may invalidate attribute instances which are needed for additional transformations.

In this paper we investigate whether consecutive tree transformations and attribute re-evaluations are safely possible during a single pass over the derivation tree. This check is made at compiler generation time rather than at compilation time.

A graph-theoretic characterization of attribute dependencies is given, showing in which cases the recomputation of attribute instances can be done in parallel with tree transformations.
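The rule structure described above can be sketched as a small data type. The following Python is a hedged illustration with invented names (`Node`, `TransformationRule`, `try_apply`); the paper's formalism and its compile-time safety check are not reproduced here.

```python
# Hedged sketch of the rule shape described above: a transformation rule
# bundles an input template, enabling conditions over input attributes, and
# an output template whose attributes are set by re-evaluation rules.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Node:
    label: str
    attrs: dict = field(default_factory=dict)
    children: list = field(default_factory=list)

@dataclass
class TransformationRule:
    matches: Callable[[Node], bool]         # input template match
    enabled: Callable[[Node], bool]         # predicates on input attribute instances
    build_output: Callable[[Node], Node]    # output template + re-evaluation rules

def try_apply(rule: TransformationRule, node: Node) -> Node:
    """Apply the rule if its template matches and its enabling conditions hold."""
    if rule.matches(node) and rule.enabled(node):
        return rule.build_output(node)
    return node

# Illustrative rule: fold an addition node whose operands are known constants.
fold_add = TransformationRule(
    matches=lambda n: n.label == "add",
    enabled=lambda n: all(c.attrs.get("const") is not None for c in n.children),
    build_output=lambda n: Node("const",
                                {"const": sum(c.attrs["const"] for c in n.children)}),
)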
Code Generation = A* + BURS
A system called BURS, which is based on term rewrite systems, is combined with the A* search algorithm to produce a code generator that generates optimal code. The theory underlying BURS is re-developed, formalised and explained in this work. The search algorithm uses a cost heuristic derived from the term rewrite system to direct the search. The advantage of using a search algorithm is that we need to compute only those costs that may be part of an optimal rewrite sequence.
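For readers unfamiliar with the combination, the sketch below shows the generic A* loop over rewrite sequences, with the heuristic passed in by the caller as a stand-in for the cost bound the paper derives from the term rewrite system. It assumes terms are hashable and the heuristic is admissible; the function and parameter names are illustrative, not the paper's API.

```python
# Minimal, hypothetical sketch of A* over rewrite sequences: states are
# terms, edges are rule applications with costs, and states are expanded in
# order of (cost so far + heuristic estimate of remaining cost).
import heapq
import itertools

def astar_rewrite(start, rewrites, is_goal, heuristic):
    """rewrites(term) yields (next_term, rule_cost); returns (cost, sequence) or None."""
    counter = itertools.count()              # tie-breaker so the heap never compares terms
    frontier = [(heuristic(start), next(counter), 0, start, [])]
    best = {start: 0}                        # cheapest known cost per term
    while frontier:
        _, _, cost, term, seq = heapq.heappop(frontier)
        if is_goal(term):
            return cost, seq                 # optimal if the heuristic is admissible
        for nxt, rule_cost in rewrites(term):
            new_cost = cost + rule_cost
            if new_cost < best.get(nxt, float("inf")):
                best[nxt] = new_cost
                heapq.heappush(frontier,
                               (new_cost + heuristic(nxt), next(counter),
                                new_cost, nxt, seq + [nxt]))
    return None
```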
Tailoring the Engineering Design Process Through Data and Process Mining
Engineering changes (ECs) are new product development activities addressing external or internal challenges, such as market demand, governmental regulations, and competitive reasons. The corresponding EC processes, although perceived as standard, can be very complex and inefficient. There appear to be significant differences between the officially documented process and the process as it is actually executed. To better understand this complexity, we propose a data-driven approach based on advanced text analytics and process and data mining techniques. Our approach sets the first steps toward an automatic analysis, extracting detailed events from an unstructured event log, which is necessary for an in-depth understanding of the EC process. The results show that the predictive accuracy associated with certain EC types is high, which supports the applicability of the method. The contribution of this article is threefold: 1) a detailed model representation of the actual EC process is developed, revealing problematic process steps (such as bottleneck departments); 2) homogeneous, complexity-based EC types are determined (ranging from “standard” to “complex” processes); and 3) process characteristics serving as predictors for EC types are identified (e.g., the sequence of initial process steps determines a “complex” process). The proposed approach facilitates process and product innovation, and efficient design process management in future projects.
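One elementary process-mining step underlying approaches like this is building a directly-follows graph from an event log. The Python sketch below illustrates only that step, with an invented toy EC log; it does not reproduce the paper's text-analytics, clustering, or prediction pipeline.

```python
# Hedged illustration: derive a directly-follows graph from an event log of
# (case id, activity, timestamp) records to expose frequent hand-offs and
# candidate bottleneck steps.
from collections import Counter, defaultdict

def directly_follows(event_log):
    """event_log: iterable of (case_id, activity, timestamp) tuples."""
    traces = defaultdict(list)
    for case_id, activity, ts in event_log:
        traces[case_id].append((ts, activity))
    edges = Counter()
    for events in traces.values():
        activities = [a for _, a in sorted(events)]      # order by timestamp
        edges.update(zip(activities, activities[1:]))    # count consecutive pairs
    return edges

# Toy, invented EC log: EC-2 goes through a rework loop before approval.
log = [
    ("EC-1", "submit", 1), ("EC-1", "review", 2), ("EC-1", "approve", 3),
    ("EC-2", "submit", 1), ("EC-2", "review", 2), ("EC-2", "rework", 3),
    ("EC-2", "review", 4), ("EC-2", "approve", 5),
]
print(directly_follows(log).most_common(3))
```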
The impact of design debugging on new product development speed: the significance of improvisational and trial-and-error learning
Investigating the antecedents of cycle time reduction is a continuing concern within new product development (NPD) research (Chen et al., 2010; Cankurtaran et al., 2013). A number of researchers have reported the effects of team learning on NPD speed (Dayan and Di Benedetto, 2008; Cankurtaran et al., 2013), while others relate learning to overall team performance (Magni et al., 2013). However, few studies have systematically researched the effects of improvisation and trial-and-error learning on NPD cycle time. The aim of this study is to shine new light on NPD learning and cycle time reduction through an examination of the effects of improvisation and trial-and-error. To that end, this study conceptualizes and tests the settings wherein improvisation and trial-and-error might contribute to, or hinder, NPD cycle time reduction. The authors develop hypotheses on the effects of improvisation and trial-and-error learning on NPD cycle time. Based on a review of the literature and in-depth interviews, measures are defined to approximate improvisation and trial-and-error using secondary data from over 200 projects with absolute, objective measures of cycle time. In addition, thousands of archival records of debugging incidents and engineering changes are used to approximate the impact of improvisation and trial-and-error. To estimate their impact on cycle time, a learning curve model is developed (Argote, 2012), which offers an effective way of identifying the conditions that drive cycle time learning and performance (Wiersma, 2007). Based on this model the hypotheses are tested. The findings suggest that improvisation and trial-and-error contribute to cycle time learning in the prototyping and pilot phases only, and that they hinder learning during later stages of the NPD process. These findings contribute to the extant literature by providing an important new organizational learning perspective on NPD speed. The study contributes to practice by relating firms’ improvisation and trial-and-error practices to learning and speed performance.
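As an illustration of the general class of model invoked here (a learning curve in the sense of Argote, 2012), the sketch below fits a simple power-law curve to cycle times by least squares in log space; it is not the authors' specification, which additionally models improvisation and trial-and-error effects.

```python
# Illustrative sketch: fit t_n = a * n**(-b), a classic power-law learning
# curve, by ordinary least squares on log-transformed data.
import math

def fit_learning_curve(cycle_times):
    """cycle_times: cycle times for consecutive projects 1..N; returns (a, b)."""
    xs = [math.log(i + 1) for i in range(len(cycle_times))]
    ys = [math.log(t) for t in cycle_times]
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
            / sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    a, b = math.exp(intercept), -slope       # learning rate b: higher = faster learning
    return a, b

# Invented example: cycle times shrinking with cumulative experience.
print(fit_learning_curve([100, 85, 78, 72, 69, 66]))
```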
Quantifying the impact of product changes on manufacturing performance
Every adjustment to a physical product disrupts the manufacturing organization, requiring adaptation in tools and processes. The resulting disruption to manufacturing performance is poorly understood. We use design structure matrices and a complexity metric to quantify the complexity and change of product architecture in an explorative, small-scale experiment. Based on the results we develop two propositions to guide further research into the factors that affect the shape of consecutive learning curves upon product changes. The first proposition is that after product change, the complexity of the novel part of product architecture is responsible for the initial decrease in manufacturing performance. Second, we propose that the asymptote of a learning curve and the complexity of a product’s architecture are inversely related.
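To illustrate the kind of quantification involved, the following toy Python sketch computes a naive DSM-based complexity score and a simple measure of architectural change; the metric used in the study is not specified here, so these formulas are assumptions for illustration only.

```python
# Hedged toy sketch (not the paper's metric): complexity as component count
# plus off-diagonal interface count, and architectural change as the share
# of DSM cells that differ between the old and new product.
def dsm_complexity(dsm):
    """dsm: square 0/1 matrix; dsm[i][j] = 1 if component i depends on j."""
    n = len(dsm)
    interfaces = sum(dsm[i][j] for i in range(n) for j in range(n) if i != j)
    return n + interfaces

def dsm_change(old, new):
    """Fraction of cells that differ between two equally sized DSMs."""
    n = len(old)
    diffs = sum(old[i][j] != new[i][j] for i in range(n) for j in range(n))
    return diffs / (n * n)

# Invented example: a product change adds one dependency pair between
# components 0 and 2.
old = [[0, 1, 0],
       [1, 0, 1],
       [0, 1, 0]]
new = [[0, 1, 1],
       [1, 0, 1],
       [1, 1, 0]]
print(dsm_complexity(new), dsm_change(old, new))   # 3 components + 6 interfaces, 2/9 cells changed
```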