    Managing design variety, process variety and engineering change: a case study of two capital good firms

    Many capital good firms deliver products that are not strictly one-off, but instead share a certain degree of similarity with other deliveries. In delivering the product, they aim to balance stability and variety in their product design and processes. The issue of engineering change plays an important role in how they manage to do so. Our aim is to gain a better understanding of how capital good firms manage engineering change, design variety and process variety, and of the role of the product delivery strategies they use. Product delivery strategies are defined as the type of engineering work that is done independent of an order and the specification freedom the customer has in the remaining part of the design. Based on within-case and cross-case analysis of two capital good firms, several mechanisms for managing engineering change, design variety and process variety are distilled. It was found that there exist different ways of (1) managing generic design information, (2) isolating large engineering changes, (3) managing process variety, and (4) designing and executing engineering change processes. Together with different product delivery strategies, these mechanisms can be placed within an archetypes framework of engineering change management. On one side of the spectrum, capital good firms operate according to open product delivery strategies, have some practices in place to investigate design reuse potential, isolate discontinuous engineering changes into the first deliveries of the product, employ ‘probe and learn’ process management principles so that evolving insights can be accurately executed, and have informal engineering change processes. On the other side of the spectrum, capital good firms operate according to a closed product delivery strategy, focus on preventing engineering changes based on design standards, need no isolation mechanisms for discontinuous engineering changes, have formal process management practices in place, and make use of closed and formal engineering change procedures. The framework should help managers to (1) analyze existing configurations of product delivery strategies, product and process designs and engineering change management, and (2) reconfigure any of these elements according to a ‘misfit’ derived from the framework. Since this is one of the few in-depth empirical studies of engineering change management in the capital good sector, our work adds to the understanding of the various ways in which engineering change can be dealt with.

    Disposition Choices Based on Energy Footprints instead of Recovery Quota

    This paper addresses the impact of disposition choices on the energy use of closed-loop supply chains. From a life cycle perspective, energy used in the forward chain and locked up in the product is recaptured through recovery. High-quality recovery replaces virgin production and thereby saves energy. This so-called substitution effect is often ignored. Governments worldwide implement Extended Producer Responsibility (EPR). Policies are based on recovery quotas and are not effective from an energy point of view. This in turn leads to unnecessary emissions of, among others, CO2. This research evaluates current EPR policies and presents six policy alternatives from an energy standpoint. The Pareto-frontier model used is generic and can be applied to other closed-loop supply chains under EPR, exploiting the substitution effect. The measures modeled are applied to five WEEE cases. We discuss the results, the pros and cons of the various alternatives, and complementary measures that might be taken.
    Keywords: extended producer responsibility; disposition; energy perspective; substitution effect; government policies; Pareto efficiency
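    The paper's own Pareto-frontier model is not reproduced here, but the screening idea it relies on can be sketched briefly (a minimal Python illustration, with hypothetical policy names and numbers, assuming both energy savings and the recovery rate are objectives to maximize):

        def pareto_frontier(options):
            """Return the options not dominated by any other option.

            Each option is (name, energy_saved, recovery_rate); both objectives
            are maximized. An option is dominated if some other option is at
            least as good on both objectives and strictly better on one.
            """
            frontier = []
            for name, e, r in options:
                dominated = any(
                    e2 >= e and r2 >= r and (e2 > e or r2 > r)
                    for _, e2, r2 in options
                )
                if not dominated:
                    frontier.append((name, e, r))
            return frontier

        # Hypothetical disposition options: (name, energy saved per tonne, recovery rate)
        policies = [
            ("recycle", 5.0, 0.80),
            ("refurbish", 9.0, 0.55),
            ("remanufacture", 12.0, 0.40),
            ("incinerate with energy recovery", 2.0, 0.85),
            ("landfill", 0.0, 0.05),
        ]
        print(pareto_frontier(policies))  # "landfill" is dominated and drops out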

    The Bayesian sampler: Generic Bayesian inference causes incoherence in human probability judgments

    Human probability judgments are systematically biased, in apparent tension with Bayesian models of cognition. But perhaps the brain does not represent probabilities explicitly, but instead approximates probabilistic calculations through a process of sampling, as used in computational probabilistic models in statistics. Naïve probability estimates can be obtained by calculating the relative frequency of an event within a sample, but these estimates tend to be extreme when the sample size is small. We propose instead that people use a generic prior to improve the accuracy of their probability estimates based on samples, and we call this model the Bayesian sampler. The Bayesian sampler trades off the coherence of probabilistic judgments for improved accuracy, and provides a single framework for explaining phenomena associated with diverse biases and heuristics such as conservatism and the conjunction fallacy. The approach turns out to provide a rational reinterpretation of “noise” in an important recent model of probability judgment, the probability theory plus noise model (Costello & Watts, 2014, 2016a, 2017; Costello & Watts, 2019; Costello, Watts, & Fisher, 2018), making equivalent average predictions for simple events, conjunctions, and disjunctions. The Bayesian sampler does, however, make distinct predictions for conditional probabilities and for distributions of probability estimates. We show in two new experiments that this model better captures these mean judgments both qualitatively and quantitatively; which model best fits individual distributions of responses depends on the assumed size of the cognitive sample.
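    A minimal sketch of the sampling idea described above (the symmetric Beta(beta, beta) prior, sample sizes and function names are illustrative assumptions, not taken from the paper): averaged over many trials, the raw relative frequency is unbiased but highly variable for small samples, whereas the prior-smoothed estimate is pulled toward 0.5, the kind of regression that produces conservatism.

        import random

        def naive_estimate(p_true, n_samples, rng):
            """Relative frequency of the event within a small mental sample."""
            successes = sum(rng.random() < p_true for _ in range(n_samples))
            return successes / n_samples

        def bayesian_sampler_estimate(p_true, n_samples, beta, rng):
            """Posterior-mean estimate under a symmetric Beta(beta, beta) prior:
            (successes + beta) / (n_samples + 2 * beta)."""
            successes = sum(rng.random() < p_true for _ in range(n_samples))
            return (successes + beta) / (n_samples + 2 * beta)

        rng = random.Random(0)
        trials, p, n, beta = 10_000, 0.1, 5, 1.0
        naive = sum(naive_estimate(p, n, rng) for _ in range(trials)) / trials
        smoothed = sum(bayesian_sampler_estimate(p, n, beta, rng) for _ in range(trials)) / trials
        print(f"true={p}, mean naive={naive:.3f}, mean smoothed={smoothed:.3f}")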

    Taking advantage of hybrid systems for sparse direct solvers via task-based runtimes

    The ongoing hardware evolution exhibits an escalation in the number, as well as in the heterogeneity, of computing resources. The pressure to maintain reasonable levels of performance and portability forces application developers to leave traditional programming paradigms and explore alternative solutions. PaStiX is a parallel sparse direct solver based on a dynamic scheduler for modern hierarchical manycore architectures. In this paper, we study the benefits and limits of replacing the highly specialized internal scheduler of the PaStiX solver with two generic runtime systems: PaRSEC and StarPU. The task graph of the factorization step is made available to the two runtimes, giving them the opportunity to process and optimize its traversal in order to maximize the algorithm's efficiency for the targeted hardware platform. A comparative study of the performance of the PaStiX solver on top of its native internal scheduler, PaRSEC, and StarPU, on different execution environments, is performed. The analysis highlights that these generic task-based runtimes achieve results comparable to the application-optimized embedded scheduler on homogeneous platforms. Furthermore, they are able to significantly speed up the solver on heterogeneous environments by taking advantage of the accelerators while hiding the complexity of their efficient manipulation from the programmer.
    Comment: Heterogeneity in Computing Workshop (2014)
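    Neither the PaRSEC nor the StarPU interface is shown here; the toy Python sketch below only illustrates the general principle the abstract builds on, namely that once the factorization is expressed as a task graph, the runtime may execute any task whose predecessors have completed and is therefore free to choose a traversal order suited to the hardware (the task names are hypothetical):

        def run_task_graph(tasks, deps):
            """tasks: {name: callable}; deps: {name: set of prerequisite names}."""
            done, remaining = set(), set(tasks)
            while remaining:
                # Any task whose dependencies are all satisfied is ready to run.
                ready = [t for t in remaining if deps.get(t, set()) <= done]
                if not ready:
                    raise ValueError("cycle in task graph")
                for t in ready:  # a real runtime would dispatch these in parallel
                    tasks[t]()
                    done.add(t)
                    remaining.remove(t)

        # Hypothetical fragment of a factorization DAG: a panel factorization,
        # the trailing updates that depend on it, then the next panel.
        deps = {"update_1": {"panel_0"}, "update_2": {"panel_0"},
                "panel_1": {"update_1", "update_2"}}
        tasks = {name: (lambda n=name: print("executing", n))
                 for name in ("panel_0", "update_1", "update_2", "panel_1")}
        run_task_graph(tasks, deps)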

    What Does Aspect-Oriented Programming Mean for Functional Programmers?

    Aspect-Oriented Programming (AOP) aims at modularising crosscutting concerns that show up in software. The success of AOP has been almost viral, and nearly all areas in Software Engineering and Programming Languages have become "infected" by the AOP bug in one way or another. Interestingly, the functional programming community (and, in particular, the pure functional programming community) seems to be resistant to the pandemic. The goal of this paper is to debate the possible causes of the functional programming community's resistance and to raise awareness and interest by showcasing the benefits that could be gained from having a functional AOP language. At the same time, we identify the main challenges and explore the possible design space.
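    To make the notion of a crosscutting concern concrete, the sketch below (a plain Python illustration, not one of the designs discussed in the paper) attaches a logging concern to existing functions through a higher-order wrapper, much as a functional programmer might already do with combinators:

        import functools

        def traced(func):
            """Aspect-like 'advice': log every call and its result without
            touching the function body, so the logging concern stays in one place."""
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                result = func(*args, **kwargs)
                print(f"{func.__name__}{args} -> {result}")  # positional args only, for brevity
                return result
            return wrapper

        @traced
        def add(x, y):
            return x + y

        @traced
        def scale(x, factor):
            return x * factor

        add(1, 2)    # prints: add(1, 2) -> 3
        scale(3, 4)  # prints: scale(3, 4) -> 12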