400 research outputs found

    Sales forecasting in times of crises at DSM

    Get PDF
    A system dynamics model has been developed in order to predict demand development throughout the supply chain in times of crisis. The insights provided by this type of modeling enable managers to make the right decisions and to gain competitive advantage from the crisis. Using the system dynamics model described in this best practice, DSM was able to predict its sales with astonishing accuracy and emerged stronger from the crisis.
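
    The best practice itself does not reproduce DSM's model, so the following is only a minimal, hypothetical stock-and-flow sketch (in Python) of the kind of dynamics such a model captures: a smoothed demand forecast plus an inventory correction drives ordering, so upstream orders over- and undershoot a sudden demand drop (the bullwhip effect). All parameter values are illustrative assumptions.

        # Minimal, hypothetical sketch of a system dynamics demand model (not
        # DSM's actual model): a stock-and-flow simulation in which a smoothed
        # demand forecast plus an inventory correction drives ordering.

        def simulate(periods=52, crisis_start=20):
            demand = [100.0] * crisis_start + [60.0] * (periods - crisis_start)
            inventory, target, forecast = 200.0, 200.0, 100.0
            adjustment_time, alpha = 4.0, 0.2  # assumed policy parameters
            orders = []
            for t in range(periods):
                # Order = forecast demand + correction toward the inventory target.
                order = max(0.0, forecast + (target - inventory) / adjustment_time)
                orders.append(order)
                inventory += order - demand[t]
                forecast += alpha * (demand[t] - forecast)  # exponential smoothing
            return orders

        orders = simulate()
        print(round(min(orders), 1))  # orders dip below the new demand level of 60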

    Carbon emissions mapping at Unilever Europe : implementing a structural method to map and reduce carbon emissions

    Get PDF
    In 2007, the CEO of Unilever committed to a 25% reduction of CO2 emissions from global manufacturing operations by 2012, and Unilever Europe Logistics aligned itself with this target. To achieve this objective, the management of the European logistics department decided to build a carbon emission estimation methodology to quantify the CO2 emitted from the sourcing units to the distribution centers. In cooperation with the Technical University of Eindhoven, a new methodology was developed that allows transport-buying companies to estimate the CO2 emissions of transport (Özsalih, 2009). A major advantage of developing your own carbon tool is that you can ensure it fits current working procedures, routines, information streams and data availability. This best practice describes how Unilever Europe managed this. The developed methodology supports Unilever Europe in achieving its ambitious sustainability targets.
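
    The published methodology (Özsalih, 2009) is not reproduced in this summary; the sketch below shows only the common activity-based pattern such tools follow: tonne-kilometres multiplied by a mode-specific emission factor. The factors and the function name are illustrative assumptions, not values from the Unilever tool.

        # Minimal sketch of an activity-based transport CO2 estimate (a common
        # approach; not necessarily the exact Unilever/TU Eindhoven methodology).
        # Emission factors below are illustrative placeholders, not official values.

        EMISSION_FACTOR_G_PER_TONNE_KM = {  # grams CO2 per tonne-kilometre (assumed)
            "road": 62.0,
            "rail": 22.0,
            "ship": 8.0,
        }

        def shipment_co2_kg(weight_tonnes: float, distance_km: float, mode: str) -> float:
            """CO2 in kg for one shipment: tonne-km times a mode-specific factor."""
            factor = EMISSION_FACTOR_G_PER_TONNE_KM[mode]
            return weight_tonnes * distance_km * factor / 1000.0

        # Example: 20 tonnes moved 800 km by road.
        print(f"{shipment_co2_kg(20, 800, 'road'):.1f} kg CO2")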

    Spare parts planning at ASML

    Get PDF
    Key to the successful provisioning of high-tech spare parts is the use of advanced planning methods. This best practice discusses how ASML organized its global spare parts network using a custom-made planning method. In doing so, ASML has further improved its service levels while simultaneously reducing total costs. Instead of being an organizational burden, providing service is now a distinguishing competitive factor.
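
    ASML's custom multi-echelon method is not detailed in this summary; as a point of reference, here is a minimal textbook base-stock sketch for a single item at a single location under Poisson demand, choosing the smallest stock level that meets a target no-stockout probability over the replenishment lead time. All parameters are illustrative assumptions.

        # Minimal sketch of single-item, single-location spare parts stocking
        # under Poisson demand (a textbook base-stock model; ASML's custom
        # multi-echelon method is more involved).
        from scipy.stats import poisson

        def base_stock_level(demand_rate, lead_time, target_fill_rate):
            """Smallest base-stock level S whose probability of covering all
            demand during the replenishment lead time meets the target."""
            mean_lead_time_demand = demand_rate * lead_time
            s = 0
            while poisson.cdf(s, mean_lead_time_demand) < target_fill_rate:
                s += 1
            return s

        # Example: 2 failures/month, 1.5-month lead time, 95% target.
        print(base_stock_level(demand_rate=2.0, lead_time=1.5, target_fill_rate=0.95))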

    Team success in large organizations

    Get PDF

    Cluster-based Kriging approximation algorithms for complexity reduction

    Get PDF
    Kriging, or Gaussian process regression, is applied in many fields as a non-linear regression model, as well as a surrogate model in the field of evolutionary computation. However, the computational and space complexity of Kriging, which are cubic and quadratic in the number of data points respectively, become a major bottleneck as ever more data become available. In this paper, we propose a general methodology for complexity reduction, called cluster Kriging, in which the whole data set is partitioned into smaller clusters and multiple Kriging models are built on top of them. In addition, four Kriging approximation algorithms are proposed as candidate algorithms within the new framework. Each of these algorithms can be applied to much larger data sets while maintaining the advantages and power of Kriging. The proposed algorithms are explained in detail and compared empirically against a broad set of existing state-of-the-art Kriging approximation methods on a well-defined testing framework. According to the empirical study, the proposed algorithms consistently outperform the existing algorithms. Moreover, some practical suggestions are provided for using the proposed algorithms.
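
    As a rough illustration of the framework's core idea (not one of the paper's four algorithms), the sketch below partitions the data with k-means, fits one Gaussian process per cluster, and predicts with the model of the nearest cluster; the paper's variants combine the cluster models in more refined ways.

        # Minimal sketch of the cluster-Kriging idea: partition the data, fit one
        # Gaussian process per cluster, and predict with the model of the nearest
        # cluster. This nearest-cluster variant is only the simplest illustration.
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.gaussian_process import GaussianProcessRegressor

        rng = np.random.default_rng(0)
        X = rng.uniform(-5, 5, size=(2000, 1))
        y = np.sin(X).ravel() + 0.1 * rng.standard_normal(2000)

        k = 8  # number of clusters (tuning parameter)
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)

        # One O((n/k)^3) GP per cluster instead of one O(n^3) GP on all data.
        models = [
            GaussianProcessRegressor().fit(X[km.labels_ == j], y[km.labels_ == j])
            for j in range(k)
        ]

        X_test = np.linspace(-5, 5, 9).reshape(-1, 1)
        nearest = km.predict(X_test)  # assign each query to its nearest centroid
        y_pred = np.array([models[j].predict(x.reshape(1, -1))[0]
                           for j, x in zip(nearest, X_test)])
        print(np.round(y_pred, 2))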

    Stochastic excitation of acoustic modes in stars

    Full text link
    For more than ten years, solar-like oscillations have been detected and their frequencies measured for a growing number of stars with various characteristics (e.g. different evolutionary stages, effective temperatures, gravities, metal abundances, ...). The excitation of such oscillations is attributed to turbulent convection and takes place in the uppermost part of the convective envelope. Since the pioneering work of Goldreich & Keeley (1977), more sophisticated theoretical models of stochastic excitation have been developed, which differ from each other both in the way turbulent convection is modeled and in the assumed sources of excitation. We review here these different models and their underlying approximations and assumptions. We emphasize how the computed mode excitation rates depend crucially on the way turbulent convection is described, but also on the stratification and the metal abundance of the upper layers of the star. In turn, we show how the seismic measurements collected so far allow us to infer properties of turbulent convection in stars.

    Comment: Notes associated with a lecture given during the fall school organized by the CNRS and held in St-Flour (France), 20-24 October 2008; 39 pages; 11 figures
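
    A relation worth keeping in mind when reading these models (a standard energy balance in this literature, not specific to any one of the reviewed papers): in statistical equilibrium, the mode energy set by the competition between stochastic driving and damping determines the observable amplitude. Here P is the excitation rate the models compute, \eta the damping rate, I the mode inertia and \omega_0 the mode frequency.

        % Energy balance between stochastic driving and damping (standard form;
        % notation varies between the reviewed models):
        \[
          \frac{dE}{dt} = P - 2\eta E
          \;\;\Longrightarrow\;\;
          E_{\mathrm{eq}} = \frac{P}{2\eta},
          \qquad
          \langle A^2 \rangle = \frac{2 E_{\mathrm{eq}}}{I\,\omega_0^2}
                              = \frac{P}{\eta\, I\, \omega_0^2}.
        \]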

    Presupernova Structure of Massive Stars

    Full text link
    Issues concerning the structure and evolution of core-collapse progenitor stars are discussed with an emphasis on interior evolution. We describe a program designed to investigate the transport and mixing processes associated with stellar turbulence, arguably the greatest source of uncertainty in progenitor structure, besides mass loss, at the time of core collapse. An effort to use precision observations of stellar parameters to constrain theoretical modeling is also described.

    Comment: Proceedings for an invited talk at the High Energy Density Laboratory Astrophysics conference, Caltech, March 2010. Special issue of Astrophysics and Space Science, submitted for peer review; 7 pages, 3 figures

    Sequential design of computer experiments for the estimation of a probability of failure

    Full text link
    This paper deals with the problem of estimating the volume of the excursion set of a function f: R^d → R above a given threshold, under a probability measure on R^d that is assumed to be known. In the industrial world, this corresponds to the problem of estimating a probability of failure of a system. When only an expensive-to-simulate model of the system is available, the budget for simulations is usually severely limited and therefore classical Monte Carlo methods ought to be avoided. One of the main contributions of this article is to derive SUR (stepwise uncertainty reduction) strategies from a Bayesian-theoretic formulation of the problem of estimating a probability of failure. These sequential strategies use a Gaussian process model of f and aim at performing evaluations of f as efficiently as possible to infer the value of the probability of failure. We compare these strategies to other strategies also based on a Gaussian process model for estimating a probability of failure.

    Comment: This is an author-generated postprint version. The published version is available at http://www.springerlink.co
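
    As a rough illustration (not the paper's SUR criterion), the sketch below estimates P(f(X) > u) with a Gaussian process surrogate and a simple sequential rule that evaluates f where the posterior exceedance probability is most ambiguous; the paper's SUR strategies instead minimize the expected residual uncertainty on the probability itself. The toy function, threshold and budget are assumptions.

        # Minimal sketch of estimating P(f(X) > u) with a Gaussian process
        # surrogate and a simple sequential design. The acquisition below is a
        # crude stand-in for the paper's SUR criteria.
        import numpy as np
        from scipy.stats import norm
        from sklearn.gaussian_process import GaussianProcessRegressor

        rng = np.random.default_rng(1)
        f = lambda x: np.sin(3 * x) + x           # expensive model (toy stand-in)
        u = 1.2                                    # failure threshold (assumed)
        X_mc = rng.uniform(-2, 2, size=(5000, 1))  # sample of the known input law

        X = rng.uniform(-2, 2, size=(5, 1))        # small initial design
        y = f(X).ravel()
        for _ in range(15):                        # severely limited budget
            gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
            m, s = gp.predict(X_mc, return_std=True)
            p_exceed = norm.sf((u - m) / np.maximum(s, 1e-9))
            # Evaluate where exceedance is most ambiguous (p closest to 1/2).
            x_new = X_mc[np.argmin(np.abs(p_exceed - 0.5))].reshape(1, -1)
            X = np.vstack([X, x_new])
            y = np.append(y, f(x_new).ravel())

        print("estimated P(f(X) > u):", p_exceed.mean())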