
    An inventory control project in a major Danish company using compound renewal demand models

    We describe the development of a framework to compute the optimal inventory policy for a large spare-parts distribution centre operation in the RA division of the Danfoss Group in Denmark. The RA division distributes spare parts worldwide for cooling and A/C systems. The warehouse logistics operation is highly automated. However, the procedures for estimating demands and the policies for the inventory control system that were in use at the beginning of the project did not fully match the sophisticated technological standard of the physical system. During the initial phase of the project we focused on fitting suitable demand distributions for spare parts and on estimating demand parameters. Demand distributions were chosen from a class of compound renewal distributions. In the next phase, we designed models and algorithmic procedures for determining suitable inventory control variables based on the fitted demand distributions and a service level requirement stated in terms of an order fill rate. Finally, we validated the results of our models against the procedures that had been in use in the company. We concluded that the new procedures were considerably more consistent with the actual demand processes and with the stated objectives for the distribution centre. We also initiated the implementation and integration of the new procedures into the company's inventory management system.
    Keywords: base-stock policy; compound distribution; fill rate; inventory control; logistics; stochastic processes
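    The fitted compound-demand distributions feed a fill-rate-based base-stock calculation. As a minimal illustrative sketch (not the paper's actual procedure), the following assumes compound Poisson lead-time demand with geometric order sizes and searches by Monte Carlo for the smallest base stock meeting a volume fill-rate target; all parameter values are hypothetical:

```python
import numpy as np

def leadtime_demand(rate, mean_size, leadtime, n_runs, seed=0):
    """Monte Carlo draws of lead-time demand under a compound Poisson
    process: Poisson(rate * leadtime) order arrivals, geometric order
    sizes with the given mean (hypothetical modelling choices)."""
    rng = np.random.default_rng(seed)
    n_orders = rng.poisson(rate * leadtime, size=n_runs)
    p = 1.0 / mean_size  # geometric parameter giving the desired mean size
    return np.array([rng.geometric(p, size=n).sum() for n in n_orders])

def smallest_base_stock(demand_draws, target_fill):
    """Smallest base stock S whose estimated volume fill rate
    E[min(S, D)] / E[D] meets the target."""
    mean_d = demand_draws.mean()
    S = 0
    while np.minimum(S, demand_draws).mean() / mean_d < target_fill:
        S += 1
    return S

draws = leadtime_demand(rate=2.0, mean_size=3.0, leadtime=1.5, n_runs=20000)
S = smallest_base_stock(draws, target_fill=0.95)
```

    In practice the paper's class of compound renewal distributions is broader than the compound Poisson case sketched here, and the fill-rate expression would be evaluated from the fitted distribution rather than by simulation.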

    The Efficiency of Voluntary Incentive Policies for Preventing Biodiversity Loss

    In this paper we analyze the efficiency of voluntary incentive-based land-use policies for biodiversity conservation. Two factors combine to make it difficult to achieve an efficient result. First, the spatial pattern of habitat across multiple landowners is important for determining biodiversity conservation results. Second, the willingness of private landowners to accept a payment in exchange for enrolling in a conservation program is private information. Therefore, a conservation agency cannot easily control the spatial pattern of voluntary enrollment in conservation programs. We begin by showing how the distribution of a landowner's willingness-to-accept a conservation payment can be derived from a parcel-scale land-use change model. Next we combine the econometric land-use model with spatial data and ecological models to simulate the effects of various conservation program designs on biodiversity conservation outcomes. We compare these results to an estimate of the efficiency frontier that maximizes biodiversity conservation at each level of cost. The frontier mimics the regulator's solution to the biodiversity conservation problem when she has perfect information on landowner willingness-to-accept. Results indicate that there are substantial differences in biodiversity conservation scores generated by the incentive-based policies and efficient solutions. The performance of incentive-based policies is particularly poor at low levels of the conservation budget where spatial fragmentation of conserved parcels is a large concern. Performance can be improved by encouraging agglomeration of conserved habitat and by incorporating basic biological information, such as that on rare habitats, into the selection criteria.
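    The simulation logic described above — private willingness-to-accept, voluntary enrollment under a flat payment, and a conservation score that rewards agglomeration — can be sketched as a toy model. The WTA distribution, payment level, and scoring rule below are hypothetical stand-ins, not the paper's econometric land-use model or ecological models:

```python
import numpy as np

def biodiversity_score(enrolled, adjacency_bonus=0.5):
    """Toy conservation score: 1 per conserved parcel plus a bonus for
    every conserved 4-neighbour pair, rewarding agglomeration."""
    pairs = (np.logical_and(enrolled[1:, :], enrolled[:-1, :]).sum()
             + np.logical_and(enrolled[:, 1:], enrolled[:, :-1]).sum())
    return float(enrolled.sum() + adjacency_bonus * pairs)

rng = np.random.default_rng(1)
# Private willingness-to-accept per parcel (hypothetical distribution);
# the agency observes enrollment decisions, not the WTA values themselves.
wta = rng.lognormal(mean=3.0, sigma=0.5, size=(20, 20))
payment = 25.0                 # flat per-parcel offer
enrolled = wta <= payment      # landowners enroll voluntarily if paid >= WTA
cost = float(wta[enrolled].sum())  # opportunity cost of the enrolled set
score = biodiversity_score(enrolled)
```

    A flat payment buys whichever parcels happen to have low WTA, so the enrolled set can be spatially fragmented; comparing such outcomes against a perfect-information frontier is the efficiency exercise the abstract describes.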

    Achieving Efficiency in Black Box Simulation of Distribution Tails with Self-structuring Importance Samplers

    Motivated by the increasing adoption of models which facilitate greater automation in risk management and decision-making, this paper presents a novel Importance Sampling (IS) scheme for measuring distribution tails of objectives modelled with enabling tools such as feature-based decision rules, mixed integer linear programs, deep neural networks, etc. Conventional efficient IS approaches suffer from feasibility and scalability concerns due to the need to intricately tailor the sampler to the underlying probability distribution and the objective. This challenge is overcome in the proposed black-box scheme by automating the selection of an effective IS distribution with a transformation that implicitly learns and replicates the concentration properties observed in less rare samples. This novel approach is guided by a large deviations principle that brings out the phenomenon of self-similarity of optimal IS distributions. The proposed sampler is the first to attain asymptotically optimal variance reduction across a spectrum of multivariate distributions despite being oblivious to the underlying structure. The large deviations principle additionally results in new distribution tail asymptotics capable of yielding operational insights. The applicability is illustrated by considering product distribution networks and portfolio credit risk models informed by neural networks as examples.
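    The variance-reduction idea behind tail-focused IS can be illustrated in the simplest possible setting: a mean-shifted (exponentially tilted) Gaussian sampler for P(Z > a). The paper's self-structuring scheme targets far more general black-box objectives without hand-picking a shift; this sketch only shows the reweighting mechanics:

```python
import numpy as np

def tail_prob_is(a, n, seed=0):
    """Estimate P(Z > a), Z ~ N(0, 1), by sampling from N(a, 1) and
    reweighting with the likelihood ratio phi(x) / phi(x - a)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n) + a       # proposal: mean shifted to a
    w = np.exp(-a * x + 0.5 * a * a)     # exact density ratio N(0,1)/N(a,1)
    return float(np.mean(w * (x > a)))

# P(Z > 4) is about 3.17e-5: a crude hit-counting estimator would need
# orders of magnitude more samples for a usable relative error, while
# the shifted sampler concentrates draws in the tail of interest.
est = tail_prob_is(4.0, 100_000)
```

    The difficulty the paper addresses is that such hand-crafted shifts do not exist for black-box objectives; its sampler instead learns where less rare samples concentrate and extrapolates that structure to the tail.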

    An Efficient Monte Carlo-based Probabilistic Time-Dependent Routing Calculation Targeting a Server-Side Car Navigation System

    Incorporating speed probability distributions into the route-planning computation in car navigation systems yields more accurate and precise responses. In this paper, we propose a novel approach for dynamically selecting the number of samples used for the Monte Carlo simulation to solve the Probabilistic Time-Dependent Routing (PTDR) problem, thus improving the computation efficiency. The proposed method determines, in a proactive manner, the number of simulations to be done to extract the travel-time estimation for each specific request while respecting an error threshold as the output quality level. The methodology requires a reduced effort on the application development side. We adopted an aspect-oriented programming language (LARA) together with a flexible dynamic autotuning library (mARGOt) respectively to instrument the code and to take tuning decisions on the number of samples, improving the execution efficiency. Experimental results demonstrate that the proposed adaptive approach saves a large fraction of simulations (between 36% and 81%) with respect to a static approach while considering different traffic situations, paths and error requirements. Given the negligible runtime overhead of the proposed approach, it results in an execution-time speedup between 1.5x and 5.1x. This speedup is reflected at infrastructure level in a reduction of around 36% of the computing resources needed to support the whole navigation pipeline.
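    The adaptive sample-count idea — stop simulating once the estimate meets the requested output quality — can be sketched as a sequential Monte Carlo loop that checks a confidence-interval half-width against the error threshold. The batch size, confidence level, and lognormal segment-time model below are hypothetical stand-ins, not the paper's LARA/mARGOt tuning mechanism:

```python
import numpy as np

def adaptive_mean(sampler, err_threshold, batch=500, max_n=100_000,
                  z=1.96, seed=0):
    """Draw samples in batches until the approximate 95% CI half-width
    of the running mean drops below err_threshold (or max_n is hit)."""
    rng = np.random.default_rng(seed)
    samples = np.empty(0)
    while samples.size < max_n:
        samples = np.concatenate([samples, sampler(rng, batch)])
        half_width = z * samples.std(ddof=1) / np.sqrt(samples.size)
        if half_width < err_threshold:
            break
    return float(samples.mean()), samples.size

# Hypothetical route: travel time is the sum of two lognormal segments.
def route_sampler(rng, n):
    return rng.lognormal(2.0, 0.3, size=n) + rng.lognormal(1.5, 0.5, size=n)

est, n_used = adaptive_mean(route_sampler, err_threshold=0.1)
```

    Tightening the error threshold or raising the route's travel-time variance increases the number of batches drawn, which is why a static worst-case sample count wastes work on easy requests.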

    Direct Demand Models of Air Travel: A Novel Approach to the Analysis of Stated Preference Data

    This paper uses what has been termed the direct demand approach to obtain elasticity estimates from discrete choice Stated Preference data. The Stated Preference data relate to business travellers' choices between air and rail. The direct demand methodology is outlined and some potential advantages over the conventional disaggregate logit model are discussed. However, further research regarding the relative merits of the two approaches is recommended. The direct demand model is developed to explain variations in the demand for air travel as a function of variations in air headway and cost and in train journey time, frequency, interchange and cost. Relatively little has previously been published about the interaction between rail and air, and the estimated elasticities, and the variation in them, are generally plausible. In particular, the results show that large improvements in rail journey times can have a very substantial impact on the demand for air travel, and that the rail journey time cross-elasticity depends on satisfying a three-hour journey time threshold.
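    In a log-log direct demand specification, fitted coefficients read directly as (cross-)elasticities, which is one attraction of the approach. A minimal sketch on synthetic data (the variables, functional form, and parameter values are hypothetical illustrations, not the paper's estimates):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
air_cost = rng.uniform(80, 200, n)     # hypothetical fare levels
rail_time = rng.uniform(2.0, 5.0, n)   # hypothetical rail journey times (h)

# Assumed "true" elasticities used only to generate the synthetic data.
e_cost, e_rail_time = -1.2, 1.8
air_demand = np.exp(5.0 + e_cost * np.log(air_cost)
                    + e_rail_time * np.log(rail_time)
                    + rng.normal(0.0, 0.1, n))

# OLS on the log-log form: the fitted slopes are the elasticity estimates.
X = np.column_stack([np.ones(n), np.log(air_cost), np.log(rail_time)])
coef, *_ = np.linalg.lstsq(X, np.log(air_demand), rcond=None)
# coef[1]: own-cost elasticity; coef[2]: rail-time cross-elasticity
```

    A threshold effect like the three-hour rail journey time finding would show up as the cross-elasticity varying with the journey-time level, which a single constant-elasticity slope cannot capture on its own.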