
    Multistage Stochastic Portfolio Optimisation in Deregulated Electricity Markets Using Linear Decision Rules

    The deregulation of electricity markets increases the financial risk faced by retailers who procure electric energy on the spot market to meet their customers’ electricity demand. To hedge against this exposure, retailers often hold a portfolio of electricity derivative contracts. In this paper, we propose a multistage stochastic mean-variance optimisation model for the management of such a portfolio. To reduce computational complexity, we perform two approximations: stage-aggregation and linear decision rules (LDR). The LDR approach consists of restricting the set of decision rules to those affine in the history of the random parameters. When applied to mean-variance optimisation models, it leads to convex quadratic programs. Since their size typically grows only polynomially with the number of periods, they can be solved efficiently. Our numerical experiments illustrate the value of adaptivity inherent in the LDR method and its potential for enabling scalability to problems with many periods.
    Keywords: OR in energy, electricity portfolio management, stochastic programming, risk management, linear decision rules
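    How an affine (linear) decision rule collapses an adaptive problem into a handful of coefficients can be illustrated with a small sketch. All names and numbers below (forward price F, the price process, the risk weight, the grid search) are invented for illustration; the paper's actual model is a multistage convex quadratic program, not a grid search.

```python
import random

random.seed(42)

# Illustrative two-period setup: the retailer observes the spot price p1,
# then chooses what share of next period's unit demand to lock in at the
# forward price F, buying the remainder at the correlated spot price p2.
F = 50.0
scenarios = []
for _ in range(2000):
    p1 = random.gauss(50, 8)
    p2 = 0.7 * p1 + random.gauss(15, 5)  # follow-on spot price, correlated
    scenarios.append((p1, p2))

def mv_cost(a, b, risk_weight=0.05):
    """Mean-variance cost of the affine rule q(p1) = a + b * p1."""
    costs = []
    for p1, p2 in scenarios:
        q = min(max(a + b * p1, 0.0), 1.0)  # clip the rule to a valid share
        costs.append(q * F + (1.0 - q) * p2)
    mean = sum(costs) / len(costs)
    var = sum((c - mean) ** 2 for c in costs) / len(costs)
    return mean + risk_weight * var

# The whole policy space is just the coefficients (a, b) -- restricting to
# affine rules is what keeps LDR problem sizes polynomial in the number of
# periods.
static = min(mv_cost(a / 20, 0.0) for a in range(21))
adaptive = min(mv_cost(a / 20, b / 100)
               for a in range(21) for b in range(-10, 11))
```

    Because the static rule (b = 0) is a special case of the affine rule, `adaptive <= static` always holds; the gap between the two is the "value of adaptivity" the abstract refers to.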

    DATA MINING CLUSTERING IN HEALTHCARE

    The accumulating amounts of data are making traditional analysis methods impractical. Novel tools employed in Data Mining (DM) provide a useful alternative framework that addresses this problem. This research suggests a technique to identify certain patient populations. Our model examines the patient population and clusters certain groups. Those subpopulations are then classified in terms of their appropriate medical treatment. As a result, we show the value of applying a DM model to more easily identify patients.
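    The clustering step described above can be sketched with a minimal k-means implementation. The features (age, annual visits), the synthetic data, and the choice of k-means itself are illustrative assumptions, not details taken from the paper.

```python
import random

random.seed(1)

# Synthetic "patients" as (age, annual visits) -- two well-separated groups.
patients = ([(random.gauss(35, 5), random.gauss(2, 1)) for _ in range(50)] +
            [(random.gauss(70, 5), random.gauss(12, 2)) for _ in range(50)])

def kmeans(points, k=2, iters=20):
    """Plain Lloyd's algorithm: assign to nearest centroid, recompute means."""
    centroids = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda j: (p[0] - centroids[j][0]) ** 2
                                            + (p[1] - centroids[j][1]) ** 2)
            clusters[i].append(p)
        centroids = [
            (sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
            if c else centroids[i]  # keep the old centroid if a cluster empties
            for i, c in enumerate(clusters)
        ]
    return centroids, clusters

centroids, clusters = kmeans(patients)
```

    Each resulting subpopulation could then be examined and labelled with its appropriate treatment, which is the classification step the abstract describes.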

    Quality issues impacting production planning

    Among the various problems affecting production processes, the unpredictability of quality factors is one of the main issues concerning manufacturing enterprises. In make-to-order or perishable-goods production systems, the gap between expected and actual output quality increases product cost in two main ways: through the cost of extra production or rework due to non-compliant items, and through the cost of inefficient planning and unscheduled machine changeovers. While the former is relatively easy to compute, even ex ante, the latter is much harder to estimate because it depends on several planning variables such as lot size, sequencing, and delivery due dates. This paper specifically addresses this problem in a make-to-order, multi-product, customized production system, where the enterprise diversifies each production lot because every order is based on customer-specific requirements and is unique (for example, in the packaging or textiles and apparel industries). In these contexts, a rule of thumb that overestimates the input size can incur high costs, because the excess production generates little or no revenue while adding to overall waste. Conversely, underestimating the lot size may force the launch of a new, typically very small, production order, so a single product bears the changeover costs twice. With small markups, these extra costs can reduce profit to zero. The aim of this paper is to provide a critical analysis of the state of the art in the literature while introducing elements that can support the definition of lot-sizing policies that account for poor-quality costs.
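    The trade-off between over- and underestimating the lot can be made numeric with a back-of-the-envelope sketch. All figures and the simplified recourse (one makeup run, sized up for yield losses) are invented for illustration; the paper itself surveys the literature rather than proposing this model.

```python
import math

DEMAND = 100        # units ordered by the customer
P_GOOD = 0.9        # probability that a produced unit is compliant
UNIT_COST = 2.0     # variable cost per unit launched
CHANGEOVER = 150.0  # fixed cost per production run

def binom_pmf(n, k, p):
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

def expected_cost(lot):
    """Expected cost of launching `lot` units: the main run, plus a makeup
    run (paying the changeover a second time) whenever good output falls
    short of demand."""
    cost = CHANGEOVER + lot * UNIT_COST
    p_short = sum(binom_pmf(lot, k, P_GOOD) for k in range(DEMAND))
    shortfall = sum((DEMAND - k) * binom_pmf(lot, k, P_GOOD)
                    for k in range(DEMAND))
    # Second run: expected shortfall, scaled up to compensate for yield.
    cost += p_short * CHANGEOVER + shortfall * UNIT_COST / P_GOOD
    return cost

best_lot = min(range(DEMAND, 2 * DEMAND), key=expected_cost)
```

    With these numbers the optimum overshoots demand by a safety margin; shrinking the changeover cost shrinks that margin, which is exactly the lot-sizing tension the abstract describes.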


    Identifying Diabetic Patients: A Data Mining Approach

    Mounting amounts of data have made traditional data analysis methods impractical. Data mining (DM) tools provide a useful alternative framework that addresses this problem. This study follows a DM technique to identify diabetic patients. We develop a model that clusters the diabetes patients of a large healthcare company into different subpopulations. Consequently, we show the value of applying a DM model to identify diabetic patients.

    Unified Concept of Bottleneck

    The term `bottleneck` has been used extensively in the operations management literature. Management paradigms like the Theory of Constraints focus on the identification and exploitation of bottlenecks. Yet, we show that the term has not been rigorously defined. We provide a classification of the bottleneck definitions available in the literature and discuss several myths associated with the concept. The apparent diversity of definitions raises the question of whether a single bottleneck definition can be as applicable in high-variety job shops as in mass-production environments. The key to formulating a unified concept of bottleneck lies in relating it to the shadow price of resources. We propose a universally applicable bottleneck definition based on the concept of the average shadow price. We discuss the procedure for determining bottleneck values in diverse production environments. The Law of Diminishing Returns is shown to be a sufficient but not necessary condition for the equivalence of the average and the marginal shadow price, and the equivalence of these two prices is proved for several environments. Bottleneck identification is the first step in the resource acquisition decisions faced by managers. The definition of bottleneck presented in the paper has the potential not only to reduce ambiguity regarding the meaning of the term but also to open a new window on the formulation and analysis of a rich set of problems faced by managers.
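    The distinction between the marginal and the average shadow price can be made concrete with a toy single-resource example (our construction, purely for illustration, not the paper's model): jobs each consume one unit of a resource of capacity b, and Z(b) is the optimal profit.

```python
# Toy single-resource example.  Profits are sorted in descending order,
# which is exactly the Law of Diminishing Returns for this resource.
profits = [9, 7, 5, 3, 1]

def Z(b):
    """Optimal profit with resource capacity b: take the best b jobs."""
    return sum(sorted(profits, reverse=True)[:b])

b = 3
marginal_price = Z(b) - Z(b - 1)   # value of the last capacity unit: 5
average_price = (Z(b) - Z(0)) / b  # (9 + 7 + 5) / 3 = 7

# Under diminishing returns the average shadow price dominates the
# marginal one; the two coincide only when returns are constant over the
# first b units.
assert average_price >= marginal_price
```

    A bottleneck definition based on the average price therefore remains meaningful even when the marginal price is zero or ill-behaved, which is what makes it usable across very different production environments.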

    DATA MINING CLUSTERING: A HEALTHCARE APPLICATION


    How Yield Process Misspecification Affects the Solution of Disassemble-to-order Problems

    Random production yields are common in manufacturing systems, and there are several ways they can be modeled. In disassembly planning, the uncertainty in the yield of parts harvested from cores can be modeled as either stochastically proportional or binomial. A statistical analysis of data from the engine remanufacturing operations of a major car producer fails to provide conclusive evidence on which kind of yield randomness prevails. To gain insight into the importance of this assumption, we investigate the impact of possible yield misspecification on the solution of the disassemble-to-order problem. Our results show that the penalty for misspecifying the yield model can be substantial, and they provide insight into when the penalty is likely to be problematic. The results also indicate that, in the absence of conclusive information on which alternative to choose, assuming binomial yields generally leads to lower cost penalties and is therefore preferable.
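    The difference between the two yield models named above can be seen in a short Monte Carlo sketch (parameters invented for illustration): both match on the mean yield, but their spreads scale very differently with the lot size, which is why assuming the wrong one distorts the plan.

```python
import random

random.seed(7)

# Disassembling a lot of Q cores yields Y usable parts:
#   binomial:                    Y ~ Bin(Q, p)              -> Var grows with Q
#   stochastically proportional: Y = round(Z * Q), Z random -> Var grows with Q^2
Q, p = 200, 0.8

def binomial_yield():
    return sum(random.random() < p for _ in range(Q))

def proportional_yield():
    z = min(max(random.gauss(p, 0.1), 0.0), 1.0)  # random good fraction
    return round(z * Q)

def stats(draws):
    m = sum(draws) / len(draws)
    return m, sum((d - m) ** 2 for d in draws) / len(draws)

mean_bin, var_bin = stats([binomial_yield() for _ in range(5000)])
mean_prop, var_prop = stats([proportional_yield() for _ in range(5000)])
```

    At this lot size the proportional model is far more variable (variance of order (0.1 Q)^2 versus Qp(1-p)), so a planner who assumes the wrong model either over- or under-buffers the disassembly quantities.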

    A Data Mining Approach to Identify Diabetes

    Mounting amounts of data have made traditional data analysis methods impractical. Data mining (DM) tools provide a useful alternative framework that addresses this problem. This study follows a DM technique to identify diabetic patients. We develop a model that clusters the diabetes patients of a large healthcare company into different subpopulations. Consequently, we show the value of applying a DM model to identify diabetic patients.