    Are quasi-Monte Carlo algorithms efficient for two-stage stochastic programs?

    Quasi-Monte Carlo (QMC) algorithms are studied for designing discrete approximations of two-stage linear stochastic programs with random right-hand side and continuous probability distribution. The latter should allow for a transformation to a distribution with independent marginals. The two-stage integrands are piecewise linear, but they are neither smooth nor do they lie in the function spaces considered for QMC error analysis. We show that, under a weak geometric condition on the two-stage model, all terms of their ANOVA decomposition except the one of highest order are continuously differentiable, and that the first- and second-order ANOVA terms have mixed first-order partial derivatives. Hence, randomly shifted lattice rules (SLR) may achieve the optimal, dimension-independent rate of convergence if the effective superposition dimension is at most two. We discuss effective dimensions and dimension reduction for two-stage integrands. The geometric condition is shown to be satisfied almost everywhere if the underlying probability distribution is normal and principal component analysis (PCA) is used for transforming the covariance matrix. Numerical experiments for a large-scale two-stage stochastic production planning model with normal demand show that convergence rates close to the optimal one are indeed achieved when using SLR and randomly scrambled Sobol' point sets combined with PCA for dimension reduction.
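    The pipeline described here (transform to independent normal marginals, apply a PCA factorization of the covariance, integrate a piecewise linear function with randomized QMC points) can be illustrated with a minimal sketch. The covariance matrix, the dimension, and the toy max-type integrand below are placeholders, not the production planning model from the paper, and scrambled Sobol' points stand in for one of the rules compared there.

        import numpy as np
        from scipy.stats import norm, qmc

        rng = np.random.default_rng(0)
        d = 8                                              # illustrative dimension
        Sigma = np.cov(rng.standard_normal((d, 5 * d)))    # synthetic covariance matrix

        # PCA factorization of the covariance: columns ordered by decreasing eigenvalue,
        # so most of the variance is carried by the leading QMC coordinates.
        eigval, eigvec = np.linalg.eigh(Sigma)
        order = np.argsort(eigval)[::-1]
        A_pca = eigvec[:, order] * np.sqrt(eigval[order])

        def integrand(xi):
            # Toy piecewise linear "second-stage value": positive part of a linear form.
            return np.maximum(xi.sum(axis=1), 0.0)

        def rqmc_estimate(n=2**12, replications=8):
            """Randomly scrambled Sobol' points -> standard normals -> PCA transform."""
            estimates = []
            for r in range(replications):
                u = qmc.Sobol(d, scramble=True, seed=r).random(n)
                z = norm.ppf(u)          # independent standard normal marginals
                xi = z @ A_pca.T         # correlated samples via the PCA factor
                estimates.append(integrand(xi).mean())
            est = float(np.mean(estimates))
            err = float(np.std(estimates, ddof=1) / np.sqrt(replications))
            return est, err

        print(rqmc_estimate())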

    Two stages optimization model on make or buy analysis and quality improvement considering learning and forgetting curve

    Purpose: The aim of this research is to develop a two-stage optimization model for make-or-buy analysis and quality improvement that considers the learning and forgetting curve. The first-stage model determines the optimal selection of processes/suppliers and the allocation of components to them. The second-stage model deals with quality improvement efforts, determining the optimal investment that maximizes return on investment (ROI) while taking the learning and forgetting curve into account. Design/methodology/approach: The research uses a system modeling approach, mathematically modeling a system consisting of a manufacturer with multiple suppliers, where the manufacturer tries to determine the best combination of its own processes and suppliers to minimize certain costs and provides funding for quality improvement efforts both for its own processes and on the suppliers' side. Findings: This research supports better decisions in make-or-buy analysis and in improving components through quality investment that accounts for the learning and forgetting curve. Research limitations/implications: The model assumes that the investment fund is provided by the manufacturer, whereas in a real system the fund may be provided by the suppliers. The model also does not differentiate between the two types of learning, namely autonomous and induced learning. Practical implications: The model can be used by a manufacturer to gain deeper insight into decisions concerning process/supplier selection and component allocation, and into how to improve components through investment allocation that maximizes ROI. Originality/value: This paper combines two models that previous research treated separately. The inclusion of learning and forgetting also gives a new perspective on the quality investment decision.
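    The learning-and-forgetting ingredient of the second-stage model can be illustrated with a minimal sketch, assuming Wright's power-law learning curve, a crude interruption-based forgetting adjustment, and a hypothetical roi helper; none of the parameter values come from the paper.

        import numpy as np

        def learning_curve_time(t1, n, learning_rate=0.85):
            """Wright's learning curve: processing time of the n-th unit."""
            b = np.log2(learning_rate)        # learning exponent (negative)
            return t1 * n ** b

        def forgetting_adjusted_experience(units_done, break_ratio=0.3):
            """Crude forgetting model (assumption): an interruption erases part of the
            accumulated experience, so production restarts as if fewer units were done."""
            return max(int(units_done * (1.0 - break_ratio)), 1)

        def roi(quality_gain, investment):
            """Illustrative return on investment for a quality-improvement spend."""
            return (quality_gain - investment) / investment

        print(learning_curve_time(10.0, 50))                  # ~4.0, down from 10.0 for unit 1
        print(forgetting_adjusted_experience(50))             # experience level after a break
        print(roi(quality_gain=12000.0, investment=8000.0))   # 0.5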

    Optimal Clustering of Discrete Mixtures: Binomial, Poisson, Block Models, and Multi-layer Networks

    In this paper, we first study the fundamental limit of clustering when a multi-layer network is observed. Under the mixture multi-layer stochastic block model (MMSBM), we show that the minimax optimal network clustering error rate takes an exponential form and is characterized by the Rényi divergence between the edge probability distributions of the component networks. We propose a novel two-stage network clustering method comprising a tensor-based initialization algorithm, which involves both node and sample splitting, and a refinement procedure based on a likelihood-based Lloyd algorithm. Network clustering must be accompanied by node community detection. Our proposed algorithm achieves the minimax optimal network clustering error rate and allows extreme network sparsity under the MMSBM. Numerical simulations and real data experiments both validate that our method outperforms existing methods. Oftentimes, the edges of networks carry count-type weights. We therefore extend our methodology and analysis framework to study the minimax optimal clustering error rate for mixtures of discrete distributions, including Binomial and Poisson mixtures and multi-layer Poisson networks. The minimax optimal clustering error rates in these discrete mixtures all take the same exponential form, characterized by the corresponding Rényi divergences, and can likewise be achieved by our proposed two-stage clustering algorithm.
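    The exponential error rate mentioned above is driven by a Rényi divergence between edge probability distributions; as a concrete instance, the short sketch below computes the order-1/2 Rényi divergence between two Bernoulli edge distributions. The probabilities used are illustrative only.

        import numpy as np

        def renyi_half_bernoulli(p, q):
            """Order-1/2 Renyi divergence between Bernoulli(p) and Bernoulli(q):
            I(p, q) = -2 * log( sqrt(p*q) + sqrt((1-p)*(1-q)) )."""
            return -2.0 * np.log(np.sqrt(p * q) + np.sqrt((1 - p) * (1 - q)))

        # Better-separated edge probabilities give a larger exponent and hence
        # a faster exponentially decaying clustering error rate.
        print(renyi_half_bernoulli(0.10, 0.02))   # larger divergence
        print(renyi_half_bernoulli(0.10, 0.08))   # smaller divergence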

    Predicting the Level of Respiratory Support in COVID-19 Patients Using Machine Learning

    In this paper, a machine learning-based system for predicting the required level of respiratory support in COVID-19 patients is proposed. The level of respiratory support is divided into three classes: class 0, which refers to minimal support; class 1, which refers to non-invasive support; and class 2, which refers to invasive support. A two-stage classification system is built: first, class 0 is separated from the other classes; then, class 1 is distinguished from class 2. The system is built using a dataset collected retrospectively from 3491 patients admitted to tertiary care hospitals at the University of Louisville Medical Center. The paper demonstrates feature selection based on analysis of variance and, furthermore, uses principal component analysis for dimensionality reduction. The XGBoost classifier achieves the best classification accuracy in the first stage (84%) and also performs best in the second stage, with a classification accuracy of 83%.
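    A minimal sketch of such a two-stage cascade, assuming scikit-learn and XGBoost on synthetic data: ANOVA-based feature selection, PCA, and an XGBoost classifier per stage. The feature counts, component numbers, and data are placeholders, not the study's clinical configuration.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.decomposition import PCA
        from sklearn.feature_selection import SelectKBest, f_classif
        from sklearn.model_selection import train_test_split
        from sklearn.pipeline import make_pipeline
        from xgboost import XGBClassifier

        # Synthetic stand-in for the clinical data:
        # classes 0 (minimal), 1 (non-invasive), 2 (invasive).
        X, y = make_classification(n_samples=3000, n_features=40, n_informative=12,
                                   n_classes=3, n_clusters_per_class=1, random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

        def stage_pipeline():
            # ANOVA F-test feature selection, then PCA, then an XGBoost classifier.
            return make_pipeline(SelectKBest(f_classif, k=20),
                                 PCA(n_components=10),
                                 XGBClassifier(n_estimators=200, eval_metric="logloss"))

        # Stage 1: class 0 versus the rest.
        stage1 = stage_pipeline().fit(X_tr, (y_tr != 0).astype(int))
        # Stage 2: class 1 versus class 2, trained only on non-"minimal" patients.
        mask = y_tr != 0
        stage2 = stage_pipeline().fit(X_tr[mask], (y_tr[mask] == 2).astype(int))

        # Cascade prediction: stage 1 decides whether support is needed,
        # stage 2 decides whether it is non-invasive or invasive.
        needs_support = stage1.predict(X_te).astype(bool)
        pred = np.zeros(len(y_te), dtype=int)
        pred[needs_support] = 1 + stage2.predict(X_te[needs_support])
        print("overall accuracy:", (pred == y_te).mean())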

    A Novel Two-Stage Spectrum-Based Approach for Dimensionality Reduction: A Case Study on the Recognition of Handwritten Numerals

    Dimensionality reduction (feature selection) is an important step in pattern recognition systems. Although there are various conventional approaches for feature selection, such as Principal Component Analysis, Random Projection, and Linear Discriminant Analysis, selecting optimal, effective, and robust features is usually a difficult task. In this paper, a new two-stage approach for dimensionality reduction is proposed. The method is based on one-dimensional and two-dimensional spectrum diagrams of the standard deviation and minimum-to-maximum distributions of the initial feature vector elements. The proposed algorithm is validated in an OCR application using two large standard benchmark handwritten OCR datasets, MNIST and Hoda. Initially, a 133-element feature vector was assembled from the most widely used features proposed in the literature. The size of this initial feature vector was then reduced from 100% to 59.40% (79 elements) for the MNIST dataset and to 43.61% (58 elements) for the Hoda dataset, while the OCR accuracy improved by 2.95% on the MNIST dataset and by 4.71% on the Hoda dataset. The results show an improvement in the precision of the system compared to the rival approaches, Principal Component Analysis and Random Projection. The proposed technique can also be useful for generating decision rules in a pattern recognition system with rule-based classifiers.
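    A loose sketch of the spread statistics such an approach builds on, assuming that the per-feature standard deviation and min-to-max range are computed over the training set and that features with very small spread are discarded; the quantile thresholds and the keep rule are illustrative, not the authors' selection criterion.

        import numpy as np

        def spread_statistics(X):
            """Per-feature standard deviation and min-to-max range over the samples."""
            return X.std(axis=0), X.max(axis=0) - X.min(axis=0)

        def select_by_spread(X, std_quantile=0.4, range_quantile=0.4):
            # Keep a feature only if both its std and its range exceed the given
            # quantiles of their respective distributions (illustrative rule).
            std, rng = spread_statistics(X)
            keep = (std > np.quantile(std, std_quantile)) & \
                   (rng > np.quantile(rng, range_quantile))
            return np.flatnonzero(keep)

        # Example on random "feature vectors" with unequal per-feature scales.
        gen = np.random.default_rng(0)
        X = gen.standard_normal((500, 133)) * gen.uniform(0.1, 2.0, size=133)
        kept = select_by_spread(X)
        print(f"kept {kept.size} of {X.shape[1]} features")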

    Adaptive Two-stage Stochastic Programming with an Application to Capacity Expansion Planning

    Multi-stage stochastic programming is a well-established framework for sequential decision making under uncertainty that seeks policies fully adapted to the uncertainty. Often such flexible policies are not desirable, and the decision maker may need to commit to a set of actions for a number of planning periods. Two-stage stochastic programming might be better suited to such settings, where the decisions for all periods are made here-and-now and do not adapt to the uncertainty realized. In this paper, we propose a novel alternative approach in which the stages are not predetermined but are part of the optimization problem. Each component of the decision policy has an associated revision point: a period prior to which the decision is predetermined and after which it is revised to adjust to the uncertainty realized thus far. We motivate this setting using the multi-period newsvendor problem by deriving an optimal adaptive policy. We call the proposed approach adaptive two-stage stochastic programming and provide a generic mixed-integer programming formulation for finite stochastic processes. We show that adaptive two-stage stochastic programming is NP-hard in general. Next, we derive bounds on the value of adaptive two-stage programming in comparison with the two-stage and multi-stage approaches for a specific problem structure inspired by capacity expansion planning. Since directly solving the mixed-integer linear program associated with the adaptive two-stage approach can be very costly for large instances, we propose several heuristic solution algorithms based on the bound analysis and provide approximation guarantees for them. Finally, we present an extensive computational study on an electricity generation capacity expansion planning problem and demonstrate the computational and practical impact of the proposed approach from various perspectives.
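    The revision-point trade-off can be illustrated with a small simulation sketch, assuming a multi-period newsvendor with an unknown demand level: the per-period order is committed here-and-now and revised exactly once at a chosen period, using the demand observed up to that point. The cost parameters, demand model, and revision rule are all illustrative, not the paper's formulation.

        import numpy as np

        rng = np.random.default_rng(1)
        T, price, cost = 6, 5.0, 3.0          # periods, unit revenue, unit ordering cost
        crit = (price - cost) / price         # newsvendor critical ratio

        def profit(q, d):
            return price * min(q, d) - cost * q

        def simulate(revision_period, n_scenarios=5000):
            """Average total profit when the order is revised once, at `revision_period`."""
            total = 0.0
            for _ in range(n_scenarios):
                mu = rng.uniform(60.0, 140.0)                    # unknown demand level
                demand = rng.poisson(mu, size=T).astype(float)
                q_initial = 100.0                                # here-and-now order (prior mean)
                for t in range(T):
                    if t < revision_period:
                        q = q_initial
                    else:
                        # Revise once: re-estimate the level from the observed periods and
                        # order near the critical-ratio quantile (crude normal-style proxy).
                        est = demand[:revision_period].mean()
                        q = est + 2.0 * (crit - 0.5) * np.sqrt(est)
                    total += profit(q, demand[t])
            return total / n_scenarios

        # Later revision points see more data but leave fewer periods to adapt,
        # the trade-off that the adaptive two-stage model optimizes over.
        for r in (1, 2, 3, 4, 5):
            print(r, round(simulate(r), 1))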

    Birnbaum Importance Patterns and Their Applications in the Component Assignment Problem

    The Birnbaum importance (BI) is a well-known measure that evaluates the relative contribution of components to system reliability and has been successfully applied to a range of reliability problems. This dissertation investigates two topics related to the BI: the patterns of component BIs, and BI-based heuristics and meta-heuristics for solving the component assignment problem (CAP).

    Certain patterns of component BIs (i.e., the relative order of the BI values of the individual components) exist for linear consecutive-k-out-of-n (Lin/Con/k/n) systems when all components have the same reliability p. This study summarizes and annotates the existing BI patterns for Lin/Con/k/n systems, proves new BI patterns conditioned on the value of p, disproves some patterns that were conjectured or claimed in the literature, and makes new conjectures based on comprehensive computational tests and analysis. More importantly, this study defines the concept of a segment in Lin/Con/k/n systems for analyzing BI patterns, and investigates the relationship between the BI and the common component reliability p as well as between the BI and the system size n. These relationships help in further understanding the proved, disproved, and conjectured BI patterns.

    The CAP is to find the optimal assignment of n available components to n positions in a system such that the system reliability is maximized. The ordering of component BIs has been successfully used to design heuristics for the CAP. This study proposes five new BI-based heuristics and discusses their properties. Based on comprehensive numerical experiments, a BI-based two-stage approach (BITA) is proposed for solving the CAP, with each stage using different BI-based heuristics. The two-stage approach is much more efficient than the GAMS/CoinBonmin solver and a randomization method, and it generates solutions of higher quality.

    The dissertation then presents a meta-heuristic, a BI-based genetic local search (BIGLS) algorithm, for the CAP, in which a BI-based local search is embedded into the genetic algorithm. Comprehensive numerical experiments show the robustness and effectiveness of the BIGLS algorithm and, in particular, its advantages over the BITA in terms of solution quality.
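    A brute-force sketch of the quantity discussed above, assuming equal component reliability p: the Birnbaum importance of component i, BI_i = Pr(system works | component i works) − Pr(system works | component i fails), is computed for a Lin/Con/k/n:F system by enumerating the states of the remaining components. The enumeration is exponential in n and only meant to make small BI patterns visible, not to reproduce the dissertation's algorithms.

        from itertools import product

        def lin_con_works(state, k):
            """A Lin/Con/k/n:F system works unless some k consecutive components all fail."""
            run = 0
            for working in state:
                run = 0 if working else run + 1
                if run >= k:
                    return False
            return True

        def birnbaum_importance(n, k, p):
            """BI_i = Pr(works | component i works) - Pr(works | component i fails)."""
            bi = []
            for i in range(n):
                diff = 0.0
                for others in product((0, 1), repeat=n - 1):
                    weight = 1.0
                    for s in others:
                        weight *= p if s else 1.0 - p
                    state_up = list(others[:i]) + [1] + list(others[i:])
                    state_down = list(others[:i]) + [0] + list(others[i:])
                    diff += weight * (lin_con_works(state_up, k) - lin_con_works(state_down, k))
                bi.append(diff)
            return bi

        # Lin/Con/2/5 system with common reliability p = 0.7: the two end positions
        # have the smallest BI, one example of the position patterns studied here.
        print([round(b, 4) for b in birnbaum_importance(5, 2, 0.7)])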