
    The Power of Static Pricing for Reusable Resources

    We consider the problem of pricing a reusable resource service system. Potential customers arrive according to a Poisson process and purchase the service if their valuation exceeds the current price. If no units are available, customers immediately leave without service. Serving a customer corresponds to using one unit of the reusable resource, where the service time has an exponential distribution. The objective is to maximize the steady-state revenue rate. This system is equivalent to the classical Erlang loss model with price-sensitive customers, which has applications in vehicle sharing, cloud computing, and spare parts management. Although the optimal pricing policy is dynamic, we provide two main results showing that a simple static policy is universally near-optimal for any service rate, arrival rate, and number of units in the system. When there is a single class of customers with a monotone hazard rate (MHR) valuation distribution, we prove that a static pricing policy guarantees 90.4% of the revenue of the optimal dynamic policy. When there are multiple classes of customers, each with its own regular valuation distribution and service rate, we prove that static pricing guarantees 78.9% of the revenue of the optimal dynamic policy. In this case, the description of the optimal dynamic policy grows exponentially in the number of classes, while the static policy requires only one price per class. Moreover, we prove that the optimal static policy can be computed easily, yielding the first polynomial-time approximation algorithm for this problem.
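
    As a concrete illustration of the model, the steady-state revenue rate of a single static price has a closed form via the Erlang B formula: posting price p thins the Poisson arrivals to rate λ·P(valuation > p), and a buyer is served only if one of the c units is free. The sketch below is ours, not the paper's code; the exponential valuation distribution and the grid search are illustrative assumptions.

```python
import math

def erlang_b(c, rho):
    """Erlang B blocking probability for c units and offered load rho,
    computed with the standard stable recursion."""
    b = 1.0
    for k in range(1, c + 1):
        b = (rho * b) / (k + rho * b)
    return b

def static_revenue_rate(price, lam, mu, c, tail):
    """Steady-state revenue rate of one static price.
    lam: arrival rate, mu: service rate, c: number of units,
    tail(p) = P(valuation > p) (an assumed, pluggable distribution)."""
    eff_lam = lam * tail(price)          # thinned arrival rate of buyers
    blocked = erlang_b(c, eff_lam / mu)  # buyers who find no unit free
    return price * eff_lam * (1.0 - blocked)

# Example: Exp(1) valuations (which are MHR), lam = 10, mu = 1, c = 5.
tail = lambda p: math.exp(-p)
rev, p = max((static_revenue_rate(p / 100, 10, 1, 5, tail), p / 100)
             for p in range(1, 500))
print(f"best static price ~ {p:.2f}, revenue rate ~ {rev:.3f}")
```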

    From Cost Sharing Mechanisms to Online Selection Problems

    We consider a general class of online optimization problems, called online selection problems, where customers arrive sequentially, and one must decide upon arrival whether to accept or reject each customer. If a customer is rejected, a rejection cost is incurred. The accepted customers are served at minimum possible cost, either online or after all customers have arrived. The goal is to minimize the total production cost for the accepted customers plus the rejection costs for the rejected customers. These selection problems are related to online variants of offline prize-collecting combinatorial optimization problems that have been widely studied in the computer science literature. In this paper, we provide a general framework for developing online algorithms for this class of selection problems. In essence, the algorithmic framework turns any cost sharing mechanism with certain properties into a poly-logarithmic competitive online algorithm for the respective problem; the competitive ratios are shown to be near-optimal. We believe that the general and transparent connection we establish between cost sharing mechanisms and online algorithms could lead to additional online algorithms for problems beyond the ones studied in this paper.
    Funding: National Science Foundation (U.S.) (CAREER Award CMMI-0846554); United States Air Force Office of Scientific Research (FA9550-11-1-0150 and FA9550-08-1-0369); Solomon Buchsbaum AT&T Research Fund.
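
    The accept/reject skeleton such a framework suggests can be sketched in a few lines: upon each arrival, query a cost sharing mechanism for the customer's hypothetical cost share and compare it to the rejection cost. The rule and names below are a hedged illustration of the framework's spirit, not the paper's actual algorithm or its competitive analysis.

```python
def online_selection(arrivals, cost_share, scale=1.0):
    """Skeleton of a cost-sharing-driven online selection rule.

    arrivals   : iterable of (customer, rejection_cost), in arrival order
    cost_share : (customer, accepted_set) -> the customer's cost share if
                 it joined the currently accepted set (assumed mechanism)
    scale      : tuning factor; the paper's guarantees would dictate it
    """
    accepted, rejection_total = [], 0.0
    for customer, rejection_cost in arrivals:
        if scale * cost_share(customer, accepted) <= rejection_cost:
            accepted.append(customer)          # serve later at production cost
        else:
            rejection_total += rejection_cost  # pay to turn the customer away
    return accepted, rejection_total
```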

    Generalization Bounds in the Predict-then-Optimize Framework

    The predict-then-optimize framework is fundamental in many practical settings: predict the unknown parameters of an optimization problem, and then solve the problem using the predicted values of the parameters. A natural loss function in this setting is the cost of the decisions induced by the predicted parameters, in contrast to the prediction error of the parameters themselves. This loss function was recently introduced in Elmachtoub and Grigas (2017) and referred to as the Smart Predict-then-Optimize (SPO) loss. In this work, we seek to provide bounds on how well the performance of a prediction model fit on training data generalizes out-of-sample, in the context of the SPO loss. Since the SPO loss is non-convex and non-Lipschitz, standard results for deriving generalization bounds do not apply. We first derive bounds based on the Natarajan dimension that, in the case of a polyhedral feasible region, scale at most logarithmically in the number of extreme points, but, in the case of a general convex feasible region, have linear dependence on the decision dimension. By exploiting the structure of the SPO loss function and a key property of the feasible region, which we denote the strength property, we can dramatically improve the dependence on the decision and feature dimensions. Our approach and analysis rely on placing a margin around problematic predictions that do not yield unique optimal solutions, and then providing generalization bounds in the context of a modified margin SPO loss function that is Lipschitz continuous. Finally, we characterize the strength property and show that the modified SPO loss can be computed efficiently for both strongly convex bodies and polytopes with an explicit extreme point representation.
    Comment: A preliminary version appeared in NeurIPS 2019.
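
    For a linear objective over a polytope given by an explicit extreme point representation, the SPO loss described in the abstract has a direct implementation: it is the excess true cost of the decision induced by the predicted cost vector. The snippet below is a minimal sketch under that representation; the argmin tie-breaking glosses over exactly the non-unique-optimum predictions that the paper's margin construction is designed to handle.

```python
import numpy as np

def spo_loss(c_hat, c_true, extreme_points):
    """SPO loss for min_{w in S} c^T w with S a polytope listed by its
    extreme points (one per row): c_true^T w*(c_hat) - c_true^T w*(c_true).
    np.argmin breaks ties arbitrarily, which is where the margin SPO loss
    of the paper comes in."""
    V = np.asarray(extreme_points, dtype=float)
    w_pred = V[np.argmin(V @ c_hat)]    # decision induced by the prediction
    w_star = V[np.argmin(V @ c_true)]   # optimal decision in hindsight
    return float(c_true @ (w_pred - w_star))

# Example on the unit simplex in R^3 (extreme points = unit vectors):
V = np.eye(3)
print(spo_loss(np.array([1.0, 0.2, 0.5]),   # predicted costs -> picks e_2
               np.array([0.3, 1.0, 0.5]),   # true costs -> e_1 was optimal
               V))                          # prints 0.7
```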

    Estimate-Then-Optimize Versus Integrated-Estimation-Optimization: A Stochastic Dominance Perspective

    In data-driven stochastic optimization, model parameters of the underlying distribution need to be estimated from data in addition to solving the optimization task. Recent literature suggests integrating the estimation and optimization processes by selecting model parameters that lead to the best empirical objective performance. Such an integrated approach can readily be shown to outperform simple "estimate then optimize" when the model is misspecified. In this paper, we argue that when the model class is rich enough to cover the ground truth, the performance ordering between the two approaches is reversed for nonlinear problems in a strong sense: simple "estimate then optimize" outperforms the integrated approach in terms of stochastic dominance of the asymptotic optimality gap, i.e., the mean, all other moments, and the entire asymptotic distribution of the optimality gap are always better. Analogous results also hold in constrained settings and when contextual features are available. We also provide experimental findings to support our theory.
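
    A toy newsvendor instance makes the two pipelines concrete: "estimate then optimize" fits the demand model by maximum likelihood and orders at the model-implied critical quantile, while the integrated approach picks the order quantity with the best empirical cost. The exponential demand model, cost parameters, and grid search below are illustrative assumptions, not the paper's setup; its claim concerns the asymptotic distribution of the optimality gap, not any single sample.

```python
import numpy as np

rng = np.random.default_rng(0)
h, b = 1.0, 3.0              # unit holding and backorder costs (assumed)
crit = b / (b + h)           # critical quantile of the newsvendor problem

def avg_cost(q, demand):
    return np.mean(h * np.maximum(q - demand, 0) +
                   b * np.maximum(demand - q, 0))

def estimate_then_optimize(train):
    """Fit Exp(mean) by MLE, then order at the model's critical quantile."""
    mean = train.mean()                 # exponential MLE
    return -mean * np.log(1 - crit)     # F^{-1}(crit) for Exp(mean)

def integrated(train):
    """Pick the order quantity minimizing empirical newsvendor cost."""
    grid = np.linspace(0.0, train.max(), 400)
    return grid[np.argmin([avg_cost(q, train) for q in grid])]

# Well-specified case: demand truly is exponential.
train = rng.exponential(2.0, size=50)
test = rng.exponential(2.0, size=100_000)
print("ETO test cost:", avg_cost(estimate_then_optimize(train), test))
print("IEO test cost:", avg_cost(integrated(train), test))
```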

    Market Segmentation Trees

    We seek to provide an interpretable framework for segmenting users in a population for personalized decision-making. The standard approach is to perform market segmentation by clustering users according to similarities in their contextual features, after which a "response model" is fit to each segment to model how users respond to personalized decisions. However, this methodology is not ideal for personalization, since two users could in theory have similar features but different response behaviors. We propose a general methodology, Market Segmentation Trees (MSTs), for learning interpretable market segmentations explicitly driven by identifying differences in user response patterns. To demonstrate the versatility of our methodology, we design two new, specialized MST algorithms: (i) Choice Model Trees (CMTs), which can be used to predict a user's choice amongst multiple options, and (ii) Isotonic Regression Trees (IRTs), which can be used to solve the bid landscape forecasting problem. We provide a customizable, open-source code base for training MSTs in Python which employs several strategies for scalability, including parallel processing and warm starts. We provide a theoretical analysis of the asymptotic running time of our training method, validating its computational tractability on large datasets. We assess the practical performance of MSTs on several synthetic and real-world datasets, showing that our method reliably finds market segmentations which accurately model response behavior. Further, when applying MSTs to historical bidding data from a leading demand-side platform (DSP), we show that MSTs consistently achieve a 5-29% improvement in bid landscape forecasting accuracy over the DSP's current model. Our findings indicate that integrating market segmentation with response modeling consistently leads to improvements in response prediction accuracy, thereby aiding personalization.
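
    The core recursion behind an MST can be sketched directly: instead of clustering on feature similarity, each split is kept because it improves the fit of the response models in the two children. The schematic below uses hypothetical names and is not the API of the paper's open-source package; the response model is pluggable, which is how CMTs (choice models) and IRTs (isotonic regressions) specialize the method.

```python
import numpy as np

def fit_mst(X, fit_loss, max_depth=3, min_leaf=20):
    """Schematic Market Segmentation Tree (hypothetical interface).

    X        : (n, d) array of user context features
    fit_loss : takes an index array into X, fits a response model to those
               users (e.g. a choice model or an isotonic regression), and
               returns its training loss
    Splits are kept only when the children's response models fit better
    than the parent's, so segments are driven by response differences."""
    def build(idx, depth):
        parent_loss = fit_loss(idx)
        if depth == max_depth or len(idx) < 2 * min_leaf:
            return {"leaf": idx}
        best_loss, best_split = parent_loss, None
        for j in range(X.shape[1]):
            for t in np.quantile(X[idx, j], [0.25, 0.5, 0.75]):
                left, right = idx[X[idx, j] <= t], idx[X[idx, j] > t]
                if min(len(left), len(right)) < min_leaf:
                    continue
                loss = fit_loss(left) + fit_loss(right)
                if loss < best_loss:
                    best_loss, best_split = loss, (j, t, left, right)
        if best_split is None:
            return {"leaf": idx}
        j, t, left, right = best_split
        return {"feature": j, "threshold": t,
                "left": build(left, depth + 1),
                "right": build(right, depth + 1)}
    return build(np.arange(len(X)), 0)
```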