On the Trade-offs between Modeling Power and Algorithmic Complexity
Mathematical modeling is a central component of operations research. Most of the academic research in our field focuses on developing algorithmic tools for solving various mathematical problems arising from our models. However, our procedure for selecting the best model to use in any particular application is ad hoc. This dissertation seeks to rigorously quantify the trade-offs between various design criteria in model construction through a series of case studies. The hope is that a better understanding of the pros and cons of different models (for the same application) can guide and improve the model selection process.
In this dissertation, we focus on two broad types of trade-offs. The first type arises naturally in mechanism or market design, a discipline that focuses on developing optimization models for complex multi-agent systems. Such systems may require satisfying multiple objectives that are potentially in conflict with one another. Hence, finding a solution that simultaneously satisfies several design requirements is challenging. The second type addresses the dynamics between model complexity and computational tractability in the context of approximation algorithms for some discrete optimization problems. The need to study this type of trade-off is motivated by certain industry problems where the goal is to obtain the best solution within a reasonable time frame. Hence, being able to quantify and compare the degree of sub-optimality of the solution obtained under different models is helpful. Chapters 2-5 of the dissertation focus on trade-offs of the first type, and Chapters 6-7 on the second type.
Incremental Packing Problems: Algorithms and Polyhedra
In this thesis, we propose and study discrete, multi-period extensions of classical packing problems, a fundamental class of models in combinatorial optimization. These extensions fall under the general name of incremental packing problems. In such models, we are given an added time component and different capacity constraints for each time period. Over time, capacities are weakly increasing as resources increase, allowing more items to be selected. Once an item is selected, it cannot be removed in future times. The goal is to maximize some (possibly also time-dependent) objective function under such packing constraints.
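For the knapsack case, the incremental structure described above can be written schematically as an integer program. The notation here (items $i = 1, \dots, n$ with weights $w_i$, periods $t = 1, \dots, T$ with weakly increasing capacities $W_1 \le \dots \le W_T$, and time-dependent profits $p_{it}$) is our own shorthand, not necessarily the thesis's exact formulation:

```latex
\max \;\; \sum_{t=1}^{T} \sum_{i=1}^{n} p_{it}\, x_{it}
\qquad \text{s.t.} \qquad
\sum_{i=1}^{n} w_i\, x_{it} \le W_t \;\;\forall t, \qquad
x_{i,t-1} \le x_{it} \;\;\forall i,\; t \ge 2, \qquad
x_{it} \in \{0,1\}.
```

The monotonicity constraint $x_{i,t-1} \le x_{it}$ is what encodes "once an item is selected, it cannot be removed in future times."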
In Chapter 2, we study the generalized incremental knapsack problem, a multi-period extension of the classical knapsack problem. We present a policy that reduces the generalized incremental knapsack problem to sequentially solving multiple classical knapsack problems, for which many efficient algorithms are known; we call such an algorithm a single-time algorithm. We prove that this algorithm gives a (0.17 - ε)-approximation for the generalized incremental knapsack problem. Moreover, we show that the algorithm is very efficient in practice: on randomly generated instances of the generalized incremental knapsack problem, it returns near-optimal solutions and runs much faster than Gurobi solving the standard integer programming formulation.
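As a concrete illustration of the single-time idea, the sketch below re-solves a classical 0/1 knapsack at each period, valuing each not-yet-inserted item by the profit it would collect over the remaining periods. This is one plausible rendering of the reduction under our own instance encoding (`profits[t][i]`, integer weights and capacities), not the exact policy whose approximation guarantee is analyzed in the chapter:

```python
def knapsack(values, weights, cap):
    """Classical 0/1 knapsack by dynamic programming; returns chosen indices."""
    n = len(values)
    best = [[0] * (cap + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for c in range(cap + 1):
            best[i][c] = best[i - 1][c]
            if weights[i - 1] <= c:
                best[i][c] = max(best[i][c],
                                 best[i - 1][c - weights[i - 1]] + values[i - 1])
    chosen, c = set(), cap               # backtrack to recover the item set
    for i in range(n, 0, -1):
        if best[i][c] != best[i - 1][c]:
            chosen.add(i - 1)
            c -= weights[i - 1]
    return chosen

def single_time(weights, profits, caps):
    """profits[t][i]: profit of item i while present at time t; caps weakly increasing.
    Returns a dict mapping each inserted item to its insertion time."""
    T, n = len(caps), len(weights)
    selected, insert_time = set(), {}
    for t in range(T):
        rest = [i for i in range(n) if i not in selected]
        # value of inserting item i now = profit it earns over periods t..T-1
        vals = [sum(profits[s][i] for s in range(t, T)) for i in rest]
        wts = [weights[i] for i in rest]
        used = sum(weights[i] for i in selected)
        for j in knapsack(vals, wts, caps[t] - used):
            selected.add(rest[j])
            insert_time[rest[j]] = t
    return insert_time
```

Each period's subproblem is a plain knapsack over the residual capacity, so any off-the-shelf knapsack routine could be substituted for the DP above.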
In Chapter 3, we present additional approximation algorithms for the generalized incremental knapsack problem. We first give a polynomial-time (½ - ε)-approximation, improving upon the approximation ratio given in Chapter 2. This result is based on a new reformulation of the generalized incremental knapsack problem as a single-machine sequencing problem, which is addressed by blending dynamic programming techniques and the classical Shmoys-Tardos algorithm for the generalized assignment problem. Using the same sequencing reformulation, combined with further enumeration-based self-reinforcing ideas and new structural properties of nearly-optimal solutions, we give a quasi-polynomial time approximation scheme for the problem, thus ruling out the possibility that the generalized incremental knapsack problem is APX-hard under widely-believed complexity assumptions.
In Chapter 4, we first turn our attention to the submodular monotone all-or-nothing incremental knapsack problem (IK-AoN), a special case of maximizing a monotone submodular function subject to a knapsack constraint, extended to a multi-period setting. We show that each instance of IK-AoN can be reduced to a linear version of the problem. In particular, using a known PTAS for the linear version from the literature as a subroutine, this implies that IK-AoN admits a PTAS. Next, we study special cases of the generalized incremental knapsack problem and provide improved approximation schemes for these special cases.
In Chapter 5, we give a polynomial-time (¼ - ε)-approximation in expectation for the incremental generalized assignment problem, a multi-period extension of the generalized assignment problem. To develop this result, similar to the reformulation from Chapter 3, we reformulate the incremental generalized assignment problem as a multi-machine sequencing problem. Following the reformulation, we show that the (½ - ε)-approximation for the generalized incremental knapsack problem, combined with further randomized rounding techniques, can be leveraged to give a constant factor approximation in expectation for the incremental generalized assignment problem.
In Chapter 6, we turn our attention to the incremental knapsack polytope. First, we extend one direction of Balas's characterization of 0/1-facets of the knapsack polytope to the incremental knapsack polytope. Starting from extended cover inequalities valid for the knapsack polytope, we show how to strengthen them to define facets for the incremental knapsack polytope. In particular, we prove that under the same conditions for which these inequalities define facets for the knapsack polytope, following our strengthening procedure, the resulting inequalities define facets for the incremental knapsack polytope. Then, as there are up to exponentially many such inequalities, we give separation algorithms for this class of inequalities.
Optimization and Information Problems in Operations
The main purpose of this dissertation is to study optimization problems and the value of information in various commercial settings, especially in the emerging platform economy.
Chapter 1, “Data-Driven Asset Selling”. Motivated by online asset-selling marketplaces (e.g., for used cars and real estate), we formulate a data-driven dynamic pricing framework that utilizes platforms’ access to customers’ online behavioral data. Under mild assumptions on the demand model, a careful characterization of the problem structure shows that the model admits properties that facilitate regret analysis in our dynamic programming setting. Instead of studying policy performance over a long horizon with large quantities of inventory, we study the asymptotic policy performance for a single unit of product as the demand rate grows. We propose a deterministic approximation policy (DA policy), show that it provides an upper bound for the original problem, and show that its induced pricing policy achieves asymptotic optimality as the scale of the problem grows appropriately. We then consider a dynamic pricing scenario in which an idiosyncratic latent value for each asset is unknown, and propose a Thompson-Sampling-based (TS) and a MAP-based pricing and learning policy. Because the platform operates in an infrequent-pricing environment, an adequate number of customer online-behavior records is available within each decision epoch; utilizing large-sample deviation properties, we conduct regret analysis for the TS and MAP policies. Finally, we use numerical experiments to show that our proposed algorithms can significantly improve revenue performance compared with an algorithm currently implemented by a leading used-car platform. In addition, we find that using a simple deterministic proxy of the demand forecast is mostly harmless, while accurate estimation of the idiosyncratic latent value can make a significant difference. Simulations also reveal that, in our problem setting, the exploration step in the TS policy may not help it outperform the MAP policy. This indicates that the effectiveness of exploration depends strongly on the nature of the problem, which may be of independent interest.
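To make the sample-then-price step concrete, here is a minimal Thompson-sampling-style pricing epoch under a stand-in Gaussian model: the latent asset value gets a Gaussian prior, the behavioral records are modeled as noisy Gaussian signals, and the epoch prices against a posterior sample. The Gaussian model, the markup rule, and all parameter names are illustrative assumptions of ours, not the chapter's actual demand model:

```python
import random

def gaussian_posterior(prior_mu, prior_var, obs, noise_var):
    """Conjugate update for a Gaussian latent value observed through
    independent Gaussian signals with variance noise_var."""
    post_var = 1.0 / (1.0 / prior_var + len(obs) / noise_var)
    post_mu = post_var * (prior_mu / prior_var + sum(obs) / noise_var)
    return post_mu, post_var

def ts_price(prior_mu, prior_var, obs, noise_var, markup=0.1):
    """One pricing epoch: sample the latent value from its posterior
    (Thompson sampling), then price against the sampled value."""
    mu, var = gaussian_posterior(prior_mu, prior_var, obs, noise_var)
    sampled_value = random.gauss(mu, var ** 0.5)
    return (1.0 + markup) * sampled_value
```

A MAP-style variant would simply price against `mu` itself instead of the posterior sample, removing the exploration step that the simulations discussed above call into question.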
Chapter 2, “Cash Hedging Motivates Information Sharing in Supply Chains”. The finance literature documents that firms’ cash hedging strategies depend heavily on market conditions. Unsurprisingly, such decisions can be challenging for an upstream firm in a supply chain where the end-market conditions are not transparent to him. In this paper, we study the interplay between firms’ information-sharing behaviors and cash hedging strategies in supply chains. First, we argue that the presence of a supplier’s cash hedging decision may motivate downstream retailers’ voluntary sharing of market information with the supplier, since making the supplier more informed of the market conditions helps the retailer manage her risk in the wholesale price. This also provides a new reason why a supplier should consider hedging, since the cash hedging decision itself can be used as a bargaining tool in the information-sharing negotiation with his retailer. We then find that, for homogeneous Cournot-competing retailers, asymmetric information-sharing outcomes can emerge in equilibrium; publicly sharing information typically does not hurt, and can sometimes achieve a Pareto improvement for the supply chain and consumer welfare. Finally, when a single supplier serves multiple markets, the heterogeneity across market sizes and the correlation among market shocks play a large role in shaping the equilibrium. In particular, in a simultaneous information-sharing game, greater market-size heterogeneity and negatively correlated market shocks are more likely to result in the nonexistence of a pure Nash equilibrium, whereas when a Stackelberg sequence is introduced, greater market-size heterogeneity and positively correlated market shocks are more likely to induce information sharing in equilibrium. Furthermore, in the multi-market setting, the existence of an information-sharing channel may hurt the retailers, the system as a whole, and consumer welfare.
Chapter 3, “Display Optimization under the Multinomial Logit Choice Model: Balancing Revenue and Customer Satisfaction”. In this paper, we consider an assortment optimization problem in which a platform must choose pairwise disjoint sets of assortments to offer across a series of T stages. Arriving customers begin their search process in the first stage and progress sequentially through the stages until their patience expires, at which point they make a multinomial-logit-based purchasing decision from among all products they have viewed throughout their search process. The goal is to choose the sequential displays of product offerings to maximize expected revenue. Additionally, we impose stage-specific constraints that ensure that, as each customer progresses through the T stages, a minimum level of “desirability” is met by the collections of displayed products. We consider two related measures of desirability: purchase likelihood and expected utility derived from the offered assortments. In this way, the offered sequence of assortments must be both high-earning and well-liked, which breaks from the traditional assortment setting, where customer considerations are generally not explicitly accounted for. We show that our assortment problem of interest is strongly NP-hard, thus ruling out the existence of a fully polynomial-time approximation scheme (FPTAS). From an algorithmic standpoint, as a warm-up, we develop a simple constant-factor approximation scheme in which we carefully stitch together myopically selected assortments for each stage. Our main algorithmic result is a polynomial-time approximation scheme (PTAS), which combines a handful of structural results on the make-up of the optimal assortment sequence within an approximate dynamic programming framework.
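The objective described above can be sketched directly. The snippet below scores a fixed sequence of disjoint stage assortments for a customer whose patience level (how many stages she views before deciding) follows a given distribution, under an MNL model with an outside option of utility zero. The encoding (`stages`, `utils`, `prices`, `patience_dist`) is our own illustration, not the paper's notation:

```python
import math

def mnl_expected_revenue(stages, utils, prices, patience_dist):
    """stages: disjoint lists of product indices, one per stage.
    patience_dist[k]: probability the customer views exactly stages[0..k].
    MNL purchase: product j is bought w.p. e^{u_j} / (1 + sum over viewed)."""
    revenue = 0.0
    viewed = []
    for k, stage in enumerate(stages):
        viewed += list(stage)                       # search is cumulative
        denom = 1.0 + sum(math.exp(utils[j]) for j in viewed)
        purchase_revenue = sum(prices[j] * math.exp(utils[j]) / denom
                               for j in viewed)
        revenue += patience_dist[k] * purchase_revenue
    return revenue
```

A stage-wise desirability constraint of the purchase-likelihood type would lower-bound the quantity `(denom - 1.0) / denom` computed inside the loop for each k.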
Fundamental Tradeoffs for Modeling Customer Preferences in Revenue Management
Revenue management (RM) is the science of selling the right product, to the right person, at the right price. A key to the success of RM, which now spans a broad array of industries, is its grounding in mathematical modeling and analytics. This dissertation contributes to the development of new RM tools by: (1) exploring some fundamental tradeoffs underlying any RM problem, and (2) designing efficient algorithms for some RM applications. Another underlying theme of this dissertation is the modeling of customer preferences, a key component of any RM problem.
The first chapters of this dissertation focus on the model selection problem: many demand models are available but picking the right model is a challenging task. In particular, we explore the tension between the richness of a model and its tractability. To quantify this tradeoff, we focus on the assortment optimization problem, a very general and core RM problem. To capture customer preferences in this context, we use choice models, a particular type of demand model. In Chapters 1, 2, 3 and 4 we design efficient algorithms for the assortment optimization problem under different choice models. By assessing the strengths and weaknesses of different choice models, we can quantify the cost in tractability one has to pay for better predictive power. This in turn leads to a better understanding of the tradeoffs underlying the model selection problem.
In Chapter 5, we focus on a different question underlying any RM problem: choosing how to sell a given product. We illustrate this tradeoff by focusing on the problem of selling ad impressions via Internet display advertising platforms. In particular, we study how the presence of risk-averse buyers affects the preference for reservation contracts over real-time buying via a second-price auction. In order to capture the risk aversion of buyers, we study different utility models.
Certificates and Witnesses for Probabilistic Model Checking
The ability to provide succinct information about why a property does, or does not, hold in a given system is a key feature in the context of formal verification and model checking.
It can be used both to explain the behavior of the system to a user of verification software, and as a tool to aid automated abstraction and synthesis procedures.
Counterexample traces, which are executions of the system that do not satisfy the desired specification, are a classical example.
Specifications of systems with probabilistic behavior usually require that an event happens with sufficiently high (or low) probability.
In general, single executions of the system are not enough to demonstrate that such a specification holds.
Rather, standard witnesses in this setting are sets of executions which in sum exceed the required probability bound.
In this thesis we consider methods to certify and witness that probabilistic reachability constraints hold in Markov decision processes (MDPs) and probabilistic timed automata (PTA).
Probabilistic reachability constraints are threshold conditions on the maximal or minimal probability of reaching a set of target-states in the system.
The threshold condition may represent an upper or lower bound and be strict or non-strict.
We show that the model-checking problem for each type of constraint can be formulated as a satisfiability problem of a system of linear inequalities.
These inequalities correspond closely to the probabilistic transition matrix of the MDP.
Solutions of the inequalities are called Farkas certificates for the corresponding property, as they can indeed be used to easily validate that the property holds.
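For intuition, here is what "easily validate" can look like in the simplest setting: a finite Markov chain (a one-action MDP), preprocessed so that every retained state reaches the target set T with positive probability, and a lower-bound constraint Pr_{s0}(reach T) ≥ λ. The encoding below (P restricted to the non-target states, b the one-step probabilities of entering T) is our own simplification of the thesis's MDP setting:

```python
def is_farkas_certificate(P, b, z, s0, lam, tol=1e-9):
    """Check that z certifies Pr_{s0}(reach T) >= lam in a Markov chain.

    The exact reachability vector x solves x = P x + b; under the
    reachability assumption, any z with z >= 0 and z <= P z + b satisfies
    z <= x componentwise, so z[s0] >= lam soundly implies x[s0] >= lam.
    """
    n = len(z)
    if any(zi < -tol for zi in z):          # z must be nonnegative
        return False
    for s in range(n):                      # z <= P z + b, row by row
        if z[s] > sum(P[s][t] * z[t] for t in range(n)) + b[s] + tol:
            return False
    return z[s0] >= lam - tol               # threshold at the initial state
```

Validation is thus a handful of linear checks per state, with no equation solving, which is what makes these vectors useful as certificates.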
By themselves, Farkas certificates do not explain why the corresponding probabilistic reachability constraint holds in the considered MDP.
To demonstrate that the maximal reachability probability in an MDP is above a certain threshold, a commonly used notion is that of witnessing subsystems.
A subsystem is a witness if the MDP satisfies the lower bound on the optimal reachability probability even if all states not included in the subsystem are made rejecting trap states.
Hence, a subsystem is a part of the MDP which by itself satisfies the lower-bounded threshold constraint on the optimal probability of reaching the target-states.
We consider witnessing subsystems for lower bounds on both the maximal and minimal reachability probabilities, and show that Farkas certificates and witnessing subsystems are related.
More precisely, the support (i.e., the indices with a non-zero entry) of a Farkas certificate induces the state-space of a witnessing subsystem for the corresponding property.
Vice versa, given a witnessing subsystem one can compute a Farkas certificate whose support corresponds to the state-space of the witness.
This insight yields novel algorithms and heuristics to compute small and minimal witnessing subsystems.
To compute minimal witnesses, we propose mixed-integer linear programming formulations whose solutions are Farkas certificates with minimal support.
We show that the corresponding decision problem is NP-complete even for acyclic Markov chains, which supports the use of integer programs to solve it.
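Schematically, the minimal-support computation can be posed as follows; this is our own shorthand, assuming the certificates can be normalized so that every entry lies in $[0,1]$:

```latex
\min \;\; \sum_{s \in S} \sigma_s
\qquad \text{s.t.} \qquad
z \text{ is a Farkas certificate for the given property}, \qquad
0 \le z_s \le \sigma_s \;\;\forall s \in S, \qquad
\sigma_s \in \{0,1\} \;\;\forall s \in S.
```

The binary indicator $\sigma_s$ must be switched on wherever $z_s > 0$, so minimizing $\sum_s \sigma_s$ minimizes the certificate's support and hence, by the correspondence above, the state-space of the witnessing subsystem.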
As this approach does not scale well to large instances, we introduce the quotient-sum heuristic, which is based on iteratively solving a sequence of linear programs.
The solutions of these linear programs are also Farkas certificates.
In an experimental evaluation we show that the quotient-sum heuristic is competitive with state-of-the-art methods.
Many of the algorithms proposed in this thesis are implemented in the tool SWITSS.
We study the complexity of computing minimal witnessing subsystems for probabilistic systems that are similar to trees or paths.
Formally, this is captured by the notions of tree width and path width.
Our main result here is that the problem of computing minimal witnessing subsystems remains NP-complete even for Markov chains with bounded path width.
The hardness proof identifies a new source of combinatorial hardness in the corresponding decision problem.
Probabilistic timed automata generalize MDPs by including a set of clocks whose values determine which transitions are enabled.
They are widely used to model and verify real-time systems.
Due to the continuously-valued clocks, their underlying state-space is inherently uncountable.
Hence, the methods that we describe for finite-state MDPs do not carry over directly to PTA.
Furthermore, a good notion of witness for PTA should also take into account timing aspects.
We define two kinds of subsystems for PTA, one for maximal and one for minimal reachability probabilities, respectively.
As for MDPs, a subsystem of a PTA is called a witness for a lower-bounded constraint on the (maximal or minimal) reachability probability, if it itself satisfies this constraint.
Then, we show that witnessing subsystems of PTA induce Farkas certificates in certain finite-state quotients of the PTA.
Vice versa, Farkas certificates of such a quotient induce witnesses of the PTA.
Again, the support of the Farkas certificates corresponds to the states included in the subsystem.
These insights are used to describe algorithms for the computation of minimal witnessing subsystems for PTA, with respect to three different notions of size.
One of them counts the number of locations in the subsystem, while the other two take into account the possible clock valuations in the subsystem.
Contents:
1 Introduction
2 Preliminaries
3 Farkas certificates
4 New techniques for witnessing subsystems
5 Probabilistic systems with low tree width
6 Explications for probabilistic timed automata
7 Conclusion
LIPIcs, Volume 244, ESA 2022, Complete Volume