Revealing optimal thresholds for generalized secretary problem via continuous LP: impacts on online K-item auction and bipartite K-matching with random arrival order
We consider the general (J, K)-secretary problem, where n totally ordered items arrive in a random order. An algorithm observes the relative merits of arriving items and is allowed to make J selections. The objective is to maximize the expected number of items selected among the K best items. Buchbinder, Jain and Singh proposed a finite linear program (LP) that completely characterizes the problem, but it is difficult to analyze the asymptotic behavior of its optimal solution as n tends to infinity. Instead, we prove a formal connection between the finite model and an infinite model, in which there are countably infinitely many items, each with an arrival time drawn independently and uniformly from [0, 1]. The finite LP extends to a continuous LP, whose complementary slackness conditions reveal an optimal algorithm involving JK thresholds that play a role similar to that of the 1/e-threshold in the optimal classical secretary algorithm. In particular, for the case K=1, the J optimal thresholds have a nice 'rational description'. Our continuous LP analysis gives a very clear perspective on the problem, and the new insights inspire us to solve two related problems. 1. We settle the open problem of whether algorithms based only on relative merits can achieve the optimal ratio for matroid secretary problems. We show that, for the online 2-item auction with randomly arriving bids (the K-uniform matroid problem with K=2), an algorithm making decisions based only on relative merits cannot achieve the optimal ratio. This is in contrast with the folklore that, for the online 1-item auction, no algorithm can have a performance ratio strictly larger than 1/e, which is achievable by an algorithm that considers only relative merits. 2. We give a general transformation technique that takes any monotone algorithm (such as threshold algorithms) for the (K, K)-secretary problem, and constructs an algorithm for online bipartite K-matching with random arrival order that has at least the same performance guarantee.
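For readers unfamiliar with the 1/e-threshold mentioned above, the following standalone sketch (our own illustration, not the paper's JK-threshold algorithm; function names and parameters are invented) simulates the classical secretary rule and empirically recovers a success probability near 1/e ≈ 0.368.

```python
import random

def secretary_1e(ranks):
    """Classical 1/e rule: observe the first floor(n/e) items without
    selecting, then take the first item better than everything seen so
    far (lower rank = better, rank 0 = best)."""
    n = len(ranks)
    cutoff = int(n / 2.718281828459045)
    best_seen = min(ranks[:cutoff]) if cutoff > 0 else float("inf")
    for r in ranks[cutoff:]:
        if r < best_seen:
            return r
    return ranks[-1]  # no better item appeared; forced to take the last one

def success_rate(n=100, trials=5000, seed=0):
    """Fraction of random arrival orders in which the rule picks the best item."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        ranks = list(range(n))
        rng.shuffle(ranks)
        wins += secretary_1e(ranks) == 0
    return wins / trials
```

Running `success_rate()` gives a value close to 0.37, matching the classical 1/e guarantee.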
Solving Multi-choice Secretary Problem in Parallel: An Optimal Observation-Selection Protocol
The classical secretary problem investigates the question of how to hire the best secretary from candidates who arrive in a uniformly random order. In this work we investigate a parallel generalization of this problem introduced by Feldman and Tennenholtz [14], which we call the shared Q-queue J-choice K-best secretary problem. In this problem, candidates are evenly distributed into Q queues, and instead of hiring the best one, the employer wants to hire J candidates from among the best K persons; the quotas are shared by all queues. This problem is a generalized version of the J-choice K-best problem, which has been extensively studied, and it has more practical value as it captures the parallel situation.
Although some work has been done on this generalization, to the best of our knowledge no optimal deterministic protocol was known for a general number of queues. In this paper, we provide an optimal deterministic protocol for this problem. The protocol is in the same style as the 1/e-solution for the classical secretary problem, but with multiple phases and adaptive criteria. Our protocol is simple and efficient, and we show that several generalizations, such as the fractional J-choice K-best secretary problem and the exclusive Q-queue J-choice K-best secretary problem, can be solved optimally by this protocol with slight modifications; the latter settles an open problem of Feldman and Tennenholtz [14].
In addition, we provide theoretical analysis for two typical cases: the 1-queue 1-choice K-best problem and the shared 2-queue 2-choice 2-best problem. For the former, we prove a lower bound on the competitive ratio. For the latter, we derive the optimal competitive ratio, improving on the previously best known result of 0.356 [14]. Comment: This work is accepted by ISAAC 201
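To make the threshold style concrete, here is a deliberately simplified single-queue rule (our own toy sketch with an arbitrary observation fraction, not the paper's optimal multi-phase protocol): observe a prefix, then hire any candidate beating the k-th best candidate seen so far, until k hires are made.

```python
def k_choice_threshold(ranks, k, observe_frac=0.25):
    """Toy single-queue k-choice rule: skip an observation prefix, then
    hire any arriving candidate that beats the k-th best candidate seen
    so far (lower rank = better), until k candidates are hired."""
    n = len(ranks)
    cutoff = max(1, int(n * observe_frac))
    hired = []
    for i, r in enumerate(ranks):
        if i < cutoff:
            continue  # observation phase: never hire
        # k-th best among the i candidates seen before this one
        kth_best = sorted(ranks[:i])[k - 1] if i >= k else float("inf")
        if r < kth_best and len(hired) < k:
            hired.append(r)
    return hired
```

An optimal protocol instead uses multiple phases with progressively looser, adaptive acceptance criteria rather than a single fixed cutoff.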
Effective and Efficient Reconstruction Schemes for the Inverse Medium Problem in Scattering
This thesis addresses the development of a computational framework facilitating the solution of the inverse medium problem in time-independent scattering in two- and three-dimensional settings. This includes three main application cases: the simulation of the scattered field for a given transmitter-receiver geometry; the generation of simulated data as well as the handling of real-world data; and the reconstruction of the refractive index of a penetrable medium from several measured, scattered fields. We focus on an effective and efficient reconstruction algorithm. To this end we set up a variational reconstruction scheme. The underlying paradigm is to minimize the discrepancy between the predicted data based on the reconstructed refractive index and the given data, while taking into account various structural a priori information via suitable penalty terms, which are designed to promote features expected in real-world environments. Finally, the scheme relies on a primal-dual algorithm. In addition, information about the obstacle's shape and position obtained by the factorization method can be used as a priori information to increase the overall effectiveness of the scheme. An implementation is provided as the MATLAB toolbox IPscatt. It is tailored to the needs of practitioners, e.g. a heuristic algorithm for an automatic, data-driven choice of the regularization parameters is available. The effectiveness and efficiency of the proposed approach are demonstrated for simulated as well as real-world data by comparisons with existing software packages.
Heuristically Driven Search Methods for Topology Control in Directional Wireless Hybrid Networks
Information and Networked Communications play a vital role in the everyday operations of the United States Armed Forces. This research establishes a comparative analysis of the unique network characteristics and requirements introduced by the Topology Control Problem (also known as the Network Design Problem). Previous research has focused on the development of Mixed-Integer Linear Program (MILP) formulations, simple heuristics, and Genetic Algorithm (GA) strategies for solving this problem. Principal concerns with these techniques include runtime and solution quality. To reduce runtime, new strategies have been developed based on the concept of flow networks, using a novel combination of three well-known algorithms: knapsack, greedy commodity filtering, and maximum flow. The performance of this approach and its variants is compared with previous research using several network metrics, including computation time, cost, network diameter, dropped commodities, and average number of hops per commodity. The results indicate that maximum-flow algorithms alone are not quite as effective as previously reported approaches, but are at least comparable and show potential for larger networks.
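Of the three named building blocks, maximum flow is the most involved; a generic Edmonds-Karp sketch (a standard textbook version, not the thesis's implementation; the dict-of-dicts graph encoding is our own choice) looks like this:

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp: repeatedly push flow along shortest augmenting paths.
    cap is a dict-of-dicts of residual capacities and is mutated in place."""
    flow = 0
    while True:
        # BFS for a shortest s-t path in the residual graph
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, c in cap.get(u, {}).items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow  # no augmenting path left
        # recover the path and its bottleneck capacity
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(cap[u][v] for u, v in path)
        for u, v in path:
            cap[u][v] -= bottleneck
            cap.setdefault(v, {}).setdefault(u, 0)
            cap[v][u] += bottleneck  # add reverse residual capacity
        flow += bottleneck
```

In a topology-control pipeline, such a routine would be invoked per commodity after knapsack-style link selection and greedy commodity filtering.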
Mitigating Uncertainty via Compromise Decisions in Two-stage Stochastic Linear Programming
Stochastic Programming (SP) has long been considered a well-justified yet computationally challenging paradigm for practical applications. Computational studies in the literature often involve approximating a large number of scenarios by a small number of scenarios to be processed via deterministic solvers, or running Sample Average Approximation on some genre of high-performance machines so that statistically acceptable bounds can be obtained. In this paper we show that, for a class of stochastic linear programming problems, an alternative approach known as Stochastic Decomposition (SD) can provide solutions of similar quality in far less computational time, using ordinary desktop or laptop machines of today. In addition to these compelling computational results, we also provide a stronger convergence result for SD and introduce a new solution concept which we refer to as the compromise decision. This new concept is attractive for algorithms that call for multiple replications in sampling-based convex optimization. For such replicated optimization, we show that the difference between an average solution and a compromise decision provides a natural stopping rule. Finally, our computational results cover a variety of instances from the literature, including a detailed study of SSN, a network planning instance which is known to be more challenging than other test instances in the literature.
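The compromise-decision stopping rule can be illustrated on a deliberately simple one-dimensional problem (our own toy objective E|x − ξ|, not a two-stage LP; function names and the tolerance are invented): each replication minimizes its own sampled objective, while the compromise decision minimizes the aggregate, and the two nearly agree once sampling error is small.

```python
import random
import statistics

def replicate(num_reps=10, n=200, seed=1):
    """Draw num_reps independent sample sets from the same distribution."""
    rng = random.Random(seed)
    return [[rng.gauss(5.0, 2.0) for _ in range(n)] for _ in range(num_reps)]

def compromise_stopping(samples, tol=0.25):
    """For the toy objective E|x - xi|, each replication's minimizer is its
    sample median; the compromise decision minimizes the aggregated sampled
    objective, i.e. the median of the pooled samples."""
    per_rep = [statistics.median(s) for s in samples]
    average_solution = sum(per_rep) / len(per_rep)
    compromise = statistics.median([x for s in samples for x in s])
    stop = abs(average_solution - compromise) < tol  # natural stopping rule
    return average_solution, compromise, stop
```

Both estimates concentrate around the true minimizer (here 5.0), so the gap between the average solution and the compromise decision shrinks as sampling improves.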
Proportionally Fair Online Allocation of Public Goods with Predictions
We design online algorithms for the fair allocation of public goods to a set of agents over a sequence of rounds, and focus on improving their performance using predictions. In the basic model, a public good arrives in each round, the algorithm learns every agent's value for the good, and must irrevocably decide the amount of investment in the good without exceeding a total budget across all rounds. The algorithm can utilize (potentially inaccurate) predictions of each agent's total value for all the goods to arrive. We measure the performance of the algorithm using a proportional fairness objective, which informally demands that every group of agents be rewarded in proportion to its size and the cohesiveness of its preferences. In the special case of binary agent preferences and a unit budget, we show that a strong proportional-fairness guarantee can be achieved without using any predictions, and that this guarantee is optimal even if perfectly accurate predictions were available. However, for general preferences and budgets, no algorithm can achieve a comparably strong guarantee without predictions. We show that algorithms with (reasonably accurate) predictions can do much better, achieving a substantially stronger proportional-fairness guarantee. We also extend this result to a general model in which a batch of public goods arrives in each round. Our exact bounds are parametrized as a function of the error in the predictions, and performance degrades gracefully with increasing errors.
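For intuition, proportional fairness is closely tied to maximizing the sum of log utilities (Nash welfare); the sketch below (our own toy baseline with binary values, not the paper's prediction-based algorithm) evaluates a naive equal-split online rule against that objective.

```python
import math

def nash_welfare(utilities):
    """Sum of log utilities; maximizing it rewards every group of agents
    roughly in proportion to its size and the cohesion of its preferences."""
    return sum(math.log(u) for u in utilities)

def equal_split_online(value_rounds, budget=1.0):
    """Naive prediction-free baseline: irrevocably invest an equal
    budget/T amount in the good arriving in each of the T rounds."""
    T = len(value_rounds)
    utilities = [0.0] * len(value_rounds[0])
    spend = budget / T
    for values in value_rounds:  # values[i] is agent i's (binary) value
        for i, v in enumerate(values):
            utilities[i] += v * spend  # agent i's utility from this investment
    return utilities
```

A prediction-aware algorithm can instead front-load or withhold budget based on the predicted total values, which is where the improved guarantees come from.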