
    Balancing the Tradeoff between Profit and Fairness in Rideshare Platforms During High-Demand Hours

    Rideshare platforms, when assigning requests to drivers, tend to maximize profit for the system and/or minimize waiting time for riders. Such platforms can exacerbate biases that drivers may have over certain types of requests. We consider the case of peak hours, when the demand for rides exceeds the supply of drivers. Drivers are well aware of their advantage during peak hours and can choose to be selective about which rides to accept. Moreover, if in such a scenario the assignment of requests to drivers (by the platform) is made only to maximize profit and/or minimize wait time for riders, requests of a certain type (e.g., from a non-popular pickup location, or to a non-popular drop-off location) might never be assigned to a driver. Such a system can be highly unfair to riders. However, increasing fairness may come at a cost to the overall profit made by the rideshare platform. To balance these conflicting goals, we present a flexible, non-adaptive algorithm, \lpalg, that allows the platform designer to control the profit and fairness of the system via parameters $\alpha$ and $\beta$ respectively. We model the matching problem as an online bipartite matching where the set of drivers is offline and requests arrive online. Upon the arrival of a request, we use \lpalg to assign it to a driver (the driver might then choose to accept or reject it) or to reject the request. We formalize the measures of profit and fairness in our setting and show that by using \lpalg, the competitive ratios for the profit and fairness measures are no worse than $\alpha/e$ and $\beta/e$ respectively. Extensive experimental results on both real-world and synthetic datasets confirm the validity of our theoretical lower bounds. Additionally, they show that \lpalg under some choice of $(\alpha, \beta)$ can beat two natural heuristics, Greedy and Uniform, on \emph{both} fairness and profit.
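
    The abstract describes the online matching loop only at a high level, so a minimal sketch may help fix ideas. Everything below is assumed for illustration: the offline LP solution x, the expected arrival rates, and the single scale knob standing in for the paper's $\alpha$/$\beta$ controls are hypothetical stand-ins, not \lpalg itself.

```python
import random
from dataclasses import dataclass

@dataclass
class Driver:
    name: str
    available: bool = True

def serve_request(req_type, drivers, x, expected_arrivals, scale):
    """Non-adaptively propose a driver for an arriving request.

    x[(d, t)]            -- hypothetical offline LP mass for driver d, type t
    expected_arrivals[t] -- expected number of type-t requests
    scale                -- stand-in for the alpha (profit) / beta (fairness) knob
    """
    r, cum = random.random(), 0.0
    for d in drivers:
        # Propose driver d with probability scale * x / E[#type-t arrivals].
        cum += scale * x.get((d.name, req_type), 0.0) / expected_arrivals[req_type]
        if r < cum:
            if d.available:
                d.available = False  # the driver may still reject in practice
                return d
            return None              # sampled driver already matched: reject
    return None                      # no driver sampled: reject the request
```

    Being non-adaptive, the sketch does not re-sample when the chosen driver is unavailable; scaling the proposal probabilities down by the knob is what makes room for trading profit against fairness.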

    Half-integrality, LP-branching and FPT Algorithms

    A recent trend in parameterized algorithms is the application of polytope tools (specifically, LP-branching) to FPT algorithms (e.g., Cygan et al., 2011; Narayanaswamy et al., 2012). However, although interesting results have been achieved, the methods require the underlying polytope to have very restrictive properties (half-integrality and persistence), which are known only for a few problems (essentially Vertex Cover (Nemhauser and Trotter, 1975) and Node Multiway Cut (Garg et al., 1994)). Taking a slightly different approach, we view half-integrality as a \emph{discrete} relaxation of a problem, e.g., a relaxation of the search space from $\{0,1\}^V$ to $\{0,1/2,1\}^V$ such that the new problem admits a polynomial-time exact solution. Using tools from CSP (in particular Thapper and Živný, 2012) to study the existence of such relaxations, we provide a much broader class of half-integral polytopes with the required properties, unifying and extending previously known cases. In addition to the insight into problems with half-integral relaxations, our results yield a range of new and improved FPT algorithms, including an $O^*(|\Sigma|^{2k})$-time algorithm for node-deletion Unique Label Cover with label set $\Sigma$ and an $O^*(4^k)$-time algorithm for Group Feedback Vertex Set, including the setting where the group is only given by oracle access. All of these significantly improve on previous results. The latter result also implies the first single-exponential-time FPT algorithm for Subset Feedback Vertex Set, answering an open question of Cygan et al. (2012). Additionally, we propose a network-flow-based approach to solve some cases of the relaxation problem. This gives the first linear-time FPT algorithm for edge-deletion Unique Label Cover.
    Comment: Added results on linear-time FPT algorithms (not present in the SODA paper).
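
    The half-integrality and persistence properties the abstract refers to are easiest to see in the classic Vertex Cover case (Nemhauser and Trotter, 1975). The sketch below, using SciPy's generic LP solver as a stand-in for a purpose-built routine, shows the preprocessing step that LP-branching algorithms build on: an optimal basic solution of the Vertex Cover LP is half-integral, and persistence lets the integral coordinates be fixed, leaving only the half-valued core to branch on.

```python
import numpy as np
from scipy.optimize import linprog

def nt_reduce(n, edges):
    """Nemhauser-Trotter-style preprocessing for Vertex Cover.

    Solves the LP relaxation  min sum(x)  s.t.  x_u + x_v >= 1,  0 <= x <= 1.
    An optimal basic solution is half-integral; by persistence, vertices
    with x_v = 1 go into the cover, x_v = 0 vertices are discarded, and
    only the x_v = 1/2 core remains."""
    c = np.ones(n)
    A = np.zeros((len(edges), n))
    for row, (u, v) in enumerate(edges):
        A[row, u] = A[row, v] = -1.0      # encodes x_u + x_v >= 1
    b = -np.ones(len(edges))
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, 1)] * n, method="highs")
    x = res.x
    in_cover = [v for v in range(n) if x[v] > 0.75]   # persistence: take
    dropped  = [v for v in range(n) if x[v] < 0.25]   # persistence: drop
    core     = [v for v in range(n) if 0.25 <= x[v] <= 0.75]
    return in_cover, dropped, core

# Path 0-1-2: the LP picks the middle vertex, nothing is half-integral.
print(nt_reduce(3, [(0, 1), (1, 2)]))    # ([1], [0, 2], [])
# Triangle: the all-1/2 solution survives, so all vertices stay in the core.
print(nt_reduce(3, [(0, 1), (1, 2), (0, 2)]))
```

    An LP-branching algorithm then branches only on a core vertex (setting it to 0 or 1), using the strict increase of the LP optimum to bound the depth of the recursion; the paper's contribution is extending this pattern well beyond Vertex Cover.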

    Quantum symmetry, the cosmological constant and Planck scale phenomenology

    We present a simple algebraic argument for the conclusion that the low-energy limit of a quantum theory of gravity must be a theory invariant, not under the Poincare group, but under a deformation of it parameterized by a dimensional parameter proportional to the Planck mass. Such deformations, called kappa-Poincare algebras, imply modified energy-momentum relations of a type that may be observable in near-future experiments. Our argument applies in both 2+1 and 3+1 dimensions and assumes only 1) that the low-energy limit of a quantum theory of gravity must also involve a limit in which the cosmological constant is taken very small with respect to the Planck scale, and 2) that in 3+1 dimensions the physical energies and momenta of elementary particles are related to symmetries of the full quantum gravity theory by an appropriate renormalization depending on $\Lambda l^2_{Planck}$. The argument makes use of the fact that the cosmological constant results in the symmetry algebra of quantum gravity being quantum deformed; as a consequence, when the limit $\Lambda l^2_{Planck} \to 0$ is taken one finds a deformed Poincare invariance. We are also able to isolate what information must be provided by the quantum theory in order to determine which presentation of the kappa-Poincare algebra is relevant for the physical symmetry generators and, hence, the exact form of the modified energy-momentum relations. These arguments imply that Lorentz invariance is modified, as in proposals for doubly special relativity, rather than broken, in theories of quantum gravity, so long as those theories behave smoothly in the limit in which the cosmological constant is taken to be small.
    Comment: LaTeX, 19 pages.
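
    For concreteness, one commonly quoted presentation of the kappa-Poincare mass-shell condition is the Majid-Ruegg bicrossproduct basis; as the abstract stresses, which presentation the quantum theory actually selects is precisely the open question, so the conventions below are one choice among several, shown only to illustrate the kind of modified energy-momentum relation at stake.

```latex
% Bicrossproduct-basis mass-shell condition, kappa ~ M_Planck:
\left( 2\kappa \sinh\frac{E}{2\kappa} \right)^{2}
  - \vec{p}^{\,2}\, e^{E/\kappa} = m^{2}

% Expanding to leading order in 1/kappa recovers the usual relation plus a
% Planck-suppressed correction of the potentially observable type:
E^{2} - \vec{p}^{\,2} - \frac{1}{\kappa}\, E\,\vec{p}^{\,2}
  + O\!\left(1/\kappa^{2}\right) = m^{2}
```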

    Data Reductions and Combinatorial Bounds for Improved Approximation Algorithms

    Kernelization algorithms in the context of Parameterized Complexity are often based on a combination of reduction rules and combinatorial insights. In this paper we present a similar strategy for obtaining polynomial-time approximation algorithms. Our method features the use of approximation-preserving reductions, akin to the notion of parameterized reductions. We exemplify this method by obtaining the currently best approximation algorithms for \textsc{Harmless Set}, \textsc{Differential} and \textsc{Multiple Nonblocker}, all of which can be considered in the context of securing networks or information propagation.
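
    The strategy the abstract outlines, reduction rules first and a combinatorial bound second, has a generic shape that a short sketch can make concrete. The rule interface below is invented purely for illustration; the actual rules for \textsc{Harmless Set}, \textsc{Differential} and \textsc{Multiple Nonblocker} are problem-specific.

```python
def reduce_then_approximate(instance, rules, approximate):
    """Generic skeleton of a reduction-based approximation algorithm.

    Exhaustively apply approximation-preserving reduction rules, then run
    a combinatorial approximation on the irreducible instance. A rule
    takes an instance and returns either None (rule does not apply) or a
    pair (smaller_instance, forced_solution_part)."""
    forced = []
    changed = True
    while changed:
        changed = False
        for rule in rules:
            outcome = rule(instance)
            if outcome is not None:
                instance, part = outcome   # shrink the instance and
                forced.extend(part)        # record the forced choices
                changed = True
    # The approximation guarantee on the reduced instance carries back to
    # the original one because each rule is approximation-preserving.
    return forced + approximate(instance)
```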

    Improved Approximation Algorithms for Stochastic Matching

    In this paper we consider the Stochastic Matching problem, which is motivated by applications in kidney exchange and online dating. We are given an undirected graph in which every edge is assigned a probability of existence and a positive profit, and each node is assigned a positive integer called a timeout. We know whether an edge exists or not only after probing it. On this random graph we execute a process which probes the edges one by one and gradually constructs a matching. The process is constrained in two ways: once an edge is taken it cannot be removed from the matching, and the timeout of node $v$ upper-bounds the number of edges incident to $v$ that can be probed. The goal is to maximize the expected profit of the constructed matching. For this problem Bansal et al. (Algorithmica 2012) provided a 3-approximation algorithm for bipartite graphs, and a 4-approximation for general graphs. In this work we improve the approximation factors to 2.845 and 3.709, respectively. We also consider an online version of the bipartite case, where one side of the partition arrives node by node, and each time a node $b$ arrives we have to decide which edges incident to $b$ we want to probe, and in which order. Here we present a 4.07-approximation, improving on the 7.92-approximation of Bansal et al. The main technical ingredient in our result is a novel way of probing edges according to a random but non-uniform permutation. Patching this method with an algorithm that works best for large-probability edges (plus some additional ideas) leads to our improved approximation factors.
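
    A toy version of the "random but non-uniform permutation" idea is easy to sketch: bias a random order by a per-edge weight, so heavier edges tend to be probed first while the order stays random. The exponential-clock weighting and the per-edge values x below are assumptions made for illustration, not the paper's actual distribution.

```python
import random

def probe_order(edges, x):
    """Sample a random but non-uniform permutation of the edges: each edge
    draws an exponential clock with rate x[e] (a hypothetical per-edge LP
    value), so high-x edges tend, but are not forced, to come first."""
    clock = {e: random.expovariate(max(x[e], 1e-9)) for e in edges}
    return sorted(edges, key=clock.get)

def run_probing(edges, x, timeouts, p_exist, profit):
    """Probe edges in the sampled order under the matching and timeout
    constraints; an edge found to exist is irrevocably added."""
    matched, probes_left, total = set(), dict(timeouts), 0.0
    for (u, v) in probe_order(edges, x):
        if u in matched or v in matched:
            continue                       # endpoint already matched
        if probes_left[u] == 0 or probes_left[v] == 0:
            continue                       # a timeout is exhausted
        probes_left[u] -= 1                # probing consumes one timeout
        probes_left[v] -= 1                # unit at each endpoint
        if random.random() < p_exist[(u, v)]:
            matched.update((u, v))         # edge exists: take it for good
            total += profit[(u, v)]
    return total
```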