    Optimality of Treating Interference as Noise: A Combinatorial Perspective

    For single-antenna Gaussian interference channels, we re-formulate, from a combinatorial perspective, the problem of determining the Generalized Degrees of Freedom (GDoF) region achievable by treating interference as Gaussian noise (TIN), as derived in [3]. We show that the TIN power control problem can be cast as an assignment problem, so that the globally optimal power allocation variables can be obtained by well-known polynomial-time algorithms. Furthermore, the expression of the TIN-achievable GDoF region (TINA region) can be substantially simplified with the aid of maximum weighted matchings. We also provide conditions under which the TINA region is a convex polytope that relax those in [3]. Under these new conditions, together with a channel connectivity (i.e., interference topology) condition, we show TIN optimality for a new class of interference networks that neither includes nor is included in the class found in [3]. Building on the above insights, we consider the problem of joint link scheduling and power control in wireless networks, which has been widely studied as a basic physical-layer mechanism for device-to-device (D2D) communications. Inspired by the relaxed TIN channel strength condition as well as the assignment-based power allocation, we propose a low-complexity GDoF-based distributed link scheduling and power control mechanism (ITLinQ+) that improves upon the ITLinQ scheme proposed in [4] and further improves over the heuristic approach known as FlashLinQ. Simulations demonstrate that ITLinQ+ provides significant average network throughput gains over both ITLinQ and FlashLinQ, while maintaining the same level of implementation complexity. More notably, the energy efficiency of the newly proposed ITLinQ+ is substantially higher than that of ITLinQ and FlashLinQ, which is desirable for D2D networks formed by battery-powered devices.
    Comment: A short version has been presented at the IEEE International Symposium on Information Theory (ISIT 2015), Hong Kong.
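
    To make the assignment-problem reduction concrete, here is a minimal sketch (not the paper's actual construction) of solving a maximum-weight assignment with the Hungarian method via SciPy; the weight matrix below is a made-up placeholder, whereas in the paper the weights are derived from the channel strength levels.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    # Hypothetical 3x3 weight matrix: weights[i, j] is a stand-in for the
    # GDoF contribution of pairing link i with role j in the matching.
    weights = np.array([
        [0.9, 0.4, 0.1],
        [0.3, 0.8, 0.5],
        [0.2, 0.6, 0.7],
    ])

    # linear_sum_assignment minimizes total cost, so negate to maximize.
    rows, cols = linear_sum_assignment(-weights)
    print("assignment:", list(zip(rows, cols)))
    print("total weight:", weights[rows, cols].sum())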

    Partial-Matching and Hausdorff RMS Distance Under Translation: Combinatorics and Algorithms

    We consider the RMS distance (sum of squared distances between pairs of points) under translation between two point sets in the plane, in two different setups. In the partial-matching setup, each point in the smaller set is matched to a distinct point in the bigger set. Although the problem is not known to be polynomial, we establish several structural properties of the underlying subdivision of the plane and derive improved bounds on its complexity. These results lead to the best known algorithm for finding a translation for which the partial-matching RMS distance between the point sets is minimized. In addition, we show how to compute a local minimum of the partial-matching RMS distance under translation in polynomial time. In the Hausdorff setup, each point is paired with its nearest neighbor in the other set. We develop algorithms for finding a local minimum of the Hausdorff RMS distance in nearly linear time on the line, and in nearly quadratic time in the plane. These substantially improve the worst-case behavior of the popular ICP heuristics for solving this problem.
    Comment: 31 pages, 6 figures
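
    For contrast with the paper's algorithms, the following is a minimal sketch of the ICP-style heuristic mentioned above, for one direction of the Hausdorff pairing: alternate between nearest-neighbor assignment and the closed-form optimal translation for that assignment (the mean displacement). The point sets are made up for illustration.

    import numpy as np

    def icp_translation(A, B, iters=50):
        """Translate point set A toward B; A is (n,2), B is (m,2)."""
        t = np.zeros(2)
        nn = np.zeros(len(A), dtype=int)
        for _ in range(iters):
            # Pair each translated point of A with its nearest neighbor in B.
            d = np.linalg.norm((A + t)[:, None, :] - B[None, :, :], axis=2)
            nn = d.argmin(axis=1)
            # For a fixed pairing, the RMS-optimal translation is the mean offset.
            t_new = (B[nn] - A).mean(axis=0)
            if np.allclose(t, t_new):
                break
            t = t_new
        rms = np.sqrt(((A + t - B[nn]) ** 2).sum(axis=1).mean())
        return t, rms

    A = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
    B = A + np.array([2.0, -1.0])   # B is A shifted; the optimum is t = (2, -1)
    print(icp_translation(A, B))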

    Cross-layer Congestion Control, Routing and Scheduling Design in Ad Hoc Wireless Networks

    This paper considers the jointly optimal design of cross-layer congestion control, routing and scheduling for ad hoc wireless networks. We first formulate the rate constraint and scheduling constraint using multicommodity flow variables, and formulate resource allocation in networks with fixed wireless channels (or single-rate wireless devices that can mask channel variations) as a utility maximization problem with these constraints. By dual decomposition, the resource allocation problem naturally decomposes into three subproblems: congestion control, routing and scheduling, which interact through congestion prices. The global convergence property of this algorithm is proved. We next extend the dual algorithm to handle networks with time-varying channels and adaptive multi-rate devices. The stability of the resulting system is established, and its performance is characterized with respect to an ideal reference system which has the best feasible rate region at the link layer. We then generalize the aforementioned results to a general model of a queueing network served by a set of interdependent parallel servers with time-varying service capabilities, which models many design problems in communication networks. We show that for a general convex optimization problem where a subset of variables lies in a polytope and the rest in a convex set, the dual-based algorithm remains stable and optimal when the constraint set is modulated by an irreducible finite-state Markov chain. This paper thus presents a step toward a systematic way to carry out cross-layer design in the framework of “layering as optimization decomposition” for time-varying channel models.
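
    As a toy illustration of the dual decomposition (with made-up topology, capacities, and step size, and for fixed single-path routing only, so the routing and scheduling subproblems are trivialized), sources maximize log-utility minus path price while links update congestion prices by projected subgradient:

    import numpy as np

    R = np.array([[1, 1, 0],      # routing matrix: R[l, s] = 1 if source s
                  [0, 1, 1]])     # traverses link l (fixed single-path routing)
    c = np.array([1.0, 2.0])      # link capacities
    p = np.zeros(2)               # congestion prices (dual variables)
    step = 0.05

    for _ in range(2000):
        q = R.T @ p                        # total path price seen by each source
        x = 1.0 / np.maximum(q, 1e-6)      # maximizer of log(x) - q * x
        # Subgradient ascent on the dual, projected onto p >= 0.
        p = np.maximum(p + step * (R @ x - c), 0.0)

    print("rates:", x, "prices:", p)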

    Computing Equilibrium in Matching Markets

    Market equilibria of matching markets offer an intuitive and fair solution for matching problems without money, with agents who have preferences over the items. Such a matching market can be viewed as a variation of a Fisher market, albeit with rather peculiar agent preferences. These preferences can be described by piecewise-linear concave (PLC) functions, which, however, are not separable (since each agent asks for only one item), are not monotone, and do not satisfy the gross substitutes property: an increase in the price of an item can result in increased demand for that item. Devanur and Kannan (FOCS 2008) showed that market clearing prices can be found in polynomial time in markets with a fixed number of items and general PLC preferences. They also consider Fisher markets with a fixed number of agents (instead of a fixed number of items), and give a polynomial-time algorithm for this case if the preferences are separable functions of the items, in addition to being PLC. Our main result is a polynomial-time algorithm for finding market clearing prices in matching markets with a fixed number of different agent preferences, even though the utility corresponding to matching markets is not separable. We also give a simpler algorithm for the case of matching markets with a fixed number of different items.
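
    As a rough illustration of market clearing prices, in the much simpler setting of a linear Fisher market (not the paper's non-separable PLC utilities, and not the authors' algorithm), proportional-response dynamics converge to equilibrium prices; budgets and valuations here are made up:

    import numpy as np

    V = np.array([[4.0, 1.0, 2.0],    # V[i, j]: agent i's value for item j
                  [2.0, 3.0, 1.0]])
    B = np.array([1.0, 1.0])          # budgets; each item has unit supply

    bids = np.full_like(V, 1.0 / V.shape[1]) * B[:, None]   # uniform initial bids
    for _ in range(1000):
        p = bids.sum(axis=0)                  # an item's price is the money bid on it
        x = bids / p                          # allocation proportional to bids
        u = (V * x).sum(axis=1, keepdims=True)
        bids = B[:, None] * (V * x) / u       # rebid in proportion to utility share

    print("clearing prices:", p)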

    General Bounds for Incremental Maximization

    We propose a theoretical framework to capture incremental solutions to cardinality constrained maximization problems. The defining characteristic of our framework is that the cardinality/support of the solution is bounded by a value $k \in \mathbb{N}$ that grows over time, and we allow the solution to be extended one element at a time. We investigate the best-possible competitive ratio of such an incremental solution, i.e., the worst ratio over all $k$ between the incremental solution after $k$ steps and an optimum solution of cardinality $k$. We define a large class of problems that contains many important cardinality constrained maximization problems like maximum matching, knapsack, and packing/covering problems. We provide a general $2.618$-competitive incremental algorithm for this class of problems, and show that no algorithm can have competitive ratio below $2.18$ in general. In the second part of the paper, we focus on the inherently incremental greedy algorithm that increases the objective value as much as possible in each step. This algorithm is known to be $1.58$-competitive for submodular objective functions, but it has unbounded competitive ratio for the class of incremental problems mentioned above. We define a relaxed submodularity condition for the objective function, capturing problems like maximum (weighted) ($b$-)matching and a variant of the maximum flow problem. We show that the greedy algorithm has competitive ratio (exactly) $2.313$ for the class of problems that satisfy this relaxed submodularity condition. Note that our upper bounds on the competitive ratios translate to approximation ratios for the underlying cardinality constrained problems.
    Comment: fixed typo
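
    As an illustration of the inherently incremental greedy algorithm, here is a minimal sketch on a made-up maximum-coverage instance (a submodular objective, so greedy is roughly $1.58$-competitive here):

    def incremental_greedy(sets, k):
        """Add one set at a time, each maximizing the marginal coverage gain."""
        chosen, covered, history = [], set(), []
        for _ in range(k):
            best = max((s for s in sets if s not in chosen),
                       key=lambda s: len(s - covered), default=None)
            if best is None:
                break
            chosen.append(best)
            covered |= best
            history.append(len(covered))   # objective value after each step
        return chosen, history

    sets = [frozenset({1, 2, 3}), frozenset({3, 4}),
            frozenset({4, 5, 6}), frozenset({1, 6})]
    print(incremental_greedy(sets, 3))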

    Efficiency in Matching Markets with Regional Caps: The Case of the Japan Residency Matching Program

    In an attempt to increase the placement of medical residents at rural hospitals, the Japanese government recently introduced "regional caps" which restrict the total number of residents matched within each region of the country. The government modified the deferred acceptance mechanism to incorporate the regional caps. This paper shows that the current mechanism may result in avoidable inefficiency and instability, and proposes a better mechanism that improves upon it in terms of efficiency and stability while meeting the regional caps. More broadly, the paper contributes to the general research agenda of matching and market design to address practical problems.
    Keywords: medical residency matching, regional caps, the rural hospital theorem, stability, strategy-proofness, matching with contracts
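
    For reference, here is a minimal sketch of doctor-proposing deferred acceptance without regional caps, the baseline that the government's mechanism modifies; preference lists and capacities are made up, and handling the caps well is precisely the design problem the paper addresses.

    def deferred_acceptance(doc_prefs, hosp_prefs, capacity):
        free = list(doc_prefs)                   # doctors who may still propose
        nxt = {d: 0 for d in doc_prefs}          # next hospital on d's list
        held = {h: [] for h in hosp_prefs}       # tentatively accepted doctors
        rank = {h: {d: i for i, d in enumerate(ps)}
                for h, ps in hosp_prefs.items()}
        while free:
            d = free.pop()
            if nxt[d] >= len(doc_prefs[d]):
                continue                         # list exhausted: d stays unmatched
            h = doc_prefs[d][nxt[d]]
            nxt[d] += 1
            held[h].append(d)
            held[h].sort(key=lambda x: rank[h][x])   # hospital keeps its favorites
            while len(held[h]) > capacity[h]:
                free.append(held[h].pop())       # reject the least preferred doctor
        return held

    doc_prefs = {"d1": ["h1", "h2"], "d2": ["h1", "h2"], "d3": ["h2", "h1"]}
    hosp_prefs = {"h1": ["d1", "d3", "d2"], "h2": ["d2", "d1", "d3"]}
    print(deferred_acceptance(doc_prefs, hosp_prefs, {"h1": 1, "h2": 2}))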

    Edit Distance: Sketching, Streaming and Document Exchange

    We show that in the document exchange problem, where Alice holds $x \in \{0,1\}^n$ and Bob holds $y \in \{0,1\}^n$, Alice can send Bob a message of size $O(K(\log^2 K + \log n))$ bits such that Bob can recover $x$ using the message and his input $y$ if the edit distance between $x$ and $y$ is no more than $K$, and output "error" otherwise. Both the encoding and decoding can be done in time $\tilde{O}(n + \mathsf{poly}(K))$. This result significantly improves the previous communication bounds under polynomial encoding/decoding time. We also show that in the referee model, where Alice and Bob hold $x$ and $y$ respectively, they can compute sketches of $x$ and $y$ of size $\mathsf{poly}(K \log n)$ bits (the encoding), and send them to the referee, who can then compute the edit distance between $x$ and $y$ together with all the edit operations if the edit distance is no more than $K$, and output "error" otherwise (the decoding). To the best of our knowledge, this is the first result for sketching edit distance using $\mathsf{poly}(K \log n)$ bits. Moreover, the encoding phase of our sketching algorithm can be performed by scanning the input string in one pass. Thus our sketching algorithm also implies the first streaming algorithm for computing edit distance and all the edits exactly using $\mathsf{poly}(K \log n)$ bits of space.
    Comment: Full version of an article to be presented at the 57th Annual IEEE Symposium on Foundations of Computer Science (FOCS 2016)
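
    For orientation, the baseline primitive underlying such protocols is the classic $O(nK)$ banded dynamic program that decides whether the edit distance is at most $K$; the paper's contribution, compressing $x$ into a $\mathsf{poly}(K \log n)$-bit sketch, is not attempted in this snippet.

    def edit_distance_at_most(x, y, K):
        """Return the edit distance if it is <= K, else None; O(len(x) * K) time."""
        if abs(len(x) - len(y)) > K:
            return None                     # distance certainly exceeds K
        INF = K + 1
        n, m = len(x), len(y)
        prev = [min(j, INF) for j in range(m + 1)]   # DP row 0, clamped to the band
        for i in range(1, n + 1):
            cur = [INF] * (m + 1)
            if i <= K:
                cur[0] = i
            # Only cells with |i - j| <= K can lie on a path of cost <= K.
            for j in range(max(1, i - K), min(m, i + K) + 1):
                cost = 0 if x[i - 1] == y[j - 1] else 1
                cur[j] = min(prev[j - 1] + cost,   # substitute or match
                             prev[j] + 1,          # delete from x
                             cur[j - 1] + 1)       # insert into x
            prev = cur
        return prev[m] if prev[m] <= K else None

    print(edit_distance_at_most("kitten", "sitting", 4))   # -> 3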