64 research outputs found

    A Massively Parallel Dynamic Programming for Approximate Rectangle Escape Problem

    Sublinear time complexity is required by the massively parallel computation (MPC) model. We break dynamic programs into sets of sparse dynamic programs that can be divided, solved, and merged in sublinear time. The rectangle escape problem (REP) is defined as follows: given n axis-aligned rectangles inside an axis-aligned bounding box B, extend each rectangle in only one of the four directions (up, down, left, or right) until it reaches B, so that the density k is minimized, where k is the maximum number of extensions that pass through any point inside B. REP is NP-hard for k > 1. If the rectangles are points of a grid (or unit squares of a grid), the problem is called the square escape problem (SEP), and it is still NP-hard. We give a 2-approximation algorithm for SEP with k ≥ 2 with time complexity O(n^{3/2} k^2); this improves on existing algorithms, whose time complexity is at least quadratic. Moreover, for k ≥ 3 the approximation ratio of our algorithm is 3/2, which is tight. We also give an 8-approximation algorithm for REP with time complexity O(n log n + nk) and an MPC version of this algorithm for k = O(1), which is the first parallel algorithm for this problem.
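
    To make the density objective concrete, the sketch below (our illustration, not the paper's approximation algorithm; all names are ours) computes the density k realized by one particular assignment of escape directions: each rectangle is swept to the boundary of B, and the maximum number of overlapping extension strips is measured at the midpoints of the elementary cells induced by the strip coordinates.

        # Illustrative sketch: evaluate the density k of the REP objective for a
        # fixed choice of escape directions. Not the paper's algorithm; it only
        # shows what is being minimized.
        from itertools import product

        def extension(rect, direction, box):
            """Strip swept when rect = (x1, y1, x2, y2) escapes to the boundary of box."""
            x1, y1, x2, y2 = rect
            bx1, by1, bx2, by2 = box
            if direction == "up":    return (x1, y2, x2, by2)
            if direction == "down":  return (x1, by1, x2, y1)
            if direction == "left":  return (bx1, y1, x1, y2)
            if direction == "right": return (x2, y1, bx2, y2)
            raise ValueError(direction)

        def density(rects, directions, box):
            """Maximum number of extension strips covering a point inside box."""
            strips = [extension(r, d, box) for r, d in zip(rects, directions)]
            xs = sorted({v for s in strips for v in (s[0], s[2])})
            ys = sorted({v for s in strips for v in (s[1], s[3])})
            k = 0
            for (xa, xb), (ya, yb) in product(zip(xs, xs[1:]), zip(ys, ys[1:])):
                cx, cy = (xa + xb) / 2, (ya + yb) / 2   # midpoint of an elementary cell
                k = max(k, sum(s[0] < cx < s[2] and s[1] < cy < s[3] for s in strips))
            return k

        # Two disjoint squares escaping upward produce non-overlapping strips: k = 1.
        print(density([(1, 1, 2, 2), (4, 4, 5, 5)], ["up", "up"], (0, 0, 10, 10)))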

    An Algorithm for the Maximum Weight Independent Set Problem on Outerstring Graphs

    Outerstring graphs are the intersection graphs of curves that lie inside a disk such that each curve intersects the boundary of the disk. Outerstring graphs are among the most general classes of intersection graphs studied. To date, no polynomial-time algorithm is known for any of the classical graph optimization problems on outerstring graphs; in fact, most of them are NP-hard. It is known that every outerstring graph has an intersection model consisting of polygonal arcs attached to a circle. However, such a representation may require an exponential number of segments relative to the size of the graph. Given an outerstring graph and an intersection model consisting of polygonal arcs with a total of N segments, we develop an algorithm that solves the Maximum Weight Independent Set problem in O(N³) time. If the polygonal arcs are restricted to single segments, the resulting graphs are outersegment graphs; for these we solve the Maximum Weight Independent Set problem in O(n³) time, where n is the number of vertices of the graph.
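
    As a concrete illustration of the problem setup (not the paper's O(N³) dynamic program; every name here is ours), the sketch below builds the intersection graph from a polygonal-arc representation using a standard segment-intersection test and then solves Maximum Weight Independent Set by exhaustive enumeration, which is viable only for small instances.

        # Illustrative sketch: MWIS on an intersection model given as polygonal
        # arcs (lists of segments, each a pair of points). Brute force, O(2^n),
        # so usable only to sanity-check small examples.
        from itertools import combinations

        def _orient(p, q, r):
            return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

        def _sign(x):
            return (x > 0) - (x < 0)

        def segments_intersect(s, t):
            """True iff the closed segments s and t share at least one point."""
            (p1, p2), (q1, q2) = s, t
            d1, d2 = _sign(_orient(q1, q2, p1)), _sign(_orient(q1, q2, p2))
            d3, d4 = _sign(_orient(p1, p2, q1)), _sign(_orient(p1, p2, q2))
            if d1 * d2 < 0 and d3 * d4 < 0:      # proper crossing
                return True
            def on_seg(p, q, r):                 # r collinear with pq: inside its bbox?
                return (min(p[0], q[0]) <= r[0] <= max(p[0], q[0]) and
                        min(p[1], q[1]) <= r[1] <= max(p[1], q[1]))
            return ((d1 == 0 and on_seg(q1, q2, p1)) or (d2 == 0 and on_seg(q1, q2, p2)) or
                    (d3 == 0 and on_seg(p1, p2, q1)) or (d4 == 0 and on_seg(p1, p2, q2)))

        def arcs_intersect(a, b):
            return any(segments_intersect(s, t) for s in a for t in b)

        def mwis_bruteforce(arcs, weights):
            n = len(arcs)
            adj = [[arcs_intersect(arcs[i], arcs[j]) for j in range(n)] for i in range(n)]
            best, best_set = 0, ()
            for mask in range(1 << n):
                chosen = [i for i in range(n) if mask >> i & 1]
                if all(not adj[i][j] for i, j in combinations(chosen, 2)):
                    w = sum(weights[i] for i in chosen)
                    if w > best:
                        best, best_set = w, tuple(chosen)
            return best, best_set

        # Arcs a and b cross; c is disjoint, so {b, c} attains the best weight 7.
        a, b, c = [((0, 0), (4, 4))], [((0, 4), (4, 0))], [((5, 0), (6, 3))]
        print(mwis_bruteforce([a, b, c], weights=[2, 3, 4]))   # -> (7, (1, 2))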

    Synthesis of Digital Microfluidic Biochips with Reconfigurable Operation Execution


    Asymptotic properties of wireless multi-hop networks

    In this dissertation, we consider wireless multi-hop networks where the nodes are randomly placed. We are particularly interested in their asymptotic properties as the number of nodes tends to infinity, and we use percolation theory as our main tool of analysis. As a first model, we assume that nodes have a fixed connectivity range and can establish wireless links to all nodes within this range, but to no others (the Boolean model). For one-dimensional networks, we compute the probability that two nodes are connected given the distance between them. We show that this probability tends exponentially to zero as the distance increases, proving that pure multi-hopping does not work in large networks. In two dimensions, however, an unbounded cluster of connected nodes forms if the node density is above a critical threshold (the super-critical phase). This is known as the percolation phenomenon. The cluster contains a positive fraction of the nodes that depends on the node density and remains constant as the network size increases; furthermore, the fraction of connected nodes tends rapidly to one when the node density rises above the threshold. Comparing this partial connectivity to full connectivity, we show that the requirement for full connectivity leads to vanishing throughput as the network size increases. In contrast, partial connectivity is perfectly scalable, at the cost of a tiny fraction of the nodes being disconnected.

    We consider two other connectivity models. The first is a signal-to-interference-plus-noise-ratio based connectivity graph (STIRG), in which we assume deterministic attenuation of the signals as a function of distance. We prove that percolation occurs in this model in a similar way as in the Boolean model, and we study in detail the domain of parameters where it occurs. We show in particular that the assumptions on the attenuation function dramatically impact the results: the commonly used power-law attenuation leads to particular symmetry properties. However, physics imposes that the received signal cannot be stronger than the emitted signal, implying a bounded attenuation function; we observe that percolation is harder to achieve in most cases with such an attenuation function.

    The second model is an information-theoretic view of connectivity, where two arbitrary nodes are considered connected if it is possible to transmit data from one to the other at a given rate. We show that in this model the same partial connectivity can be achieved in a scalable way as in the Boolean model. This is, however, a pure connectivity result, in the sense that there is no competition or interference between data flows. We also look at the other extreme, the Gupta and Kumar scenario, where all nodes want to transmit data simultaneously. We first show that under point-to-point communication and a bounded attenuation function, the total transport capacity of a fixed-area network is bounded from above by a constant, whatever the number of nodes. However, if the network area increases linearly with the number of nodes (constant density), or if we assume a power-law attenuation function, a throughput per node of order 1/√n can be achieved. This latter result improves the existing results on random networks by a factor of (log n)^{1/2}.

    In the last part of this dissertation, we address two problems related to latency. The first is an intruder detection scenario, where a static sensor network has to detect an intruder that moves with constant speed along a straight line. We compute an upper bound on the time needed to detect the intruder, under the assumption that detection by disconnected sensors does not count. In the second scenario, sensors switch off their radios for random periods in order to save energy. This affects the delivery of alert messages, since they may have to wait for relays to turn their radios back on before moving further. We show that, asymptotically, alert messages propagate with a constant, deterministic speed in such networks.
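
    The percolation phenomenon in the two-dimensional Boolean model lends itself to a quick numerical check. The Monte Carlo sketch below (our illustration, not code from the dissertation) scatters n nodes uniformly in a square, links nodes within range r using grid buckets and union-find, and reports the fraction of nodes in the largest cluster; sweeping the density shows this fraction jumping once the mean degree passes the empirically known threshold of roughly 4.5 for this model.

        # Monte Carlo sketch of the 2-D Boolean model: fraction of nodes in the
        # largest connected component as the node density grows at fixed range r.
        import random

        def largest_cluster_fraction(n, side, r, seed=0):
            rng = random.Random(seed)
            pts = [(rng.uniform(0, side), rng.uniform(0, side)) for _ in range(n)]

            parent = list(range(n))                 # union-find over the nodes
            def find(i):
                while parent[i] != i:
                    parent[i] = parent[parent[i]]   # path halving
                    i = parent[i]
                return i

            # Grid buckets of size r: only the 3x3 neighbouring cells can hold links.
            cell = {}
            for i, (x, y) in enumerate(pts):
                cell.setdefault((int(x / r), int(y / r)), []).append(i)
            for (cx, cy), members in cell.items():
                for dx in (-1, 0, 1):
                    for dy in (-1, 0, 1):
                        for i in members:
                            for j in cell.get((cx + dx, cy + dy), ()):
                                if i < j and ((pts[i][0] - pts[j][0]) ** 2 +
                                              (pts[i][1] - pts[j][1]) ** 2) <= r * r:
                                    parent[find(i)] = find(j)

            sizes = {}
            for i in range(n):
                sizes[find(i)] = sizes.get(find(i), 0) + 1
            return max(sizes.values()) / n

        # Density sweep at fixed r: the fraction jumps between n = 400 and n = 800,
        # where the mean degree crosses the critical value of about 4.5.
        for n in (200, 400, 800, 1600):
            print(n, round(largest_cluster_fraction(n, side=20.0, r=1.0), 3))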

    LIPIcs, Volume 258, SoCG 2023, Complete Volume

    LIPIcs, Volume 258, SoCG 2023, Complete Volume

    Low power data-dependent transform video and still image coding

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1999. Includes bibliographical references (p. 139-144). This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.

    This work introduces the idea of data-dependent video coding for low power. Algorithms for the Discrete Cosine Transform (DCT) and its inverse are introduced which exploit statistical properties of the input data in both the space and spatial-frequency domains in order to minimize the total number of arithmetic operations. Two VLSI chips have been built as a proof of concept of data-dependent processing, implementing the DCT and its inverse (IDCT). The IDCT core processor exploits the presence of a large number of zero-valued spectral coefficients in the input stream when stimulated with MPEG-compressed video sequences. A data-driven IDCT computation algorithm, along with clock-gating techniques, is used to reduce the number of arithmetic operations for video inputs. The second chip is a DCT core processor that exhibits two innovative techniques for arithmetic operation reduction in the DCT computation context, along with standard voltage-scaling techniques such as pipelining and parallelism. The first method reduces the bitwidth of arithmetic operations in the presence of data spatial correlation. The second method trades off power dissipation against image compression quality (arithmetic precision). Both chips are fully functional and exhibit the lowest switched capacitance per sample among DCT/IDCT chips reported in the literature. Their power dissipation profile shows a strong dependence on certain statistical properties of the data they operate on, in accordance with the design goal.

    by Thucydides Xanthopoulos, Ph.D.
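
    The data-driven IDCT idea translates naturally into software. The sketch below (our analogue of the chip's zero-skipping behaviour, not its architecture) reconstructs an 8×8 block by accumulating only the basis images of the non-zero DCT coefficients, so the arithmetic cost tracks the coefficient sparsity that MPEG-quantized blocks typically exhibit.

        # Sketch of a data-dependent 2-D IDCT: skip zero coefficients entirely,
        # so the work is proportional to the number of non-zero coefficients.
        import math

        N = 8
        # C[u][x]: orthonormal 1-D DCT-II basis, alpha(u) * cos((2x+1)*u*pi / 2N)
        C = [[math.sqrt((1 if u == 0 else 2) / N) *
              math.cos((2 * x + 1) * u * math.pi / (2 * N))
              for x in range(N)] for u in range(N)]

        def idct2_sparse(coeffs):
            """Map an 8x8 coefficient block to the spatial domain, zero-skipping."""
            block = [[0.0] * N for _ in range(N)]
            for u in range(N):
                for v in range(N):
                    c = coeffs[u][v]
                    if c == 0.0:               # the data-dependent shortcut
                        continue
                    for x in range(N):
                        cu = C[u][x] * c
                        for y in range(N):
                            block[x][y] += cu * C[v][y]
            return block

        # A typical quantized MPEG block: only 2 of 64 coefficients survive,
        # so only 2 basis images are accumulated.
        coeffs = [[0.0] * N for _ in range(N)]
        coeffs[0][0], coeffs[1][0] = 100.0, -12.0
        print(round(idct2_sparse(coeffs)[0][0], 2))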