
    Time-Space Tradeoffs for the Memory Game

    A single-player game of Memory is played with $n$ distinct pairs of cards, with the cards in each pair bearing identical pictures. The cards are laid face-down. A move consists of revealing two cards, chosen adaptively. If these cards match, i.e., they bear the same picture, they are removed from play; otherwise, they are turned back to face down. The object of the game is to clear all cards while minimizing the number of moves. Past works have thoroughly studied the expected number of moves required, assuming optimal play by a player that has perfect memory. In this work, we study the Memory game in a space-bounded setting. We prove two time-space tradeoff lower bounds on algorithms (strategies for the player) that clear all cards in $T$ moves while using at most $S$ bits of memory. First, in a simple model where the pictures on the cards may only be compared for equality, we prove that $ST = \Omega(n^2 \log n)$. This is tight: it is easy to achieve $ST = O(n^2 \log n)$ essentially everywhere on this tradeoff curve. Second, in a more general model that allows arbitrary computations, we prove that $ST^2 = \Omega(n^3)$. We prove this latter tradeoff by modeling strategies as branching programs and extending a classic counting argument of Borodin and Cook with a novel probabilistic argument. We conjecture that the stronger tradeoff $ST = \widetilde{\Omega}(n^2)$ in fact holds even in this general model.
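
    One easy point on the upper-bound side of this curve, at $S = O(\log n)$, is the naive strategy: fix one face-down card and reveal it together with each later face-down card until its partner turns up, then repeat. The sketch below simulates that strategy; it is a minimal illustration (not from the paper), the `deck` array and the bookkeeping set are simulation stand-ins for the table the player sees, and the strategy itself only needs two position counters, giving $T = O(n^2)$ moves with $S = O(\log n)$ bits.

        import random

        # Minimal sketch of the naive low-memory strategy: fix the lowest
        # face-down position and reveal it with each later face-down position
        # until the match appears; matched pairs leave the table. Pictures are
        # only ever compared for equality. `deck` and the `remaining` set are
        # simulation stand-ins; the strategy needs only two position counters.

        def play_memory_naive(deck):
            remaining = set(range(len(deck)))      # face-down positions still on the table
            moves = 0
            while remaining:
                i = min(remaining)                 # fix the lowest face-down position
                for j in sorted(remaining - {i}):
                    moves += 1                     # one move: reveal cards i and j
                    if deck[i] == deck[j]:         # equality test only
                        remaining -= {i, j}        # the matched pair is removed from play
                        break
            return moves

        n = 8
        deck = list(range(n)) * 2                  # n distinct pictures, two cards each
        random.shuffle(deck)
        print(play_memory_naive(deck), "moves to clear", n, "pairs")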

    Element Distinctness, Frequency Moments, and Sliding Windows

    We derive new time-space tradeoff lower bounds and algorithms for exactly computing statistics of input data, including frequency moments, element distinctness, and order statistics, that are simple to calculate for sorted data. We develop a randomized algorithm for the element distinctness problem whose time $T$ and space $S$ satisfy $T \in O(n^{3/2}/S^{1/2})$, smaller than previous lower bounds for comparison-based algorithms, showing that element distinctness is strictly easier than sorting for randomized branching programs. This algorithm is based on a new time- and space-efficient algorithm for finding all collisions of a function $f$ from a finite set to itself that are reachable by iterating $f$ from a given set of starting points. We further show that our element distinctness algorithm can be extended, at only a polylogarithmic factor in cost, to solve the element distinctness problem over sliding windows, where the task is to take an input of length $2n-1$ and produce an output for each window of length $n$, giving $n$ outputs in total. In contrast, we show a time-space tradeoff lower bound of $T \in \Omega(n^2/S)$ for randomized branching programs to compute the number of distinct elements over sliding windows. The same lower bound holds for computing the low-order bit of $F_0$ and computing any frequency moment $F_k$, $k \neq 1$. This shows that those frequency moments and the decision problem $F_0 \bmod 2$ are strictly harder than element distinctness. We complement this lower bound with a $T \in O(n^2/S)$ comparison-based deterministic RAM algorithm for exactly computing $F_k$ over sliding windows, nearly matching both our lower bound for the sliding-window version and the comparison-based lower bounds for the single-window version. We further exhibit a quantum algorithm for $F_0$ over sliding windows with $T \in O(n^{3/2}/S^{1/2})$. Finally, we consider the computation of order statistics over sliding windows.
    Comment: arXiv admin note: substantial text overlap with arXiv:1212.437
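
    The core primitive here is finding collisions of $f$ that are reachable by iterating $f$ from a starting point. The sketch below shows only the textbook single-start, constant-space version of that primitive (Floyd's cycle finding); it is not the paper's algorithm, which recovers all reachable collisions from many starting points within the stated time-space bounds, and the example map `f` is an arbitrary stand-in.

        # Minimal sketch: find one collision of f reachable by iterating f from
        # `start`, i.e., two distinct points mapping to the same value, using
        # O(1) extra space (Floyd's tortoise and hare). Returns None if `start`
        # already lies on a cycle, in which case no reachable collision exists.

        def reachable_collision(f, start):
            slow = fast = start
            while True:                             # phase 1: find a meeting point on the cycle
                slow, fast = f(slow), f(f(fast))
                if slow == fast:
                    break
            slow = start                            # phase 2: walk both to the cycle entry
            prev_a = prev_b = None
            while slow != fast:
                prev_a, prev_b = slow, fast         # remember the two predecessors
                slow, fast = f(slow), f(fast)
            if prev_a is None:
                return None                         # start was on the cycle: no tail, no collision
            return prev_a, prev_b                   # f(prev_a) == f(prev_b), prev_a != prev_b

        f = lambda x: (x * x + 1) % 1000003         # an arbitrary map from a finite set to itself
        print(reachable_collision(f, start=7))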

    Finding the Median (Obliviously) with Bounded Space

    We prove that any oblivious algorithm using space $S$ to find the median of a list of $n$ integers from $\{1,\ldots,2n\}$ requires time $\Omega(n \log\log_S n)$. This bound also applies to the problem of determining whether the median is odd or even. It is nearly optimal since Chan, following Munro and Raman, has shown that there is a (randomized) selection algorithm using only $s$ registers, each of which can store an input value or an $O(\log n)$-bit counter, that makes only $O(\log\log_s n)$ passes over the input. The bound also implies a size lower bound for read-once branching programs computing the low-order bit of the median and implies the analog of $P \ne NP \cap coNP$ for length-$o(n \log\log n)$ oblivious branching programs.
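
    For intuition on the algorithmic side, the sketch below shows the simpler deterministic multi-pass idea that underlies such bounded-space selection: with $s$ counters, each pass over the read-only input splits the surviving value range into $s$ buckets and keeps the bucket containing the $k$-th smallest element, taking $O(\log_s n)$ passes for values in $\{1,\ldots,2n\}$. This is an illustrative baseline only; Chan's randomized algorithm cited above is cleverer and needs only $O(\log\log_s n)$ passes.

        # Minimal sketch of multi-pass selection over a read-only list using s
        # counters per pass (an illustrative baseline, not Chan's algorithm).
        # Each pass buckets the surviving value range and recurses into the
        # bucket holding the k-th smallest value (k is 1-based, k <= len(data)).

        def select_multipass(data, k, s):
            lo, hi = min(data), max(data)           # current candidate value range
            skipped = 0                             # inputs known to be smaller than lo
            while hi > lo:
                width = (hi - lo) // s + 1          # bucket width for this pass
                counts = [0] * s
                for x in data:                      # one sequential pass; input is read-only
                    if lo <= x <= hi:
                        counts[(x - lo) // width] += 1
                for b, c in enumerate(counts):      # find the bucket holding the k-th smallest
                    if skipped + c >= k:
                        lo, hi = lo + b * width, min(hi, lo + (b + 1) * width - 1)
                        break
                    skipped += c
            return lo

        vals = [9, 2, 14, 7, 7, 3, 11, 1, 6]
        print(select_multipass(vals, k=(len(vals) + 1) // 2, s=4))   # median -> 7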

    Realtime Profiling of Fine-Grained Air Quality Index Distribution using UAV Sensing

    Given significant air pollution problems, air quality index (AQI) monitoring has recently received increasing attention. In this paper, we design a mobile AQI monitoring system mounted on unmanned aerial vehicles (UAVs), called ARMS, to efficiently build fine-grained AQI maps in real time. Specifically, we first propose a Gaussian plume model built on a neural network (GPM-NN) to physically characterize particle dispersion in the air. Based on GPM-NN, we propose a battery-efficient and adaptive monitoring algorithm that monitors AQI at selected locations and constructs an accurate AQI map from the sensed data. The proposed adaptive monitoring algorithm is evaluated in two typical scenarios: a two-dimensional open space such as a roadside park, and a three-dimensional space such as a courtyard inside a building. Experimental results demonstrate that our system provides higher AQI prediction accuracy with GPM-NN than other existing models, while greatly reducing power consumption with the adaptive monitoring algorithm.
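
    For reference, the sketch below evaluates the classic textbook Gaussian plume equation that a model like GPM-NN starts from; the neural-network component described in the paper is not reproduced here, and the dispersion coefficients are treated as given inputs rather than fitted quantities.

        import math

        # Minimal sketch of the textbook Gaussian plume concentration model
        # (ground-reflection form). Q: emission rate, u: wind speed along the
        # plume axis, H: effective source height, (y, z): crosswind and vertical
        # receptor coordinates, sigma_y / sigma_z: dispersion coefficients at the
        # receptor's downwind distance (assumed given; in practice they depend on
        # distance and atmospheric stability).

        def gaussian_plume(Q, u, H, y, z, sigma_y, sigma_z):
            lateral = math.exp(-y**2 / (2 * sigma_y**2))
            vertical = (math.exp(-(z - H)**2 / (2 * sigma_z**2))
                        + math.exp(-(z + H)**2 / (2 * sigma_z**2)))   # ground reflection term
            return Q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

        # Example: concentration 50 m off-axis at breathing height near a 20 m source.
        print(gaussian_plume(Q=100.0, u=3.0, H=20.0, y=50.0, z=1.5,
                             sigma_y=30.0, sigma_z=15.0))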

    Adaptive Network Coding for Scheduling Real-time Traffic with Hard Deadlines

    We study adaptive network coding (NC) for scheduling real-time traffic over a single-hop wireless network. To meet the hard deadlines of real-time traffic, it is critical to strike a balance between maximizing the throughput and minimizing the risk that the entire block of coded packets may not be decodable by the deadline. Thus motivated, we explore adaptive NC, where the block size is adapted based on the remaining time to the deadline, by casting this sequential block size adaptation problem as a finite-horizon Markov decision process. One interesting finding is that the optimal block size and its corresponding action space monotonically decrease as the deadline approaches, and the optimal block size is bounded by the "greedy" block size. These unique structures make it possible to narrow down the search space of dynamic programming, building on which we develop a monotonicity-based backward induction algorithm (MBIA) that can solve for the optimal block size in polynomial time. Since channel erasure probabilities are time-varying in a mobile network, we further develop a joint real-time scheduling and channel learning scheme with adaptive NC that can adapt to channel dynamics. We also generalize the analysis to multiple flows with hard deadlines and long-term delivery ratio constraints, devise a low-complexity online scheduling algorithm integrated with the MBIA, and then establish its asymptotic throughput optimality. In addition to analysis and simulation results, we perform high-fidelity wireless emulation tests with real radio transmissions to demonstrate the feasibility of the MBIA in finding the optimal block size in real time.
    Comment: 11 pages, 13 figures
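
    The sketch below illustrates plain finite-horizon backward induction for block-size selection under a deliberately simplified model (one coded packet sent per slot, delivery probability P, a block of size k decodable once k packets arrive, reward k on decoding and nothing otherwise); the numbers P and K_MAX are hypothetical. The paper's MBIA additionally exploits the monotone structure of the optimal block size to prune this search, which the sketch does not attempt.

        import math
        from functools import lru_cache

        # Minimal sketch (simplified model, not the MBIA): dynamic programming
        # over the remaining slots t. Choosing block size k, the block delivers k
        # packets as soon as k coded packets are received, one attempt per slot
        # succeeding independently with probability P; blocks still undecoded at
        # the deadline deliver nothing.

        P = 0.8        # per-slot delivery probability (hypothetical)
        K_MAX = 6      # largest block size considered (hypothetical)

        def nbinom(m, k, p):
            # Probability that the k-th successful delivery happens exactly in slot m.
            return math.comb(m - 1, k - 1) * p**k * (1 - p)**(m - k)

        @lru_cache(maxsize=None)
        def value(t):
            # Optimal expected packets delivered with t slots left before the deadline.
            if t == 0:
                return 0.0
            best = 0.0
            for k in range(1, min(t, K_MAX) + 1):
                v = sum(nbinom(m, k, P) * (k + value(t - m)) for m in range(k, t + 1))
                best = max(best, v)
            return best

        print([round(value(t), 2) for t in range(1, 11)])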

    Memory-Adjustable Navigation Piles with Applications to Sorting and Convex Hulls

    We consider space-bounded computations on a random-access machine (RAM) where the input is given on a read-only random-access medium, the output is to be produced to a write-only sequential-access medium, and the available workspace allows random reads and writes but is of limited capacity. The length of the input is $N$ elements, the length of the output is limited by the computation, and the capacity of the workspace is $O(S)$ bits for some predetermined parameter $S$. We present a state-of-the-art priority queue, called an adjustable navigation pile, for this restricted RAM model. Under some reasonable assumptions, our priority queue supports $\mathit{minimum}$ and $\mathit{insert}$ in $O(1)$ worst-case time and $\mathit{extract}$ in $O(N/S + \lg S)$ worst-case time for any $S \geq \lg N$. We show how to use this data structure to sort $N$ elements and to compute the convex hull of $N$ points in the two-dimensional Euclidean plane in $O(N^2/S + N \lg S)$ worst-case time for any $S \geq \lg N$. Following a known lower bound for the space-time product of any branching program for finding unique elements, both our sorting and convex-hull algorithms are optimal. The adjustable navigation pile has turned out to be useful when designing other space-efficient algorithms, and we expect that it will find its way into yet other applications.
    Comment: 21 pages
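
    As a point of comparison for this model, the sketch below gives a classic space-bounded sorting baseline in the same read-only-input, write-only-output setting: keep at most s elements in workspace and, on each pass over the input, emit the next batch of s smallest elements. It is not the adjustable navigation pile; it runs in roughly O((N^2/s) log s) time, within polylogarithmic factors of the bound quoted above.

        import heapq

        # Minimal sketch of space-bounded sorting with a read-only input and a
        # sequential write-only output (a classic baseline, not the navigation
        # pile). Each pass keeps the s smallest (value, index) pairs that exceed
        # the last pair already emitted, then writes them out in order.

        def space_bounded_sort(read_only, s, emit):
            last = (float("-inf"), -1)              # largest (value, index) emitted so far
            written = 0
            while written < len(read_only):
                heap = []                           # max-heap of size <= s via negated keys
                for i, x in enumerate(read_only):   # one pass; the input is never modified
                    if (x, i) > last:
                        if len(heap) < s:
                            heapq.heappush(heap, (-x, -i))
                        elif (x, i) < (-heap[0][0], -heap[0][1]):
                            heapq.heapreplace(heap, (-x, -i))
                batch = sorted((-v, -i) for v, i in heap)
                for v, i in batch:                  # emit the next <= s outputs in sorted order
                    emit(v)
                    written += 1
                last = batch[-1]

        out = []
        space_bounded_sort([5, 3, 8, 3, 1, 9, 2], s=3, emit=out.append)
        print(out)                                  # [1, 2, 3, 3, 5, 8, 9]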

    Reduced-order modeling using Dynamic Mode Decomposition and Least Angle Regression

    Dynamic Mode Decomposition (DMD) yields a linear, approximate model of a system's dynamics that is built from data. We seek to reduce the order of this model by identifying a reduced set of modes that best fit the output. We adopt a model selection algorithm from statistics and machine learning known as Least Angle Regression (LARS). We modify LARS to handle complex-valued data and use it to select DMD modes. We refer to the resulting algorithm as Least Angle Regression for Dynamic Mode Decomposition (LARS4DMD). Sparsity-Promoting Dynamic Mode Decomposition (DMDSP), a popular mode-selection algorithm, serves as a benchmark for comparison. Numerical results from a Poiseuille flow test problem show that LARS4DMD yields reduced-order models with performance comparable to DMDSP. LARS4DMD has the added benefit that the regularization weighting parameter required by DMDSP is not needed.
    Comment: 14 pages, 2 Figures, Submitted to AIAA Aviation Conference 201
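
    As a rough illustration of LARS-based mode selection (an assumption-laden stand-in, not LARS4DMD itself), the sketch below uses scikit-learn's real-valued Lars by stacking real and imaginary parts, whereas the paper extends LARS to complex arithmetic. The modes `Phi`, eigenvalues `eigs`, and snapshot matrix `X` are assumed to come from a prior DMD computation.

        import numpy as np
        from sklearn.linear_model import Lars

        # Minimal sketch: pick r DMD modes by sparse regression of the snapshot
        # matrix onto each mode's time evolution, using real-valued LARS on the
        # stacked real/imaginary parts (an approximation of the complex-valued
        # LARS developed in the paper).

        def select_modes(Phi, eigs, X, r):
            n, m = X.shape                                     # states x snapshots
            k = Phi.shape[1]                                   # number of candidate modes
            # Column j of the dictionary: mode j evolved over all snapshot times.
            D = np.stack([np.outer(Phi[:, j], eigs[j] ** np.arange(m)).ravel()
                          for j in range(k)], axis=1)          # shape (n*m, k), complex
            A = np.vstack([D.real, D.imag])                    # stack real and imaginary parts
            y = np.concatenate([X.ravel().real, X.ravel().imag])
            model = Lars(n_nonzero_coefs=r, fit_intercept=False).fit(A, y)
            return np.flatnonzero(model.coef_)                 # indices of the retained modes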