Time-Space Tradeoffs for the Memory Game
A single-player game of Memory is played with n distinct pairs of cards,
with the cards in each pair bearing identical pictures. The cards are laid
face-down. A move consists of revealing two cards, chosen adaptively. If these
cards match, i.e., they bear the same picture, they are removed from play;
otherwise, they are turned back to face down. The object of the game is to
clear all cards while minimizing the number of moves. Past works have
thoroughly studied the expected number of moves required, assuming optimal play
by a player that has perfect memory. In this work, we study the Memory game
in a space-bounded setting.
We prove two time-space tradeoff lower bounds on algorithms (strategies for
the player) that clear all cards in T moves while using at most S bits of
memory. First, in a simple model where the pictures on the cards may only be
compared for equality, we prove that ST in Omega(n^2 log n). This is tight:
it is easy to achieve ST in O(n^2 log n) essentially everywhere on this
tradeoff curve. Second, in a more general model that allows arbitrary
computations, we prove that ST^2 in Omega(n^3). We prove this latter tradeoff
by modeling strategies as branching programs and extending a classic counting
argument of Borodin and Cook with a novel probabilistic argument. We conjecture
that the stronger tradeoff ST in Omega(n^2 log n) in fact holds even in
this general model.
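The upper-bound side of such tradeoffs can be illustrated with a chunk-based strategy: memorize up to k card faces per scan of the board, clearing matches as they are discovered. The sketch below is our own toy illustration of this idea, not the paper's construction; the function name, parameters, and move accounting are assumptions.

```python
import random

def play_memory(n, k, seed=0):
    """Simulate a chunk-based Memory strategy that remembers at most k
    card faces at a time (our toy illustration, not the paper's
    strategy). Returns the number of moves used to clear the board."""
    rng = random.Random(seed)
    cards = list(range(n)) * 2           # n distinct pictures, two cards each
    rng.shuffle(cards)
    alive = set(range(2 * n))            # positions still face-down
    moves = 0
    while alive:
        memory = {}                      # picture -> known position, <= k entries
        snapshot = sorted(alive)
        j = 0
        while j < len(snapshot):
            pair = [p for p in snapshot[j:j + 2] if p in alive]
            j += 2
            if len(pair) < 2:
                continue                 # not enough fresh cards for a full move
            moves += 1                   # one move: reveal both cards in pair
            a, b = pair
            if cards[a] == cards[b]:
                alive -= {a, b}          # the revealed pair happens to match
                continue
            for p in (a, b):
                pic = cards[p]
                if pic in memory:        # second copy of a memorized picture
                    moves += 1           # spend one move revealing both copies
                    alive -= {p, memory.pop(pic)}
                elif len(memory) < k:
                    memory[pic] = p
        # memory is wiped between passes; rescan whatever remains
    return moves

# With room for k faces, roughly n/k passes over at most 2n cards
# suffice, so the move count scales on the order of n^2/k.
```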
Element Distinctness, Frequency Moments, and Sliding Windows
We derive new time-space tradeoff lower bounds and algorithms for exactly
computing statistics of input data, including frequency moments, element
distinctness, and order statistics, that are simple to calculate for sorted
data. We develop a randomized algorithm for the element distinctness problem
whose time T and space S satisfy T in O(n^{3/2}/S^{1/2}), smaller than
previous lower bounds for comparison-based algorithms, showing that element
distinctness is strictly easier than sorting for randomized branching programs.
This algorithm is based on a new time and space efficient algorithm for finding
all collisions of a function f from a finite set to itself that are reachable
by iterating f from a given set of starting points. We further show that our
element distinctness algorithm can be extended at only a polylogarithmic factor
cost to solve the element distinctness problem over sliding windows, where the
task is to take an input of length 2n-1 and produce an output for each window
of length n, giving n outputs in total. In contrast, we show a time-space
tradeoff lower bound of T in Omega(n^2/S) for randomized branching programs to
compute the number of distinct elements over sliding windows. The same lower
bound holds for computing the low-order bit of F_0 and computing any frequency
moment F_k, k != 1. This shows that those frequency moments and the decision
problem F_0 mod 2 are strictly harder than element distinctness. We complement
this lower bound with a T in O(n^2/S) comparison-based deterministic RAM
algorithm for exactly computing F_k over sliding windows, nearly matching both
our lower bound for the sliding-window version and the comparison-based lower
bounds for the single-window version. We further exhibit a quantum algorithm
for F_0 over sliding windows with T in O(n^{3/2}/S^{1/2}). Finally, we consider
the computation of order statistics over sliding windows.
Comment: arXiv admin note: substantial text overlap with arXiv:1212.437
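The collision-finding subroutine described above can be illustrated in miniature by Floyd's cycle-finding technique, which iterates f from a single starting point using constant extra space; the paper's algorithm runs many such walks from a set of starting points. A minimal single-start sketch (all names are ours):

```python
def find_collision(f, x0, bound):
    """Floyd-style sketch: iterate f from x0 and return a pair (u, v)
    with u != v and f(u) == f(v), or None if no collision is reachable
    from x0 within the step budget (e.g. x0 already lies on a cycle)."""
    slow, fast = f(x0), f(f(x0))         # tortoise and hare
    steps = 1
    while slow != fast:
        slow, fast = f(slow), f(f(fast))
        steps += 1
        if steps > bound:
            return None                  # walk too long for the budget
    u, v = x0, slow                      # slow is the meeting point
    if u == v:
        return None                      # x0 is on the cycle: no tail
    # advance both until they are one step from the cycle entry;
    # their next values coincide, so (u, v) is a collision of f
    while f(u) != f(v):
        u, v = f(u), f(v)
    return (u, v)
```

For example, on the functional graph 0 -> 1 -> 2 -> 3 -> 4 -> 5 -> 3, the walk from 0 reaches the cycle entry 3 from two distinct predecessors, 2 and 5.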
Finding the Median (Obliviously) with Bounded Space
We prove that any oblivious algorithm using space S to find the median of a
list of n integers from {1, ..., 2n} requires time Omega(n log log_S n). This
bound also applies to the problem of determining whether the median is odd or
even. It is nearly optimal since Chan, following Munro and Raman, has shown
that there is a (randomized) selection algorithm using only S registers, each
of which can store an input value or O(log n)-bit counter, that makes only
O(log log_S n) passes over the input. The bound also implies a size lower
bound for read-once branching programs computing the low-order bit of the
median and implies the analog of P != NP ∩ coNP for length o(n log log n)
oblivious branching programs
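The near-matching upper bound has the flavor of multi-pass selection with a small number of counters: each sequential pass over the read-only input narrows the range of values that can contain the median. A simplified sketch (not Chan's algorithm; the names and details are our own assumptions):

```python
def median_multipass(xs, s):
    """Simplified multi-pass selection sketch: with about s counters of
    workspace, each sequential pass over the read-only input counts how
    many values land in each of s sub-ranges, then narrows to the
    sub-range containing the median; roughly log base s of the value
    range passes suffice. Returns (median value, passes used)."""
    assert s >= 2
    n = len(xs)
    k = (n + 1) // 2                     # rank of the lower median
    lo, hi = min(xs), max(xs)            # candidate value range
    passes = 0
    while lo < hi:
        passes += 1
        width = (hi - lo + 1 + s - 1) // s   # bucket width, rounded up
        counts = [0] * s
        below = 0                        # elements strictly below lo
        for x in xs:                     # one sequential read-only pass
            if x < lo:
                below += 1
            elif x <= hi:
                counts[min((x - lo) // width, s - 1)] += 1
        acc = below
        for b, c in enumerate(counts):
            if acc + c >= k:             # the k-th smallest is in bucket b
                lo = lo + b * width
                hi = min(hi, lo + width - 1)
                break
            acc += c
    return lo, passes
```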
Realtime Profiling of Fine-Grained Air Quality Index Distribution using UAV Sensing
Given significant air pollution problems, air quality index (AQI) monitoring
has recently received increasing attention. In this paper, we design a mobile
AQI monitoring system carried on board unmanned aerial vehicles (UAVs), called
ARMS, to efficiently build fine-grained AQI maps in realtime. Specifically, we
first propose a Gaussian plume model built on a neural network (GPM-NN) to
physically characterize particle dispersion in the air. Based on GPM-NN, we
propose a battery efficient and adaptive monitoring algorithm to monitor AQI at
the selected locations and construct an accurate AQI map with the sensed data.
The proposed adaptive monitoring algorithm is evaluated in two typical
scenarios, a two-dimensional open space like a roadside park, and a
three-dimensional space like a courtyard inside a building. Experimental
results demonstrate that our system can provide higher prediction accuracy of
AQI with GPM-NN than other existing models, while greatly reducing the power
consumption with the adaptive monitoring algorithm.
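For reference, the physical half of a Gaussian-plume-based model is the standard plume equation with ground reflection; the neural-network correction that GPM-NN adds is not reproduced here. A minimal sketch with our own parameter names:

```python
import math

def plume_concentration(q, u, y, z, sigma_y, sigma_z, h):
    """Standard Gaussian plume formula with ground reflection (the
    physical component only; GPM-NN's learned part is omitted).
    q: emission rate, u: wind speed, (y, z): crosswind and vertical
    offsets from the plume axis, sigma_y/sigma_z: dispersion
    coefficients, h: effective source height."""
    lateral = math.exp(-y**2 / (2 * sigma_y**2))
    vertical = (math.exp(-(z - h)**2 / (2 * sigma_z**2))
                + math.exp(-(z + h)**2 / (2 * sigma_z**2)))  # ground mirror term
    return q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical
```

Concentration is highest on the plume axis and symmetric in the crosswind direction, which is what a monitoring path planner can exploit.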
Adaptive Network Coding for Scheduling Real-time Traffic with Hard Deadlines
We study adaptive network coding (NC) for scheduling real-time traffic over a
single-hop wireless network. To meet the hard deadlines of real-time traffic,
it is critical to strike a balance between maximizing the throughput and
minimizing the risk that the entire block of coded packets may not be decodable
by the deadline. Thus motivated, we explore adaptive NC, where the block size
is adapted based on the remaining time to the deadline, by casting this
sequential block size adaptation problem as a finite-horizon Markov decision
process. One interesting finding is that the optimal block size and its
corresponding action space monotonically decrease as the deadline approaches,
and the optimal block size is bounded by the "greedy" block size. These unique
structures make it possible to narrow down the search space of dynamic
programming, building on which we develop a monotonicity-based backward
induction algorithm (MBIA) that can solve for the optimal block size in
polynomial time. Since channel erasure probabilities would be time-varying in a
mobile network, we further develop a joint real-time scheduling and channel
learning scheme with adaptive NC that can adapt to channel dynamics. We also
generalize the analysis to multiple flows with hard deadlines and long-term
delivery ratio constraints, devise a low-complexity online scheduling algorithm
integrated with the MBIA, and then establish its asymptotic
throughput-optimality. In addition to analysis and simulation results, we
perform high fidelity wireless emulation tests with real radio transmissions to
demonstrate the feasibility of the MBIA in finding the optimal block size in
real time.
Comment: 11 pages, 13 figures
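The monotone structure behind the MBIA can be sketched on a deliberately simplified model (our own toy formulation, not the paper's): with t slots to the deadline, a block of a coded packets pays off a only if at least a of the t slots succeed, and the search at horizon t is restricted to block sizes at most the optimum at horizon t+1, mirroring the paper's monotonicity result.

```python
from math import comb

def optimal_block_sizes(horizon, p):
    """Toy block-size optimization for a simplified deadline model:
    per-slot success probability p, payoff a if a block of a coded
    packets gets at least a successful slots out of the t remaining.
    The action space shrinks as the deadline approaches, mimicking
    the monotonicity-based pruning of the MBIA."""
    def p_done_in(t, a):
        # P(at least a successes in t independent slots)
        return sum(comb(t, i) * p**i * (1 - p)**(t - i)
                   for i in range(a, t + 1))

    best = {}
    upper = horizon                      # current cap on candidate block sizes
    for t in range(horizon, 0, -1):      # backward from the full horizon
        cand = range(1, min(t, upper) + 1)
        a_star = max(cand, key=lambda a: a * p_done_in(t, a))
        best[t] = a_star
        upper = a_star                   # monotone pruning for smaller t
    return best
```

Note that the pruning here *enforces* the monotone structure; in the paper it is proved to hold for the true value functions, which is what justifies shrinking the search space.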
Memory-Adjustable Navigation Piles with Applications to Sorting and Convex Hulls
We consider space-bounded computations on a random-access machine (RAM) where
the input is given on a read-only random-access medium, the output is to be
produced to a write-only sequential-access medium, and the available workspace
allows random reads and writes but is of limited capacity. The length of the
input is n elements, the length of the output is limited by the computation,
and the capacity of the workspace is O(S) bits for some predetermined
parameter S. We present a state-of-the-art priority queue---called an
adjustable navigation pile---for this restricted RAM model. Under some
reasonable assumptions, our priority queue supports minimum and insert
in O(1) worst-case time and extract in O(n/S + lg S) worst-case time for any
S >= lg n. We show how to use this data structure to sort n elements and to
compute the convex hull of n points in the two-dimensional Euclidean space in
O(n^2/S + n lg S) worst-case time for any S >= lg n. Following a known lower
bound for the
space-time product of any branching program for finding unique elements, both
our sorting and convex-hull algorithms are optimal. The adjustable navigation
pile has turned out to be useful when designing other space-efficient
algorithms, and we expect that it will find its way to yet other applications.
Comment: 21 pages
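The O(n^2/S)-type behavior of space-bounded sorting can be illustrated with a simple multi-pass scheme that uses a buffer of s values in place of the navigation pile (our own sketch, counting space in values rather than bits): each pass over the read-only input selects the next s smallest elements and appends them to the write-only output.

```python
import heapq

def sort_bounded_workspace(xs, s):
    """Sketch of sorting with a workspace of about s values: each pass
    over the read-only input collects the s smallest (value, index)
    keys larger than the last key already output, for O(n^2/s) total
    time. The navigation pile plays the role of this buffer with
    tighter bit bounds."""
    out = []
    last = None                          # largest (value, index) output so far
    n = len(xs)
    while len(out) < n:
        buf = []                         # max-heap of size <= s, via negation
        for i, x in enumerate(xs):       # one sequential read-only pass
            key = (x, i)
            if last is not None and key <= last:
                continue                 # already written to the output
            if len(buf) < s:
                heapq.heappush(buf, (-x, -i))
            elif key < (-buf[0][0], -buf[0][1]):
                heapq.heapreplace(buf, (-x, -i))  # evict current maximum
        batch = sorted((-v, -i) for v, i in buf)
        out.extend(v for v, _ in batch)  # write this run to the output
        last = batch[-1]
    return out
```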
Reduced-order modeling using Dynamic Mode Decomposition and Least Angle Regression
Dynamic Mode Decomposition (DMD) yields a linear, approximate model of a
system's dynamics that is built from data. We seek to reduce the order of this
model by identifying a reduced set of modes that best fit the output. We adopt
a model selection algorithm from statistics and machine learning known as Least
Angle Regression (LARS). We modify LARS to be complex-valued and utilize LARS
to select DMD modes. We refer to the resulting algorithm as Least Angle
Regression for Dynamic Mode Decomposition (LARS4DMD). Sparsity-Promoting
Dynamic Mode Decomposition (DMDSP), a popular mode-selection algorithm, serves
as a benchmark for comparison. Numerical results from a Poiseuille flow test
problem show that LARS4DMD yields reduced-order models that have comparable
performance to DMDSP. LARS4DMD has the added benefit that the regularization
weighting parameter required for DMDSP is not needed.
Comment: 14 pages, 2 figures, Submitted to AIAA Aviation Conference 201
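A drastically simplified stand-in for the complex-valued LARS step is greedy forward selection of modes by correlation with the residual (our own sketch; true LARS moves coefficients along equiangular directions rather than refitting greedily, and the function name below is an assumption).

```python
import numpy as np

def greedy_mode_selection(Phi, y, n_modes):
    """Greedy forward-selection sketch of mode selection: columns of
    Phi are candidate (possibly complex) DMD modes; repeatedly add the
    column most correlated with the residual, then refit the selected
    columns to y by least squares. Returns (chosen indices, coeffs)."""
    chosen = []
    residual = y.astype(complex)
    for _ in range(n_modes):
        # |<column, residual>| for each column; exclude chosen columns
        corr = np.abs(Phi.conj().T @ residual)
        corr[chosen] = -np.inf
        chosen.append(int(np.argmax(corr)))
        sub = Phi[:, chosen]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)  # refit on chosen set
        residual = y - sub @ coef
    return chosen, coef
```

When y is an exact combination of a few orthogonal columns, this recovers exactly those columns; the appeal of LARS4DMD over DMDSP noted above is that no regularization weight needs tuning, and the same is true of this greedy sketch.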