594 research outputs found
Time-Space Tradeoffs for the Memory Game
A single-player game of Memory is played with distinct pairs of cards,
with the cards in each pair bearing identical pictures. The cards are laid
face-down. A move consists of revealing two cards, chosen adaptively. If these
cards match, i.e., they bear the same picture, they are removed from play;
otherwise, they are turned back to face down. The object of the game is to
clear all cards while minimizing the number of moves. Past works have
thoroughly studied the expected number of moves required, assuming optimal play
by a player that has perfect memory. In this work, we study the Memory game
in a space-bounded setting.
We prove two time-space tradeoff lower bounds on algorithms (strategies for
the player) that clear all cards in moves while using at most bits of
memory. First, in a simple model where the pictures on the cards may only be
compared for equality, we prove that . This is tight:
it is easy to achieve essentially everywhere on this
tradeoff curve. Second, in a more general model that allows arbitrary
computations, we prove that . We prove this latter tradeoff
by modeling strategies as branching programs and extending a classic counting
argument of Borodin and Cook with a novel probabilistic argument. We conjecture
that the stronger tradeoff in fact holds even in
this general model.
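The numerical bounds are omitted from this listing, but the "simple" strategy alluded to above can be made concrete. Below is a minimal Python sketch, under assumptions not stated in the abstract (a cyclic scan order, a memory that holds at most `capacity` previously seen pictures, `capacity >= 1`), of a space-bounded player in the equality-comparison model; it is a toy simulation of this style of strategy, not the paper's algorithm or analysis.

```python
import random


def play_memory(n_pairs: int, capacity: int, seed: int = 0) -> int:
    """Moves needed by a naive space-bounded strategy to clear the board.

    The player scans positions cyclically and remembers at most `capacity`
    previously seen pictures (capacity >= 1).  Purely illustrative.
    """
    assert capacity >= 1
    rng = random.Random(seed)
    deck = list(range(n_pairs)) * 2          # two cards per picture
    rng.shuffle(deck)
    remaining = set(range(2 * n_pairs))      # face-down positions still in play
    memory = {}                              # picture -> one known position
    moves, cursor = 0, 0

    def next_pos(start: int) -> int:
        """Next in-play position at or after `start`, scanning cyclically."""
        p = start % (2 * n_pairs)
        while p not in remaining:
            p = (p + 1) % (2 * n_pairs)
        return p

    while remaining:
        a = next_pos(cursor)
        if deck[a] in memory and memory[deck[a]] != a:
            b = memory.pop(deck[a])          # partner location known: sure match
        else:
            b = next_pos(a + 1)              # otherwise flip the next card too
        moves += 1
        if deck[a] == deck[b]:
            remaining -= {a, b}              # matched pair leaves play
            memory = {pic: pos for pic, pos in memory.items() if pos in remaining}
        else:
            for p in (a, b):                 # remember what was seen, space permitting
                if deck[p] not in memory and len(memory) < capacity:
                    memory[deck[p]] = p
        cursor = a + 1
    return moves


if __name__ == "__main__":
    for cap in (1, 4, 16):
        print(cap, play_memory(n_pairs=64, capacity=cap))
```

Larger `capacity` lets the player exploit more remembered partner locations per scan, trading space for moves, which is the qualitative tradeoff the lower bounds quantify.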
On Convex Least Squares Estimation when the Truth is Linear
We prove that the convex least squares estimator (LSE) attains a
pointwise rate of convergence in any region where the truth is linear. In
addition, the asymptotic distribution can be characterized by a modified
invelope process. Analogous results hold when one uses the derivative of the
convex LSE to perform derivative estimation. These asymptotic results
facilitate a new consistent procedure for testing linearity against a convex
alternative. Moreover, we show that the convex LSE adapts to the optimal rate
at the boundary points of the region where the truth is linear, up to a log-log
factor. These conclusions are valid in the context of both density estimation
and regression function estimation.
Comment: 35 pages, 5 figures
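For the regression case, the convex LSE is the least squares fit constrained to be a convex function of the design points, which can be encoded as nondecreasing slopes of consecutive secants. A minimal sketch, assuming sorted, distinct design points and using the cvxpy modeling library (neither of which is prescribed by the paper):

```python
import numpy as np
import cvxpy as cp


def convex_lse(x: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Fitted values of the convex least squares regression estimator.

    Minimizes sum_i (y_i - theta_i)^2 subject to the fitted values lying on
    a convex function of x (x assumed sorted with distinct entries).
    """
    theta = cp.Variable(len(x))
    dx = np.diff(x)                                        # x_{i+1} - x_i > 0
    slopes = cp.multiply(theta[1:] - theta[:-1], 1.0 / dx)  # secant slopes
    constraints = [slopes[1:] >= slopes[:-1]]               # convexity
    cp.Problem(cp.Minimize(cp.sum_squares(y - theta)), constraints).solve()
    return theta.value


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = np.sort(rng.uniform(-1, 1, 200))
    # The truth max(x, 0) is linear on [-1, 0], the regime the pointwise-rate
    # result concerns.
    y = np.maximum(x, 0.0) + rng.normal(scale=0.1, size=x.size)
    print(convex_lse(x, y)[:5])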
Dynamic Assortment Optimization with Changing Contextual Information
In this paper, we study the dynamic assortment optimization problem under a
finite selling season of length . At each time period, the seller offers an
arriving customer an assortment of substitutable products under a cardinality
constraint, and the customer makes the purchase among offered products
according to a discrete choice model. Most existing work associates each
product with a real-valued fixed mean utility and assumes a multinomial logit
choice (MNL) model. In many practical applications, feature/contextual
information of products is readily available. In this paper, we incorporate the
feature information by assuming a linear relationship between the mean utility
and the feature. In addition, we allow the feature information of products to
change over time so that the underlying choice model can also be
non-stationary. To solve the dynamic assortment optimization under this
changing contextual MNL model, we need to simultaneously learn the underlying
unknown coefficients and make assortment decisions. To this end, we
develop an upper confidence bound (UCB) based policy and establish the regret
bound on the order of , where is the dimension of
the feature and suppresses logarithmic dependence. We further
establish a lower bound , where is the cardinality
constraint of an offered assortment, which is usually small. When is a
constant, our policy is optimal up to logarithmic factors. In the exploitation
phase of the UCB algorithm, we need to solve a combinatorial optimization for
assortment optimization based on the learned information. We further develop an
approximation algorithm and an efficient greedy heuristic. The effectiveness of
the proposed policy is further demonstrated by our numerical studies.
Comment: 4 pages, 4 figures. Minor revision and polishing of presentation
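The combinatorial step mentioned above (choosing a revenue-maximizing assortment under a cardinality constraint, given estimated or upper-confidence utilities) can be sketched with a simple greedy heuristic. The sketch below assumes known product revenues r_i and MNL weights v_i = exp(mean utility); it illustrates the flavor of a greedy assortment step, not the paper's exact approximation algorithm, learning procedure, or regret analysis.

```python
import numpy as np


def mnl_revenue(assortment, revenues, weights):
    """Expected revenue of a nonempty assortment under the MNL choice model.

    weights[i] = exp(mean utility of product i); the no-purchase option has
    weight 1.
    """
    w = weights[assortment]
    return float(revenues[assortment] @ w / (1.0 + w.sum()))


def greedy_assortment(revenues, weights, capacity):
    """Greedily add products while expected MNL revenue improves (illustrative)."""
    chosen = []
    candidates = set(range(len(revenues)))
    best = 0.0
    while len(chosen) < capacity and candidates:
        gains = {i: mnl_revenue(np.array(chosen + [i]), revenues, weights)
                 for i in candidates}
        i_star = max(gains, key=gains.get)
        if gains[i_star] <= best:
            break                        # no remaining product improves revenue
        chosen.append(i_star)
        best = gains[i_star]
        candidates.remove(i_star)
    return chosen, best


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    r = rng.uniform(1.0, 5.0, size=20)   # product revenues
    v = np.exp(rng.normal(size=20))      # MNL weights exp(utility)
    print(greedy_assortment(r, v, capacity=4))
```

In a UCB-style policy of the kind described, the weights fed to such a subroutine would come from optimistic (upper-confidence) estimates of the linear utility model rather than the true utilities.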
The Geometry of Triangles
In this article we make the concept of a continuous family of triangles
precise and prove that the moduli functor classifying oriented triangles admits a
fine moduli space but the functor classifying non-oriented triangles only
admits a coarse moduli space. We hope moduli spaces of triangles can help
understand stacks.
Jump or kink: note on super-efficiency in segmented linear regression break-point estimation
We consider the problem of segmented linear regression with a single breakpoint, with the focus on estimating the location of the breakpoint. If is the sample size, we show that the global minimax convergence rate for this problem in terms of the mean absolute error is . On the other hand, we demonstrate the construction of a super-efficient estimator that achieves the pointwise convergence rate of either or for every fixed parameter value, depending on whether the structural change is a jump or a kink. The implications of this example and a potential remedy are discussed.
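A standard profile least squares estimator for the kink (continuous) case makes the estimation problem concrete: scan candidate breakpoints over interior sample points and refit a two-piece linear model at each. The sketch below is a baseline illustration only and does not reproduce the note's super-efficient construction or its rates.

```python
import numpy as np


def fit_kink_breakpoint(x: np.ndarray, y: np.ndarray):
    """Profile least squares estimate of a single kink location.

    Fits y ~ a + b*x + c*(x - t)_+ for each candidate breakpoint t taken
    from the interior sample points; returns (t_hat, residual sum of squares).
    """
    best_t, best_rss = None, np.inf
    for t in np.sort(x)[1:-1]:                   # interior candidates only
        design = np.column_stack([np.ones_like(x), x, np.maximum(x - t, 0.0)])
        coef, *_ = np.linalg.lstsq(design, y, rcond=None)
        rss = float(np.sum((y - design @ coef) ** 2))
        if rss < best_rss:
            best_t, best_rss = t, rss
    return best_t, best_rss


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.uniform(0, 1, 300)
    y = 1.0 + 0.5 * x + 2.0 * np.maximum(x - 0.6, 0.0) + rng.normal(scale=0.1, size=x.size)
    t_hat, _ = fit_kink_breakpoint(x, y)
    print(f"estimated kink location: {t_hat:.3f}")
```

The jump (discontinuous) case is handled analogously by replacing the hinge column (x - t)_+ with an indicator of x > t.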
- …