The Fine-Grained and Parallel Complexity of Andersen's Pointer Analysis
Pointer analysis is one of the fundamental problems in static program
analysis. Given a set of pointers, the task is to produce a useful
over-approximation of the memory locations that each pointer may point to at
runtime. The most common formulation is Andersen's Pointer Analysis (APA),
defined as an inclusion-based set of $m$ pointer constraints over $n$
pointers. Existing algorithms solve APA in $O(n^2\cdot m)$ time, while it has
been conjectured that the problem has no truly sub-cubic algorithm, with a
proof so far having remained elusive. Besides this simple bound, the complexity
of the problem has remained poorly understood.
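The abstract does not spell out the constraint forms, but Andersen-style APA is
conventionally stated over four statement kinds (address-of, copy, load, store).
The sketch below is a minimal, naive fixpoint solver under that assumption; the
function name `andersen` and the tuple encoding of constraints are illustrative,
not from the paper.

```python
from collections import defaultdict

def andersen(constraints):
    """Naive fixpoint solver for Andersen-style inclusion constraints.

    Each constraint is a tuple (kind, x, y) with kind one of:
      "addr"  : x = &y   =>  y in pts(x)
      "copy"  : x = y    =>  pts(y) subset-of pts(x)
      "load"  : x = *y   =>  pts(r) subset-of pts(x) for each r in pts(y)
      "store" : *x = y   =>  pts(y) subset-of pts(r) for each r in pts(x)
    """
    pts = defaultdict(set)
    changed = True
    while changed:                      # iterate until no points-to set grows
        changed = False
        for kind, x, y in constraints:
            if kind == "addr":
                new = {y} - pts[x]
                pts[x] |= new
            elif kind == "copy":
                new = pts[y] - pts[x]
                pts[x] |= new
            elif kind == "load":
                new = set()
                for r in list(pts[y]):  # snapshot: pts may grow below
                    new |= pts[r] - pts[x]
                pts[x] |= new
            else:  # "store"
                new = set()
                for r in list(pts[x]):
                    grow = pts[y] - pts[r]
                    pts[r] |= grow
                    new |= grow
            changed = changed or bool(new)
    return dict(pts)
```

Since every iteration of the outer loop either grows some points-to set or
terminates, and the sets are bounded by the number of variables, this kind of
exhaustive propagation is where the cubic behavior discussed below comes from.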
In this work we draw a rich fine-grained and parallel complexity landscape of
APA, and present upper and lower bounds. First, we establish an $O(n^3)$
upper bound for general APA, improving over $O(n^2\cdot m)$ as $m=\Omega(n)$.
Second, we show that even on-demand APA (``may a \emph{specific} pointer $a$
point to a \emph{specific} location $b$?'') has an $\Omega(n^3)$
(combinatorial) lower bound under standard complexity-theoretic hypotheses.
This formally establishes the long-conjectured ``cubic bottleneck'' of APA, and
shows that our $O(n^3)$-time algorithm is optimal.
mild restrictions, APA is solvable in $O(n^\omega)$ time, where
$\omega<2.373$ is the matrix-multiplication exponent. It is believed that
$\omega=2$, in which case this bound becomes quadratic. Fourth, we show
that under such restrictions, even the on-demand problem has an $\Omega(n^2)$
lower bound, and hence our algorithm is optimal when
$\omega=2$. Fifth, we study the parallelizability of APA and establish
lower and upper bounds: (i) in general, the problem is P-complete, and hence
unlikely to be parallelizable, whereas (ii) under mild restrictions, the
problem is in NC, and hence parallelizable.
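To make the on-demand question concrete, here is a hypothetical query run
against the solver sketched above; the variable names and the program are
invented for illustration, not taken from the paper.

```python
# Program: x = &a; y = &b; p = x; *p = y
cs = [("addr", "x", "a"), ("addr", "y", "b"),
      ("copy", "p", "x"), ("store", "p", "y")]
pts = andersen(cs)

# On-demand query: may the specific pointer a point to the specific location b?
print("b" in pts["a"])   # True: *p = y writes &b through p into a
```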