Lower Bounds on the Oracle Complexity of Nonsmooth Convex Optimization via Information Theory
We present an information-theoretic approach to lower bound the oracle
complexity of nonsmooth black box convex optimization, unifying previous lower
bounding techniques by identifying a combinatorial problem, namely string
guessing, as a single source of hardness. As a measure of complexity we use
distributional oracle complexity, which subsumes randomized oracle complexity
as well as worst-case oracle complexity. We obtain strong lower bounds on
distributional oracle complexity for the box $[-1,1]^n$, as well as for the
$\ell_p$-ball for $p \geq 1$ (for both low-scale and large-scale regimes),
matching worst-case upper bounds, and hence we close the gap between
distributional complexity (and, in particular, randomized complexity) and
worst-case complexity. Furthermore, the bounds remain essentially the same for
high-probability and bounded-error oracle complexity, and even for the combination
of the two, i.e., bounded-error high-probability oracle complexity. This
considerably extends the applicability of known bounds.
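For orientation, and using generic notation rather than the paper's: writing $\mathrm{wc}(\varepsilon)$, $\mathrm{rand}(\varepsilon)$ and $\mathrm{dist}_\mu(\varepsilon)$ for the worst-case, randomized and distributional (with respect to an input distribution $\mu$) oracle complexities at accuracy $\varepsilon$, the standard relations are
\[
\mathrm{dist}_\mu(\varepsilon) \;\le\; \sup_\nu \mathrm{dist}_\nu(\varepsilon) \;\le\; \mathrm{rand}(\varepsilon) \;\le\; \mathrm{wc}(\varepsilon),
\]
so a distributional lower bound established for a single hard distribution already lower bounds randomized and worst-case complexity (the easy direction of Yao's minimax principle).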
On the probabilistic continuous complexity conjecture
In this paper we prove the probabilistic continuous complexity conjecture. In
continuous complexity theory, this states that the complexity of solving a
continuous problem with probability approaching 1 converges (in this limit) to
the complexity of solving the same problem in its worst case. We prove the
conjecture holds if and only if the space of problem elements is uniformly convex.
The non-uniformly convex case has a striking counterexample in the problem of
identifying a Brownian path in Wiener space, where it is shown that
probabilistic complexity converges to only half of the worst-case complexity in
this limit.
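In generic information-based-complexity notation (a paraphrase, not the paper's statement verbatim): with $\mathrm{comp}^{\mathrm{prob}}(\varepsilon,\delta)$ the cost of solving the problem to accuracy $\varepsilon$ on all but a set of inputs of measure $\delta$, and $\mathrm{comp}^{\mathrm{wor}}(\varepsilon)$ the worst-case cost, the conjecture asserts
\[
\lim_{\delta \to 0} \mathrm{comp}^{\mathrm{prob}}(\varepsilon,\delta) \;=\; \mathrm{comp}^{\mathrm{wor}}(\varepsilon),
\]
whereas in the Wiener-space counterexample the left-hand side approaches only about half of the right-hand side.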
Smoothed Complexity Theory
Smoothed analysis is a new way of analyzing algorithms introduced by Spielman
and Teng (J. ACM, 2004). Classical methods like worst-case or average-case
analysis have accompanying complexity classes, like P and AvgP, respectively.
While worst-case or average-case analysis gives us a means to talk about the
running time of a particular algorithm, complexity classes allow us to talk
about the inherent difficulty of problems.
Smoothed analysis is a hybrid of worst-case and average-case analysis and
compensates some of their drawbacks. Despite its success for the analysis of
single algorithms and problems, there is no embedding of smoothed analysis into
computational complexity theory, which is necessary to classify problems
according to their intrinsic difficulty.
We propose a framework for smoothed complexity theory, define the relevant
classes, and prove some first hardness results (of bounded halting and tiling)
and tractability results (binary optimization problems, graph coloring,
satisfiability). Furthermore, we discuss extensions and shortcomings of our
model and relate it to semi-random models.
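For orientation (the paper's framework adapts this to discrete inputs, so its exact perturbation model differs): in the spirit of Spielman and Teng, the smoothed running time at perturbation magnitude $\sigma$ is
\[
T_{\mathrm{smooth}}(n,\sigma) \;=\; \max_{|x| = n}\; \mathbb{E}_{r}\bigl[T(\mathrm{perturb}_\sigma(x, r))\bigr],
\]
the worst case over inputs of the expected running time under a random $\sigma$-perturbation; roughly, worst-case analysis is recovered as $\sigma \to 0$ and average-case analysis when the perturbation dominates the input.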
SlowFuzz: Automated Domain-Independent Detection of Algorithmic Complexity Vulnerabilities
Algorithmic complexity vulnerabilities occur when the worst-case time/space
complexity of an application is significantly higher than the respective
average case for particular user-controlled inputs. When such conditions are
met, an attacker can launch Denial-of-Service attacks against a vulnerable
application by providing inputs that trigger the worst-case behavior. Such
attacks have been known to have serious effects on production systems, take
down entire websites, or lead to bypasses of Web Application Firewalls.
Unfortunately, existing detection mechanisms for algorithmic complexity
vulnerabilities are domain-specific and often require significant manual
effort. In this paper, we design, implement, and evaluate SlowFuzz, a
domain-independent framework for automatically finding algorithmic complexity
vulnerabilities. SlowFuzz automatically finds inputs that trigger worst-case
algorithmic behavior in the tested binary. SlowFuzz uses resource-usage-guided
evolutionary search techniques to automatically find inputs that maximize
computational resource utilization for a given application.
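As a toy illustration of the general idea (my own sketch in Python, not SlowFuzz itself, which operates on binaries and builds on coverage-guided fuzzing infrastructure): mutate candidate inputs and keep the mutants that drive up a resource-usage counter, here the number of comparisons performed by an instrumented insertion sort.

import random

def insertion_sort_steps(a):
    """Instrumented target: return the number of comparisons performed."""
    a, steps = list(a), 0
    for i in range(1, len(a)):
        j = i
        while j > 0:
            steps += 1
            if a[j - 1] <= a[j]:
                break
            a[j - 1], a[j] = a[j], a[j - 1]
            j -= 1
    return steps

def mutate(a):
    """Randomly overwrite one element or swap two positions."""
    a = list(a)
    if random.random() < 0.5:
        a[random.randrange(len(a))] = random.randrange(256)
    else:
        i, j = random.randrange(len(a)), random.randrange(len(a))
        a[i], a[j] = a[j], a[i]
    return a

def evolve(n=64, generations=2000):
    """Evolutionary search for an input maximizing resource usage."""
    best = [random.randrange(256) for _ in range(n)]
    best_cost = insertion_sort_steps(best)
    for _ in range(generations):
        cand = mutate(best)
        cost = insertion_sort_steps(cand)
        if cost > best_cost:  # keep mutants that increase resource usage
            best, best_cost = cand, cost
    return best, best_cost

if __name__ == "__main__":
    _, cost = evolve()
    print("worst input found triggers", cost, "comparisons")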
Improved algorithm for computing separating linear forms for bivariate systems
We address the problem of computing a linear separating form of a system of
two bivariate polynomials with integer coefficients, that is, a linear
combination of the variables that takes different values when evaluated at the
distinct solutions of the system. The computation of such linear forms is at
the core of most algorithms that solve algebraic systems by computing rational
parameterizations of the solutions and this is the bottleneck of these
algorithms in terms of worst-case bit complexity. We present for this problem a
new algorithm of worst-case bit complexity $\widetilde{O}_B(d^7+d^6\tau)$, where $d$ and $\tau$
denote respectively the maximum degree and bitsize of the input (and
where $\widetilde{O}$ refers to the complexity where polylogarithmic factors are omitted
and $O_B$ refers to the bit complexity). This algorithm simplifies and
decreases the worst-case bit complexity presented for this
problem by Bouzidi et al. \cite{bouzidiJSC2014a}. This algorithm also yields,
for this problem, a probabilistic Las Vegas algorithm of expected bit
complexity $\widetilde{O}_B(d^5+d^4\tau)$.
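As a concrete (and entirely brute-force) illustration of what a separating linear form is, unrelated to the paper's algorithm: the Python snippet below enumerates the solutions of a small bivariate system with sympy and checks whether $x + s\,y$ takes pairwise distinct values at them.

import sympy as sp

x, y = sp.symbols('x y')
f = x**2 - 2            # toy system with integer coefficients:
g = y**2 - 3            # solutions are (+-sqrt(2), +-sqrt(3))

def is_separating(s):
    """True if x + s*y takes distinct values at all solutions of {f=0, g=0}."""
    sols = sp.solve([f, g], [x, y])
    values = {sp.simplify(xv + s * yv) for xv, yv in sols}
    return len(values) == len(sols)

# x alone (s = 0) does not separate the four solutions, but x + y does.
s = next(t for t in range(10) if is_separating(t))
print("separating linear form: x + %d*y" % s)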
Classical and quantum fingerprinting with shared randomness and one-sided error
Within the simultaneous message passing model of communication complexity,
under a public-coin assumption, we derive the minimum achievable worst-case
error probability of a classical fingerprinting protocol with one-sided error.
We then present entanglement-assisted quantum fingerprinting protocols
attaining worst-case error probabilities that breach this bound.
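To make the classical public-coin setting concrete (a generic one-sided-error fingerprinting protocol, not the optimal one analyzed in the paper): Alice and Bob share $k$ random bit vectors, each sends the referee the $k$ inner products of their input with those vectors modulo 2, and the referee accepts iff the fingerprints coincide; equal inputs are never rejected, and unequal inputs are wrongly accepted with probability $2^{-k}$.

import random

def fingerprint(msg_bits, shared_vectors):
    """Inner products (mod 2) of the input with the shared random vectors."""
    return [sum(m & r for m, r in zip(msg_bits, v)) % 2 for v in shared_vectors]

def referee_accepts(x_bits, y_bits, k=20):
    """Simultaneous-message equality test with shared randomness."""
    n = len(x_bits)
    shared = [[random.randrange(2) for _ in range(n)] for _ in range(k)]
    return fingerprint(x_bits, shared) == fingerprint(y_bits, shared)

x = [random.randrange(2) for _ in range(64)]
y = list(x)
print(referee_accepts(x, y))   # always True: one-sided error
y[0] ^= 1
print(referee_accepts(x, y))   # False except with probability 2**-20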
Layered Fixed Point Logic
We present a logic for the specification of static analysis problems that
goes beyond the logics traditionally used. Its most prominent feature is the
direct support both for inductive computations of behaviors and for
co-inductive specifications of properties. Two main theoretical contributions
are a Moore Family result and a parametrized worst-case time complexity result.
We show that the logic and the associated solver can be used for rapid
prototyping and illustrate a wide variety of applications within Static
Analysis, Constraint Satisfaction Problems and Model Checking. In all cases the
complexity result specializes to the worst-case time complexity of the
classical methods.
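To give a minimal flavor of what such a solver computes (a generic worklist least-fixed-point computation in Python, not the paper's layered logic or its solver): for monotone Horn-style rules, the sets closed under the rules form a Moore family, so a least solution exists, and a simple worklist iteration reaches it after finitely many rule firings.

from collections import deque

def least_fixed_point(facts, rules):
    """Worklist computation of the least set closed under the given rules.

    facts: initial set of atoms; rules: list of (premises, conclusion) pairs.
    Monotonicity guarantees the result is the least solution (Moore family).
    """
    known = set(facts)
    work = deque(known)
    while work:
        work.popleft()
        for premises, conclusion in rules:
            if conclusion not in known and all(p in known for p in premises):
                known.add(conclusion)
                work.append(conclusion)
    return known

# Toy example: graph reachability phrased as inference rules.
edges = [("a", "b"), ("b", "c"), ("c", "d")]
rules = [((("reach", u),), ("reach", v)) for u, v in edges]
print(least_fixed_point({("reach", "a")}, rules))
# -> {('reach', 'a'), ('reach', 'b'), ('reach', 'c'), ('reach', 'd')}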
