    Lower Bounds on the Oracle Complexity of Nonsmooth Convex Optimization via Information Theory

    We present an information-theoretic approach to lower bound the oracle complexity of nonsmooth black-box convex optimization, unifying previous lower-bounding techniques by identifying a combinatorial problem, namely string guessing, as a single source of hardness. As a measure of complexity we use distributional oracle complexity, which subsumes randomized oracle complexity as well as worst-case oracle complexity. We obtain strong lower bounds on distributional oracle complexity for the box $[-1,1]^n$, as well as for the $L^p$-ball for $p \geq 1$ (for both low-scale and large-scale regimes), matching worst-case upper bounds; hence we close the gap between distributional complexity (and in particular randomized complexity) and worst-case complexity. Furthermore, the bounds remain essentially the same for high-probability and bounded-error oracle complexity, and even for the combination of the two, i.e., bounded-error high-probability oracle complexity. This considerably extends the applicability of known bounds.
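
    Not taken from the abstract above, but as a concrete reminder of the black-box model whose query count "oracle complexity" measures, here is a minimal Python sketch of a first-order (subgradient) oracle and the projected subgradient method on the box [-1,1]^n. The objective and step sizes are illustrative stand-ins, not the hard instances the paper constructs via string guessing.

        import numpy as np

        # Minimal sketch of the black-box first-order oracle model: the "oracle
        # complexity" of a method is the number of calls it makes to oracle().
        def make_oracle(f, subgradient):
            calls = {"n": 0}
            def oracle(x):
                calls["n"] += 1
                return f(x), subgradient(x)
            return oracle, calls

        # Illustrative nonsmooth convex objective on the box [-1, 1]^n: f(x) = max_i x_i.
        f = lambda x: np.max(x)
        def subgradient(x):
            g = np.zeros_like(x)
            g[np.argmax(x)] = 1.0          # a valid subgradient of max_i x_i
            return g

        def projected_subgradient_method(oracle, x0, steps):
            """Standard projected subgradient method on [-1, 1]^n (O(1/sqrt(T)) rate)."""
            x, best = x0.astype(float), np.inf
            for t in range(1, steps + 1):
                val, g = oracle(x)
                best = min(best, val)
                x = np.clip(x - g / np.sqrt(t), -1.0, 1.0)   # project back onto the box
            return best

        oracle, calls = make_oracle(f, subgradient)
        print(projected_subgradient_method(oracle, np.ones(50), steps=200), calls["n"])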

    On the probabilistic continuous complexity conjecture

    In this paper we prove the probabilistic continuous complexity conjecture. In continuous complexity theory, this states that the complexity of solving a continuous problem with probability approaching 1 converges (in this limit) to the complexity of solving the same problem in its worst case. We prove that the conjecture holds if and only if the space of problem elements is uniformly convex. The non-uniformly convex case has a striking counterexample in the problem of identifying a Brownian path in Wiener space, where it is shown that the probabilistic complexity converges to only half of the worst-case complexity in this limit.
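
    For reference, the geometric condition the theorem hinges on is the textbook notion of uniform convexity (standard definition, not quoted from the paper): midpoints of far-apart unit vectors must lie strictly inside the unit ball,

        \forall \varepsilon \in (0,2] \;\; \exists\, \delta(\varepsilon) > 0:
        \quad \|x\| = \|y\| = 1,\ \|x - y\| \ge \varepsilon
        \;\Longrightarrow\; \Bigl\| \tfrac{x+y}{2} \Bigr\| \le 1 - \delta(\varepsilon).

    Hilbert spaces and the L^p spaces for 1 < p < infinity are uniformly convex, while spaces such as C[0,1] with the sup norm are not, consistent with the counterexample above being set in Wiener space.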

    Smoothed Complexity Theory

    Smoothed analysis is a new way of analyzing algorithms introduced by Spielman and Teng (J. ACM, 2004). Classical methods like worst-case or average-case analysis have accompanying complexity classes, like P and AvgP, respectively. While worst-case or average-case analysis gives us a means to talk about the running time of a particular algorithm, complexity classes allow us to talk about the inherent difficulty of problems. Smoothed analysis is a hybrid of worst-case and average-case analysis and compensates for some of their drawbacks. Despite its success in the analysis of single algorithms and problems, there is no embedding of smoothed analysis into computational complexity theory, which is necessary to classify problems according to their intrinsic difficulty. We propose a framework for smoothed complexity theory, define the relevant classes, and prove some first hardness results (for bounded halting and tiling) and tractability results (binary optimization problems, graph coloring, satisfiability). Furthermore, we discuss extensions and shortcomings of our model and relate it to semi-random models. Comment: to be presented at MFCS 2012
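
    As a rough orientation (the standard Spielman-Teng-style definition, not the precise parameterization of the classes defined in the paper): the smoothed complexity of an algorithm A interpolates between worst case and average case by taking a worst case over inputs of an expectation over small random perturbations,

        C^{\mathrm{smooth}}_A(n,\sigma) \;=\; \max_{x \,:\, |x| = n} \; \mathbb{E}_{g}\bigl[ T_A(x + \sigma g) \bigr],

    where T_A is the running time and g is random noise of magnitude sigma. Letting sigma tend to 0 recovers worst-case analysis, while large sigma washes out the adversarial input and approaches average-case analysis.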

    SlowFuzz: Automated Domain-Independent Detection of Algorithmic Complexity Vulnerabilities

    Algorithmic complexity vulnerabilities occur when the worst-case time/space complexity of an application is significantly higher than the respective average case for particular user-controlled inputs. When such conditions are met, an attacker can launch Denial-of-Service attacks against a vulnerable application by providing inputs that trigger the worst-case behavior. Such attacks have been known to have serious effects on production systems, take down entire websites, or lead to bypasses of Web Application Firewalls. Unfortunately, existing detection mechanisms for algorithmic complexity vulnerabilities are domain-specific and often require significant manual effort. In this paper, we design, implement, and evaluate SlowFuzz, a domain-independent framework for automatically finding algorithmic complexity vulnerabilities. SlowFuzz automatically finds inputs that trigger worst-case algorithmic behavior in the tested binary. SlowFuzz uses resource-usage-guided evolutionary search techniques to automatically find inputs that maximize computational resource utilization for a given application. Comment: ACM CCS '17, October 30-November 3, 2017, Dallas, TX, USA
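
    The search idea can be illustrated with a toy resource-usage-guided mutation loop in Python. This is for intuition only and is not SlowFuzz's actual libFuzzer-based implementation; the target function (an instrumented insertion sort) and the byte-flip mutator are illustrative stand-ins.

        import random

        def target(buf):
            """Insertion sort over the input bytes; the swap counter stands in for resource usage."""
            a, steps = list(buf), 0
            for i in range(1, len(a)):
                j = i
                while j > 0 and a[j - 1] > a[j]:
                    a[j - 1], a[j] = a[j], a[j - 1]
                    j -= 1
                    steps += 1
            return steps

        def mutate(buf):
            b = bytearray(buf)
            b[random.randrange(len(b))] = random.randrange(256)   # single random byte flip
            return bytes(b)

        def slow_input_search(size=64, iterations=5000):
            best = bytes(random.randrange(256) for _ in range(size))
            best_cost = target(best)
            for _ in range(iterations):
                cand = mutate(best)
                cost = target(cand)
                if cost > best_cost:          # keep mutants that consume more resources
                    best, best_cost = cand, cost
            return best, best_cost

        _, cost = slow_input_search()
        print("swaps triggered:", cost)       # drifts toward (near) reverse-sorted worst-case inputs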

    Improved algorithm for computing separating linear forms for bivariate systems

    We address the problem of computing a separating linear form of a system of two bivariate polynomials with integer coefficients, that is, a linear combination of the variables that takes different values when evaluated at the distinct solutions of the system. The computation of such linear forms is at the core of most algorithms that solve algebraic systems by computing rational parameterizations of the solutions, and it is the bottleneck of these algorithms in terms of worst-case bit complexity. We present for this problem a new algorithm of worst-case bit complexity $\widetilde{O}_B(d^7 + d^6\tau)$, where $d$ and $\tau$ denote respectively the maximum degree and bitsize of the input (and where $\widetilde{O}$ refers to the complexity in which polylogarithmic factors are omitted and $O_B$ refers to the bit complexity). This algorithm simplifies and decreases by a factor $d$ the worst-case bit complexity presented for this problem by Bouzidi et al. \cite{bouzidiJSC2014a}. This algorithm also yields, for this problem, a probabilistic Las Vegas algorithm of expected bit complexity $\widetilde{O}_B(d^5 + d^4\tau)$. Comment: ISSAC - 39th International Symposium on Symbolic and Algebraic Computation (2014)
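
    The defining property is easy to state concretely. Below is a brute-force SymPy check (illustration only; the example system is made up, and the paper's algorithm reaches its bit-complexity bound without enumerating the solutions like this) that a candidate form x + a*y takes pairwise distinct values on the solutions of {P = Q = 0}.

        from sympy import symbols, solve, Rational

        x, y = symbols('x y')
        P = x**2 + y**2 - 5                   # illustrative system, not from the paper
        Q = x*y - 2

        solutions = solve([P, Q], [x, y])     # (1,2), (2,1), (-1,-2), (-2,-1)

        def separates(a):
            """True iff x + a*y takes pairwise distinct values on the solutions."""
            values = [sx + a * sy for (sx, sy) in solutions]
            return len(set(values)) == len(values)

        print(separates(1))                   # False: x + y collides on (1,2) and (2,1)
        print(separates(Rational(1, 3)))      # True: x + y/3 separates all four solutions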

    Classical and quantum fingerprinting with shared randomness and one-sided error

    Within the simultaneous message passing model of communication complexity, under a public-coin assumption, we derive the minimum achievable worst-case error probability of a classical fingerprinting protocol with one-sided error. We then present entanglement-assisted quantum fingerprinting protocols attaining worst-case error probabilities that breach this bound. Comment: 10 pages, 1 figure
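
    For intuition about the setting (a generic public-coin sketch, not the optimal protocols or the exact error bound derived in the paper): with shared randomness, Alice and Bob each send the referee a short fingerprint of their input built from random parities chosen by the public coins, and the referee declares "equal" exactly when the fingerprints match, so errors are one-sided and occur only on unequal inputs whose fingerprints collide.

        import random

        def fingerprint(x_bits, shared_seed, k=16):
            """k random parities of the input bits, with the parity sets chosen by the public coins."""
            rng = random.Random(shared_seed)       # both parties derive the same randomness
            return tuple(
                sum(rng.randrange(2) & b for b in x_bits) % 2
                for _ in range(k)
            )

        def referee(fp_alice, fp_bob):
            # One-sided error: equal inputs always produce matching fingerprints;
            # unequal inputs collide with probability 2**-k over the shared randomness.
            return "equal" if fp_alice == fp_bob else "different"

        seed = random.getrandbits(64)              # the public coins
        x = [1, 0, 1, 1, 0, 0, 1, 0]
        y = [1, 0, 1, 1, 0, 1, 1, 0]
        print(referee(fingerprint(x, seed), fingerprint(x, seed)))   # always "equal"
        print(referee(fingerprint(x, seed), fingerprint(y, seed)))   # "different" except with prob 2**-16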

    Golden Coded Multiple Beamforming

    The Golden Code is a full-rate full-diversity space-time code which achieves the maximum coding gain for Multiple-Input Multiple-Output (MIMO) systems with two transmit and two receive antennas. Since four information symbols taken from an M-QAM constellation are selected to construct one Golden Code codeword, a maximum-likelihood decoder using sphere decoding has a worst-case complexity of O(M^4) when the Channel State Information (CSI) is available at the receiver. Previously, this worst-case complexity was reduced to O(M^(2.5)) without performance degradation. When the CSI is known by the transmitter as well as the receiver, beamforming techniques that employ singular value decomposition are commonly used in MIMO systems. In the absence of channel coding, when a single symbol is transmitted, these systems achieve the full diversity order provided by the channel, whereas this property is lost when multiple symbols are transmitted simultaneously. However, uncoded multiple beamforming can achieve the full diversity order by adding a properly designed constellation precoder. For 2 × 2 Fully Precoded Multiple Beamforming (FPMB), the general worst-case decoding complexity is O(M). In this paper, Golden Coded Multiple Beamforming (GCMB) is proposed, which transmits the Golden Code through 2 × 2 multiple beamforming. GCMB achieves the full diversity order, and its performance is similar to that of general MIMO systems using the Golden Code and FPMB, whereas its worst-case decoding complexity of O(sqrt(M)) is much lower. The extension of GCMB to larger dimensions is also discussed. Comment: accepted to conference
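
    For reference, the Golden Code codeword carrying the four M-QAM symbols a, b, c, d has the standard Belfiore-Rekaya-Viterbo form (quoted from the general space-time coding literature, not from this paper):

        X = \frac{1}{\sqrt{5}}
        \begin{pmatrix}
          \alpha\,(a + b\theta) & \alpha\,(c + d\theta) \\
          i\,\bar{\alpha}\,(c + d\bar{\theta}) & \bar{\alpha}\,(a + b\bar{\theta})
        \end{pmatrix},
        \qquad
        \theta = \frac{1+\sqrt{5}}{2},\quad
        \bar{\theta} = 1 - \theta,\quad
        \alpha = 1 + i(1 - \theta),\quad
        \bar{\alpha} = 1 + i(1 - \bar{\theta}),

    which is why exhaustive maximum-likelihood decoding over the four symbols costs O(M^4), the baseline that the O(M^(2.5)), O(M), and O(sqrt(M)) figures above improve on.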