    Randomized Composable Core-sets for Distributed Submodular Maximization

    An effective technique for solving optimization problems over massive data sets is to partition the data into smaller pieces, solve the problem on each piece and compute a representative solution from it, and finally obtain a solution inside the union of the representative solutions for all pieces. This technique can be captured via the concept of {\em composable core-sets}, and has been recently applied to solve diversity maximization problems as well as several clustering problems. However, for coverage and submodular maximization problems, impossibility bounds are known for this technique \cite{IMMM14}. In this paper, we focus on efficient construction of a randomized variant of composable core-sets where the above idea is applied on a {\em random clustering} of the data. We employ this technique for the coverage, monotone and non-monotone submodular maximization problems. Our results significantly improve upon the hardness results for non-randomized core-sets, and imply improved results for submodular maximization in distributed and streaming settings. In summary, we show that a simple greedy algorithm results in a $1/3$-approximate randomized composable core-set for submodular maximization under a cardinality constraint. This is in contrast to a known $O\left(\frac{\log k}{\sqrt{k}}\right)$ impossibility result for (non-randomized) composable core-sets. Our result also extends to non-monotone submodular functions, and leads to the first 2-round MapReduce-based constant-factor approximation algorithm with $O(n)$ total communication complexity for either monotone or non-monotone functions. Finally, using an improved analysis technique and a new algorithm $\mathsf{PseudoGreedy}$, we present an improved $0.545$-approximation algorithm for monotone submodular maximization, which is in turn the first MapReduce-based algorithm beating factor $1/2$ in a constant number of rounds.
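
    A minimal sketch of the two-round scheme described above, assuming a toy coverage objective and simulating the MapReduce rounds sequentially. The function and helper names are illustrative, not taken from the paper, and the improved $\mathsf{PseudoGreedy}$ algorithm and its analysis are not reproduced here.

        import random

        def greedy(elements, f, k):
            """Standard greedy: repeatedly add the element with the largest
            marginal gain f(S + [e]) - f(S) until |S| = k."""
            S = []
            for _ in range(min(k, len(elements))):
                best, best_gain = None, 0.0
                for e in elements:
                    if e in S:
                        continue
                    gain = f(S + [e]) - f(S)
                    if best is None or gain > best_gain:
                        best, best_gain = e, gain
                S.append(best)
            return S

        def randomized_core_set_max(elements, f, k, num_machines):
            """Round 1: randomly partition the data and run greedy on each piece,
            keeping each piece's output as its (randomized composable) core-set.
            Round 2: run greedy again on the union of the core-sets."""
            random.shuffle(elements)                               # random clustering of the data
            pieces = [elements[i::num_machines] for i in range(num_machines)]
            core_sets = [greedy(p, f, k) for p in pieces]          # done in parallel in MapReduce
            union = [e for cs in core_sets for e in cs]
            return greedy(union, f, k)

        # toy monotone submodular objective: coverage of a small universe
        random.seed(0)
        covers = {i: set(random.sample(range(50), 5)) for i in range(200)}
        f = lambda S: len(set().union(*(covers[e] for e in S))) if S else 0
        print(randomized_core_set_max(list(covers), f, k=10, num_machines=4))

    Each machine returns only its k selected elements, so the second-round instance has at most num_machines * k candidates regardless of the original data size.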

    A Logical Characterization of Constant-Depth Circuits over the Reals

    In this paper we give an analogue of Immerman's Theorem for real-valued computation. We define circuits operating over real numbers and show that families of such circuits of polynomial size and constant depth decide exactly those sets of vectors of reals that can be defined in first-order logic on R-structures in the sense of Cucker and Meer. Our characterization holds both non-uniformly and for many natural uniformity conditions.

    Implementability Among Predicates

    Much work has been done to understand when given predicates (relations) on discrete variables can be conjoined to implement other predicates. Indeed, the lattice of "co-clones" (sets of predicates closed under conjunction, variable renaming, and existential quantification of variables) has been investigated steadily from the 1960s to the present. Here, we investigate a more general model, where duplicatability of values is not taken for granted. This model is motivated in part by large-scale neural models, where duplicating a value is similar in cost to computing a function, and by quantum mechanics, where values cannot be duplicated. Implementations in this case are naturally given by a graph fragment in which vertices are predicates, internal edges are existentially quantified variables, and "dangling edges" (edges emanating from a vertex but not yet connected to another vertex) are the free variables of the implemented predicate. We examine questions of implementability among predicates in this scenario, and we present the solution to all implementability problems for single predicates on up to three Boolean values. However, we find that a variety of proof methods are required, and the question of implementability indeed becomes undecidable for larger predicates, although this is tricky to prove. We find that most predicates cannot implement the 3-way equality predicate, which reaffirms the view that duplicatability of values should not be assumed a priori.
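
    The composition mechanism itself is easy to state on finite predicates given as sets of tuples: connecting two predicate occurrences by an internal edge equates one coordinate of each and existentially quantifies it, while the unconnected coordinates remain as dangling edges (free variables). The sketch below, using the hypothetical helper name glue, illustrates only this gluing step on Boolean predicates, not the paper's decidability or undecidability results.

        from itertools import product

        def glue(R, S, shared):
            """Compose two predicates (sets of tuples). `shared` lists pairs
            (index_in_R, index_in_S) joined by internal edges; those coordinates
            are equated and existentially quantified. The remaining coordinates
            of R followed by those of S are the dangling edges of the result."""
            r_used = {i for i, _ in shared}
            s_used = {j for _, j in shared}
            out = set()
            for r, s in product(R, S):
                if all(r[i] == s[j] for i, j in shared):
                    free = tuple(v for i, v in enumerate(r) if i not in r_used) \
                         + tuple(v for j, v in enumerate(s) if j not in s_used)
                    out.add(free)
            return out

        # Chaining two copies of the inequality predicate NEQ through one
        # internal edge implements the binary equality predicate on the
        # two remaining dangling edges.
        NEQ = {(0, 1), (1, 0)}
        print(glue(NEQ, NEQ, shared=[(1, 0)]))   # {(0, 0), (1, 1)}

    No variable is duplicated in this composition: every coordinate participates in at most one edge, matching the restriction that motivates the model.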

    Computational Processes and Incompleteness

    We introduce a formal definition of Wolfram's notion of computational process based on cellular automata, a physics-like model of computation. There is a natural classification of these processes into decidable, intermediate and complete. It is shown that in the context of standard finite injury priority arguments one cannot establish the existence of an intermediate computational process.
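
    For concreteness, the underlying model is an ordinary cellular automaton evolving a configuration by a local rule. The sketch below shows one such process (a two-state, radius-1 automaton on a cyclic configuration); it only illustrates the cellular-automaton model, not the paper's formal definition of computational processes or its recursion-theoretic classification.

        def step(cells, rule=110):
            """One synchronous update of a one-dimensional, two-state, radius-1
            cellular automaton with periodic boundary; `rule` is the Wolfram
            rule number encoding the local update table."""
            n = len(cells)
            table = [(rule >> i) & 1 for i in range(8)]
            return [table[(cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]]
                    for i in range(n)]

        # a short computational process: the orbit of a single live cell
        config = [0] * 31
        config[15] = 1
        for _ in range(15):
            print(''.join('#' if c else '.' for c in config))
            config = step(config)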