
    Characterizations of discrete Sugeno integrals as polynomial functions over distributive lattices

    We give several characterizations of discrete Sugeno integrals over bounded distributive lattices as particular cases of lattice polynomial functions, that is, functions which can be represented in the language of bounded lattices using variables and constants. We also consider the subclass of term functions as well as the classes of symmetric polynomial functions and weighted minimum and maximum functions, and present their characterizations accordingly. Moreover, we discuss normal form representations of these functions.
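    The abstract does not restate the integral itself; for orientation, here is a minimal Python sketch of the standard discrete Sugeno integral over the chain [0, 1] in its max-min (weighted-maximum) form. The capacity `mu` and the toy values below are illustrative assumptions, not taken from the paper.

```python
def sugeno_integral(x, mu):
    """Discrete Sugeno integral of x = (x_1, ..., x_n) w.r.t. a capacity mu.

    mu maps frozensets of indices to values in [0, 1] and is assumed
    monotone with mu(empty) = 0 and mu(full) = 1. Uses the standard
    max-min form  S_mu(x) = max_i min(x_(i), mu(A_(i))),  where
    x_(1) <= ... <= x_(n) and A_(i) = {(i), ..., (n)}.
    """
    order = sorted(range(len(x)), key=lambda i: x[i])
    best = 0.0
    for pos, i in enumerate(order):
        upper = frozenset(order[pos:])   # indices holding the largest values
        best = max(best, min(x[i], mu[upper]))
    return best

# Toy capacity on two criteria {0, 1}
mu = {frozenset(): 0.0, frozenset({0}): 0.3,
      frozenset({1}): 0.6, frozenset({0, 1}): 1.0}
print(sugeno_integral((0.8, 0.2), mu))   # 0.3 = max(min(0.2, 1), min(0.8, 0.3))
```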

    Oblivious Bounds on the Probability of Boolean Functions

    This paper develops upper and lower bounds for the probability of Boolean functions by treating multiple occurrences of variables as independent and assigning them new individual probabilities. We call this approach dissociation and give an exact characterization of optimal oblivious bounds, i.e., those in which the new probabilities are chosen independently of the probabilities of all other variables. Our motivation comes from the weighted model counting problem (or, equivalently, the problem of computing the probability of a Boolean function), which is #P-hard in general. By performing several dissociations, one can transform a Boolean formula whose probability is difficult to compute into one whose probability is easy to compute, and which is guaranteed to provide an upper or lower bound on the probability of the original formula when appropriate probabilities are chosen for the dissociated variables. Our new bounds shed light on the connection between previous relaxation-based and model-based approximations and unify them as concrete choices in a larger design space. We also show how our theory allows a standard relational database management system (DBMS) to both upper and lower bound hard probabilistic queries in guaranteed polynomial time. (Comment: 34 pages, 14 figures; supersedes http://arxiv.org/abs/1105.281)
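    A minimal sketch of the dissociation idea on a toy formula, assuming the standard choices for disjunctive dissociation (keeping the original probability yields an upper bound; splitting the failure probability evenly yields a lower bound). The formula, probabilities, and helper `prob` are illustrative, not the paper's notation.

```python
from itertools import product

def prob(formula, probs):
    """Exact probability of a Boolean function by enumerating all worlds
    of independent variables; probs maps variable name -> P(var = True)."""
    total = 0.0
    names = sorted(probs)
    for bits in product([False, True], repeat=len(names)):
        world = dict(zip(names, bits))
        w = 1.0
        for v, b in world.items():
            w *= probs[v] if b else 1.0 - probs[v]
        if formula(world):
            total += w
    return total

# phi = (x AND y) OR (x AND z): the shared x correlates the two disjuncts.
phi = lambda w: (w["x"] and w["y"]) or (w["x"] and w["z"])
# Dissociation: the two occurrences of x become fresh variables x1, x2.
phi_d = lambda w: (w["x1"] and w["y"]) or (w["x2"] and w["z"])

p = {"x": 0.5, "y": 0.5, "z": 0.5}
exact = prob(phi, p)

# Keeping p(x1) = p(x2) = p(x) gives an upper bound for this disjunctive
# dissociation; choosing 1 - p(xi) = (1 - p(x))**(1/2) gives a lower bound.
upper = prob(phi_d, {"x1": 0.5, "x2": 0.5, "y": 0.5, "z": 0.5})
lo = 1.0 - (1.0 - 0.5) ** 0.5
lower = prob(phi_d, {"x1": lo, "x2": lo, "y": 0.5, "z": 0.5})
print(lower, exact, upper)   # 0.2714... <= 0.375 <= 0.4375
```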

    Models of Speed Discrimination

    The prime purpose of this project was to investigate various theoretical issues concerning the integration of information across visual space. To date, most research efforts in the study of the visual system have focused in two almost non-overlapping directions. One focus has been low-level perception as studied by psychophysics. The other has been the study of high-level vision, exemplified by the study of object perception. Most of the effort in psychophysics has been devoted to the search for the fundamental "features" of perception. The general idea is that the most peripheral processes of the visual system decompose the input into features that are then used for classification and recognition. The experimental and theoretical focus has been on finding and describing the analyzers that decompose images into useful components. Various models are then compared to physiological measurements performed on neurons in the sensory systems. In the study of higher-level perception, the work has focused on the representation of objects and on the connections between various physical effects and object perception. In this category we find the perception of 3D structure from a variety of physical measurements, including motion, shading, and other physical phenomena. With few exceptions, there has been very limited development of theories describing how the visual system might combine the output of the analyzers to form the representation of visual objects. The processes underlying the integration of information over space therefore represent critical aspects of the visual system. Understanding these processes will have implications for our expectations of the underlying physiological mechanisms, as well as for our models of the internal representation of visual percepts. In this project, we explored several mechanisms related to spatial summation, attention, and eye movements. The project comprised three components: 1. modeling visual search for the detection of speed deviations; 2. the perception of moving objects; 3. exploring the role of eye movements in various visual tasks.

    Axiomatizations of quasi-polynomial functions on bounded chains

    Two emergent properties in aggregation theory are investigated, namely horizontal maxitivity and comonotonic maxitivity (as well as their dual counterparts), which are commonly defined by means of certain functional equations. We completely describe the function classes axiomatized by each of these properties, up to weak versions of monotonicity in the cases of horizontal maxitivity and minitivity. While studying the classes axiomatized by combinations of these properties, we introduce the concept of a quasi-polynomial function, which appears as a natural extension of the well-established notion of polynomial function. We give further axiomatizations of this class, both in terms of functional equations and in terms of natural relaxations of homogeneity and median decomposability. As noteworthy particular cases, we investigate the subclasses of quasi-term functions and quasi-weighted maximum and minimum functions, and provide characterizations accordingly.
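    For reference, median decomposability, the property the abstract relaxes, reads as follows in its usual formulation for polynomial functions over a bounded chain (notation assumed here, not quoted from the paper):

```latex
% For f : L^n -> L over a bounded chain L with bottom 0 and top 1,
% median decomposability requires, for every x in L^n and coordinate k,
\[
  f(\mathbf{x}) \;=\;
  \operatorname{med}\bigl( f(\mathbf{x}_k^{0}),\; x_k,\; f(\mathbf{x}_k^{1}) \bigr),
\]
% where x_k^c denotes x with its k-th coordinate replaced by c, and the
% ternary lattice median is
\[
  \operatorname{med}(a,b,c) \;=\; (a \vee b) \wedge (b \vee c) \wedge (c \vee a).
\]
```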

    Communication Complexity of Distributed Statistical Algorithms

    This paper constructs bounds on the minimax risk, under various loss functions, for statistical estimation performed in a distributed environment under communication constraints. We treat this problem using techniques from information theory and communication complexity. In many cases our bounds rely crucially on metric entropy conditions and the classical reduction from estimation to testing. A number of examples show how these bounds on the minimax risk play out in practice. We also study distributed statistical estimation problems in the context of PAC-learnability, derive explicit algorithms for solving classical problems, and study the communication complexity of these algorithms.
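    The "classical reduction from estimation to testing" is not restated in the abstract; for orientation, here is its standard form, with Fano's inequality supplying the testing lower bound. The notation is mine, not the paper's.

```latex
% Given a 2\delta-separated set \{\theta_1, \dots, \theta_M\} under a
% metric \rho, any estimator \hat{\theta} induces a test \psi, so that
\[
  \inf_{\hat{\theta}} \sup_{\theta} \,
  \mathbb{E}\bigl[ \rho(\hat{\theta}, \theta) \bigr]
  \;\ge\; \delta \, \inf_{\psi} \max_{j \in [M]} \,
  \mathbb{P}_{\theta_j}(\psi \neq j),
\]
% and Fano's inequality bounds the testing error from below:
\[
  \inf_{\psi} \max_{j \in [M]} \, \mathbb{P}_{\theta_j}(\psi \neq j)
  \;\ge\; 1 - \frac{I(J; X) + \log 2}{\log M},
\]
% where J is uniform on [M] and X is the sample observed under \theta_J.
```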

    Algorithmic Polynomials

    The approximate degree of a Boolean function $f(x_1, x_2, \ldots, x_n)$ is the minimum degree of a real polynomial that approximates $f$ pointwise within $1/3$. Upper bounds on approximate degree have a variety of applications in learning theory, differential privacy, and algorithm design in general. Nearly all known upper bounds on approximate degree arise in an existential manner from bounds on quantum query complexity. We develop a first-principles, classical approach to the polynomial approximation of Boolean functions. We use it to give the first constructive upper bounds on the approximate degree of several fundamental problems:
    - $O\bigl(n^{\frac{3}{4}-\frac{1}{4(2^{k}-1)}}\bigr)$ for the $k$-element distinctness problem;
    - $O(n^{1-\frac{1}{k+1}})$ for the $k$-subset sum problem;
    - $O(n^{1-\frac{1}{k+1}})$ for any $k$-DNF or $k$-CNF formula;
    - $O(n^{3/4})$ for the surjectivity problem.
    In all cases, we obtain explicit, closed-form approximating polynomials that are unrelated to the quantum arguments from previous work. Our first three results match the bounds from quantum query complexity. Our fourth result improves polynomially on the $\Theta(n)$ quantum query complexity of the problem and refutes the conjecture by several experts that surjectivity has approximate degree $\Omega(n)$. In particular, we exhibit the first natural problem with a polynomial gap between approximate degree and quantum query complexity.
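    As a small concrete instance of classical polynomial approximation (the textbook Chebyshev construction for $\mathrm{OR}_n$, not one of the paper's results), the sketch below builds a degree-$O(\sqrt{n})$ polynomial approximating $\mathrm{OR}_n$ within $1/3$ and verifies it numerically; the degree constant 1.3 is chosen empirically.

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

def or_approximator(n):
    """Chebyshev approximation of OR_n within 1/3, Nisan-Szegedy style.

    Returns (p, d): a univariate polynomial p in the Hamming weight k,
    of degree d = O(sqrt(n)), with p(0) = 0 and |p(k) - 1| <= 1/3 for
    1 <= k <= n. For a symmetric function, a polynomial in the Hamming
    weight yields a multilinear polynomial of the same degree in x.
    """
    d = int(np.ceil(1.3 * np.sqrt(n)))   # degree constant chosen empirically
    T = Chebyshev.basis(d)               # the Chebyshev polynomial T_d
    top = T(n / (n - 1))                 # T_d grows fast just outside [-1, 1]

    def p(k):
        # (n - k) / (n - 1) lands in [0, 1] for k >= 1, where |T_d| <= 1,
        # and strictly above 1 only for k = 0, where T_d(.) = top.
        return 1.0 - T((n - k) / (n - 1)) / top

    return p, d

n = 400
p, d = or_approximator(n)
assert abs(p(0)) <= 1 / 3
assert all(abs(p(k) - 1) <= 1 / 3 for k in range(1, n + 1))
print(f"degree {d} approximates OR_{n} within 1/3")   # d = 26 ~ 1.3 * sqrt(400)
```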

    RPA: Learning Interpretable Input-Output Relationships by Counting Samples

    This work proposes a fast solution algorithm for a fundamental data science problem, namely identifying Boolean rules in disjunctive normal form (DNF) that classify samples based on binary features. The algorithm is an explainable machine learning method: it provides an explicit input-output relationship. It is based on hypothesis tests through confidence intervals, where the test statistic requires nothing more than counting the number of cases and the number of controls that possess a certain feature or set of features, reflecting the potential AND clauses of the Boolean phrase. Extensive experiments on simulated data demonstrate the algorithm’s effectiveness and efficiency. The efficiency of the algorithm relies on the fact that the bottleneck operation is a matrix multiplication of the input matrix with itself. More than a solution algorithm alone, this paper offers a flexible and transparent theoretical framework, with a statistical analysis of the problem and many entry points for future adjustments and improvements. Among other things, this framework allows one to assess the feasibility of identifying the input-output relationships given certain easily obtained characteristics of the data.
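    A minimal sketch of the counting step the abstract describes, where multiplying the binary input matrix with itself yields the co-occurrence counts behind candidate AND clauses. The data, variable names, and the simple normal-approximation test below are illustrative assumptions, not the paper's exact statistic.

```python
import numpy as np

# X is a binary samples-by-features matrix and y a binary label vector
# (1 = case, 0 = control). The entry (i, j) of X_group.T @ X_group counts
# samples in that group possessing both feature i and feature j, i.e.
# candidate two-literal AND clauses of a DNF rule.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(1000, 20))
y = rng.integers(0, 2, size=1000)

cases, controls = X[y == 1], X[y == 0]
C_cases = cases.T @ cases            # co-occurrence counts among cases
C_controls = controls.T @ controls   # co-occurrence counts among controls

# Illustrative test: flag feature pairs whose case rate exceeds the
# control rate by more than two (approximate) standard errors.
n1, n0 = len(cases), len(controls)
p1, p0 = C_cases / n1, C_controls / n0
se = np.sqrt(p1 * (1 - p1) / n1 + p0 * (1 - p0) / n0)
flagged = np.argwhere(p1 - p0 > 2 * np.maximum(se, 1e-12))
print(f"{len(flagged)} candidate AND clauses flagged")
```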