
    Categorical invariance and structural complexity in human concept learning

    An alternative account of human concept learning based on an invariance measure of the categorical stimulus is proposed. The categorical invariance model (CIM) characterizes the degree of structural complexity of a Boolean category as a function of its inherent degree of invariance and its cardinality or size. To do this, we introduce a mathematical framework based on the notion of a Boolean differential operator on Boolean categories that generates the degrees of invariance (i.e., the logical manifold) of the category with respect to its dimensions. Using this framework, we propose that the structural complexity of a Boolean category is inversely proportional to its degree of categorical invariance and directly proportional to its cardinality or size. Consequently, complexity and invariance notions are formally unified to account for concept learning difficulty. Beyond developing the above unifying mathematical framework, the CIM is significant in that: (1) it precisely predicts the key learning difficulty ordering of the SHJ [Shepard, R. N., Hovland, C. L., & Jenkins, H. M. (1961). Learning and memorization of classifications. Psychological Monographs: General and Applied, 75(13), 1-42] Boolean category types consisting of three binary dimensions and four positive examples; (2) it is, in general, a good quantitative predictor of the degree of learning difficulty of a large class of categories (in particular, the 41 category types studied by Feldman [Feldman, J. (2000). Minimization of Boolean complexity in human concept learning. Nature, 407, 630-633]); (3) it is, in general, a good quantitative predictor of parity effects for this large class of categories; (4) it does all of the above without free parameters; and (5) it is cognitively plausible (e.g., cognitively tractable).
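    The abstract's recipe (per-dimension invariance generated by a Boolean differential operator, combined with category size) can be made concrete. Below is a minimal Python sketch, assuming that a dimension's degree of invariance is the proportion of category members that remain members when that dimension's value is flipped, and using a hypothetical complexity ratio size / (1 + total invariance); the paper's exact functional form may differ.

```python
import itertools

def logical_manifold(category, n_dims):
    """For each dimension, the fraction of members that stay in the
    category when that dimension's value is flipped (its degree of
    invariance). The vector of these fractions is the logical manifold."""
    cat = set(category)
    manifold = []
    for d in range(n_dims):
        hits = sum(
            1 for x in cat
            if tuple(v ^ (1 if i == d else 0) for i, v in enumerate(x)) in cat
        )
        manifold.append(hits / len(cat))
    return manifold

def structural_complexity(category, n_dims):
    """Hypothetical reading of the CIM: complexity grows with category
    size and shrinks with overall invariance."""
    invariance = sum(logical_manifold(category, n_dims))
    return len(category) / (1.0 + invariance)

# SHJ Type I category (three binary dimensions, four positive examples):
# membership is decided by the first dimension alone.
type1 = [x for x in itertools.product((0, 1), repeat=3) if x[0] == 1]
print(logical_manifold(type1, 3))       # [0.0, 1.0, 1.0]: dims 2 and 3 are invariant
print(structural_complexity(type1, 3))  # low complexity, consistent with easy learning
```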

    Learning Determinantal Point Processes

    Determinantal point processes (DPPs), which arise in random matrix theory and quantum physics, are natural models for subset selection problems where diversity is preferred. Among many remarkable properties, DPPs offer tractable algorithms for exact inference, including computing marginal probabilities and sampling; however, an important open question has been how to learn a DPP from labeled training data. In this paper we propose a natural feature-based parameterization of conditional DPPs, and show how it leads to a convex and efficient learning formulation. We analyze the relationship between our model and binary Markov random fields with repulsive potentials, which are qualitatively similar but computationally intractable. Finally, we apply our approach to the task of extractive summarization, where the goal is to choose a small subset of sentences conveying the most important information from a set of documents. In this task there is a fundamental tradeoff between sentences that are highly relevant to the collection as a whole, and sentences that are diverse and not repetitive. Our parameterization allows us to naturally balance these two characteristics. We evaluate our system on data from the DUC 2003/04 multi-document summarization task, achieving state-of-the-art results.
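    The tractable exact inference the abstract mentions includes marginal probabilities via the standard marginal kernel K = L(L + I)^{-1}, whose diagonal gives each item's inclusion probability. A minimal numpy sketch; the toy feature values are invented for illustration:

```python
import numpy as np

def dpp_marginals(L):
    """Marginal inclusion probabilities of a DPP with L-ensemble kernel L:
    K = L (L + I)^{-1}, and P(i in Y) = K_ii."""
    n = L.shape[0]
    K = L @ np.linalg.inv(L + np.eye(n))
    return np.diag(K)

# Toy kernel built from item features, so similar items repel each other.
feats = np.array([[1.0, 0.0],
                  [0.9, 0.1],
                  [0.0, 1.0]])
L = feats @ feats.T + 1e-6 * np.eye(3)  # jitter keeps L positive definite
print(dpp_marginals(L))  # items 0 and 1 are near-duplicates, so each gets
                         # lower marginal probability than the distinct item 2
```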

    A THEORY OF RATIONAL CHOICE UNDER COMPLETE IGNORANCE

    This paper contributes to a theory of rational choice under uncertainty for decision-makers whose preferences are exhaustively described by partial orders representing "limited information." Specifically, we consider the limiting case of "Complete Ignorance" decision problems, characterized by maximally incomplete preferences and important primarily as reduced forms of general decision problems under uncertainty. "Rationality" is conceptualized in terms of a "Principle of Preference-Basedness," according to which rational choice should be isomorphic to asserted preference. The main result axiomatically characterizes a new choice rule called "Simultaneous Expected Utility Maximization," which in particular satisfies a choice-functional independence condition and a context-dependent choice-consistency condition; it can be interpreted as the fair agreement in a bargaining game (Kalai-Smorodinsky solution) whose players correspond to the different possible states (respectively, extremal priors in the general case).
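    The bargaining-game reading suggests one illustrative computation (not the paper's own construction): treat each state as a Kalai-Smorodinsky player whose ideal point is the best utility achievable in that state, and pick the mixed act giving every state the same largest fraction of its ideal. A hypothetical sketch as a linear program; the rule's actual axiomatic content is richer than this:

```python
import numpy as np
from scipy.optimize import linprog

def ks_choice(U):
    """U[a, s]: utility of act a in state s (complete ignorance: no priors).
    Each state is a bargaining player with ideal point max_a U[a, s]; find
    the mixed act giving all states the same largest fraction t of their
    ideals (a Kalai-Smorodinsky-style fair agreement)."""
    n_acts, n_states = U.shape
    ideal = U.max(axis=0)
    # Variables: x_1..x_n (mixture over acts) and t. Maximize t.
    c = np.zeros(n_acts + 1)
    c[-1] = -1.0                                    # linprog minimizes
    # For each state s: t * ideal_s <= sum_a x_a U[a, s].
    A_ub = np.hstack([-U.T, ideal[:, None]])
    b_ub = np.zeros(n_states)
    A_eq = np.ones((1, n_acts + 1))
    A_eq[0, -1] = 0.0                               # mixture sums to 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * n_acts + [(None, None)])
    return res.x[:n_acts], res.x[-1]

U = np.array([[10.0, 0.0],    # act 0: great in state 0, useless in state 1
              [0.0, 10.0],    # act 1: the reverse
              [4.0, 4.0]])    # act 2: safe middle ground
mix, t = ks_choice(U)
print(mix, t)  # hedging equally between acts 0 and 1 secures t = 0.5
```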

    A Deductive Verification Framework for Circuit-building Quantum Programs

    While recent progress in quantum hardware opens the door to significant speedups in certain key areas, quantum algorithms remain hard to implement correctly, and the validation of such quantum programs is a challenge. Early attempts either suffer from a lack of automation or parametrized reasoning, or target high-level abstract algorithm-description languages far from the current de facto consensus of circuit-building quantum programming languages. As a consequence, no significant quantum algorithm implementation has yet been verified in a scale-invariant manner. We propose Qbricks, the first formal verification environment for circuit-building quantum programs, featuring a clear separation between code and proof, parametric specifications and proofs, and a high degree of proof automation, while allowing quantum programs to be encoded in a natural way, i.e., close to textbook style. Qbricks builds on best practices of formal verification for the classical case and tailors them to the quantum case: we introduce a new domain-specific circuit-building language for quantum programs, namely Qbricks-DSL, together with a new logical specification language, Qbricks-Spec, and a dedicated Hoare-style deductive verification rule named Hybrid Quantum Hoare Logic. In particular, we introduce and build intensively upon HOPS, a higher-order extension of the recent path-sum symbolic representation, used for both specification and automation. To illustrate the opportunities Qbricks opens, we implement the first verified parametric implementations of several famous and non-trivial quantum algorithms, including the quantum part of Shor's integer factoring (Order Finding - Shor-OF), quantum phase estimation (QPE) - a basic building block of many quantum algorithms - and Grover search. These breakthroughs were amply facilitated by the specification and automated deduction principles introduced within Qbricks.
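    HOPS itself is a higher-order symbolic formalism inside Qbricks (a Why3-based environment, not sketched here). As rough intuition for the underlying first-order path-sum idea, the following numeric toy computes a Hadamard gate's action from its path-sum form rather than from its matrix; the phase polynomial and output function below are the standard ones for H, everything else is illustrative:

```python
import numpy as np

# A path sum represents a circuit's action as
#   |x>  ->  (1 / sqrt(2^m)) * sum_{y in {0,1}^m} e^{2*pi*i*P(x,y)} |f(x,y)>
# For a single Hadamard gate: one path variable y, phase P(x, y) = x*y/2,
# and output f(x, y) = y.

def hadamard_pathsum(x):
    """Amplitudes of H|x>, accumulated path by path."""
    amps = np.zeros(2, dtype=complex)
    for y in (0, 1):                                 # one path variable => 2 paths
        phase = np.exp(2j * np.pi * (x * y) / 2)     # e^{2*pi*i*P(x,y)}
        amps[y] += phase / np.sqrt(2)                # |f(x,y)> = |y>
    return amps

print(hadamard_pathsum(0))   # [0.707,  0.707] = |+>
print(hadamard_pathsum(1))   # [0.707, -0.707] = |->
```

    Verification tools reason over such sums symbolically, composing and simplifying the phase polynomials instead of multiplying exponentially large matrices.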

    Power-law distributions in empirical data

    Power-law distributions occur in many situations of scientific interest and have significant consequences for our understanding of natural and man-made phenomena. Unfortunately, the detection and characterization of power laws is complicated by the large fluctuations that occur in the tail of the distribution -- the part of the distribution representing large but rare events -- and by the difficulty of identifying the range over which power-law behavior holds. Commonly used methods for analyzing power-law data, such as least-squares fitting, can produce substantially inaccurate estimates of parameters for power-law distributions, and even in cases where such methods return accurate answers they are still unsatisfactory because they give no indication of whether the data obey a power law at all. Here we present a principled statistical framework for discerning and quantifying power-law behavior in empirical data. Our approach combines maximum-likelihood fitting methods with goodness-of-fit tests based on the Kolmogorov-Smirnov statistic and likelihood ratios. We evaluate the effectiveness of the approach with tests on synthetic data and give critical comparisons to previous approaches. We also apply the proposed methods to twenty-four real-world data sets from a range of different disciplines, each of which has been conjectured to follow a power-law distribution. In some cases we find these conjectures to be consistent with the data while in others the power law is ruled out.

    Comment: 43 pages, 11 figures, 7 tables, 4 appendices; code available at http://www.santafe.edu/~aaronc/powerlaws
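    For the continuous case, the maximum-likelihood exponent has a closed form, and the lower cutoff xmin can be chosen by minimizing the Kolmogorov-Smirnov distance between the fitted model and the empirical tail, in the spirit of the paper's recipe. A minimal sketch, assuming continuous data (the discrete case needs a zeta-function normalization) and omitting the paper's goodness-of-fit p-value step:

```python
import numpy as np

def fit_power_law(data, n_candidates=100):
    """For each candidate xmin, estimate the exponent by maximum likelihood,
        alpha_hat = 1 + n / sum(log(x_i / xmin)),
    and keep the xmin whose fit minimizes the KS distance to the tail."""
    data = np.sort(np.asarray(data, dtype=float))
    candidates = np.unique(data)[:-1]
    # Subsample candidate xmin values to keep the sketch fast.
    candidates = candidates[:: max(1, len(candidates) // n_candidates)]
    best = (np.inf, None, None)
    for xmin in candidates:
        tail = data[data >= xmin]
        n = len(tail)
        alpha = 1.0 + n / np.sum(np.log(tail / xmin))
        emp = np.arange(1, n + 1) / n                    # empirical CDF
        model = 1.0 - (tail / xmin) ** (1.0 - alpha)     # fitted CDF
        D = np.max(np.abs(emp - model))                  # KS distance
        if D < best[0]:
            best = (D, alpha, xmin)
    return best  # (KS distance, alpha_hat, xmin_hat)

# Synthetic check: inverse-transform sample from a pure power law with
# alpha = 2.5 and xmin = 1, then recover the exponent (expect ~2.5).
rng = np.random.default_rng(0)
sample = (1.0 - rng.random(5000)) ** (-1.0 / 1.5)
D, alpha_hat, xmin_hat = fit_power_law(sample)
print(alpha_hat, xmin_hat)
```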

    Survey of the State of the Art in Natural Language Generation: Core tasks, applications and evaluation

    This paper surveys the current state of the art in Natural Language Generation (NLG), defined as the task of generating text or speech from non-linguistic input. A survey of NLG is timely in view of the changes that the field has undergone over the past decade or so, especially in relation to new (usually data-driven) methods, as well as new applications of NLG technology. This survey therefore aims to (a) give an up-to-date synthesis of research on the core tasks in NLG and the architectures in which such tasks are organised; (b) highlight a number of relatively recent research topics that have arisen partly as a result of growing synergies between NLG and other areas of artificial intelligence; (c) draw attention to the challenges in NLG evaluation, relating them to similar challenges faced in other areas of Natural Language Processing, with an emphasis on different evaluation methods and the relationships between them.

    Comment: Published in Journal of AI Research (JAIR), volume 61, pp. 75-170. 118 pages, 8 figures, 1 table