
    The positive side of a negative reference: the delay between linguistic processing and common ground

    Interlocutors converge on names to refer to entities. For example, a speaker might refer to a novel-looking object as the jellyfish and, once the object is identified, the listener will too. The hypothesized mechanism behind such referential precedents is a subject of debate. The common-ground view claims that listeners register the object as well as the identity of the speaker who coined the label. The linguistic view claims that, once established, precedents are treated by listeners like any other linguistic unit, i.e. without the need to keep track of the speaker. To test predictions from each account, we used visual-world eyetracking, which allows observations in real time, during a standard referential communication task. Participants had to select objects based on instructions from two speakers. In the critical condition, listeners sought an object with a negative reference such as not the jellyfish. We aimed to determine the extent to which listeners rely on the linguistic input, common ground, or both. We found that initial interpretations were based on linguistic processing only, and that common-ground considerations do emerge, but only after 1000 ms. Our findings support the idea that, at least temporarily, linguistic processing can be isolated from common ground.

    Benchmarking in cluster analysis: A white paper

    To achieve scientific progress in terms of building a cumulative body of knowledge, careful attention to benchmarking is of the utmost importance. This means that proposals of new methods of data pre-processing, new data-analytic techniques, and new methods of output post-processing should be extensively and carefully compared with existing alternatives, and that existing methods should be subjected to neutral comparison studies. To date, benchmarking and recommendations for benchmarking have been seen most frequently in the context of supervised learning. Unfortunately, there has been a dearth of guidelines for benchmarking in an unsupervised setting, with clustering as an important subdomain. To address this problem, we discuss the theoretical and conceptual underpinnings of benchmarking in the field of cluster analysis by means of simulated as well as empirical data. Subsequently, the practicalities of how to address benchmarking questions in clustering are dealt with, and foundational recommendations are made.
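    As a concrete illustration of the kind of neutral comparison the abstract calls for, the sketch below benchmarks two off-the-shelf clustering methods on repeatedly simulated data with known structure. The choice of scikit-learn, the blob generator, and the adjusted Rand index as the evaluation criterion are our assumptions for the example, not recommendations taken from the paper.

```python
# Minimal benchmarking sketch: compare two clustering methods on simulated data
# with known ground truth. Library and metric choices (scikit-learn, adjusted
# Rand index) are illustrative assumptions, not the paper's protocol.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.metrics import adjusted_rand_score

methods = {
    "k-means": lambda k: KMeans(n_clusters=k, n_init=10, random_state=0),
    "agglomerative": lambda k: AgglomerativeClustering(n_clusters=k),
}

scores = {name: [] for name in methods}
for rep in range(20):  # repeated simulated data sets, as neutral benchmarking requires
    X, y_true = make_blobs(n_samples=300, centers=3, cluster_std=1.5, random_state=rep)
    for name, factory in methods.items():
        y_pred = factory(3).fit_predict(X)
        scores[name].append(adjusted_rand_score(y_true, y_pred))

for name, vals in scores.items():
    print(f"{name}: mean ARI = {np.mean(vals):.3f} (sd {np.std(vals):.3f})")
```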

    Abstracted navigational actions for improved hypermedia navigation and maintenance.

    This paper discusses the MESH framework, which proposes a fully object-oriented approach to hypermedia. Object-oriented abstractions are applied not only to the conceptual data model but also to the navigation paradigm. This results in the concept of context-based navigation, which reduces the end user's disorientation problem by means of dynamically generated, context-sensitive guided tours. Moreover, maintainability is greatly improved, as both nodes and links are defined as instances of abstract classes. In this way, single links and entire guided tours are anchored at the type level as abstract navigational actions, which are independent of the actual link instances.
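    To make the type-level idea concrete, here is a hypothetical toy sketch, not the MESH API: links are instances of an abstract link class, and a guided tour is generated from the link type rather than from hard-wired link instances. All class and method names are invented for the example.

```python
# Hypothetical illustration (not the MESH API): links as instances of abstract
# link classes, with a guided tour generated from the link type rather than
# from hard-wired link instances.
from abc import ABC, abstractmethod

class Node:
    def __init__(self, name):
        self.name = name

class AbstractLink(ABC):
    """Navigational action anchored at the type level."""
    @abstractmethod
    def targets(self, source: Node) -> list[Node]:
        ...

class SeeAlsoLink(AbstractLink):
    """A concrete link class; its instances connect particular nodes."""
    def __init__(self, relation: dict[str, list[Node]]):
        self.relation = relation
    def targets(self, source: Node) -> list[Node]:
        return self.relation.get(source.name, [])

def guided_tour(start: Node, link: AbstractLink) -> list[Node]:
    """Dynamically generate a context-sensitive tour by following one link type."""
    tour, frontier, seen = [], [start], {start.name}
    while frontier:
        node = frontier.pop(0)
        tour.append(node)
        for nxt in link.targets(node):
            if nxt.name not in seen:
                seen.add(nxt.name)
                frontier.append(nxt)
    return tour

a, b, c = Node("intro"), Node("details"), Node("summary")
see_also = SeeAlsoLink({"intro": [b], "details": [c]})
print([n.name for n in guided_tour(a, see_also)])  # ['intro', 'details', 'summary']
```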

    Replicode: A Constructivist Programming Paradigm and Language

    Replicode is a language designed to encode short parallel programs and executable models, and is centered on the notions of extensive pattern matching and dynamic code production. The language is domain-independent and has been designed to build systems that are model-based and model-driven, as production systems that can modify their own code. Moreover, Replicode supports the distribution of knowledge and computation across clusters of computing nodes. This document describes Replicode and its executive, i.e. the system that executes Replicode constructions. The Replicode executive is meant to run on 64-bit Linux and 32/64-bit Windows 7 platforms and to interoperate with custom C++ code. The motivations for the Replicode language, the constructivist paradigm it rests on, and the higher-level AI goals targeted by its construction are described by Thórisson (2012), Nivel and Thórisson (2009), and Thórisson and Nivel (2009a, 2009b). An overview presents the main concepts of the language. Section 3 describes the general structure of Replicode objects and describes pattern matching. Section 4 describes the execution model of Replicode, and section 5 describes how computation and knowledge are structured and controlled. Section 6 describes the high-level reasoning facilities offered by the system. Finally, section 7 describes how computation is distributed over a cluster of computing nodes. Consult Annex 1 for a formal definition of Replicode, Annex 2 for a specification of the executive, Annex 3 for the specification of the executable code format (r-code) and its C++ API, and Annex 4 for the definition of the Replicode Extension C++ API.
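    For readers unfamiliar with the paradigm, the following is a minimal Python sketch of the general production-system idea the abstract describes: productions fire on matched patterns and produce new facts at run time. It is emphatically not Replicode syntax (that is defined in the annexes); every name in it is invented for the illustration.

```python
# Conceptual sketch of a pattern-matching production system, in Python.
# This is NOT Replicode syntax; it only illustrates the abstract idea of
# productions that fire on matched patterns and produce new facts/code.
from dataclasses import dataclass
from typing import Callable

Fact = tuple  # e.g. ("temperature", "room1", 31)

@dataclass
class Production:
    pattern: Callable[[Fact], bool]       # guard over a single fact
    action: Callable[[Fact], list[Fact]]  # produces new facts when fired

def run(facts: list[Fact], productions: list[Production], steps: int = 10) -> list[Fact]:
    known = list(facts)
    for _ in range(steps):
        new = []
        for fact in known:
            for prod in productions:
                if prod.pattern(fact):
                    new.extend(f for f in prod.action(fact) if f not in known + new)
        if not new:  # quiescence: nothing more to produce
            break
        known.extend(new)
    return known

too_hot = Production(
    pattern=lambda f: f[0] == "temperature" and f[2] > 30,
    action=lambda f: [("alarm", f[1], "overheat")],
)
print(run([("temperature", "room1", 31)], [too_hot]))
# [('temperature', 'room1', 31), ('alarm', 'room1', 'overheat')]
```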

    A unified theory of granularity, vagueness and approximation

    We propose a view of vagueness as a semantic property of names and predicates. All entities are crisp, on this semantic view, but there are, for each vague name, multiple portions of reality that are equally good candidates for being its referent, and, for each vague predicate, multiple classes of objects that are equally good candidates for being its extension. We provide a new formulation of these ideas in terms of a theory of granular partitions. We show that this theory provides a general framework within which we can understand the relation between vague terms and concepts and the corresponding crisp portions of reality. We also sketch how it might be possible to formulate within this framework a theory of vagueness which dispenses with the notion of truth-value gaps and other artifacts of more familiar approaches. Central to our approach is the idea that judgments about reality involve in every case (1) a separation of reality into foreground and background of attention and (2) the feature of granularity. On this basis we attempt to show that even vague judgments made in naturally occurring contexts are not marked by truth-value indeterminacy. We distinguish, in addition to crisp granular partitions, vague partitions and reference partitions, and we explain the role of the latter in the context of judgments that involve vagueness. We conclude by showing how reference partitions provide an effective means by which judging subjects are able to temper the vagueness of their judgments by means of approximations.
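    A toy encoding of one ingredient of this picture, the claim that a vague predicate has several equally good crisp candidate extensions, is sketched below. The encoding and the verdict labels are ours, chosen only to make the candidate-extension idea tangible; they are not the paper's granular-partition formalism, which goes further and argues against truth-value gaps.

```python
# Toy illustration (our encoding, not the paper's granular-partition theory):
# a vague predicate modelled as several equally good crisp candidate extensions.
candidate_extensions = {
    "tall": [
        {"ann", "bob"},          # one admissible sharpening of "tall"
        {"ann", "bob", "carl"},  # another, equally good, sharpening
    ]
}

def judge(predicate: str, individual: str) -> str:
    """A judgment holds determinately if it holds under every candidate extension."""
    verdicts = [individual in ext for ext in candidate_extensions[predicate]]
    if all(verdicts):
        return "determinately true"
    if not any(verdicts):
        return "determinately false"
    return "borderline"

print(judge("tall", "ann"))   # determinately true
print(judge("tall", "carl"))  # borderline
print(judge("tall", "dora"))  # determinately false
```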

    Categorical invariance and structural complexity in human concept learning

    An alternative account of human concept learning based on an invariance measure of the categorical stimulus is proposed. The categorical invariance model (CIM) characterizes the degree of structural complexity of a Boolean category as a function of its inherent degree of invariance and its cardinality or size. To do this we introduce a mathematical framework based on the notion of a Boolean differential operator on Boolean categories that generates the degrees of invariance (i.e., the logical manifold) of the category with respect to its dimensions. Using this framework, we propose that the structural complexity of a Boolean category is inversely proportional to its degree of categorical invariance and directly proportional to its cardinality or size. Consequently, complexity and invariance notions are formally unified to account for concept learning difficulty. Beyond developing the above unifying mathematical framework, the CIM is significant in that: (1) it precisely predicts the key learning difficulty ordering of the SHJ [Shepard, R. N., Hovland, C. L., & Jenkins, H. M. (1961). Learning and memorization of classifications. Psychological Monographs: General and Applied, 75(13), 1-42] Boolean category types consisting of three binary dimensions and four positive examples; (2) it is, in general, a good quantitative predictor of the degree of learning difficulty of a large class of categories (in particular, the 41 category types studied by Feldman [Feldman, J. (2000). Minimization of Boolean complexity in human concept learning. Nature, 407, 630-633]); (3) it is, in general, a good quantitative predictor of parity effects for this large class of categories; (4) it does all of the above without free parameters; and (5) it is cognitively plausible (e.g., cognitively tractable).
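    The invariance idea can be sketched in a few lines: for each dimension, flip that bit in every member of the category and count how many perturbed items remain members. The encoding below is our reading of that idea, and the final aggregation (size divided by one plus total invariance) is an assumed functional form for illustration only; the paper's precise definitions may differ.

```python
# Illustrative sketch of the invariance idea (our encoding; the paper's exact
# definitions and functional form may differ). A Boolean category is a set of
# binary tuples; for each dimension we flip that bit in every member and count
# how many perturbed items stay in the category.

def invariance_profile(category: set[tuple[int, ...]], dims: int) -> list[float]:
    profile = []
    for d in range(dims):
        preserved = sum(
            1 for item in category
            if tuple(v ^ 1 if i == d else v for i, v in enumerate(item)) in category
        )
        profile.append(preserved / len(category))
    return profile

def structural_complexity(category: set[tuple[int, ...]], dims: int) -> float:
    # Assumed aggregation: directly proportional to size, inversely related to
    # total invariance. This particular formula is illustrative, not the paper's.
    total_invariance = sum(invariance_profile(category, dims))
    return len(category) / (1.0 + total_invariance)

# SHJ type I (defined by one dimension) vs. a parity-style four-item category.
type_1 = {(0, 0, 0), (0, 0, 1), (0, 1, 0), (0, 1, 1)}       # "first feature = 0"
type_6_like = {(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)}  # parity (XOR) category
print(structural_complexity(type_1, 3))       # lower complexity (highly invariant)
print(structural_complexity(type_6_like, 3))  # higher complexity (no invariance)
```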

    Why use or?

    Or constructions introduce a set of alternatives into the discourse. But alternativity does not exhaust speakers' intended messages. Speakers use the profiled or alternatives as a starting point for expressing a variety of readings. Ever since Grice (1989, Studies in the way of words, Cambridge, MA: Harvard University Press) and Horn (1972, On the semantic properties of the logical operators in English, University of California Los Angeles dissertation), the standard approach has assumed that or has an inclusive lexical meaning and a predominantly exclusive use, thus focusing on two readings. While another, "free choice", reading has been added to the repertoire, accounting for the exclusive reading remains a goal all or theorists must meet. We here propose that both "inclusive" and "exclusive" interpretations, as currently defined, do not capture speakers' intended readings, which we equate with the relevance-theoretic explicature. Adopting a usage-based approach to language, we examined all the or occurrences in the Santa Barbara Corpus of spoken American English (1053 tokens) and found that speakers use or utterances for a far richer variety of readings than has been recognized. In line with Cognitive Linguistics, we propose that speakers' communicated intentions are better analyzed in terms of subjective construals rather than the objective conditions obtaining when the or proposition is true. We argue that in two of these readings speakers are not necessarily committed to even one of the alternatives being the case. In the most frequent reading, the overt disjuncts only serve as pointers to a higher-level concept, and it is that concept that the speaker intends to refer to.

    The 1900 Turn in Bertrand Russell’s Logic, the Emergence of his Paradox, and the Way Out

    Russell’s initial project in philosophy (1898) was to make mathematics rigorous by reducing it to logic. Before August 1900, however, Russell’s logic was nothing but mereology. First, his acquaintance with Peano’s ideas in August 1900 led him to discard the part-whole logic and accept a kind of intensional predicate logic instead. Among other things, the predicate logic helped Russell embrace a technique of treating the paradox of infinite numbers with the help of a singular concept, which he called a ‘denoting phrase’. Unfortunately, a new paradox emerged soon: that of classes. The main contention of this paper is that Russell’s new conception only transferred the paradox of infinity from the realm of infinite numbers to that of class-inclusion. Russell’s long-elaborated solution to his paradox, developed between 1905 and 1908, was nothing but to set aside some of the ideas he had adopted with his turn of August 1900: (i) with the Theory of Descriptions, he reintroduced into logic the complexes we are acquainted with. In this way, he partly restored the pre-August 1900 mereology of complexes and simples. (ii) The elimination of classes, with the help of the ‘substitutional theory’, and of propositions, by means of the Multiple Relation Theory of Judgment, completed this process.
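    For reference, the class paradox at issue admits a one-line formulation. This is the standard statement of Russell's paradox, not anything specific to the paper's historical reconstruction.

```latex
% Standard statement of Russell's class paradox: let R be the class of all
% classes that are not members of themselves.
\[
  R = \{\, x \mid x \notin x \,\}
  \qquad\Longrightarrow\qquad
  R \in R \;\Longleftrightarrow\; R \notin R .
\]
```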

    Adversarial Attacks on Video Object Segmentation with Hard Region Discovery

    Video object segmentation (VOS) has been applied to various computer vision tasks, such as video editing, autonomous driving, and human-robot interaction. However, methods based on deep neural networks are vulnerable to adversarial examples: inputs altered by almost human-imperceptible perturbations with which an adversary (i.e., attacker) can fool the segmentation model into making incorrect pixel-level predictions. This raises security issues in highly demanding tasks, because small perturbations to the input video create potential attack risks. Although adversarial examples have been extensively studied for classification, they have rarely been studied in video object segmentation. Existing related methods in computer vision either require prior knowledge of categories or cannot be directly applied because of their special design for certain tasks, and they fail to consider pixel-wise region attacks. Hence, this work develops an object-agnostic adversary that attacks VOS by perturbing the first frame via hard region discovery. In particular, gradients from the segmentation model are exploited to discover easily confused regions, in which it is difficult to distinguish pixel-wise objects from the background in a frame. This yields a hardness map that helps to generate perturbations with stronger adversarial power for attacking the first frame. Empirical studies on three benchmarks indicate that our attacker significantly degrades the performance of several state-of-the-art video object segmentation models.
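    The gist of a first-frame attack guided by a hardness map can be sketched briefly. The PyTorch code below is our paraphrase, not the authors' released method: `model` is a placeholder treated as a per-frame segmenter for brevity, and building the hardness map from normalized input-gradient magnitude is an assumption made for the illustration.

```python
# Hedged PyTorch sketch of a first-frame attack weighted by a hardness map.
# `model` and its loss are placeholders; the hardness construction (normalized
# input-gradient magnitude) is our assumption, not the paper's exact recipe.
import torch
import torch.nn.functional as F

def first_frame_attack(model, first_frame, target_mask, epsilon=8 / 255):
    """Perturb only the first frame, concentrating the budget on hard regions."""
    frame = first_frame.clone().detach().requires_grad_(True)  # (1, 3, H, W), values in [0, 1]
    logits = model(frame)                                       # (1, C, H, W)
    loss = F.cross_entropy(logits, target_mask)                 # target_mask: (1, H, W), long
    loss.backward()

    grad = frame.grad.detach()
    # Hardness map: per-pixel gradient magnitude, normalized to [0, 1].
    hardness = grad.abs().sum(dim=1, keepdim=True)              # (1, 1, H, W)
    hardness = hardness / (hardness.amax() + 1e-12)

    # FGSM-style ascent on the loss, scaled by hardness so easily confused
    # regions receive the strongest perturbation.
    adv = frame.detach() + epsilon * hardness * grad.sign()
    return adv.clamp(0.0, 1.0)
```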