
    How adults and children interpret disjunction under negation in Dutch, French, Hungarian and Italian: A cross-linguistic comparison

    In English, a sentence like “The cat didn’t eat the carrot or the pepper” typically receives a “neither” interpretation; in Japanese it receives a “not this or not that” interpretation. These two interpretations are in a subset/superset relation, such that the “neither” interpretation (strong reading) asymmetrically entails the “not this or not that” interpretation (weak reading). This asymmetrical entailment raises a learnability problem. According to the Semantic Subset Principle, all language learners, regardless of the language they are exposed to, start by assigning the strong reading, since this interpretation makes such sentences true in the narrowest range of circumstances. If the “neither” interpretation is children’s initial hypothesis, then children acquiring a superset language will be able to revise their initial hypothesis on the basis of positive evidence. The aim of the present study is to test an additional account proposed by Pagliarini, Crain, and Guasti (2018) as a possible explanation for Italian children’s earlier convergence on the adult grammar. The hypothesis tested here is that the presence of a lexical form such as recursive né, which unambiguously conveys a “neither” meaning, would lead children to converge on the adult grammar earlier, due to a blocking effect of the recursive né form in the inventory of negated disjunction forms in a language. We compared data from Italian (taken from Pagliarini, Crain, and Guasti, 2018), French, Hungarian and Dutch. Dutch was tested as a baseline language. French and Hungarian have, similarly to Italian, a lexical form that unambiguously expresses the “neither” interpretation (ni…ni and sem…sem, respectively). Our results did not support this hypothesis, however; they are discussed in light of language-specific particularities of the syntax and semantics of negation.
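
    For concreteness, the entailment pattern can be checked mechanically. The sketch below is illustrative only (it is not material from the study): a truth-table verification in Python that the strong “neither” reading (¬A ∧ ¬B) asymmetrically entails the weak “not this or not that” reading (¬A ∨ ¬B).

```python
from itertools import product

# Truth-table check of the subset/superset relation described above
# (illustration only, not code from the study): the strong "neither"
# reading is not(A) and not(B); the weak "not this or not that"
# reading is not(A) or not(B).
strong = lambda a, b: (not a) and (not b)   # "neither" reading
weak   = lambda a, b: (not a) or (not b)    # "not this or not that" reading

vals = list(product([True, False], repeat=2))
strong_entails_weak = all(weak(a, b) for a, b in vals if strong(a, b))
weak_entails_strong = all(strong(a, b) for a, b in vals if weak(a, b))

print(strong_entails_weak)  # True: every "neither" situation verifies the weak reading
print(weak_entails_strong)  # False: A true, B false verifies weak but not strong
```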

    Context, content, and the occasional costs of implicature computation

    The computation of scalar implicatures is sometimes costly relative to basic meanings. Among the costly computations are those that involve strengthening “some” to “not all” and strengthening inclusive disjunction to exclusive disjunction. The opposite is true for some other cases of strengthening, where the strengthened meaning is less costly than its corresponding basic meaning. These include conjunctive strengthenings of disjunctive sentences (e.g., free-choice inferences) and exactly-readings of numerals. Assuming that these are indeed all instances of strengthening via implicature/exhaustification, the puzzle is to explain why strengthening sometimes increases costs while at other times it decreases costs. I develop a theory of processing costs that makes no reference to the strengthening mechanism or to other aspects of the derivation of the sentence’s form/meaning. Instead, costs are determined by domain-general considerations of the grammar’s output, and in particular by aspects of the meanings of ambiguous sentences and the particular ways they update the context. Specifically, I propose that when the hearer has to disambiguate between a sentence’s basic and strengthened meaning, the processing cost of any particular choice is a function of (i) a measure of the semantic complexity of the chosen meaning and (ii) a measure of how much relevant uncertainty it leaves behind in the context. I measure semantic complexity with Boolean Complexity in the propositional case and with semantic automata in the quantificational case, both of which give a domain-general measure of the minimal representational complexity needed to express the given meaning. I measure relevant uncertainty with the information-theoretic notion of entropy; this domain-general measure formalizes how ‘far’ the meaning is from giving a complete answer to the question under discussion, and hence gives an indication of how much representational complexity is yet to come. Processing costs thus follow from domain-general considerations of current and anticipated representational complexity. The results might also speak to functional motivations for having strengthening mechanisms in the first place. Specifically, exhaustification allows language users to use simpler forms than would be available without it to both…
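
    As a toy illustration of the entropy component (ii) only, under an assumed uniform prior and not drawn from the paper itself: with a question under discussion asking exactly how many of three items have a property, the strengthened “some but not all” meaning leaves measurably less residual uncertainty than basic “some”.

```python
import math

# Toy illustration (not the paper's code) of the entropy measure:
# relevant uncertainty = entropy over the QUD cells a meaning leaves
# open, assuming a uniform prior. QUD: "How many of the 3 relevant
# items does the predicate hold of?" -> cells 0, 1, 2, 3.
def entropy(cells):
    p = 1 / len(cells)                       # uniform prior over live cells
    return -sum(p * math.log2(p) for _ in cells)

some_basic        = [1, 2, 3]    # "some" (at least one): cell 0 ruled out
some_strengthened = [1, 2]       # "some but not all": cells 0 and 3 ruled out

print(entropy(some_basic))         # ~1.585 bits of residual uncertainty
print(entropy(some_strengthened))  # 1.0 bit: strengthening answers more of the QUD
```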

    Homogeneity or implicature: An experimental investigation of free choice

    A sentence containing disjunction in the scope of a possibility modal, such as Angie is allowed to buy the boat or the car, gives rise to the FREE CHOICE inference that Angie can freely choose between the two. This inference poses a well-known puzzle, in that it is not predicted by a standard treatment of modals and disjunction (e.g., Kamp 1974). To complicate things further, FREE CHOICE tends to disappear under negation: Angie is not allowed to buy the boat or the car doesn't merely convey the negation of free choice, but rather the stronger DUAL PROHIBITION reading that Angie cannot buy either one. There are two main approaches to the FREE CHOICE-DUAL PROHIBITION pattern in the literature. While they both capture the relevant data points, they make a testable, divergent prediction regarding the status of positive and negative sentences in a context in which Angie can only buy one of the two objects, e.g., the boat. In particular, the implicature-based approach (e.g., Fox 2007; Klinedinst 2007; Bar-Lev & Fox 2017) predicts that the positive sentence is true in such a context, but associated with a false implicature, while it predicts the negative sentence to be straightforwardly false. The alternative approach (e.g., Aloni 2018; Goldstein 2018; Willer 2017) predicts both the positive and negative sentences to be equally undefined. Investigating the contrast between these sentences in such a context therefore provides a clear way to address the debate between implicature and non-implicature accounts of FREE CHOICE. We present an experiment aiming to do just this, the results of which present a challenge for the implicature approach. We further discuss how the implicature approach could in theory be developed to account for our results, based on a recent proposal by Enguehard & Chemla (2018) on the distribution of implicatures.
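
    The standard prediction can be made concrete with a toy possibility model (an illustrative sketch, not the authors' materials): existential quantification over accessible worlds verifies the disjunctive permission without licensing FREE CHOICE, while DUAL PROHIBITION falls out classically under negation.

```python
# Illustrative-only check (not the authors' code) of why FREE CHOICE does
# not follow from a standard treatment of "allowed" as an existential
# modal: worlds record what Angie buys in each permitted scenario.
worlds = [{"boat"}, set()]                   # permitted: buy the boat, or buy nothing

def allowed(prop):
    """Standard possibility modal: true iff some accessible world verifies prop."""
    return any(prop(w) for w in worlds)

boat = lambda w: "boat" in w
car = lambda w: "car" in w
boat_or_car = lambda w: boat(w) or car(w)

print(allowed(boat_or_car))                  # True: the disjunctive permission holds...
print(allowed(boat) and allowed(car))        # False: ...but FREE CHOICE does not follow

# Under negation, by contrast, DUAL PROHIBITION is classically valid:
# not-allowed(A or B) is equivalent to not-allowed(A) and not-allowed(B).
print((not allowed(boat_or_car)) == (not allowed(boat) and not allowed(car)))  # True
```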

    Testing theories of temporal inferences: Evidence from child language

    Sentences involving past tense verbs, such as “My dogs were on the carpet”, tend to give rise to the inference that the corresponding present tense version, “My dogs are on the carpet”, is false. This inference is often referred to as a ‘cessation’ or ‘temporal’ inference, and is generally analyzed as a type of implicature. There are two main proposals for capturing this asymmetry: one assumes a difference in informativity between the past and present counterparts (Altshuler & Schwarzschild 2013), while the other proposes a structural difference between the two (Thomas 2012). The two approaches are similar in terms of empirical coverage, but differ in their predictions for language acquisition. Using a novel animated picture selection paradigm, we investigated these predictions. Specifically, we compared the performance of a group of 4–6-year-old children and a group of adults on temporal inferences, scalar implicatures arising from “some”, and inferences of adverbial modifiers under negation. The results revealed that overall, children computed all three inferences at a lower rate than adult controls; however, they were more adult-like on temporal inferences and inferences of adverbial modifiers than on scalar implicatures. We discuss the implications of the findings, both for a developmental alternatives-based hypothesis (e.g., Barner et al. 2011; Singh et al. 2016; Tieu et al. 2016; 2018) and for theories of temporal inferences, arguing that the finding that children were more (and equally) adult-like on temporal inferences and adverbial modifiers supports a structural theory of temporal inferences along the lines of Thomas (2012).
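
    To make the implicature analysis concrete (a toy sketch, not the paper's implementation): a minimal exhaustification over the present-tense alternative derives the cessation reading, on which the past-tense sentence is true and its present-tense counterpart false.

```python
from itertools import product

# Toy sketch (illustrative only, not the paper's analysis) of the cessation
# inference as an implicature: asserting PAST(p) while the present-tense
# alternative PRES(p) was available implicates that PRES(p) is false.
# Worlds are pairs (held_in_past, holds_now).
worlds = list(product([True, False], repeat=2))
past = lambda w: w[0]   # "My dogs were on the carpet"
pres = lambda w: w[1]   # "My dogs are on the carpet"

def exhaustify(prejacent, alternatives):
    # Simplified exhaustification: negate every alternative the prejacent
    # does not already entail (a stand-in for innocent exclusion).
    excludable = [a for a in alternatives
                  if not all(a(w) for w in worlds if prejacent(w))]
    return lambda w: prejacent(w) and all(not a(w) for a in excludable)

strengthened = exhaustify(past, [pres])
print([w for w in worlds if strengthened(w)])   # [(True, False)]: the cessation reading
```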
