
    Tensors and compositionality in neural systems

    Neither neurobiological nor process models of meaning composition specify the operator through which constituent parts are bound together into compositional structures. In this paper, we argue that a neurophysiological computation system could not achieve the compositionality exhibited in human thought and language if it were to rely on a multiplicative operator to perform binding, as the tensor product (TP)-based systems that have been widely adopted in cognitive science, neuroscience and artificial intelligence do. We show via simulation and two behavioural experiments that TPs violate variable-value independence, but human behaviour does not. Specifically, TPs fail to capture that in the statements fuzzy cactus and fuzzy penguin, both cactus and penguin are predicated by fuzzy(x) and belong to the set of fuzzy things, rendering these arguments similar to each other. Consistent with that thesis, people judged arguments that shared the same role to be similar, even when those arguments themselves (e.g., cacti and penguins) were judged to be dissimilar when in isolation. By contrast, the similarity of the TPs representing fuzzy(cactus) and fuzzy(penguin) was determined by the similarity of the arguments, which in this case approaches zero. Based on these results, we argue that neural systems that use TPs for binding cannot approximate how the human mind and brain represent compositional information during processing. We describe a contrasting binding mechanism that any physiological or artificial neural system could use to maintain independence between a role and its argument, a prerequisite for compositionality and, thus, for instantiating the expressive power of human thought and language in a neural system.
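    To illustrate the variable-value independence point, the following minimal sketch (assuming hypothetical random unit-vector embeddings, not the paper's stimuli or simulation code) binds one role vector to two dissimilar argument vectors with an outer product and compares the resulting bindings: because cos(r (x) a, r (x) b) = cos(r, r) * cos(a, b), the similarity of the two bound representations simply tracks the similarity of the arguments and approaches zero when the arguments are near-orthogonal.

```python
# A minimal sketch, assuming hypothetical random embeddings (not the paper's
# stimuli or simulation code): with a shared role vector, the cosine similarity
# of two tensor-product bindings reduces to the similarity of their arguments.
import numpy as np

rng = np.random.default_rng(0)

def unit(v):
    """Scale a vector to unit length."""
    return v / np.linalg.norm(v)

def cosine(u, v):
    """Cosine similarity between two (possibly matrix-valued) representations."""
    u, v = u.ravel(), v.ravel()
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

d = 256  # illustrative dimensionality, not taken from the paper

fuzzy   = unit(rng.standard_normal(d))   # role: fuzzy(x)
cactus  = unit(rng.standard_normal(d))   # argument 1
penguin = unit(rng.standard_normal(d))   # argument 2, nearly orthogonal to cactus

# Tensor-product binding: role (x) argument as an outer product.
tp_cactus  = np.outer(fuzzy, cactus)
tp_penguin = np.outer(fuzzy, penguin)

print("sim(cactus, penguin):            ", round(cosine(cactus, penguin), 3))
print("sim(fuzzy(x)cactus, fuzzy(x)penguin):", round(cosine(tp_cactus, tp_penguin), 3))
# Both numbers coincide (close to 0 here): cos(r (x) a, r (x) b) = cos(a, b),
# so the bound representations inherit no similarity from the shared role.
```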

    A Note on Intensionalization


    The construction of viewpoint aspect: the imperfective revisited

    This paper argues for a constructionist approach to viewpoint Aspect by exploring the idea that it does not exert any altering force on the situation-aspect properties of predicates. The proposal is developed by analyzing the syntax and semantics of the imperfective, which has been attributed a coercer role in the literature as a de-telicizer and de-stativizer in the progressive, and as a de-eventivizer in the so-called ability (or attitudinal) and habitual readings. This paper proposes a unified semantics for the imperfective, preserving the properties of eventualities throughout the derivation. The paper argues that the semantics of viewpoint aspect is encoded in a series of functional heads containing interval-ordering predicates and quantifiers. This richer structure allows us to account for a wider range of phenomena, such as the perfective nature of the individual instantiations of the event within a habitual construction or the nonculminating reading of perfective accomplishments in Spanish. This paper hypothesizes that nonculminating accomplishments have an underlying structure corresponding to the perfective progressive. As a consequence, the progressive becomes disentangled from imperfectivity and is given a novel analysis. The proposed syntax is argued to have a corresponding explicit morphology in languages such as Spanish and a nondifferentiating one in languages such as English; however, the syntax-semantics underlying both of these languages is argued to be the same.

    Beyond word frequency: Bursts, lulls, and scaling in the temporal distributions of words

    Background: Zipf's discovery that word frequency distributions obey a power law established parallels between biological and physical processes, and language, laying the groundwork for a complex systems perspective on human communication. More recent research has also identified scaling regularities in the dynamics underlying the successive occurrences of events, suggesting the possibility of similar findings for language as well. Methodology/Principal Findings: By considering frequent words in USENET discussion groups and in disparate databases where the language has different levels of formality, here we show that the distributions of distances between successive occurrences of the same word display bursty deviations from a Poisson process and are well characterized by a stretched exponential (Weibull) scaling. The extent of this deviation depends strongly on semantic type (a measure of the logicality of each word) and less strongly on frequency. We develop a generative model of this behavior that fully determines the dynamics of word usage. Conclusions/Significance: Recurrence patterns of words are well described by a stretched exponential distribution of recurrence times, an empirical scaling that cannot be anticipated from Zipf's law. Because the use of words provides a uniquely precise and powerful lens on human thought and activity, our findings also have implications for other overt manifestations of collective human dynamics.
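    To make the recurrence-time analysis concrete, here is a minimal sketch (not the authors' analysis code; the corpus, the tracked word and the fitting choices are placeholders) that measures the token distances between successive occurrences of a word and fits both an exponential (the Poisson baseline) and a Weibull (stretched-exponential) distribution. On real text, a Weibull shape parameter well below 1 is the bursty signature described above; the memoryless placeholder corpus used here yields a shape close to 1.

```python
# A minimal sketch, not the paper's analysis pipeline: recurrence distances of a
# word are fit with an exponential (Poisson baseline) and a Weibull distribution.
import numpy as np
from scipy import stats

def recurrence_distances(tokens, word):
    """Distances (in tokens) between successive occurrences of `word`."""
    positions = np.flatnonzero(np.asarray(tokens) == word)
    return np.diff(positions).astype(float)

# Placeholder corpus: independent draws, so it is memoryless by construction.
# Replace with real tokenized text (e.g., a USENET group) to see burstiness.
rng = np.random.default_rng(1)
vocab = ["the", "of", "disk", "alpha", "beta"]
tokens = rng.choice(vocab, size=100_000, p=[0.40, 0.30, 0.02, 0.14, 0.14])

gaps = recurrence_distances(tokens, "disk")

# Fix the location at 0 so both fits describe pure waiting times.
shape, _, scale_w = stats.weibull_min.fit(gaps, floc=0)
_, scale_e = stats.expon.fit(gaps, floc=0)

ll_weibull = stats.weibull_min.logpdf(gaps, shape, 0, scale_w).sum()
ll_expon = stats.expon.logpdf(gaps, 0, scale_e).sum()

print(f"Weibull shape (stretching exponent): {shape:.2f}")
print(f"log-likelihood  Weibull: {ll_weibull:.1f}   exponential: {ll_expon:.1f}")
# Shape values well below 1 on real text indicate bursts and lulls: an excess of
# very short and very long gaps relative to a Poisson (memoryless) process.
```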

    Composition is the Core Driver of the Language-selective Network
