
    Topological Fluctuations in Dense Matter with Two Colors

    We study the topological charge fluctuations of an SU(2) lattice gauge theory with N_f = 2 and N_f = 4 flavors of Wilson fermion, at low temperature and non-zero chemical potential μ. The topological susceptibility, χ_T, is used to characterize the distinct physical regimes as μ is varied between the onset of matter at μ_o and color deconfinement at μ_d. Suppression of instantons by matter via Debye screening is also investigated, revealing effects not captured by perturbative predictions. In particular, the breaking of scale invariance leads to the mean instanton size ρ becoming μ-dependent in the regime between onset and deconfinement, with a scaling ρ ~ 1/μ² over the range μ_o < μ < μ_d, resulting in an enhancement of χ_T immediately above onset.
    Comment: 12 pages, 7 figures
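    The central observable here is the topological susceptibility χ_T = ⟨Q²⟩/V, where Q is the topological charge and V the lattice 4-volume. A minimal sketch of that estimator, with a standard jackknife error, is below; the charge samples and lattice volume are mock values standing in for the gauge-ensemble measurements the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ensemble: integer-valued topological charges Q measured on a
# lattice of 4-volume V. On a real ensemble these would come from the
# gauge-field configurations described in the abstract.
V = 16**3 * 32                               # lattice 4-volume (sites), illustrative
Q = rng.normal(0.0, 3.0, size=500).round()   # mock charge measurements

# Topological susceptibility: chi_T = <Q^2> / V
chi_T = np.mean(Q**2) / V

# Jackknife error on chi_T (leave-one-out resampling)
n = len(Q)
jk = np.array([np.mean(np.delete(Q, i)**2) / V for i in range(n)])
err = np.sqrt((n - 1) * np.mean((jk - jk.mean())**2))

print(f"chi_T = {chi_T:.3e} +/- {err:.1e}")
```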

    Languages adapt to their contextual niche

    It is well established that context plays a fundamental role in how we learn and use language. Here we explore how context links short-term language use with the long-term emergence of different types of language system. Using an iterated learning model of cultural transmission, the current study experimentally investigates the role of the communicative situation in which an utterance is produced (situational context) and how it influences the emergence of three types of linguistic systems: underspecified languages (where only some dimensions of meaning are encoded linguistically), holistic systems (lacking systematic structure), and systematic languages (consisting of compound signals encoding both category-level and individuating dimensions of meaning). To do this, we set up a discrimination task in a communication game and manipulated whether the feature dimension shape was relevant or not in discriminating between two referents. The experimental languages gradually evolved to encode information relevant to the task of achieving communicative success, given the situational context in which they are learned and used, resulting in the emergence of different linguistic systems. These results suggest language systems adapt to their contextual niche over iterated learning.
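    The core machinery of an iterated learning model is a transmission chain: each generation learns from a bottlenecked sample of the previous generation's output, then produces the data for the next. A minimal skeleton of that loop is sketched below; the meaning space, syllable inventory, and memorizing learner are all hypothetical simplifications, and the situational-context manipulation from the study is not modeled.

```python
import random

random.seed(1)

MEANINGS = [(shape, color) for shape in "ST" for color in "RB"]  # 2x2 meaning space
SYLLABLES = ["ka", "mo", "zi", "pu"]

def random_language():
    # Holistic starting point: an arbitrary two-syllable signal per meaning.
    return {m: random.choice(SYLLABLES) + random.choice(SYLLABLES) for m in MEANINGS}

def learn(data):
    # Maximally simple learner: memorize observed pairs, guess for the rest.
    lang = random_language()
    lang.update(data)
    return lang

def transmit(lang, bottleneck=3):
    # Transmission bottleneck: the learner sees only a subset of pairs.
    seen = random.sample(MEANINGS, bottleneck)
    return {m: lang[m] for m in seen}

lang = random_language()
for generation in range(10):
    lang = learn(transmit(lang))

print(lang)
```

    Real models replace `learn` with a biased inference step (e.g. a Bayesian learner); the bottleneck is what lets those biases accumulate across generations.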

    Regioswitchable palladium-catalyzed decarboxylative coupling of 1,3-dicarbonyl compounds

    A palladium-catalyzed chemo- and regioselective coupling of 1,3-dicarbonyl compounds via an allylic linker has been developed. This reaction, which displays broad substrate scope, forms two C−C bonds and installs two all-carbon quaternary centers. The regioselectivity of the reaction can be predictably controlled by utilizing an enol carbonate of one of the coupling partners.

    Is regularization uniform across linguistic levels? Comparing learning and production of unconditioned probabilistic variation in morphology and word order

    Languages exhibit variation at all linguistic levels, from phonology, to the lexicon, to syntax. Importantly, that variation tends to be (at least partially) conditioned on some aspect of the social or linguistic context. When variation is unconditioned, language learners regularize it – removing some or all variants, or conditioning variant use on context. Previous studies using artificial language learning experiments have documented regularizing behavior in the learning of lexical, morphological, and syntactic variation. These studies implicitly assume that regularization reflects uniform mechanisms and processes across linguistic levels. However, studies on natural language learning and pidgin/creole formation suggest that morphological and syntactic variation may be treated differently. In particular, there is evidence that morphological variation may be more susceptible to regularization. Here we provide the first systematic comparison of the strength of regularization across these two linguistic levels. In line with previous studies, we find that the presence of a favored variant can induce different degrees of regularization. However, when input languages are carefully matched – with comparable initial variability, and no variant-specific biases – regularization can be comparable across morphology and word order. This is the case regardless of whether the task is explicitly communicative. Overall, our findings suggest an overarching regularizing mechanism at work, with apparent differences among levels likely due to differences in inherent complexity or variant-specific biases. Differences between production and encoding in our tasks further suggest this overarching mechanism is driven by production.
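    Regularization in these experiments is typically quantified as a drop in the unpredictability of variant use between the training input and the learner's productions. A common measure is Shannon entropy over variant frequencies, sketched below with illustrative 60/40 input and a learner who boosts the majority variant; the marker forms are hypothetical.

```python
import math
from collections import Counter

def variant_entropy(productions):
    """Shannon entropy (bits) of variant use; 0 = fully regularized."""
    counts = Counter(productions)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Input: unconditioned 60/40 variation between two markers, as in a
# typical artificial-language training set (values illustrative).
training_input = ["-ka"] * 6 + ["-po"] * 4

# Output: a learner who boosts the majority variant (regularization).
learner_output = ["-ka"] * 9 + ["-po"] * 1

print(round(variant_entropy(training_input), 3))  # ~0.971 bits
print(round(variant_entropy(learner_output), 3))  # ~0.469 bits
```

    Comparing this entropy drop for morphological markers versus word-order variants is, in essence, the cross-level comparison the abstract describes.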

    Compression and communication in the cultural evolution of linguistic structure

    Language exhibits striking systematic structure. Words are composed of combinations of reusable sounds, and those words in turn are combined to form complex sentences. These properties make language unique among natural communication systems and enable our species to convey an open-ended set of messages. We provide a cultural evolutionary account of the origins of this structure. We show, using simulations of rational learners and laboratory experiments, that structure arises from a trade-off between pressures for compressibility (imposed during learning) and expressivity (imposed during communication). We further demonstrate that the relative strength of these two pressures can be varied in different social contexts, leading to novel predictions about the emergence of structured behaviour in the wild.
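    The trade-off can be made concrete with toy proxies: compressibility as the size of the inventory of reusable signal parts, and expressivity as the number of meanings given distinct signals. The sketch below (example languages and proxies are my own illustration, not the paper's model) shows why compositional structure wins on both counts where holistic and degenerate languages each fail on one.

```python
def parts(language):
    # Compressibility proxy: inventory of reusable 2-character parts
    # (smaller inventory ~ more compressible).
    return {sig[i:i + 2] for sig in language.values() for i in range(0, len(sig), 2)}

def expressivity(language):
    # Expressivity proxy: number of meanings given distinct signals.
    return len(set(language.values()))

holistic      = {"m1": "kapu", "m2": "zimo", "m3": "tegi", "m4": "busa"}
compositional = {"m1": "kapu", "m2": "kazi", "m3": "mopu", "m4": "mozi"}
degenerate    = {"m1": "ka",   "m2": "ka",   "m3": "ka",   "m4": "ka"}

for name, lang in [("holistic", holistic), ("compositional", compositional),
                   ("degenerate", degenerate)]:
    print(name, "parts:", len(parts(lang)), "expressivity:", expressivity(lang))
```

    Holistic languages are expressive but costly to compress (8 parts for 4 meanings); degenerate ones are trivially compressible but inexpressive; the compositional language reuses 4 parts while keeping all 4 meanings distinct.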

    Simplicity and informativeness in semantic category systems

    Recent research has shown that semantic category systems, such as color and kinship terms, find an optimal balance between simplicity and informativeness. We argue that this situation arises through pressure for simplicity from learning and pressure for informativeness from communicative interaction, two distinct pressures that often (but not always) pull in opposite directions. Another account argues that learning might also act as a pressure for informativeness, that learners might be biased toward inferring informative systems. This results in two competing hypotheses about the human inductive bias. We formalize these competing hypotheses in a Bayesian iterated learning model in order to simulate what kinds of languages are expected to emerge under each. We then test this model experimentally to investigate whether learners' biases, isolated from any communicative task, are better characterized as favoring simplicity or informativeness. We find strong evidence to support the simplicity account. Furthermore, we show how the application of a simplicity principle in learning can give the impression of a bias for informativeness, even when no such bias is present. Our findings suggest that semantic categories are learned through domain-general principles, obviating the need to posit a domain-specific mechanism.
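    In a Bayesian iterated learning model, each learner samples a hypothesis from the posterior over the previous learner's noisy productions, and a standard result is that such a chain converges to the prior, so the bias encoded in the prior is what the emerging languages reveal. A drastically reduced sketch with a two-hypothesis space follows; the hypothesis labels, prior weights, and noise rate are all illustrative, not values from the paper.

```python
import random

random.seed(2)

# Toy hypothesis space: two candidate category systems. The prior encodes
# the inductive bias under test (weights illustrative).
HYPOTHESES = ["simple", "informative"]
PRIOR = {"simple": 0.7, "informative": 0.3}   # simplicity-biased learner

EPS = 0.1  # production noise rate

def likelihood(datum, h):
    # Noisy production: a hypothesis mostly generates its own label.
    return (1 - EPS) if datum == h else EPS

def learn(datum):
    # Sample a hypothesis from the posterior (posterior sampling).
    post = {h: PRIOR[h] * likelihood(datum, h) for h in HYPOTHESES}
    r = random.random() * sum(post.values())
    acc = 0.0
    for h, p in post.items():
        acc += p
        if r <= acc:
            return h

def produce(h):
    other = [x for x in HYPOTHESES if x != h][0]
    return h if random.random() > EPS else other

# Transmission chain: the stationary distribution of a posterior-sampling
# chain converges to the prior, so 'simple' should dominate.
counts = {"simple": 0, "informative": 0}
h = "informative"
for _ in range(5000):
    h = learn(produce(h))
    counts[h] += 1

print(counts)
```

    Running the same chain with an informativeness-biased prior flips the outcome, which is what lets the model discriminate the two accounts of the human inductive bias.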