124 research outputs found

    Sound Change Across Speech Islands: the Diphthong /aɪ/ in Two Midwestern Pennsylvania German Communities

    This paper analyzes the variable production of the Pennsylvania German diphthong /aɪ/ in two Pennsylvania German speech islands in Iowa and Ohio. The data show that younger speakers regularly monophthongize /aɪ/, yielding [ɛː] or even (in Ohio only) [eː], and perceptual studies show that the latter form merges with the vowel space of the phoneme /eː/. This sound change is shown to be an example of language drift (i.e., internally motivated), though its spread across distant speech islands suggests significant ongoing patterns of interaction between these communities.

    Prior Pidginization and Creolization in Moroccan Arabic

    This thesis makes a claim about the processes of prior pidginization and creolization, and a process of current decreolization, in Moroccan Arabic (a colloquial dialect of Arabic spoken in Morocco). The claim of this thesis is based on the theory of pidginization and creolization in Arabic posited by Versteegh (1984). A case study is built for the aforementioned processes having occurred in Moroccan Arabic through fulfillment of Southworth's (1971) two principles for determining the credibility of a pidginization and/or creolization claim: (1) that the required sociolinguistic frameworks are in place, and (2) that the linguistic effects of such processes are evident. Moroccan Arabic is analyzed alongside other languages that have undergone pidginization and creolization, both in its socio-diglossic history and in the linguistic features common to most pidgin and creole languages (e.g., a transformed TMA system, SVO word order, analytic genitive, periphrastic interrogative, indefinite article). The conclusion drawn from the data presented in this thesis is that the claims for prior pidginization and creolization, and for the current process of decreolization, in Moroccan Arabic are substantiated.

    Know your audience: specializing grounded language models with listener subtraction

    Effective communication requires adapting to the idiosyncrasies of each communicative context, such as the common ground shared with each partner. Humans demonstrate this ability to specialize to their audience in many contexts, such as the popular game Dixit. We take inspiration from Dixit to formulate a multi-agent image reference game where a (trained) speaker model is rewarded for describing a target image such that one (pretrained) listener model can correctly identify it among distractors, but another listener cannot. To adapt, the speaker must exploit differences in the knowledge it shares with the different listeners. We show that finetuning an attention-based adapter between a CLIP vision encoder and a large language model in this contrastive, multi-agent setting gives rise to context-dependent natural language specialization from rewards only, without direct supervision. Through controlled experiments, we show that training a speaker with two listeners that perceive differently, using our method, allows the speaker to adapt to the idiosyncrasies of the listeners. Furthermore, we show zero-shot transfer of the specialization to real-world data. Our experiments demonstrate a method for specializing grounded language models without direct supervision and highlight the interesting research challenges posed by complex multi-agent communication. Comment: 28 pages, 9 figures
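    The "listener subtraction" objective described in this abstract can be illustrated with a minimal sketch: the speaker is rewarded when listener A assigns high probability to the target image but listener B does not. The scoring function and logit values below are hypothetical illustrations, not the paper's implementation.

    ```python
    import numpy as np

    def softmax(logits):
        """Convert image-text similarity logits to a probability distribution."""
        e = np.exp(logits - np.max(logits))
        return e / e.sum()

    def listener_subtraction_reward(logits_a, logits_b, target_idx):
        """Contrastive reward: r = log P_A(target) - log P_B(target).

        logits_a / logits_b are the (hypothetical) similarity scores each
        listener assigns to the candidate images given the speaker's caption.
        The reward is positive when listener A resolves the caption to the
        target more confidently than listener B does.
        """
        p_a = softmax(np.asarray(logits_a, dtype=float))
        p_b = softmax(np.asarray(logits_b, dtype=float))
        return float(np.log(p_a[target_idx]) - np.log(p_b[target_idx]))

    # A caption that listener A resolves correctly but listener B finds
    # ambiguous yields a positive reward; a caption both listeners score
    # identically yields zero, giving the speaker no incentive to keep it.
    r = listener_subtraction_reward([4.0, 1.0, 0.5], [1.0, 1.0, 1.0], target_idx=0)
    ```

    Maximizing this reward pushes the speaker toward descriptions grounded in knowledge shared with listener A but not with listener B, which is the specialization effect the abstract reports.
    
    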
