23 research outputs found

    Foto- und Film-Skulpturen von Klaus Kammerichs [Photo and Film Sculptures by Klaus Kammerichs]

    Get PDF
    Since their first exhibitions in the early 1970s, the photo sculptures of Klaus Kammerichs have repeatedly provoked astonishment and irritation. Standing between two supposedly distinct media, that of realistically depicting photography and that of the three-dimensional, abstract object, they seemed to elude traditional art-historical categorization, so that they found their way into the major art collections only hesitantly. "Photography" did not fall under the classical "concept of art", an attitude that has since been fundamentally revised. Yet to this day, Klaus Kammerichs is more likely to be found in reference works on photography than in encyclopedias of "artists", although he enjoys international recognition as a "sculptor": his sculptures stand in public squares in Düsseldorf, Bonn, Yokohama, Bad Segeberg and Gifu (Japan), where some of them have taken on the character of local landmarks. This study of Kammerichs' two groups of works, the photo sculptures and the film sculptures, does not begin with a general research question to be tested against the objects. Its primary concern is the analysis of the individual exhibits, each of which carries an autonomous content. The objects themselves therefore determine the content of this master's thesis, rather than a research question for which the sculptures would merely be instrumentalized. A working hypothesis can nevertheless be formulated: the mode of representation of the photo and film sculptures remains more or less constant; only the motif changes. Yet none of the sculptures consists of its mode of representation alone; each exists precisely in the interaction of mode of representation and motif. As soon as the motif changes, the mode of representation unfolds entirely new meanings that cannot be transferred to any other sculpture. This multiplicity of meaning in the photo and film sculptures is what the thesis investigates. At the same time, there are characteristic properties shared by all of the sculptures, or at least by several of them, and these are worked out over the course of the thesis. The aim is thus to characterize the differences within the stereotypical systems of the photo and film sculptures while simultaneously classifying those systems themselves. The thesis opens with the state of research, whose presentation serves to delimit the topic and to develop new research questions. Each of the two major groups of works receives its own chapter; in both, the discussion of the exhibits is preceded by a chronological overview of the group to provide context. The chapter on the photo sculptures begins by explicating their conception and then discusses two subgroups, the portraits and the machines or readymades. The chapter on the film sculptures first analyzes the sculpture "Ascending-Descending" as an exemplary case; from this description the conception of the film sculptures is developed, e.g. the specific mode in which they were produced. After this general part, the various variations of the film sculptures are discussed, followed by an explication of individual aspects that apply to all film sculptures, e.g. their art-historical contextualization.

    Crosslingual Transfer Learning for Low-Resource Languages Based on Multilingual Colexification Graphs

    Full text link
    In comparative linguistics, colexification refers to the phenomenon of a lexical form conveying two or more distinct meanings. Existing work on colexification patterns relies on annotated word lists, limiting scalability and usefulness in NLP. In contrast, we identify colexification patterns of more than 2,000 concepts across 1,335 languages directly from an unannotated parallel corpus. We then propose simple and effective methods to build multilingual graphs from the colexification patterns: ColexNet and ColexNet+. ColexNet's nodes are concepts and its edges are colexifications. In ColexNet+, concept nodes are additionally linked through intermediate nodes, each representing an ngram in one of 1,334 languages. We use ColexNet+ to train $\overrightarrow{\text{ColexNet+}}$, high-quality multilingual embeddings that are well-suited for transfer learning. In our experiments, we first show that ColexNet achieves high recall on CLICS, a dataset of crosslingual colexifications. We then evaluate $\overrightarrow{\text{ColexNet+}}$ on roundtrip translation, sentence retrieval and sentence classification and show that our embeddings surpass several transfer learning baselines. This demonstrates the benefits of using colexification as a source of information in multilingual NLP. (EMNLP 2023 Findings)
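
    The graph construction described in this abstract lends itself to a compact illustration. The sketch below is a hedged toy version, not the paper's implementation: the observation tuples, node-naming scheme, and use of networkx are assumptions made for illustration. It builds ColexNet by linking colexified concepts directly, and ColexNet+ by routing those links through per-language ngram nodes.

```python
# Toy illustration of the two graph types described in the abstract.
# All data is invented; the paper derives colexifications from a
# parallel corpus covering 1,335 languages, which is not reproduced here.
import networkx as nx

# Hypothetical colexification observations:
# (concept_a, concept_b, language, shared ngram)
observations = [
    ("BELLY", "WOMB", "swa", "tumbo"),
    ("TREE", "WOOD", "tur", "agac"),
    ("TREE", "WOOD", "swa", "mti"),
]

# ColexNet: nodes are concepts; an edge means the two concepts are
# colexified somewhere, weighted by how many languages attest it.
colexnet = nx.Graph()
for a, b, _lang, _ngram in observations:
    if colexnet.has_edge(a, b):
        colexnet[a][b]["weight"] += 1
    else:
        colexnet.add_edge(a, b, weight=1)

# ColexNet+: concept nodes are additionally linked through intermediate
# ngram nodes (one per language-specific string), so that embeddings
# trained on this graph mix concept-level and string-level information.
colexnet_plus = nx.Graph()
for a, b, lang, ngram in observations:
    ngram_node = f"{lang}:{ngram}"
    colexnet_plus.add_edge(a, ngram_node)
    colexnet_plus.add_edge(ngram_node, b)

print(colexnet.edges(data=True))
print(sorted(colexnet_plus.edges()))
```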

    How to Distill your BERT: An Empirical Study on the Impact of Weight Initialisation and Distillation Objectives

    Full text link
    Recently, various intermediate layer distillation (ILD) objectives have been shown to improve compression of BERT models via Knowledge Distillation (KD). However, a comprehensive evaluation of the objectives in both task-specific and task-agnostic settings is lacking. To the best of our knowledge, this is the first work comprehensively evaluating distillation objectives in both settings. We show that attention transfer gives the best performance overall. We also study the impact of layer choice when initialising the student from the teacher layers, finding a significant impact on performance in task-specific distillation. For vanilla KD and hidden states transfer, initialisation with lower layers of the teacher gives a considerable improvement over higher layers, especially on the task of QNLI (up to an absolute percentage change of 17.8 in accuracy). Attention transfer behaves consistently under different initialisation settings. We release our code as an efficient transformer-based model distillation framework for further studies. (ACL 2023)
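
    Since attention transfer comes out strongest overall, a minimal sketch of that objective may help. The snippet below is illustrative, not the released framework: the layer mapping, uniform loss weighting, and random stand-in attention tensors (shaped batch x heads x seq x seq, as Hugging Face models return with output_attentions=True) are all assumptions.

```python
# Hedged sketch of an attention-transfer distillation loss, not the
# paper's released code. Layer mapping and equal weighting are assumed.
import torch
import torch.nn.functional as F

def attention_transfer_loss(student_attns, teacher_attns, layer_map):
    """MSE between selected student and teacher attention maps.

    Both inputs are lists of (batch, heads, seq, seq) tensors, e.g. the
    attentions Hugging Face models return with output_attentions=True.
    layer_map pairs each chosen student layer with a teacher layer.
    """
    loss = torch.tensor(0.0)
    for s_idx, t_idx in layer_map:
        loss = loss + F.mse_loss(student_attns[s_idx], teacher_attns[t_idx])
    return loss / len(layer_map)

# Toy usage: a 2-layer student distilled from a 4-layer teacher.
student = [torch.rand(8, 12, 16, 16) for _ in range(2)]
teacher = [torch.rand(8, 12, 16, 16) for _ in range(4)]
print(attention_transfer_loss(student, teacher, layer_map=[(0, 1), (1, 3)]))
```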

    Explaining pretrained language models' understanding of linguistic structures using construction grammar

    Get PDF
    Construction Grammar (CxG) is a paradigm from cognitive linguistics emphasizing the connection between syntax and semantics. Rather than rules that operate on lexical items, it posits constructions as the central building blocks of language, i.e., linguistic units of different granularity that combine syntax and semantics. As a first step toward assessing the compatibility of CxG with the syntactic and semantic knowledge demonstrated by state-of-the-art pretrained language models (PLMs), we present an investigation of their capability to classify and understand one of the most commonly studied constructions, the English comparative correlative (CC). We conduct experiments examining the classification accuracy of a syntactic probe on the one hand and the models' behavior in a semantic application task on the other, with BERT, RoBERTa, and DeBERTa as the example PLMs. Our results show that all three investigated PLMs, as well as OPT, are able to recognize the structure of the CC but fail to use its meaning. While human-like performance of PLMs on many NLP tasks has been alleged, this indicates that PLMs still suffer from substantial shortcomings in central domains of linguistic knowledge.
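
    To make the probing setup concrete, here is a minimal sketch of one plausible syntactic probe, not the authors' experimental design: the example sentences, mean pooling, and logistic-regression classifier are assumptions. The probe predicts whether a sentence instantiates the comparative correlative from frozen BERT representations.

```python
# Minimal sketch of a syntactic probe for the comparative correlative,
# not the authors' setup; sentences, pooling, and classifier are assumed.
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

# Label 1: instantiates the comparative correlative; label 0: does not.
sentences = [
    ("The more you practice, the better you get.", 1),
    ("The longer the meeting ran, the angrier she became.", 1),
    ("The cat sat on the mat.", 0),
    ("More practice leads to better results.", 0),
]

def embed(text):
    # Mean-pool the last hidden layer as a simple frozen sentence vector.
    with torch.no_grad():
        out = model(**tok(text, return_tensors="pt"))
    return out.last_hidden_state.mean(dim=1).squeeze(0).numpy()

X = [embed(s) for s, _ in sentences]
y = [label for _, label in sentences]

# Four toy sentences only demonstrate the mechanics; a real probe needs
# held-out splits and many more examples.
probe = LogisticRegression(max_iter=1000).fit(X, y)
print(probe.predict(X))
```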

    A Crosslingual Investigation of Conceptualization in 1335 Languages

    Full text link
    Languages differ in how they divide up the world into concepts and words; e.g., in contrast to English, Swahili has a single concept for 'belly' and 'womb'. We investigate these differences in conceptualization across 1,335 languages by aligning concepts in a parallel corpus. To this end, we propose Conceptualizer, a method that creates a bipartite directed alignment graph between source language concepts and sets of target language strings. In a detailed linguistic analysis across all languages for one concept ('bird') and an evaluation on gold standard data for 32 Swadesh concepts, we show that Conceptualizer has good alignment accuracy. We demonstrate the potential of research on conceptualization in NLP with two experiments. (1) We define crosslingual stability of a concept as the degree to which it has 1-1 correspondences across languages, and show that concreteness predicts stability. (2) We represent each language by its conceptualization pattern for 83 concepts, and define a similarity measure on these representations. The resulting measure for the conceptual similarity of two languages is complementary to standard genealogical, typological, and surface similarity measures. For four out of six language families, we can assign languages to their correct family based on conceptual similarity with accuracy between 54% and 87%. (ACL 2023)
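
    The idea of a per-language conceptualization pattern and a similarity measure over such patterns can be shown with a toy computation. This is a deliberately simplified stand-in for Conceptualizer: the cluster-id encoding and pairwise agreement measure below are invented for illustration. Two languages count as similar to the extent that they group the same concept pairs under one form.

```python
# Illustrative sketch only: the real Conceptualizer aligns concepts to
# target-language strings in a parallel corpus. Here each language's
# "conceptualization pattern" is faked as a map from concepts to string
# clusters, and languages are compared by how their groupings agree.
from itertools import combinations

patterns = {
    # concept -> id of the target-language string cluster it maps to
    "english": {"belly": 0, "womb": 1, "bird": 2},
    "swahili": {"belly": 0, "womb": 0, "bird": 2},  # belly/womb colexified
    "german":  {"belly": 0, "womb": 1, "bird": 2},
}

def conceptual_similarity(lang_a, lang_b):
    """Fraction of concept pairs that two languages group identically."""
    a, b = patterns[lang_a], patterns[lang_b]
    shared = set(a) & set(b)
    pairs = list(combinations(sorted(shared), 2))
    agree = sum((a[x] == a[y]) == (b[x] == b[y]) for x, y in pairs)
    return agree / len(pairs)

for la, lb in combinations(patterns, 2):
    print(la, lb, round(conceptual_similarity(la, lb), 2))
```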

    The Better Your Syntax, the Better Your Semantics? Probing Pretrained Language Models for the English Comparative Correlative

    Get PDF
    Construction Grammar (CxG) is a paradigm from cognitive linguistics emphasising the connection between syntax and semantics. Rather than rules that operate on lexical items, it posits constructions as the central building blocks of language, i.e., linguistic units of different granularity that combine syntax and semantics. As a first step towards assessing the compatibility of CxG with the syntactic and semantic knowledge demonstrated by state-of-the-art pretrained language models (PLMs), we present an investigation of their capability to classify and understand one of the most commonly studied constructions, the English comparative correlative (CC). We conduct experiments examining the classification accuracy of a syntactic probe on the one hand and the models' behaviour in a semantic application task on the other, with BERT, RoBERTa, and DeBERTa as the example PLMs. Our results show that all three investigated PLMs are able to recognise the structure of the CC but fail to use its meaning. While human-like performance of PLMs on many NLP tasks has been alleged, this indicates that PLMs still suffer from substantial shortcomings in central domains of linguistic knowledge.

    CaMEL: Case Marker Extraction without Labels

    Get PDF
    We introduce CaMEL (Case Marker Extraction without Labels), a novel and challenging task in computational morphology that is especially relevant for low-resource languages. We propose a first model for CaMEL that uses a massively multilingual corpus to extract case markers in 83 languages based only on a noun phrase chunker and an alignment system. To evaluate CaMEL, we automatically construct a silver standard from UniMorph. The case markers extracted by our model can be used to detect and visualise similarities and differences between the case systems of different languages as well as to annotate fine-grained deep cases in languages in which they are not overtly marked.
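
    The intuition behind unsupervised case-marker extraction can be caricatured in a few lines. The sketch below is a rough stand-in, not the CaMEL model (which works from a chunker and an alignment system over a massively multilingual corpus): the suffix-counting heuristic and the Turkish-flavoured toy nouns are assumptions.

```python
# Rough sketch of the intuition only: count word-final ngrams on nouns
# as case-marker candidates. Data is invented; Turkish-style locative
# ("-da/-de") and accusative ("-i/-u") suffixes make case marking overt.
from collections import Counter

noun_forms = ["evde", "okulda", "masada", "evi", "okulu"]

def suffix_candidates(forms, max_len=3):
    """Count word-final ngrams as candidate case markers."""
    counts = Counter()
    for form in forms:
        for k in range(1, max_len + 1):
            if len(form) > k:
                counts[form[-k:]] += 1
    return counts

# Frequent suffixes such as "da" should rank near the top.
print(suffix_candidates(noun_forms).most_common(5))
```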

    A Crosslingual Investigation of Conceptualization in 1335 Languages

    Get PDF
    Languages differ in how they divide up the world into concepts and words; e.g., in contrast to English, Swahili has a single concept for ‘belly’ and ‘womb’. We investigate these differences in conceptualization across 1,335 languages by aligning concepts in a parallel corpus. To this end, we propose Conceptualizer, a method that creates a bipartite directed alignment graph between source language concepts and sets of target language strings. In a detailed linguistic analysis across all languages for one concept (‘bird’) and an evaluation on gold standard data for 32 Swadesh concepts, we show that Conceptualizer has good alignment accuracy. We demonstrate the potential of research on conceptualization in NLP with two experiments. (1) We define crosslingual stability of a concept as the degree to which it has 1-1 correspondences across languages, and show that concreteness predicts stability. (2) We represent each language by its conceptualization pattern for 83 concepts, and define a similarity measure on these representations. The resulting measure for the conceptual similarity between two languages is complementary to standard genealogical, typological, and surface similarity measures. For four out of six language families, we can assign languages to their correct family based on conceptual similarity with accuracies between 54% and 87%.

    Counting the Bugs in ChatGPT's Wugs: A Multilingual Investigation into the Morphological Capabilities of a Large Language Model

    Full text link
    Large language models (LLMs) have recently reached an impressive level of linguistic capability, prompting comparisons with human language skills. However, there have been relatively few systematic inquiries into the linguistic capabilities of the latest generation of LLMs, and those studies that do exist (i) ignore the remarkable ability of humans to generalize, (ii) focus only on English, and (iii) investigate syntax or semantics and overlook other capabilities that lie at the heart of human language, like morphology. Here, we close these gaps by conducting the first rigorous analysis of the morphological capabilities of ChatGPT in four typologically varied languages (specifically, English, German, Tamil, and Turkish). We apply a version of Berko's (1958) wug test to ChatGPT, using novel, uncontaminated datasets for the four examined languages. We find that ChatGPT massively underperforms purpose-built systems, particularly in English. Overall, our results -- through the lens of morphology -- cast a new light on the linguistic capabilities of ChatGPT, suggesting that claims of human-like language skills are premature and misleading. (EMNLP 2023)
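
    A Berko-style wug-test harness is easy to sketch. The toy version below is illustrative only: the nonce items and prompt template are invented, and query_model is a placeholder stub rather than a real API call, so the harness runs end to end without external services.

```python
# Hedged sketch of a wug-test harness; prompts and items are invented,
# and the model call is stubbed out rather than hitting a real LLM API.
nonce_items = [
    # (nonce singular, expected regular plural) -- invented examples
    ("wug", "wugs"),
    ("blick", "blicks"),
    ("tass", "tasses"),
]

def make_prompt(singular):
    return (
        f"This is a {singular}. Now there are two of them. "
        f"There are two ___. Answer with one word."
    )

def query_model(prompt):
    """Placeholder for an LLM call: naively appends -s to the nonce
    word so the harness runs end to end (and misses 'tasses')."""
    singular = prompt.split()[3].rstrip(".")
    return singular + "s"

correct = 0
for singular, expected in nonce_items:
    answer = query_model(make_prompt(singular)).strip().lower()
    correct += answer == expected
print(f"accuracy: {correct}/{len(nonce_items)}")
```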