276 research outputs found

    An evolutionary developmental approach to cultural evolution

    Evolutionary developmental theories in biology see the processes and organization of organisms as crucial for understanding the dynamic behavior of organic evolution. Darwinian forces are seen as necessary but not sufficient for explaining observed evolutionary patterns. We here propose that the same arguments apply with even greater force to culture vis-à-vis cultural evolution. In order not to argue entirely in the abstract, we demonstrate the proposed approach by combining a set of different models into a provisional synthetic theory, and by applying this theory to a number of short case studies. What emerges is a set of concepts and models that allow us to consider entirely new types of explanations for the evolution of cultures. For example, we see how feedback relations - both within societies and between societies and their ecological environment - have the power to shape evolutionary history in profound ways. The ambition here is not to produce a definitive statement on what such a theory should look like, but rather to propose a starting point along with an argumentation and a demonstration of its potential.
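
    To make the feedback idea concrete, the sketch below shows a toy culture-environment feedback loop written as coupled difference equations in Python. This is an illustrative sketch only, not the authors' synthetic theory; the variable names, dynamics, and parameter values are assumptions chosen to show how such feedback can shape a trajectory over time.

```python
# Minimal illustrative sketch (not the authors' model): a toy feedback loop
# between a cultural trait level and an ecological resource, written as
# coupled difference equations. All names and parameter values are assumptions.

def simulate(steps=200, trait=0.1, resource=1.0,
             growth=0.05, depletion=0.08, recovery=0.03):
    """Iterate a simple culture-environment feedback loop."""
    history = []
    for _ in range(steps):
        # The cultural trait spreads faster when the resource is abundant...
        trait += growth * trait * resource - 0.01 * trait
        # ...but its spread depletes the resource, which also slowly recovers.
        resource += recovery * (1.0 - resource) - depletion * trait * resource
        trait = max(trait, 0.0)
        resource = max(resource, 0.0)
        history.append((trait, resource))
    return history

if __name__ == "__main__":
    # Print every 50th step to show how the two quantities co-evolve.
    for i, (trait, resource) in enumerate(simulate()[::50]):
        print(f"step {i * 50:3d}: trait={trait:.3f}  resource={resource:.3f}")
```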

    How to use LLMs for Text Analysis

    This guide introduces Large Language Models (LLMs) as a highly versatile text analysis method within the social sciences. As LLMs are easy to use, cheap, fast, and applicable to a broad range of text analysis tasks, ranging from text annotation and classification to sentiment analysis and critical discourse analysis, many scholars believe that LLMs will transform how we do text analysis. This how-to guide is aimed at students and researchers with limited programming experience, and offers a simple introduction to how LLMs can be used for text analysis in your own research project, as well as advice on best practices. We will go through each of the steps of analyzing textual data with LLMs using Python: installing the software, setting up the API, loading the data, developing an analysis prompt, analyzing the text, and validating the results. As an illustrative example, we will use the challenging task of identifying populism in political texts, and show how LLMs move beyond the existing state of the art.
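
    The sketch below illustrates the kind of pipeline the guide describes (set up the API, load the data, prompt the model, classify, validate), using the openai Python package. The model name, prompt wording, and file layout are illustrative assumptions, not materials from the guide itself.

```python
# Sketch of an LLM text-analysis pipeline (pip install openai). The model name,
# prompt, and CSV layout below are assumptions for illustration only.
import csv
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

PROMPT = (
    "You are a political scientist. Decide whether the following text is "
    "populist, i.e. frames politics as a struggle between a virtuous people "
    "and a corrupt elite. Answer with one word: 'populist' or 'not populist'.\n\n"
    "Text: {text}"
)

def classify(text: str) -> str:
    """Send one text to the LLM and return its label."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # any chat-capable model would work here
        temperature=0,         # deterministic output for annotation tasks
        messages=[{"role": "user", "content": PROMPT.format(text=text)}],
    )
    return response.choices[0].message.content.strip().lower()

# Load the data (hypothetical CSV with 'text' and 'gold_label' columns),
# annotate each row, and validate against hand-coded labels.
with open("speeches.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

correct = sum(classify(r["text"]) == r["gold_label"].strip().lower() for r in rows)
print(f"Agreement with hand-coded labels: {correct / len(rows):.2%}")
```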

    ChatGPT-4 Outperforms Experts and Crowd Workers in Annotating Political Twitter Messages with Zero-Shot Learning

    This paper assesses the accuracy, reliability and bias of the Large Language Model (LLM) ChatGPT-4 on the text analysis task of classifying the political affiliation of a Twitter poster based on the content of a tweet. The LLM is compared to manual annotation by both expert classifiers and crowd workers, generally considered the gold standard for such tasks. We use Twitter messages from United States politicians during the 2020 election, providing a ground truth against which to measure accuracy. The paper finds that ChatGPT-4 achieves higher accuracy, higher reliability, and equal or lower bias than the human classifiers. The LLM is able to correctly annotate messages that require reasoning on the basis of contextual knowledge and inferences about the author's intentions - traditionally seen as uniquely human abilities. These findings suggest that LLMs will have a substantial impact on the use of textual data in the social sciences, by enabling interpretive research at scale.
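
    The sketch below shows what such a zero-shot annotation setup can look like in Python with the openai package: the model receives no labeled examples, only an instruction, and its labels are compared against ground truth for accuracy and across repeated runs for reliability. The prompt, model name, and the two example tweets are hypothetical illustrations, not the paper's materials.

```python
# Sketch of zero-shot annotation of party affiliation from tweet text.
# Prompt, model name, and example data are assumptions for illustration only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def zero_shot_party(tweet: str) -> str:
    """Ask the model, without any examples, which party the tweet's author belongs to."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        temperature=0,
        messages=[{
            "role": "user",
            "content": (
                "The following tweet was posted by a United States politician "
                "during the 2020 election. Based only on its content, is the "
                "author a Democrat or a Republican? Answer with one word.\n\n"
                f"Tweet: {tweet}"
            ),
        }],
    )
    return response.choices[0].message.content.strip()

# Hypothetical evaluation data: (tweet text, party from the politician's record).
tweets = [
    ("We must defend the Affordable Care Act for working families.", "Democrat"),
    ("Lower taxes and secure borders are what this country needs.", "Republican"),
]

# Accuracy against ground truth, and reliability as agreement across two runs.
first = [zero_shot_party(t) for t, _ in tweets]
second = [zero_shot_party(t) for t, _ in tweets]
accuracy = sum(p == gold for p, (_, gold) in zip(first, tweets)) / len(tweets)
reliability = sum(a == b for a, b in zip(first, second)) / len(tweets)
print(f"accuracy={accuracy:.2f}  repeat-run reliability={reliability:.2f}")
```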