
    Have media texts become more humorous?

    As a research topic, humour has drawn much attention from multiple disciplines, including linguistics. Based on Engelthaler & Hills’ (2018) humour scale, this study developed a measure named the Humour Index (HMI) to quantify the degree of humour of texts. This measure was applied to examine diachronic changes in the degree of humour of American newspapers and magazines across a span of 118 years (1900-2017), using texts from the Corpus of Historical American English (COHA). The study also discusses the contributions of different types of words to the degree of humour in the two genres. The results show significant uptrends in the degree of humour of both newspapers and magazines over the examined period. Moreover, derogatory and offensive words are found to be less frequently used than other categories of words in both genres. This study has theoretical and methodological implications for humour studies and for claims or hypotheses of previous research, such as infotainment and the linguistic positivity bias.
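The abstract does not reproduce the HMI formula, but the general idea of a word-norm-based text score can be sketched as follows. This is a minimal illustration only: the ratings and the whitespace tokenisation are invented, and the paper's actual HMI construction is not reproduced here.

```python
# Toy per-word humour ratings in the spirit of Engelthaler & Hills (2018);
# the values below are invented for illustration.
HUMOUR_NORMS = {"chicken": 4.0, "giggle": 4.4, "tax": 1.6, "report": 1.8}

def humour_index(text, norms):
    """Average the humour ratings of the words covered by the norms."""
    rated = [norms[w] for w in text.lower().split() if w in norms]
    return sum(rated) / len(rated) if rated else 0.0

score = humour_index("the chicken filed a tax report", HUMOUR_NORMS)
```

Averaging only over covered words keeps the score comparable across texts of different lengths, at the cost of ignoring out-of-vocabulary words entirely.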

    WhatsUp: An event resolution approach for co-occurring events in social media

    The rapid growth of social media networks has resulted in the generation of a vast amount of data, making it impractical to conduct manual analyses to extract newsworthy events. Thus, automated event detection mechanisms are invaluable to the community. However, a clear majority of the available approaches rely only on data statistics without considering linguistics. A few approaches involve linguistics, but only to extract textual event details without the corresponding temporal details. Since linguistics defines the structure and meaning of words, severe information loss can occur when it is ignored. Targeting this limitation, we propose a novel method named WhatsUp to detect temporal and fine-grained textual event details, using linguistics captured by self-learned word embeddings and their hierarchical relationships, and statistics captured by frequency-based measures. We evaluate our approach on recent social media data from two diverse domains and compare its performance with several state-of-the-art methods. Evaluations cover temporal and textual event aspects, and the results show that WhatsUp notably outperforms state-of-the-art methods. We also analyse its efficiency, revealing that WhatsUp is fast enough for (near) real-time detection. Further, the use of unsupervised learning techniques, including self-learned embeddings, makes our approach extensible to any language, platform, and domain, and provides capabilities to understand data-specific linguistics.
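The abstract mentions "statistics captured by frequency-based measures" without detail. One common frequency-based signal in event detection is burstiness: a term whose relative frequency in the current time window far exceeds its historical rate is a candidate event term. The sketch below illustrates that general idea under invented names and smoothing choices; it is not WhatsUp's actual measure.

```python
from collections import Counter

def burst_scores(window_tokens, history_tokens, min_count=2):
    """Score each term in the current time window by how far its relative
    frequency exceeds its (add-one smoothed) historical relative frequency."""
    cur, hist = Counter(window_tokens), Counter(history_tokens)
    cur_total = sum(cur.values())
    hist_total = sum(hist.values())
    scores = {}
    for term, count in cur.items():
        if count < min_count:
            continue  # skip terms too rare in the window to be meaningful
        cur_rate = count / cur_total
        hist_rate = (hist[term] + 1) / (hist_total + len(cur))  # smoothed
        scores[term] = cur_rate / hist_rate
    return scores

# Toy example: "quake" suddenly dominates the current window.
history = ["match", "goal", "match", "news", "team"] * 4
window = ["quake", "quake", "quake", "match", "match"]
scores = burst_scores(window, history)
```

Terms that are frequent both now and historically (like "match" above) score near 1, while newly bursty terms score far higher.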

    BabyStories: Can Reinforcement Learning Teach Baby Language Models to Write Better Stories?

    Language models have seen significant growth in the size of their training corpora, leading to notable performance improvements. Yet there has been limited progress in developing models that handle smaller, more human-like datasets. As part of the BabyLM shared task, this study explores the impact of reinforcement learning from human feedback (RLHF) on language models pretrained from scratch with a limited training corpus. Comparing two GPT-2 variants, the larger model performs better in storytelling tasks after RLHF fine-tuning. These findings suggest that RLHF techniques may be more advantageous for larger models due to their higher learning and adaptation capacity, though more experiments are needed to confirm this finding. These insights highlight the potential benefits of RLHF fine-tuning for language models with limited data, enhancing their ability to maintain narrative focus and coherence while adhering better to initial instructions in storytelling tasks. The code for this work is publicly available at https://github.com/Zephyr1022/BabyStories-UTSA.
    Comment: Accepted to the BabyLM workshop at CoNLL

    Resource efficient action recognition in videos

    This thesis traces an innovative journey in the domain of real-world action recognition, focusing in particular on memory- and data-efficient systems. It begins by introducing a novel approach for smart frame selection, which significantly reduces computational costs in video classification. It further optimizes the action recognition process by addressing the challenges of training time and memory consumption in video transformers, laying a strong foundation for memory-efficient action recognition. The thesis then delves into zero-shot learning, examining the flaws of the existing evaluation protocol and establishing a new split for true zero-shot action recognition, ensuring zero overlap between unseen test classes and training or pre-training classes. Building on this, a unique cluster-based representation, optimized using reinforcement learning, is proposed for zero-shot action recognition. Crucially, we show that joint visual-semantic representation learning is essential for improved performance. We also experiment with feature-generation approaches for zero-shot action recognition by introducing a synthetic sample selection methodology, extending the utility of zero-shot learning to both images and videos and selecting high-quality samples for synthetic data augmentation. This form of data valuation is then incorporated into our novel video data augmentation approach, where we generate video composites by mixing the foregrounds and backgrounds of videos. The data valuation helps us choose good composites at a reduced overall cost. Finally, we propose the creation of a meaningful semantic space for action labels. We create a textual description dataset for each action class and propose a novel feature-generating approach to maximise the benefits of this semantic space. The research contributes significantly to the field, potentially paving the way for more efficient, resource-friendly, and robust video processing and understanding techniques.

    The Socio-Technical Dynamics of Renewable Energy Policies in Germany

    Growing environmental concerns and human-caused climate change increase the pressure on policymakers for rapid action to transform how societies convert energy, produce goods, or transport freight. Innovation and technological progress may contribute to such transitions. However, technological change is hard to predict, requires time, and may be laden with political conflicts. Although more sustainable technologies are available, incentivizing demand and deployment is crucial to accelerate transitions. As transformations develop over decades, understanding the temporal dynamics of policies is critical for governance. In Germany, the Renewable Energy Act incentivizes the deployment of renewable energy technologies by remunerating electricity fed into the common grid. This dissertation assesses how the socio-technical developments of solar and wind energy conversion technologies and the Renewable Energy Act interactively shaped each other. Drawing on frameworks such as technological innovation systems, legitimacy, framing, and policy feedback, the contents of 16,485 newspaper articles and additional empirical studies were scrutinized. Combining methods from natural language processing, machine learning, and statistics, this thesis develops text models to assess changes in content and sentiment in large corpora over time. Three studies focus on the shifts in media framing of the German Renewable Energy Act, the underlying co-evolution of technological and policy processes, and the development of the legitimacy of wind power. The results confirm that renewable energy deployment and policy are contested with varying intensity over time. Where change ought to occur, non-linear dynamics of innovation and technology uptake, growing policy costs, economic interests of incumbents, and technology side effects increasingly complicate policymaking over time.
The early phases of the Renewable Energy Act were shaped by positive expectations toward renewable energy technologies, which later shifted towards an emphasis on policy costs. The findings highlight the importance of prosperous underlying innovation systems as supporters of policy ambition and maintenance over time. However, policy costs and side effects must be managed effectively to withstand increasing contestation. These results may contribute to advancing the successful governance of sectoral transitions likely to unfold over several decades.

    Entity Linking in Low-Annotation Data Settings

    Recent advances in natural language processing have focused on applying and adapting large pretrained language models to specific tasks. These models, such as BERT (Devlin et al., 2019) and BART (Lewis et al., 2020a), are pretrained on massive amounts of unlabeled text across a variety of domains. The impact of these pretrained models is visible in the task of entity linking, where a mention of an entity in unstructured text is matched to the relevant entry in a knowledge base. State-of-the-art linkers, such as Wu et al. (2020) and De Cao et al. (2021), leverage pretrained models as a foundation for their systems. However, these models are also trained on large amounts of annotated data, which is crucial to their performance. Often these large datasets consist of domains that are easily annotated, such as Wikipedia or newswire text. However, tailoring NLP tools to a narrow variety of textual domains severely restricts their use in the real world. Many other domains, such as medicine or law, do not have large amounts of entity linking annotations available. Entity linking, which serves to bridge the gap between massive amounts of unstructured text and structured repositories of knowledge, is equally crucial in these domains. Yet tools trained on newswire or Wikipedia annotations are unlikely to be well suited for identifying medical conditions mentioned in clinical notes. As most annotation efforts focus on English, similar challenges arise in building systems for non-English text. There is often a relatively small amount of annotated data in these domains. Given this, looking to other types of domain-specific data, such as unannotated text or highly curated structured knowledge bases, is often required. In these settings, it is crucial to translate lessons taken from tools tailored for high-annotation domains into algorithms that are suited for low-annotation domains. 
This requires both leveraging broader types of data and understanding the unique challenges present in each domain.

    Individual Differences in Holistic and Compositional Language Processing

    Individual differences in cognitive abilities are ubiquitous across the spectrum of proficient language users. Although speakers differ with regard to their memory capacity, ability to inhibit distraction, and ability to shift between different processing levels, comprehension is generally successful. However, this does not mean it is identical across individuals; listeners and readers may rely on different processing strategies to exploit distributional information in the service of efficient understanding. In the following psycholinguistic reading experiment, we investigate potential sources of individual differences in the processing of co-occurring words. Participants read modifier-noun bigrams like absolute silence in a self-paced reading task. Backward transition probability (BTP) between the two lexemes was used to quantify the prominence of the bigram as a whole in comparison to the frequency of its parts. Of five individual difference measures (processing speed, verbal working memory, cognitive inhibition, global-local scope shifting, and personality), two proved to be significantly associated with the effect of BTP on reading times. Participants who could inhibit a distracting global environment in order to more efficiently retrieve a single part, and those who preferred the local level in the shifting task, showed greater effects of the co-occurrence probability of the parts. We conclude that some participants are more likely to retrieve bigrams via their parts and their co-occurrence statistics, whereas others more readily retrieve the two words together as a single chunked unit.
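Backward transition probability has a simple corpus estimate: for a bigram w1 w2, BTP is the conditional probability of the first word given the second, count(w1 w2) / count(w2). The sketch below computes it from a toy token sequence; the corpus and tokenisation are illustrative, not the study's materials.

```python
from collections import Counter

def backward_tp(tokens, w1, w2):
    """Backward transition probability P(w1 | w2) for the bigram (w1, w2):
    how strongly the second word cues the first."""
    bigrams = Counter(zip(tokens, tokens[1:]))
    unigrams = Counter(tokens)
    return bigrams[(w1, w2)] / unigrams[w2] if unigrams[w2] else 0.0

# Toy corpus: "silence" occurs 3 times, preceded by "absolute" twice.
tokens = "absolute silence fell and silence reigned in absolute silence".split()
btp = backward_tp(tokens, "absolute", "silence")  # 2/3
```

A high BTP means the noun strongly predicts its modifier, which is the sense in which the bigram is prominent "as a whole" relative to its parts.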