71 research outputs found
Validating Topic Modeling as a Method of Analyzing Sujet and Theme
In Computational Literary Studies (CLS), several procedures for thematic
analysis have been adapted from NLP and Computer Science. Among these
procedures, topic modeling is the most prominent and popular technique. We
maintain, however, that to date this procedure has been used only in the context
of exploration, not in the context of justification. When we seek to test
hypotheses concerning correlations between genres, methods of computational
text analysis have to be set up in research environments of justification, i.e. in
environments of hypothesis testing. We provide a holistic model of validation
and conceptual disambiguation of the notion of aboutness as sujet, fabula, and
theme, and discuss essential methodological requirements for hypothesis-based
analysis. As we maintain that validation has to be performed separately for each
individual task, we perform an empirical validation of topic modeling based on
a new corpus of German novellas with comprehensive annotations, and we draw
hypothetical generalizations about the applicability of topic modeling for
analyzing aboutness in the domain of narrative fiction.
Capturing ecosystem service opportunities: a practice-oriented framework for selecting economic instruments in order to enhance biodiversity and human livelihoods
Practitioners in the fields of sustainable development, land management, and biodiversity conservation are increasingly interested in using economic instruments that promise “win-win” solutions for conservation and human livelihoods. However, practitioners often lack guidance for selecting and implementing suitable economic approaches that take the specific local needs and the cultural, legal, and ecological context into account. This paper extracts from the academic debate a series of key aspects to be considered by practitioners who wish to bring about behavioural change via economic approaches. The paper then presents a practice-oriented framework for identifying the “ecosystem service opportunities” to conserve biodiversity and improve livelihoods in a specific local setting, and for preselecting suitable economic instruments. The framework is illustrated by describing its application in two pilot sites of the ECO-BEST project in Thailand, as part of which it was developed and road-tested.
Key point generation as an instrument for generating core statements of a political debate on Twitter
Identifying key statements in large volumes of short, user-generated texts is essential for decision-makers to quickly grasp their key content. To address this need, this research introduces a novel abstractive key point generation (KPG) approach applicable to unlabeled text corpora, using an unsupervised approach, a feature not yet seen in existing abstractive KPG methods. The proposed method uniquely combines topic modeling for unsupervised data space segmentation with abstractive summarization techniques to efficiently generate semantically representative key points from text collections. This is further enhanced by hyperparameter tuning to optimize both the topic modeling and abstractive summarization processes. The hyperparameter tuning of the topic modeling aims at making the cluster assignment more deterministic, as the probabilistic nature of the process would otherwise lead to high variability in the output. The abstractive summarization process is optimized using a Davies-Bouldin Index specifically adapted to this use case, so that the generated key points more accurately reflect the characteristic properties of each cluster. In addition, our research recommends an automated evaluation that provides a quantitative complement to the traditional qualitative analysis of KPG. This method regards KPG as a specialized form of multi-document summarization (MDS) and employs both word-based and word-embedding-based metrics for evaluation. These criteria allow for a comprehensive and nuanced analysis of the KPG output. Demonstrated through application to a political debate on Twitter, the versatility of this approach extends to various domains, such as product review analysis and survey evaluation. This research not only paves the way for innovative development in abstractive KPG methods but also sets a benchmark for their evaluation.
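The two-stage idea described above can be sketched as follows: segment the corpus into topical clusters, score cluster quality with the Davies-Bouldin index (lower is better, as used in the tuning step), and derive one representative statement per cluster. In this hedged sketch the abstractive summarizer is replaced by a simple centroid-nearest extractive stand-in, and the tweets and parameters are invented.

```python
# Sketch of cluster-then-summarize KPG. The extractive "key point" step is
# a stand-in for the paper's abstractive summarization; data is illustrative.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import davies_bouldin_score

tweets = [
    "raise the carbon tax now",
    "carbon tax hurts low income families",
    "invest in public transit instead",
    "more buses and trains please",
    "tax carbon emissions heavily",
    "transit funding beats road building",
]

X = TfidfVectorizer().fit_transform(tweets).toarray()
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Davies-Bouldin index: lower values mean better-separated clusters,
# which is what the hyperparameter tuning optimizes for
dbi = davies_bouldin_score(X, km.labels_)

# One "key point" per cluster: the tweet closest to the cluster centroid
key_points = []
for k in range(km.n_clusters):
    idx = np.where(km.labels_ == k)[0]
    dists = np.linalg.norm(X[idx] - km.cluster_centers_[k], axis=1)
    key_points.append(tweets[idx[dists.argmin()]])

print(round(dbi, 3), key_points)
```

An abstractive variant would feed each cluster's tweets to a summarization model instead of picking the centroid-nearest tweet; the clustering and scoring scaffold stays the same.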
Microblogging during the European Floods 2013: What Twitter May Contribute in German Emergencies
Social media is becoming more and more important in crisis management. However, its analysis by emergency services still poses unaddressed challenges, and the majority of studies focus on the use of social media in the USA. In this paper, German tweets of the European Flood 2013 are therefore captured and analyzed using descriptive statistics, qualitative data coding, and computational algorithms. Our work illustrates that this event provided sufficient German traffic and geo-locations as well as enough original (non-derivative) data. However, a state-of-the-art Named Entity Recognizer (NER) with a German classifier could not recognize German rivers and highways satisfactorily. Furthermore, our analysis revealed pragmatic (linguistic) barriers resulting from irony, wordplay, and ambiguity, as well as from retweet behavior. To ease the analysis of data, we suggest a retweet ratio, which we show to be higher for important tweets and which may help in selecting tweets for mining. We argue that existing software has to be adapted and improved for the characteristics of the German language, also to detect markedness, seriousness, and truth.
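The proposed retweet ratio can be sketched as a simple ranking heuristic. The abstract does not spell out the exact formula, so this sketch assumes one plausible reading: retweets received normalized by the author's follower count, so that widely forwarded messages from small accounts still rank highly. The example tweets and numbers are invented.

```python
# Hypothetical retweet-ratio sketch; the normalization by follower count
# is an assumption, not the paper's stated definition.
def retweet_ratio(retweets: int, followers: int) -> float:
    """Retweets per follower; 0.0 for accounts without followers."""
    return retweets / followers if followers > 0 else 0.0

tweets = [
    {"text": "Dam breach near the river, evacuate now", "rt": 500, "followers": 800},
    {"text": "Nice weather today", "rt": 2, "followers": 10000},
]

# Rank tweets by ratio: heavily forwarded emergency messages float to the top
ranked = sorted(
    tweets,
    key=lambda t: retweet_ratio(t["rt"], t["followers"]),
    reverse=True,
)
print(ranked[0]["text"])
```

Under this reading, a high ratio flags messages the community judged worth spreading, which is the property the abstract exploits for selecting tweets for mining.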
Developing Digital (and) Teaching Skills
Research indicates that lecturers and teachers should engage in regular introspection regarding their teaching practices and competences as a means to enhance their pedagogical aptitude (e.g., Lombarts et al., 2009). Within tertiary education establishments, such as universities, the evaluation of lectures has evolved into a customary instrument for gauging the caliber of academic instruction. Theoretically, educators can harness student evaluations of teaching for self-evaluation. Nevertheless, these all-encompassing tools (1) are predominantly structured for conclusive and evaluative intentions, (2) frequently neglect criteria essential for effective and proficient teaching, (3) and do not furnish precise interventions tailored to discernible gaps. To counterbalance these deficiencies, the Education Development Unit at the University of Bern has devised specialized digital self-assessment instruments, which furnish targeted assistance to educators in refining their teaching methodologies and competencies. At our poster, we will introduce two such tools: (1) SELEVOR, designed to aid educators in conceiving lectures, and (2) SEIDL, which evaluates the levels of digital teaching competence among educators.
SELEVOR assesses teaching concepts based on seven synthesized principles drawn from contemporary theories and models in the psychology of learning and teaching, including works by, among others, Hattie (2012): (1) constructive alignment, (2) target group orientation, (3) problem focus, (4) choice of content, (5) elaboration, (6) adaptive teaching, and (7) teacher engagement. Each principle is evaluated using five items rated on a four-point Likert scale. Users promptly receive feedback on the alignment of their lecture concepts with each of the seven principles, along with specific algorithmically-derived interventions. Notably, SELEVOR has undergone validation, culminating in the presentation of its refined version. With assistance from SEIDL, educators can seamlessly record and contemplate their personal proficiencies in digitalized instruction through an easily accessible and tailored approach. In contradistinction to SELEVOR, SEIDL employs case vignettes for executing level classification (an adaptation of DigCompEdu, Redecker 2017) within the competency domains of planning, implementation, evaluation, and interaction (a modification of IN.K19, Sailer et al., 2021).
Notice: Both paragraphs were proof-read with the use of ChatGPT (OpenAI, 2023).
References:
Hattie, J. (2012). Visible learning for teachers: Maximizing impact on learning. New York: Routledge.
Lombarts, K. M., Bucx, M. J., & Arah, O. A. (2009). Development of a system for the evaluation of the teaching qualities of anesthesiology faculty. The Journal of the American Society of Anesthesiologists, 111(4), 709-716.
OpenAI. (2023). ChatGPT (Aug 3 version) [Large language model]. https://chat.openai.com/chat
Redecker, C. (2017). European Framework for the Digital Competence of Educators: DigCompEdu. In Y. Punie (Ed.), EUR 28775 EN (p. 93). Publications Office of the European Union. https://doi.org/10.2760/178382
Sailer, M., Stadler, M., Schultz-Pernice, F., Franke, U., Schöffmann, C., Paniotova, V., Husagic, L., & Fischer, F. (2021). Technology-related teaching skills and attitudes: Validation of a scenario-based self-assessment instrument for teachers. Computers in Human Behavior, 115, 106625. https://doi.org/10.1016/j.chb.2020.106625
Sustainable financing for biodiversity conservation – a review of experiences in German development cooperation
The financial resources needed for globally implementing the Aichi Biodiversity Targets have been estimated at US$ 150-440 billion per year (CBD COP11, 2012) - of which only a fraction is currently available. Significant efforts have been undertaken in many countries to increase funding for biodiversity conservation. Nonetheless, this funding shortage remains immense, acute and chronic. However, we lose biodiversity and ecosystems not only for lack of conservation funding but also due to poor governance, misguided policies, perverse incentives and other factors. This raises the question: How should limited conservation resources be used? For directly tackling biodiversity threats, for addressing the underlying drivers, or rather for strengthening the financial management and fundraising capacity of implementing organisations? As country contexts differ, so do the answers. This report synthesizes experiences of German development cooperation working towards improved biodiversity finance in eight countries: Viet Nam, Namibia, Tanzania, Cameroon, Madagascar, Mauritania, Ecuador and Peru.
Assumptions in ecosystem service assessments: Increasing transparency for conservation
Conservation efforts are increasingly supported by ecosystem service assessments. These assessments depend on complex multi-disciplinary methods and rely on a number of assumptions which reduce complexity. If assumptions are ambiguous or inadequate, misconceptions and misinterpretations may arise when interpreting the results of assessments. An interdisciplinary understanding of assumptions in ecosystem service science is needed to provide consistent conservation recommendations. Here, we synthesise and elaborate on 12 prevalent types of assumptions in ecosystem service assessments. These comprise conceptual and ethical foundations of the ecosystem service concept; assumptions on data collection, indication, mapping, and modelling; assumptions on socio-economic valuation and value aggregation; and assumptions about using assessment results for decision-making. We recommend that future assessments increase transparency about assumptions, and test and validate them and their potential consequences for assessment reliability. This will support the uptake of assessment results in conservation science, policy and practice.
Funding: Helmholtz-Gemeinschaft (DE), Biodiversa, Bundesministerium für Bildung und Forschung (http://dx.doi.org/10.13039/501100002347). Peer Reviewed.
From Keyness to Distinctiveness – Triangulation and Evaluation in Computational Literary Studies
There is a set of statistical measures, developed mostly in corpus and computational linguistics and information retrieval and known as keyness measures, which are generally expected to detect textual features that account for differences between two texts or groups of texts. These measures are based on the frequency, distribution, or dispersion of words (or other features). Searching for relevant differences or similarities between two text groups is also an activity characteristic of traditional literary studies, whenever two authors, two periods in the work of one author, two historical periods, or two literary genres are to be compared. Therefore, applying quantitative procedures in order to search for differences seems promising in the field of computational literary studies, as it allows one to analyze large corpora and to base historical hypotheses about differences between authors, genres, and periods on broader empirical evidence. However, applying quantitative procedures in order to answer questions relevant to literary studies in many cases raises methodological problems, which have been discussed on a more general level in the context of integrating or triangulating quantitative and qualitative methods in mixed methods research in the social sciences. This paper aims to solve these methodological issues concretely for the concept of distinctiveness and thus to lay the methodological foundation for operationalizing quantitative procedures so that they can be used not only as rough exploratory tools, but in a hermeneutically meaningful way for research in literary studies.
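A classic keyness measure of the frequency-based kind discussed above is Dunning's log-likelihood ratio (G2), which compares a word's frequency in a target corpus against a reference corpus. The sketch below is illustrative; the token counts are invented and this is not the paper's own candidate measure of distinctiveness.

```python
# Dunning's G2 keyness sketch: how strongly does a word's observed
# frequency in corpus A deviate from what its pooled frequency predicts?
from math import log

def log_likelihood(freq_a: int, freq_b: int, size_a: int, size_b: int) -> float:
    """Dunning's G2 keyness for one word across two corpora."""
    e_a = size_a * (freq_a + freq_b) / (size_a + size_b)  # expected count in A
    e_b = size_b * (freq_a + freq_b) / (size_a + size_b)  # expected count in B
    g2 = 0.0
    if freq_a:
        g2 += freq_a * log(freq_a / e_a)
    if freq_b:
        g2 += freq_b * log(freq_b / e_b)
    return 2 * g2

# Invented example: a word occurs 120 times in a 10,000-token target text
# but only 30 times in a 20,000-token reference corpus
score = log_likelihood(120, 30, 10_000, 20_000)
print(round(score, 2))
```

A score of zero means the word is distributed exactly in proportion to corpus size; large positive scores flag candidate "key" words. Whether such a score captures distinctiveness in a hermeneutically meaningful sense is precisely the evaluation question the paper raises.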
Based on a structural definition of potential candidate measures for analyzing distinctiveness in the first section, we offer a systematic description of the issue of integrating quantitative procedures into a hermeneutically meaningful understanding of distinctiveness by distinguishing its epistemological from its methodological perspective. The second section develops a systematic strategy to solve the methodological side of this issue, based on a critical reconstruction of the widespread non-integrative strategy in research on keyness measures that can be traced back to Rudolf Carnap’s model of explication. We demonstrate that it is, in the first instance, mandatory to gain a comprehensive qualitative understanding of the actual task. We show that Carnap’s model of explication suffers from a shortcoming: it ignores the need for a systematic comparison of what he calls the explicatum and the explicandum. Only if there is a method of systematic comparison can the next task, namely evaluation, be addressed, which verifies whether the output of a quantitative procedure corresponds to the qualitative expectation that must be clarified in advance. We claim that evaluation is necessary for integrating quantitative procedures into a qualitative understanding of distinctiveness. Our reconstruction shows that both steps are usually skipped in empirical research on keyness measures, which are the most important point of reference for the development of a measure of distinctiveness. Evaluation, which in turn requires thorough explication and conceptual clarification, needs to be employed to verify this relation.
In the third section we offer a qualitative clarification of the concept of distinctiveness by spanning a three-dimensional conceptual space. This flexible framework takes into account that there is no single and proper concept of distinctiveness but rather a field of possible meanings depending on research interest, theoretical framework, and access to the perceptibility or salience of textual features. Therefore, we shall, instead of stipulating any narrow and strict definition, take into account that each of these aspects – interest, theoretical framework, and access to perceptibility – represents one dimension of the heuristic space of possible uses of the concept of distinctiveness.
The fourth section discusses two possible strategies of operationalization and evaluation that we consider complementary to the previously provided clarification, and that complete the task of successfully establishing a candidate measure as a measure of distinctiveness in a qualitatively ambitious sense. We demonstrate that two different general strategies are worth considering, depending on the respective notion of distinctiveness and the interest as elaborated in the third section. If the interest is merely taxonomic, classification tasks based on multi-class supervised machine learning are sufficient. If the interest is aesthetic, more complex and intricate evaluation strategies are required, which have to rely on a thorough conceptual clarification of the concept of distinctiveness, in particular on the idea of salience or perceptibility. The challenge here is to correlate perceivable complex features of texts such as plot, theme (aboutness), style, form, or the roles and constellations of fictional characters with the unperceived frequency and distribution of word features that are calculated by candidate measures of distinctiveness. Existing research has not yet clarified how to correlate such complex features with individual word features.
The paper concludes with a general reflection on the possibility of mixed methods research for computational literary studies in terms of explanatory power and exploratory use. As our strategy of combining explication and evaluation shows, integration should be understood as a strategy of combining two different perspectives on the object area: in our evaluation scenarios, that of empirical reader response and that of a specific quantitative procedure. This does not imply that measures of distinctiveness which proved to reach explanatory power in one qualitative aspect should be supposed to be successful in all fields of research. As long as evaluation is omitted, candidate measures of distinctiveness lack explanatory power and are limited to exploratory use. In contrast with a skepticism that has sometimes been expressed by literary scholars with regard to the relevance of computational literary studies to core issues of the humanities, we believe that integrating computational methods into hermeneutic literary studies can be achieved in a way that reaches higher explanatory power than the usual exploratory use of keyness measures; but this can only be achieved individually for concrete tasks, and not once and for all on the basis of a general theoretical demonstration. See also the publisher version, accessible via personal request: https://zenodo.org/record/570737