Wörter im Zweifel. Ansätze einer linguistisch begründeten kritischen Semantik [Words in Doubt: Approaches to a Linguistically Grounded Critical Semantics]
Within the framework of pronunciation, morphology, and syntax, linguistic doubts and uncertainties belong to the everyday business and everyday life of linguists and public speakers. Cases of lexical semantic doubt, however, have so far been brushed aside by post-Saussurean and all the more by post-Bloomfieldian linguists. This paper deals with such lexical semantic doubts and uncertainties, which not only arise from language use within "parole" but are also caused by a conflict of opposite or even contradictory language norms of different "langue" varieties (e.g. the adjective positive as a medical term and in everyday colloquial language). Proceeding from an intensive discussion of the relationship between language use, language norms, and the language system, with special reference to Saussure, Hjelmslev, and Coseriu, the paper presents a theoretical approach to a linguistically founded criticism of semantic norms by embedding lexical semantics into a framework of "Existenzweisen" (modes of existence, focusing on language use, norms, and system), "Existenzformen" (forms of existence, in particular norms of special discourses and varieties), and language history and social stratification (in particular age-based vocabularies and semantics). Finally, the paper puts this theoretical approach into practice by describing, explaining, and solving semantic doubts about the German word Zigeuner (gypsy). This example, taken from the discourse on "political correctness", is investigated in the broader context of the word's historical, semantic, and pragmatic dimensions in different German discourses and varieties. The systematic relations between the various usages are presented in the form of a pragmatic and semantic network that displays the interconnections and borders between the meanings and gives recommendations for the word's use.
Der Däne [NGr Nom] ist [Vfin] gemütlich [ADJGr]. Nationale Stereotype aus dem SMiK-Projekt und Kritische Grammatik im Deutschunterricht ["The Dane is easy-going": National Stereotypes from the SMiK Project and Critical Grammar in German Language Teaching]
One of the grammatical structures most frequently used to express national stereotypes in German is the sentence pattern X [NGr Nom] ist [Vfin] Y [ADJGr]. Within the framework of linguistic research on stereotypes, this structure has been described in great detail, so that its syntactic, semantic, and socio-pragmatic dimensions are known rather well. Results of the SMiK project confirm the important role this pattern plays in expressing stereotypes. As for grammar teaching at school, however, this grammatical structure is rarely discussed in textbooks and, presumably, in classrooms. This paper presents an approach to making this structure a subject of classroom discussion. The approach is based on a didactic concept of “critical grammar” in German lessons and aims at making school students aware that explicit knowledge about grammatical structures is not only (if at all) knowledge about linguistic categories for the purpose of fitting language into scientific regulations or rules, but also an instrument for discovering and elucidating “pictures in our heads”.
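The pattern X [NGr Nom] ist [Vfin] Y [ADJGr] can be illustrated with a deliberately naive matcher. This is only a sketch: the regular expression and the function name are invented for this example, and real stereotype research would use proper part-of-speech tagging rather than surface patterns.

```python
import re

# Naive surface matcher for the German pattern "X ist Y"
# (nominative noun phrase + finite "ist" + adjective),
# e.g. "Der Däne ist gemütlich". Toy heuristic only; a real
# analysis would rely on POS tagging, not a regex.
PATTERN = re.compile(r"^(Der|Die|Das)\s+(\w+)\s+ist\s+(\w+)$")

def match_stereotype_pattern(sentence):
    """Return subject and predicate if the sentence fits the
    pattern, otherwise None."""
    m = PATTERN.match(sentence.strip().rstrip("."))
    if not m:
        return None
    article, noun, adjective = m.groups()
    return {"subject": f"{article} {noun}", "predicate": adjective}

result = match_stereotype_pattern("Der Däne ist gemütlich.")
```

Python's `re` matches `\w` against Unicode letters by default, so umlauts such as in "Däne" are handled without extra flags.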
Nietzsche and Ecological Reason(s) in the Anthropocene
Ecosophical discourses around the ecological condition that is sometimes referred to as the “Anthropocene” require a fundamental rethinking of key concepts of occidental philosophy, including reason. Nietzsche’s body of work offers manifold tools for the rethinking of reason, and this paper seeks to apply them to achieve a “new ecological image of thought.” It will demonstrate (1) that a clear ecological awareness motivates Nietzsche’s affirmative critique of reason, (2) that rudiments of a pluralization of the concept of reason can be found in Nietzsche's body of work, and (3) that this work contains traces of a new, qualitatively different form of ecological reason for the time called the Anthropocene, with all its problems and possibilities. In doing so, this paper will demonstrate how Nietzsche can be very productively applied to contemporary eco-philosophical discussions.
A hybrid model for Rydberg gases including exact two-body correlations
A model for the simulation of ensembles of laser-driven, Rydberg-Rydberg-interacting multi-level atoms is discussed. Our hybrid approach combines an exact two-body treatment of nearby atom pairs with an effective approximate treatment for spatially separated pairs. We propose an optimized evolution equation based only on the system's steady state, and a time-independent Monte Carlo technique is used to efficiently determine this steady state. The hybrid model predicts features in the pair correlation function arising from multi-atom processes which existing models can only partially reproduce. Our interpretation of these features shows that higher-order correlations are relevant already at low densities. Finally, we analyze the performance of our model in the high-density case.

Comment: significantly expanded and revised version (more observables, high-density regime); 9 pages, 8 figures
Aggregating Capacity in FL through Successive Layer Training for Computationally-Constrained Devices
Federated learning (FL) is usually performed on resource-constrained edge devices, e.g., with limited memory for the computation. If the memory required to train a model exceeds this limit, the device is excluded from the training. This can lower accuracy, as valuable data and computation resources are excluded from training, and can also cause bias and unfairness. The FL training process should be adjusted to such constraints. State-of-the-art techniques propose training subsets of the FL model on constrained devices, reducing their resource requirements for training. But these techniques largely limit the co-adaptation among the parameters of the model and are highly inefficient, as we show: it is actually better for the system to train a smaller (less accurate) model that all devices can train end-to-end than to apply such techniques. We propose a new method that enables successive freezing and training of the parameters of the FL model at the devices, reducing the training's resource requirements at the devices while still allowing enough co-adaptation between parameters. We show through extensive experimental evaluation that our technique greatly improves the accuracy of the trained model (by 52.4 percentage points) compared with the state of the art, efficiently aggregating the computation capacity available on distributed devices.

Comment: accepted at NeurIPS'2
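The successive-freezing idea can be sketched as a training schedule over a layered model. This is a minimal illustration under assumptions of my own (a sliding window of trainable layers under a fixed memory budget), not the paper's actual algorithm:

```python
# Minimal sketch of successive layer training for a
# memory-constrained device: at any time at most `budget`
# layers are trainable; earlier layers are frozen one by one
# as later layers are unfrozen, so each layer is trained
# jointly with its neighbors (preserving some co-adaptation)
# while peak training memory stays bounded.

def training_schedule(num_layers, budget):
    """Return a list of rounds; each round is the list of
    layer indices that are trainable in that round."""
    schedule = []
    start = 0
    while start + budget <= num_layers:
        schedule.append(list(range(start, start + budget)))
        start += 1
    return schedule

rounds = training_schedule(num_layers=6, budget=3)
```

Each round trains at most `budget` layers, yet every layer appears in several overlapping rounds, which is the intuition behind allowing co-adaptation despite the memory limit.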
Exploring Audience’s Attitudes Towards Machine Learning-based Automation in Comment Moderation
Digital technologies, particularly the internet, have led to unprecedented opportunities to freely inform oneself, debate, and share thoughts. However, the reduced level of control through traditional gatekeepers such as journalists has also led to a surge in problematic (e.g., fake news), straight-up abusive, and hateful content (e.g., hate speech). Being under ethical and often legal pressure, many platform operators respond to the onslaught of abusive user-generated content by introducing automated, machine learning-enabled moderation tools. Even though they are meant to protect online audiences, such systems have massive implications for free speech, algorithmic fairness, and algorithmic transparency. We present a large-scale survey experiment that aims at illuminating how the degree of transparency influences a commenter’s acceptance of the machine-made decision, depending on its outcome. With the presented study design, we seek to determine the amount of transparency necessary for automated comment moderation to be accepted by commenters.
Ab initio quantum models for thin-film x-ray cavity QED
We develop two ab initio quantum approaches to thin-film x-ray cavity quantum electrodynamics with spectrally narrow x-ray resonances, such as those provided by Mössbauer nuclei. The first method is based on a few-mode description of the cavity, and promotes and extends existing phenomenological few-mode models to an ab initio theory. The second approach uses analytically known Green's functions to model the system. The two approaches not only enable an ab initio derivation of the effective few-level scheme representing the cavity and the nuclei in the low-excitation regime, but also provide a direct avenue for studies at higher excitation, involving non-linear or quantum phenomena. The ab initio character of our approaches further enables direct optimization of the cavity structure, and thus of the photonic environment of the nuclei, to tailor the effective quantum optical level scheme towards particular applications. To illustrate the power of the ab initio approaches, we extend the established quantum optical modeling to resonant cavity layers of arbitrary thickness, which is essential to achieve quantitative agreement for cavities used in recent experiments. Further, we consider multi-layer cavities featuring electromagnetically induced transparency, derive their quantum optical few-level systems ab initio, and identify the origin of discrepancies found previously with phenomenological approaches as arising from cavity field gradients across the resonant layers.

Comment: 41 pages, 20 figures, added clarifications and minor corrections
Monte Carlo calculated ionization chamber correction factors in clinical proton beams – deriving uncertainties from published data
For the update of the IAEA TRS-398 Code of Practice (CoP), global ionization chamber factors (fQ) and beam quality correction factors (kQ) for air-filled ionization chambers in clinical proton beams have been calculated with different Monte Carlo codes. In this study, average Monte Carlo calculated fQ and kQ factors are provided and the uncertainty of these factors is estimated. Average fQ factors in monoenergetic proton beams with energies between 60 MeV and 250 MeV were derived from Monte Carlo calculated fQ factors published in the literature. Altogether, 195 fQ factors for six plane-parallel and three cylindrical ionization chambers calculated with PENH, FLUKA and GEANT4 were incorporated. Additionally, a weighted standard deviation of fQ factors was calculated, where the same weight was assigned to each Monte Carlo code. From average fQ factors, kQ factors were derived and compared to the values from the IAEA TRS-398 CoP published in 2000 as well as to the values of the upcoming version. Average Monte Carlo calculated fQ factors are constant within 0.6% over the energy range investigated. In general, the different Monte Carlo codes agree within 1% for low energies and show larger differences up to 2% for high energies. As a result, the standard deviation of fQ factors increases with energy and is ∼0.3% for low energies and ∼0.8% for high energies. kQ factors derived from average Monte Carlo calculated fQ factors differ from the values presented in the IAEA TRS-398 CoP by up to 2.4%. The overall estimated uncertainty of Monte Carlo calculated kQ factors is ∼0.5%–1% smaller than the uncertainties estimated in the IAEA TRS-398 CoP, since the individual ionization chamber characteristics (e.g. fluence perturbations) are considered in detail in Monte Carlo calculations. The agreement between Monte Carlo calculated kQ factors and the values of the upcoming version of the IAEA TRS-398 CoP is better, with deviations smaller than 1%.
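The equal-weight-per-code averaging described above (each Monte Carlo code receives the same total weight, regardless of how many published fQ values it contributes) can be sketched as follows. The fQ numbers below are invented for illustration, and the exact weighting convention is an assumption, not taken from the study:

```python
from collections import defaultdict
import math

def code_weighted_stats(values):
    """Equal-weight-per-code average and weighted standard deviation.
    values: list of (monte_carlo_code, fQ) pairs. Each code gets the
    same total weight, so codes with many published values do not
    dominate the average."""
    by_code = defaultdict(list)
    for code, fq in values:
        by_code[code].append(fq)
    k = len(by_code)  # number of distinct Monte Carlo codes
    # Weight per individual value: (1/k) divided by that code's count.
    weighted = [(fq, 1.0 / (k * len(fqs)))
                for fqs in by_code.values() for fq in fqs]
    mean = sum(w * fq for fq, w in weighted)
    var = sum(w * (fq - mean) ** 2 for fq, w in weighted)
    return mean, math.sqrt(var)

# Illustrative (invented) fQ values, not the published data:
data = [("PENH", 1.000), ("FLUKA", 1.020),
        ("FLUKA", 1.040), ("GEANT4", 1.000)]
mean_fq, std_fq = code_weighted_stats(data)
```

With this convention a code contributing two values counts no more than a code contributing one, matching the stated goal of assigning the same weight to each Monte Carlo code.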