Non-local ductile damage formulations for sheet bulk metal forming
A ductile damage model for sheet bulk metal forming processes and its efficient and accurate treatment in the context of the Finite Element Method is presented. Damage is introduced as a non-local field to overcome pathological mesh dependency. Since standard elements tend to exhibit volumetric locking in bulk forming processes, a mixed formulation is implemented in the commercial software simufact.forming to obtain better results.
DFG/SFB/TR 7
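The core idea of a non-local damage field can be illustrated with a minimal 1-D sketch (illustrative only, not the paper's formulation): the local damage value at each node is replaced by a weighted average over a neighborhood of radius R, so the width of the damaged zone is set by R rather than by the mesh spacing, removing the pathological mesh dependency.

```python
import math

def nonlocal_damage(local_d, xs, R):
    """Gaussian-weighted spatial average of local damage values.

    local_d : local damage value at each node
    xs      : node coordinates
    R       : internal length scale of the non-local averaging
    """
    out = []
    for xi in xs:
        wsum, dsum = 0.0, 0.0
        for xj, dj in zip(xs, local_d):
            w = math.exp(-((xi - xj) / R) ** 2)  # interaction weight
            wsum += w
            dsum += w * dj
        out.append(dsum / wsum)
    return out

# A sharp local damage spike is smeared over the length scale R,
# independently of how fine the discretization is.
xs = [i * 0.1 for i in range(21)]                         # nodes on [0, 2]
local = [1.0 if abs(x - 1.0) < 0.05 else 0.0 for x in xs]  # spike at x = 1
smoothed = nonlocal_damage(local, xs, R=0.3)
```

Refining the mesh (more nodes in `xs`) changes the resolution of `smoothed` but not the width of the damaged zone, which stays governed by `R`.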
FairGer: Using NLP to Measure Support for Women and Migrants in 155 Years of German Parliamentary Debates
We measure support for women and migrants in German political debates over the last 155 years. To do so, we (1) provide a gold standard of 1205 text snippets in context, annotated for support for our target groups, (2) train a BERT model on our annotated data, with which (3) we infer large-scale trends. These show that support for women is stronger than support for migrants, but both have steadily increased over time. While we hardly find any direct anti-support for women, there is more polarization when it comes to migrants. We also discuss the difficulty of annotation as a result of ambiguity in political discourse and of indirectness, i.e., politicians' tendency to report stances attributed to political opponents. Overall, our results indicate that German society, as measured from its political elite, has become fairer over time.
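Once each snippet carries a model-predicted label, the large-scale trend inference in step (3) amounts to per-period aggregation. The sketch below uses invented toy labels and years (not the paper's data or label set) to show the shape of that computation.

```python
from collections import defaultdict

def support_trend(predictions):
    """predictions: iterable of (year, label), label in {'support', 'anti', 'neutral'}.
    Returns {decade: share of 'support' labels in that decade}."""
    counts = defaultdict(lambda: [0, 0])  # decade -> [support count, total]
    for year, label in predictions:
        decade = (year // 10) * 10
        counts[decade][1] += 1
        if label == "support":
            counts[decade][0] += 1
    return {d: s / t for d, (s, t) in sorted(counts.items())}

# Hypothetical predictions, standing in for BERT output on debate snippets.
toy = [(1870, "neutral"), (1875, "support"), (1950, "support"),
       (1955, "support"), (1958, "anti"), (2010, "support"), (2012, "support")]
trend = support_trend(toy)
```

Plotting `trend` over decades is then a direct read-out of how support develops over time.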
Detecting Stance in Scientific Papers: Did we get more Negative Recently?
In this paper, we classify scientific articles in the domain of natural language processing (NLP) and machine learning (ML) into whether (i) they extend the current state-of-the-art by introducing novel techniques that beat existing models or whether (ii) they mainly criticize the existing state-of-the-art, i.e., claim that it is deficient with respect to some property (e.g., wrong evaluation, wrong datasets, misleading task specification). We refer to contributions under (i) as having a "positive stance" and contributions under (ii) as having a "negative stance" to related work. We annotate over 2k papers from NLP and ML to train a SciBERT-based model to automatically predict the stance of a paper based on its title and abstract. We then analyze large-scale trends on over 41k papers from the last ~35 years in NLP and ML, finding that papers have gotten substantially more positive over time, but negative papers have also gotten more negative, and we observe considerably more negative papers in recent years. Negative papers are also more influential in terms of the citations they receive.
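The final influence claim boils down to comparing average citation counts across stance groups. The sketch below uses invented stance labels and citation numbers (not the paper's data) to show that comparison.

```python
from statistics import mean

def mean_citations_by_stance(papers):
    """papers: list of (stance, citation_count).
    Returns {stance: mean citation count}."""
    by_stance = {}
    for stance, cites in papers:
        by_stance.setdefault(stance, []).append(cites)
    return {s: mean(c) for s, c in by_stance.items()}

# Hypothetical papers with model-predicted stances and citation counts.
toy = [("positive", 12), ("positive", 30), ("negative", 80),
       ("negative", 25), ("positive", 8)]
avg = mean_citations_by_stance(toy)
```

In this toy data the negative group averages more citations than the positive group, mirroring the direction of the paper's finding; the real analysis would run this over the ~41k classified papers.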