Shape optimization for elasto-plastic deformation under shakedown conditions
Abstract: An integrated approach is formulated for all necessary variations within direct analysis, variational design sensitivity analysis, and shakedown analysis based on Melan's static shakedown theorem for linear unlimited kinematic hardening material behavior. Using an adequate formulation of the optimization problem of shakedown analysis, the necessary variations of residuals, objectives, and constraints can be derived easily. Subsequent discretizations with respect to displacements and geometry, e.g. using isoparametric finite elements, yield the well-known 'tangent stiffness matrix' and 'tangent sensitivity matrix', as well as the corresponding matrices for the variation of the Lagrangian functional, which are discussed in detail. Remarks on the computer implementation and numerical examples show the efficiency of the proposed formulation. Important effects of shakedown conditions in shape optimization with elasto-plastic deformations are highlighted in a comparison with elastic and elasto-plastic material behavior, and it is concluded that shakedown conditions are necessary when optimizing structures with elasto-plastic deformations.
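For context, Melan's static shakedown theorem, on which the analysis above rests, can be stated in its standard textbook form (a general statement, not taken verbatim from the paper; the symbols below are the conventional ones):

```latex
% Melan's static shakedown theorem (standard form): the structure shakes
% down under a given load domain if there exist a safety factor
% \alpha > 1 and a time-independent, self-equilibrated residual stress
% field \bar{\rho} such that the yield condition f \le 0 holds for the
% superposition of the purely elastic stress response \sigma^{E} and
% \bar{\rho} at every point x and for all loads in the domain:
\exists\, \alpha > 1,\ \exists\, \bar{\rho}(x):\qquad
f\bigl(\alpha\,\sigma^{E}(x,t) + \bar{\rho}(x)\bigr) \le 0
\qquad \forall\, x \in \Omega,\ \forall\, t
```

The optimization problem of shakedown analysis mentioned in the abstract typically maximizes such a factor α subject to this constraint.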
Pushing on Personality Detection from Verbal Behavior: A Transformer Meets Text Contours of Psycholinguistic Features
Research at the intersection of personality psychology, computer science, and
linguistics has recently focused increasingly on modeling and predicting
personality from language use. We report two major improvements in predicting
personality traits from text data: (1) to our knowledge, the most comprehensive
set of theory-based psycholinguistic features and (2) hybrid models that
integrate a pre-trained Transformer language model (BERT) and Bidirectional Long
Short-Term Memory (BLSTM) networks trained on within-text distributions ('text
contours') of psycholinguistic features. We experiment with BLSTM models (with
and without Attention) and with two techniques for applying pre-trained
language representations from the transformer model - 'feature-based' and
'fine-tuning'. We evaluate the performance of the models we built on two
benchmark datasets that target the two dominant theoretical models of
personality: the Big Five Essay dataset and the MBTI Kaggle dataset. Our
results are encouraging as our models outperform existing work on the same
datasets. More specifically, our models achieve improvement in classification
accuracy by 2.9% on the Essay dataset and 8.28% on the Kaggle MBTI dataset. In
addition, we perform ablation experiments to quantify the impact of different
categories of psycholinguistic features in the respective personality
prediction models.
Comment: accepted at WASSA 202
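To make the notion of a 'text contour' concrete: it is the within-text sequence of psycholinguistic feature values, one vector per sentence, which a BLSTM can then consume. The sketch below is a minimal illustration under assumed features (mean word length and type-token ratio, chosen as illustrative stand-ins, not the paper's actual feature set):

```python
# Hypothetical sketch of computing a 'text contour': the sequence of
# per-sentence psycholinguistic feature vectors for one text. The two
# features here (mean word length, type-token ratio) are illustrative
# placeholders for the paper's much larger theory-based feature set.
import re

def sentence_features(sentence: str) -> list[float]:
    """Return a small feature vector for one sentence."""
    words = re.findall(r"[A-Za-z']+", sentence.lower())
    if not words:
        return [0.0, 0.0]
    mean_word_len = sum(len(w) for w in words) / len(words)
    type_token_ratio = len(set(words)) / len(words)
    return [mean_word_len, type_token_ratio]

def text_contour(text: str) -> list[list[float]]:
    """Split a text into sentences and map each to its feature vector,
    yielding the sequence ('contour') a recurrent model would consume."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return [sentence_features(s) for s in sentences]

contour = text_contour("I love long walks. Walks are nice!")
print(len(contour))  # one feature vector per sentence
```

In the hybrid models described above, such a sequence would be fed to the BLSTM branch, while BERT representations (feature-based or fine-tuned) provide the other input stream.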