Stochastic modelling and numerical simulation of fatigue damage
In continuum damage mechanics, fatigue is a phenomenon associated with a continuous reduction of material stiffness. Numerically, it can be simulated as a damage accumulation process. Since the resistance of concrete drops drastically after the initiation of macroscopic cracks, the fatigue life can be approximated with damage models as the number of cycles at which material continuity vanishes. The scatter of fatigue data reflects material heterogeneity and uncertain external influences, and it can be reproduced by treating the damage evolution as a stochastic process.
Inspired by the application of stochastic processes in molecular physics, the deterministic damage evolution rate of the Lemaitre model is recast as a stochastic differential equation that characterises the random damage increment. The implicit Euler scheme combined with Monte-Carlo simulation is demonstrated to be a practical approach for solving the stochastic integration problem. The stochastic damage model is designed carefully to obey the thermodynamic principles and the deterministic damage law. Particular attention is paid to determining suitable random distributions that avoid negative damage increments in individual realisations while keeping the mean statistically unbiased. To adequately approximate high-cycle fatigue damage with random noise, "jumping-cycle" algorithms with different extrapolation strategies are investigated.
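The following minimal sketch illustrates the kind of scheme described above: a simplified Lemaitre-type damage rate, a unit-mean Weibull noise on each increment, a "jumping-cycle" block update, and Monte-Carlo repetition. It uses a simple explicit update for brevity rather than the implicit Euler scheme of the thesis, and all constants (C, S, k, m, block size) are illustrative assumptions, not the thesis parameters.

```python
# Illustrative sketch only; not the thesis implementation.
import math
import numpy as np

rng = np.random.default_rng(0)

def damage_rate(D, Y, C=1e-4, S=1.0, k=2.0, m=1.0):
    """Deterministic per-cycle damage rate of a simplified Lemaitre-type law."""
    return C * (Y / S) ** k / (1.0 - D) ** m

def cycles_to_failure(Y=0.8, block=100, shape=1.5, D_crit=0.99, N_max=10**7):
    """One Monte-Carlo realisation: cycles until the damage variable reaches D_crit."""
    D, N = 0.0, 0
    mean_weibull = math.gamma(1.0 + 1.0 / shape)        # rescale noise to unit mean
    while D < D_crit and N < N_max:
        xi = rng.weibull(shape) / mean_weibull           # non-negative, unit-mean noise
        D = min(D + damage_rate(D, Y) * xi * block, D_crit)  # "jumping-cycle" block update
        N += block
    return N

lives = np.array([cycles_to_failure() for _ in range(200)])
print(f"mean life: {lives.mean():.0f} cycles, std: {lives.std():.0f}")
```

Because the Weibull factor is non-negative by construction, no individual realisation produces a negative damage increment, while the unit-mean rescaling keeps the ensemble average close to the deterministic law.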
This damage model is further implemented in a finite element simulation of four-point flexural fatigue of a concrete beam. The numerically reproduced fatigue data closely fit the published experimental results and the empirical solution, in both mean and standard deviation. Compared with Gaussian white noise, the Weibull random variable shows broad applicability for simulating random fatigue damage and other physical processes.

To describe the scatter of measured data in material fatigue, a phenomenological modelling approach based on random processes is presented. The approach is tested on a concrete beam with a plane finite element discretisation, with the stochastic fatigue equations solved by the Monte Carlo method. The simulated fatigue processes of the quasi-brittle material under bending are compared with experimental data and established empirical equations. To handle high-cycle loading, a "jumping-cycle" algorithm is applied, which strongly reduces computation times. This modelling approach enables the simulation of fatigue processes with probabilistic information over very long time intervals. In continuum modelling, material fatigue is accompanied by a degradation of material integrity, reflected, for example, in a decrease of the elastic modulus. Numerically, this is modelled as a cumulative damage process. Because the resistance of concrete drops drastically after the formation of macroscopic cracks, the fatigue life under cyclic loading can be estimated very well by a damage model once the onset of macroscopic cracking is predicted. The scatter in experimentally determined fatigue curves can be attributed to the micro-heterogeneity of the material and to uncertainties in further external factors; it can be reproduced well by modelling the damage evolution with stochastic process equations. Following the description of stochastic processes in theoretical physics, the evolution equations for the damage development of the Lemaitre model are formulated as stochastic differential equations, which are solved efficiently using an implicit Euler scheme and Monte-Carlo methods. To ensure thermodynamic consistency, in particular to avoid negative damage increments, and to obtain unbiased statistical means, classical Gaussian processes are replaced by Weibull distributions. For high-cycle loading, "jumping-cycle" algorithms are investigated systematically with respect to their extrapolation strategies. The fatigue behaviour of a concrete beam under bending is simulated and compared with experimental results from the literature and with empirical formulas. The proposed modelling approach shows good agreement of means and standard deviations with the published findings. Although the Weibull statistics used here may not be consistent in a strict mathematical sense, they have proven physically consistent and efficient for describing scattered fatigue damage.
DiffKendall: A Novel Approach for Few-Shot Learning with Differentiable Kendall's Rank Correlation
Few-shot learning aims to adapt models trained on a base dataset to novel
tasks whose categories have not been seen by the model before. This often leads
to a relatively uniform distribution of feature values across channels on novel
classes, posing challenges in determining channel importance for novel tasks.
Standard few-shot learning methods employ geometric similarity metrics such as
cosine similarity and negative Euclidean distance to gauge the semantic
relatedness between two features. However, features with high geometric
similarities may carry distinct semantics, especially in the context of
few-shot learning. In this paper, we demonstrate that the importance ranking of
feature channels is a more reliable indicator for few-shot learning than
geometric similarity metrics. We observe that replacing the geometric
similarity metric with Kendall's rank correlation only during inference is able
to improve the performance of few-shot learning across a wide range of datasets
with different domains. Furthermore, we propose a carefully designed
differentiable loss for meta-training to address the non-differentiability
issue of Kendall's rank correlation. Extensive experiments demonstrate that the
proposed rank-correlation-based approach substantially enhances few-shot
learning performance.
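A minimal sketch of the core idea, a differentiable surrogate for Kendall's rank correlation between two feature vectors, is given below; the tanh relaxation and the temperature value are illustrative assumptions rather than the paper's exact formulation.

```python
# Illustrative sketch of a differentiable Kendall-style rank correlation.
import torch

def soft_kendall(x: torch.Tensor, y: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """Differentiable Kendall correlation over the channel dimension of two 1-D features."""
    dx = x.unsqueeze(-1) - x.unsqueeze(-2)            # pairwise channel differences, (C, C)
    dy = y.unsqueeze(-1) - y.unsqueeze(-2)
    # Replace the non-differentiable sign() with a smooth tanh relaxation.
    concordance = torch.tanh(dx / tau) * torch.tanh(dy / tau)
    n = x.shape[-1]
    return (concordance.sum() - concordance.diagonal().sum()) / (n * (n - 1))

query, proto = torch.randn(64, requires_grad=True), torch.randn(64)
score = soft_kendall(query, proto)
score.backward()   # gradients flow, so the metric can be used during meta-training
```

As the temperature shrinks, the tanh factors approach the sign of each pairwise difference, recovering the usual (non-differentiable) Kendall statistic.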
Decoding trust: A reinforcement learning perspective
Behavioral experiments on the trust game have shown that trust and
trustworthiness are universal among human beings, contradicting the prediction
of orthodox economics under the assumption of Homo economicus. This means some
mechanism must be at work that favors their emergence. Most previous
explanations, however, resort to factors based on imitative
learning, a simple form of social learning. Here, we turn to the paradigm of
reinforcement learning, where individuals update their strategies by evaluating
the long-term return through accumulated experience. Specifically, we
investigate the trust game with the Q-learning algorithm, where each
participant is associated with two evolving Q-tables that guide one's decision
making as trustor and trustee respectively. In the pairwise scenario, we reveal
that high levels of trust and trustworthiness emerge when individuals
value both their historical experience and future returns.
Mechanistically, the evolution of the Q-tables shows a crossover that resembles
the psychological changes observed in humans. We also provide the phase diagram for the game
parameters, together with an analysis of its boundaries. These findings are robust
when the scenario is extended to a latticed population. Our results thus
provide a natural explanation for the emergence of trust and trustworthiness
without external factors involved. More importantly, the proposed paradigm
shows potential for deciphering many puzzles in human behavior.
Comment: 12 pages, 11 figures. Comments are appreciated.
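The sketch below illustrates the kind of setup described above: tabular Q-learning with one Q-table per role in a binary trust game. The payoff values, the two-action discretisation, and the hyper-parameters (alpha, gamma, epsilon) are illustrative assumptions, not the paper's exact configuration.

```python
# Illustrative sketch of Q-learning in a binary trust game (not the paper's code).
import numpy as np

rng = np.random.default_rng(1)
alpha, gamma, eps = 0.1, 0.9, 0.05           # learning rate, discount factor, exploration

# One Q-table per role, indexed by (previous own action, action).
Q_trustor = np.zeros((2, 2))
Q_trustee = np.zeros((2, 2))
s_tor, s_tee = 0, 0                          # states: each player's previous action

def choose(Q, s):
    return int(rng.integers(2)) if rng.random() < eps else int(Q[s].argmax())

for _ in range(50_000):
    a_tor = choose(Q_trustor, s_tor)         # trustor: keep (0) or invest (1)
    a_tee = choose(Q_trustee, s_tee)         # trustee: keep (0) or return (1)
    # Simple trust-game payoffs: the investment is tripled; the trustee decides whether to share.
    r_tor = 1.0 if a_tor == 0 else (1.5 if a_tee == 1 else 0.0)
    r_tee = 0.0 if a_tor == 0 else (1.5 if a_tee == 1 else 3.0)
    # Q-learning updates; the next state is the action just taken.
    Q_trustor[s_tor, a_tor] += alpha * (r_tor + gamma * Q_trustor[a_tor].max() - Q_trustor[s_tor, a_tor])
    Q_trustee[s_tee, a_tee] += alpha * (r_tee + gamma * Q_trustee[a_tee].max() - Q_trustee[s_tee, a_tee])
    s_tor, s_tee = a_tor, a_tee

print(Q_trustor.round(2), Q_trustee.round(2), sep="\n")
```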
Learning with Noisily-labeled Class-imbalanced Data
Real-world large-scale datasets are often both noisily labeled and
class-imbalanced. These issues seriously hurt the generalization of trained
models. It is therefore important to address incorrect labeling and class
imbalance simultaneously, i.e., the problem of learning with noisy labels on
long-tailed data. Previous works have developed several methods for this problem.
However, they rely on strong assumptions that are invalid or hard to
check in practice. In this paper, to handle the problem and address the
limitations of prior works, we propose a representation calibration method
RCAL. Specifically, RCAL works with the representations extracted by
unsupervised contrastive learning. We assume that without incorrect labeling
and class imbalance, the representations of instances in each class conform to
a multivariate Gaussian distribution, which is a much milder assumption and easier to
check. Based on this assumption, we recover the underlying representation
distributions from polluted ones resulting from mislabeled and class-imbalanced
data. Additional data points are then sampled from the recovered distributions
to help generalization. Moreover, during classifier training, representation
learning takes advantage of representation robustness brought by contrastive
learning, which further improves the classifier performance. Experiments on
multiple benchmarks justify our claims and confirm the superiority of the
proposed method.
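A rough sketch of the recover-and-resample idea described above is shown below: fit a per-class multivariate Gaussian to the contrastive representations, shrink each class covariance toward the pooled covariance, and draw extra samples. The shrinkage weight and sample counts are illustrative assumptions, not RCAL's actual calibration procedure.

```python
# Illustrative sketch of per-class Gaussian recovery and resampling.
import numpy as np

def calibrate_and_sample(features, labels, n_extra=100, shrink=0.5, rng=None):
    """Fit a Gaussian per class, blend its covariance with the pooled one, and resample."""
    if rng is None:
        rng = np.random.default_rng(0)
    pooled_cov = np.cov(features, rowvar=False)
    synth_x, synth_y = [], []
    for c in np.unique(labels):
        x_c = features[labels == c]                      # assumes >= 2 samples per class
        mu = x_c.mean(axis=0)
        # Blend the (noisy, few-sample) class covariance with the pooled covariance.
        cov = shrink * np.cov(x_c, rowvar=False) + (1 - shrink) * pooled_cov
        synth_x.append(rng.multivariate_normal(mu, cov, size=n_extra))
        synth_y.append(np.full(n_extra, c))
    return np.vstack(synth_x), np.concatenate(synth_y)

# Toy usage with random stand-in features of dimension 16 and 5 classes.
feats = np.random.randn(500, 16)
labs = np.random.randint(0, 5, size=500)
x_new, y_new = calibrate_and_sample(feats, labs)
print(x_new.shape, y_new.shape)   # (500, 16) (500,)
```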
AutoEval-Video: An Automatic Benchmark for Assessing Large Vision Language Models in Open-Ended Video Question Answering
We propose a novel and challenging benchmark, AutoEval-Video, to
comprehensively evaluate large vision-language models in open-ended video
question answering. The comprehensiveness of AutoEval-Video is demonstrated in
two aspects: 1) AutoEval-Video constructs open-ended video-questions across 9
skill dimensions, addressing capabilities of perception, comprehension, and
generation. 2) AutoEval-Video contains newly collected videos that cover over
40 distinct themes. To efficiently evaluate responses to the open-ended
questions, we employ an LLM-based evaluation approach, but instead of merely
providing a reference answer, we annotate unique evaluation rules for every
single instance (video-question pair). To maximize the robustness of these
rules, we develop a novel adversarial annotation mechanism. Using
instance-specific rules as the prompt, GPT-4 as an automatic evaluator
achieves a stable evaluation accuracy of around 97.0%, comparable to the 94.9%
- 97.5% accuracy of a human evaluator. Furthermore, we assess the performance
of eight large vision-language models on AutoEval-Video. Among them,
GPT-4V(ision) significantly outperforms other models, achieving an accuracy of
32.2%. However, there is still substantial room for improvement compared to the
human accuracy of 72.8%. Through an extensive case study, we uncover
several drawbacks of GPT-4V, such as limited temporal and dynamic
comprehension and overly general responses. Code is available at
https://github.com/Xiuyuan-Chen/AutoEval-Video.
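The snippet below sketches what rule-based LLM evaluation of this kind can look like; the prompt wording, the rule text, and the exact judgement format are assumptions for illustration, not the benchmark's actual prompts.

```python
# Illustrative sketch of instance-specific rule-based LLM grading.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def judge(question: str, rules: str, model_answer: str) -> str:
    """Ask an LLM to grade one answer against instance-specific evaluation rules."""
    prompt = (
        "You are grading an answer to an open-ended video question.\n"
        f"Question: {question}\n"
        f"Instance-specific evaluation rules: {rules}\n"
        f"Model answer: {model_answer}\n"
        "Reply with exactly 'correct' or 'incorrect'."
    )
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return resp.choices[0].message.content.strip().lower()
```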
FILM: How can Few-Shot Image Classification Benefit from Pre-Trained Language Models?
Few-shot learning aims to train models that generalize to novel
classes with only a few samples. Recently, a line of works has been proposed to
enhance few-shot learning with accessible semantic information from class
names. However, these works focus on improving existing modules of the standard
few-shot learning framework, such as visual prototypes and feature extractors.
This limits the full potential of the semantic information. In this paper, we
propose a novel few-shot learning framework that uses pre-trained language
models based on contrastive learning. To address the challenge of alignment
between visual features and textual embeddings obtained from the text-based
pre-trained language model, we carefully design the textual branch of our
framework and introduce a metric module to generalize the cosine similarity.
For better transferability, we let the metric module adapt to different
few-shot tasks and adopt MAML to train the model via bi-level optimization.
Moreover, we conduct extensive experiments on multiple benchmarks to
demonstrate the effectiveness of our method.
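One way to realise a metric module that generalizes cosine similarity is a learnable bilinear form between visual features and text embeddings, as sketched below; the bilinear parameterisation and its initialisation are illustrative assumptions rather than the paper's exact design.

```python
# Illustrative sketch of a learnable generalisation of cosine similarity.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GeneralizedCosine(nn.Module):
    def __init__(self, vis_dim: int, txt_dim: int):
        super().__init__()
        # Learnable bilinear map; an identity initialisation recovers plain cosine
        # similarity when the two dimensions match.
        self.W = nn.Parameter(torch.eye(vis_dim, txt_dim))

    def forward(self, v: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        v = F.normalize(v, dim=-1)              # (B, vis_dim) visual features
        t = F.normalize(t, dim=-1)              # (C, txt_dim) class text embeddings
        return v @ self.W @ t.T                 # (B, C) similarity logits

metric = GeneralizedCosine(512, 512)
logits = metric(torch.randn(8, 512), torch.randn(5, 512))
print(logits.shape)   # torch.Size([8, 5])
```

Because the module is differentiable in W, it can be adapted per few-shot task and trained with MAML-style bi-level optimization.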
Information Flow in Self-Supervised Learning
In this paper, we provide a comprehensive toolbox for understanding and
enhancing self-supervised learning (SSL) methods through the lens of matrix
information theory. Specifically, by leveraging the principles of matrix mutual
information and joint entropy, we offer a unified analysis for both contrastive
and feature decorrelation based methods. Furthermore, we propose the matrix
variational masked auto-encoder (M-MAE) method, grounded in matrix information
theory, as an enhancement to masked image modeling. The empirical evaluations
underscore the effectiveness of M-MAE compared with the state-of-the-art
methods, including a 3.9% improvement when linear probing ViT-Base and a 1%
improvement when fine-tuning ViT-Large, both on ImageNet.
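A minimal sketch of matrix-entropy-style quantities of the kind referred to above is given below: the entropy of a trace-normalised Gram matrix of embeddings, and a joint-entropy-based mutual-information estimate between two views. The exact estimators used by the paper may differ; this is an illustrative formulation.

```python
# Illustrative sketch of matrix entropy and matrix mutual information over embeddings.
import torch
import torch.nn.functional as F

def matrix_entropy(z: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Entropy of the trace-normalised Gram matrix of L2-normalised embeddings z of shape (N, d)."""
    z = F.normalize(z, dim=-1)
    k = z @ z.T
    k = k / k.trace()                                  # eigenvalues sum to 1, like a spectrum of probabilities
    lam = torch.linalg.eigvalsh(k).clamp(min=eps)
    return -(lam * lam.log()).sum()

def matrix_mutual_information(z1: torch.Tensor, z2: torch.Tensor) -> torch.Tensor:
    """I(z1; z2) = H(z1) + H(z2) - H(z1, z2), with the joint taken over concatenated features."""
    joint = torch.cat([z1, z2], dim=-1)
    return matrix_entropy(z1) + matrix_entropy(z2) - matrix_entropy(joint)

z_a, z_b = torch.randn(256, 128), torch.randn(256, 128)
print(matrix_entropy(z_a).item(), matrix_mutual_information(z_a, z_b).item())
```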
- …