Estimation of failure probability in braced excavation using Bayesian networks with integrated model updating
A probabilistic model is proposed that uses observation data to estimate failure probabilities during excavations. The model integrates a Bayesian network and distance-based Bayesian model updating. In the network, the movement of a retaining wall is selected as the indicator of failure, and the observed ground surface settlement is used to update the soil parameters. The responses of wall deflection and ground surface settlement are accurately predicted using finite element analysis. An artificial neural network is employed to construct the response surface relationship from the aforementioned input factors. The proposed model effectively estimates the uncertainty of influential factors. A case study of a braced excavation is presented to demonstrate the feasibility of the proposed approach. The updated results yield accurate estimates of the target value, from which the corresponding probabilities of failure are obtained. The proposed model enables failure probabilities to be determined with real-time result updating.
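The distance-based updating scheme described above can be sketched in a few lines: candidate soil-parameter samples are weighted by how closely their predicted settlement matches the observation, and the failure probability follows as a weighted exceedance fraction. The surrogate functions and every number below are invented placeholders standing in for the paper's ANN response surfaces and case-study data.

```python
import numpy as np

# Hypothetical closed-form surrogates standing in for the paper's ANN
# response surfaces (which map soil parameters to FEM-predicted responses).
def predict_settlement(E):        # ground settlement (mm) vs. stiffness E (MPa)
    return 1200.0 / E

def predict_wall_deflection(E):   # wall deflection (mm), the failure indicator
    return 1500.0 / E

rng = np.random.default_rng(0)
E = rng.normal(30.0, 8.0, 20_000)            # prior samples of soil stiffness
E = E[E > 1.0]                               # keep physically plausible values

obs_settlement = 35.0                        # assumed field observation (mm)
sigma = 3.0                                  # assumed observation noise (mm)

# Distance-based weighting: samples whose predicted settlement lies close
# to the observation dominate the posterior.
dist = np.abs(predict_settlement(E) - obs_settlement)
w = np.exp(-0.5 * (dist / sigma) ** 2)
w /= w.sum()

# Failure probability: weighted fraction where the failure indicator
# (wall deflection) exceeds an assumed limiting value.
limit = 45.0                                 # assumed limiting deflection (mm)
pf = float(np.sum(w * (predict_wall_deflection(E) > limit)))
```

As new settlement observations arrive, recomputing the weights updates the failure probability in real time without rerunning the finite element model.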
TransVCL: Attention-enhanced Video Copy Localization Network with Flexible Supervision
Video copy localization aims to precisely localize all the copied segments
within a pair of untrimmed videos in video retrieval applications. Previous
methods typically start from a frame-to-frame similarity matrix generated by
cosine similarity between frame-level features of the input video pair, and
then detect and refine the boundaries of copied segments on the similarity matrix
under temporal constraints. In this paper, we propose TransVCL: an
attention-enhanced video copy localization network, which is optimized directly
from initial frame-level features and trained end-to-end with three main
components: a customized Transformer for feature enhancement, a correlation and
softmax layer for similarity matrix generation, and a temporal alignment module
for copied segment localization. In contrast to previous methods that rely on a
handcrafted similarity matrix, TransVCL incorporates long-range temporal
information between the feature sequence pair using self- and cross-attention
layers. With the joint design and optimization of three components, the
similarity matrix can be learned to present more discriminative copied
patterns, leading to significant improvements over previous methods on
segment-level labeled datasets (VCSL and VCDB). Besides the state-of-the-art
performance in the fully supervised setting, the attention architecture enables
TransVCL to further exploit unlabeled or simply video-level labeled data.
Additional experiments of supplementing video-level labeled datasets including
SVD and FIVR reveal the high flexibility of TransVCL from full supervision to
semi-supervision (with or without video-level annotation). Code is publicly
available at https://github.com/transvcl/TransVCL.
Comment: Accepted by the Thirty-Seventh AAAI Conference on Artificial Intelligence (AAAI 2023).
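The frame-to-frame similarity matrix that such methods (and TransVCL's correlation layer) operate on can be illustrated with a minimal numpy sketch; the feature dimension and the two "videos" below are synthetic stand-ins, not the paper's learned features.

```python
import numpy as np

def frame_similarity_matrix(feats_a, feats_b):
    """Frame-to-frame cosine similarity matrix between two videos.
    Copied segments appear as high-similarity diagonal streaks that the
    temporal alignment stage then localizes."""
    a = feats_a / np.linalg.norm(feats_a, axis=1, keepdims=True)
    b = feats_b / np.linalg.norm(feats_b, axis=1, keepdims=True)
    return a @ b.T

rng = np.random.default_rng(1)
shared = rng.normal(size=(5, 64))                         # a copied 5-frame segment
video_a = np.vstack([rng.normal(size=(3, 64)), shared])   # 8 frames total
video_b = np.vstack([shared, rng.normal(size=(4, 64))])   # 9 frames total
S = frame_similarity_matrix(video_a, video_b)
# The copied segment shows up as values near 1.0 along S[3 + i, i], i = 0..4.
```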
Bayesian updating of soil-water character curve parameters based on the monitor data of a large-scale landslide model experiment
It is important to determine the soil-water characteristic curve (SWCC) for analyzing landslide seepage under varying hydrodynamic conditions. However, the SWCC exhibits high uncertainty due to the variability inherent in soil. To address this, a Bayesian updating framework based on experimental data was developed to investigate the uncertainty of the SWCC parameters in this study. The objectives of this research were to quantify the uncertainty embedded within the SWCC and determine the critical factors affecting an unsaturated soil landslide under hydrodynamic conditions. For this purpose, a large-scale landslide experiment was conducted, and the monitored water content data were collected. Steady-state seepage analysis was carried out using the finite element method (FEM) to simulate the slope behavior during water level changes. In the proposed framework, the parameters of the SWCC model were treated as random variables, and parameter uncertainties were evaluated using the Bayesian approach based on the Markov chain Monte Carlo (MCMC) method. Observed data from the large-scale landslide experiment were used to calculate the posterior information of the SWCC parameters. Then, 95% confidence intervals for the model parameters of the SWCC were derived. The results show that the Bayesian updating method is feasible for use with monitoring data from large-scale landslide model experiments. Establishing an artificial neural network (ANN) surrogate model in the Bayesian updating process can greatly improve the efficiency of Bayesian model updating.
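The core of such a framework, treating SWCC parameters as random variables and sampling their posterior with MCMC, can be sketched with a random-walk Metropolis sampler and the van Genuchten SWCC model. All parameter values, priors, and the "monitored" data below are synthetic assumptions for illustration, not the experiment's.

```python
import numpy as np

def vg_water_content(psi, alpha, n, theta_r=0.05, theta_s=0.40):
    """van Genuchten SWCC: volumetric water content vs. matric suction psi."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * psi) ** n) ** m

rng = np.random.default_rng(2)
psi_obs = np.array([1.0, 5.0, 10.0, 50.0, 100.0])        # suction levels (kPa)
# Synthetic "monitored" water contents from assumed true alpha=0.08, n=1.6.
theta_obs = vg_water_content(psi_obs, 0.08, 1.6) + rng.normal(0, 0.005, 5)

def log_post(alpha, n, sigma=0.005):
    """Log-posterior: uniform prior bounds plus a Gaussian likelihood."""
    if not (0.001 < alpha < 1.0 and 1.05 < n < 4.0):
        return -np.inf
    r = theta_obs - vg_water_content(psi_obs, alpha, n)
    return -0.5 * np.sum((r / sigma) ** 2)

# Random-walk Metropolis: a minimal stand-in for the paper's MCMC scheme.
x = np.array([0.05, 1.4])                                # initial (alpha, n)
lp = log_post(*x)
chain = []
for _ in range(5000):
    cand = x + rng.normal(0.0, [0.005, 0.05])            # propose a step
    lp_cand = log_post(*cand)
    if np.log(rng.uniform()) < lp_cand - lp:             # accept/reject
        x, lp = cand, lp_cand
    chain.append(x)
chain = np.array(chain)[1000:]                           # discard burn-in
ci_alpha = np.percentile(chain[:, 0], [2.5, 97.5])       # 95% interval for alpha
```

In the paper's setting, the expensive FEM seepage model sits inside the likelihood; replacing it with an ANN surrogate is what makes the many thousands of MCMC evaluations affordable.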
Optimization or Bayesian strategy? Performance of the Bhattacharyya distance in different algorithms of stochastic model updating
The Bhattacharyya distance has been developed as a comprehensive uncertainty quantification metric by capturing multiple uncertainty sources from both numerical predictions and experimental measurements. This work pursues a further investigation of the performance of the Bhattacharyya distance in different methodologies for stochastic model updating, and thus aims to prove the universality of the Bhattacharyya distance in various currently popular updating procedures. The first procedure is Bayesian model updating, where the Bhattacharyya distance is utilized to define an approximate likelihood function and the transitional Markov chain Monte Carlo algorithm is employed to obtain the posterior distribution of the parameters. In the second updating procedure, the Bhattacharyya distance is utilized to construct the objective function of an optimization problem. The objective function is defined as the Bhattacharyya distance between the samples of the numerical prediction and the samples of the target data. The comparison study is performed on a four-degree-of-freedom mass-spring system. A challenging task is posed in this example by assigning different distributions to the parameters with imprecise distribution coefficients. This requires the stochastic updating procedure to calibrate not the parameters themselves, but their distribution properties. The second example employs the GARTEUR SM-AG19 benchmark structure to demonstrate the feasibility of the Bhattacharyya distance in the presence of practical experimental uncertainty arising from measuring techniques, equipment, and subjective randomness. The results demonstrate that the Bhattacharyya distance is a comprehensive and universal uncertainty quantification metric for stochastic model updating.
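For univariate Gaussian approximations of the two sample sets, the Bhattacharyya distance has a closed form, which makes its role as an updating metric easy to see. The paper's actual estimator is more general (and handles multiple outputs), so this is only an illustrative special case with synthetic samples.

```python
import numpy as np

def bhattacharyya_gaussian(x, y):
    """Bhattacharyya distance between two sample sets, each approximated
    by a univariate Gaussian (closed-form special case)."""
    m1, m2 = x.mean(), y.mean()
    v1, v2 = x.var(ddof=1), y.var(ddof=1)
    return (0.25 * (m1 - m2) ** 2 / (v1 + v2)
            + 0.5 * np.log((v1 + v2) / (2.0 * np.sqrt(v1 * v2))))

rng = np.random.default_rng(3)
target = rng.normal(1.0, 0.5, 2000)          # "measured" output samples
pred_poor = rng.normal(2.0, 1.0, 2000)       # uncalibrated model prediction
pred_good = rng.normal(1.0, 0.5, 2000)       # calibrated model prediction

d_poor = bhattacharyya_gaussian(target, pred_poor)
d_good = bhattacharyya_gaussian(target, pred_good)
# An optimizer minimizes this distance directly, while a Bayesian scheme
# embeds it in an approximate likelihood such as exp(-d / eps).
```

The distance captures mismatch in both mean and spread, which is exactly why it can calibrate distribution properties rather than only point values of the parameters.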
Video Infringement Detection via Feature Disentanglement and Mutual Information Maximization
The self-media era provides us with a tremendous volume of high-quality videos. Unfortunately,
frequent video copyright infringements are now seriously damaging the interests
and enthusiasm of video creators. Identifying infringing videos is therefore a
compelling task. Current state-of-the-art methods tend to simply feed
high-dimensional mixed video features into deep neural networks and count on
the networks to extract useful representations. Despite its simplicity, this
paradigm heavily relies on the original entangled features and lacks
constraints guaranteeing that useful task-relevant semantics are extracted from
the features.
In this paper, we seek to tackle the above challenges from two aspects: (1)
We propose to disentangle the original high-dimensional feature into multiple
exclusive, lower-dimensional sub-features. We expect the sub-features to encode
non-overlapping semantics of the original feature and to remove redundant
information.
(2) On top of the disentangled sub-features, we further learn an auxiliary
feature to enhance the sub-features. We theoretically analyze the mutual
information between the label and the disentangled features, arriving at a loss
that maximizes the extraction of task-relevant information from the original
feature.
Extensive experiments on two large-scale benchmark datasets (i.e., SVD and
VCSL) demonstrate that our method achieves 90.1% TOP-100 mAP on the large-scale
SVD dataset and also sets the new state-of-the-art on the VCSL benchmark
dataset. Our code and model have been released at
https://github.com/yyyooooo/DMI/, hoping to contribute to the community.
Comment: Accepted by ACM MM 2023.
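The disentangling step and the redundancy constraint can be caricatured in numpy: split a mixed feature into exclusive sub-features and penalize cross-correlation between them. This is a hand-rolled stand-in; the paper learns the decomposition and uses a mutual-information-based loss, not plain correlation.

```python
import numpy as np

def split_subfeatures(feats, k):
    """Split a (N, d) batch of mixed features into k exclusive
    lower-dimensional sub-features (here by plain slicing; the actual
    model learns the decomposition)."""
    return np.split(feats, k, axis=1)

def redundancy_penalty(subs):
    """Mean absolute cross-correlation between sub-feature blocks: a crude
    proxy for the semantic overlap the mutual-information constraint
    is meant to suppress."""
    total, pairs = 0.0, 0
    for i in range(len(subs)):
        for j in range(i + 1, len(subs)):
            c = np.corrcoef(subs[i].T, subs[j].T)   # joint correlation matrix
            d = subs[i].shape[1]
            total += np.abs(c[:d, d:]).mean()       # cross block only
            pairs += 1
    return total / pairs

rng = np.random.default_rng(4)
feats = rng.normal(size=(256, 12))       # batch of 12-dim entangled features
subs = split_subfeatures(feats, 3)       # three exclusive 4-dim sub-features
penalty = redundancy_penalty(subs)       # near zero for independent noise
```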
Learning Segment Similarity and Alignment in Large-Scale Content Based Video Retrieval
With the explosive growth of web videos in recent years, large-scale
Content-Based Video Retrieval (CBVR) becomes increasingly essential in video
filtering, recommendation, and copyright protection. Segment-level CBVR
(S-CBVR) locates the start and end time of similar segments in finer
granularity, which is beneficial for user browsing efficiency and infringement
detection especially in long video scenarios. The challenge of S-CBVR task is
how to achieve high temporal alignment accuracy with efficient computation and
low storage consumption. In this paper, we propose a Segment Similarity and
Alignment Network (SSAN) to address this challenge, which is the first to be
trained end-to-end for S-CBVR. SSAN is based on two newly proposed modules in video
retrieval: (1) An efficient Self-supervised Keyframe Extraction (SKE) module to
reduce redundant frame features, (2) A robust Similarity Pattern Detection
(SPD) module for temporal alignment. In comparison with uniform frame
extraction, SKE not only saves feature storage and search time but also
achieves comparable accuracy with only limited extra computation time. In terms of
temporal alignment, SPD localizes similar segments with higher accuracy and
efficiency than existing deep learning methods. Furthermore, we jointly train
SSAN with SKE and SPD and achieve an end-to-end improvement. Meanwhile, the two
key modules SKE and SPD can also be effectively inserted into other video
retrieval pipelines and gain considerable performance improvements.
Experimental results on public datasets show that SSAN can obtain higher
alignment accuracy while saving storage and online query computational cost
compared to existing methods.
Comment: Accepted by ACM MM 2021.
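The idea behind keyframe extraction, dropping near-duplicate consecutive frame features before similarity search, can be illustrated with a simple greedy threshold rule. This heuristic is only a stand-in for the learned SKE module; the threshold and the synthetic "shots" below are assumptions.

```python
import numpy as np

def extract_keyframes(feats, threshold=0.95):
    """Greedy keyframe selection: keep a frame only when its cosine
    similarity to the last kept keyframe drops below `threshold`, so runs
    of redundant frame features collapse to a single representative."""
    normed = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    kept = [0]
    for t in range(1, len(feats)):
        if normed[t] @ normed[kept[-1]] < threshold:
            kept.append(t)
    return kept

rng = np.random.default_rng(5)
shot_a = np.tile(rng.normal(size=64), (10, 1))   # 10 identical frames (shot 1)
shot_b = np.tile(rng.normal(size=64), (8, 1))    # 8 identical frames (shot 2)
video = np.vstack([shot_a, shot_b])
keyframes = extract_keyframes(video)             # one keyframe per static shot
```

Feeding only the kept frames into a similarity-matrix stage shrinks both storage and the matrix that the alignment module (SPD in the abstract) must scan.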
Heterochromatin protein 1α mediates development and aggressiveness of neuroendocrine prostate cancer
Neuroendocrine prostate cancer (NEPC) is a lethal subtype of prostate cancer (PCa) arising mostly from adenocarcinoma via NE transdifferentiation following androgen deprivation therapy. Mechanisms contributing to both NEPC development and its aggressiveness remain elusive. In light of the fact that hyperchromatic nuclei are a distinguishing histopathological feature of NEPC, we utilized transcriptomic analyses of our patient-derived xenograft (PDX) models, multiple clinical cohorts, and genetically engineered mouse models to identify 36 heterochromatin-related genes that are significantly enriched in NEPC. Longitudinal analysis using our unique, first-in-field PDX model of adenocarcinoma-to-NEPC transdifferentiation revealed that, among those 36 heterochromatin-related genes, heterochromatin protein 1α (HP1α) expression increased early and steadily during NEPC development and remained elevated in the developed NEPC tumor. Its elevated expression was further confirmed in multiple PDX and clinical NEPC samples. HP1α knockdown in the NCI-H660 NEPC cell line inhibited proliferation, ablated colony formation, and induced apoptotic cell death, ultimately leading to tumor growth arrest. Its ectopic expression significantly promoted NE transdifferentiation in adenocarcinoma cells subjected to androgen deprivation treatment. Mechanistically, HP1α reduced expression of androgen receptor (AR) and RE1 silencing transcription factor (REST) and enriched the repressive trimethylated histone H3 at Lys9 (H3K9me3) mark on their respective gene promoters. These observations indicate a novel mechanism underlying NEPC development mediated by abnormally expressed heterochromatin genes, with HP1α as an early functional mediator and a potential therapeutic target for NEPC prevention and management.