A Novel Deep Learning Framework for Internal Gross Target Volume Definition from 4D Computed Tomography of Lung Cancer Patients
In this paper, we study the reliability of a novel deep learning framework for internal gross target volume (IGTV) delineation from four-dimensional computed tomography (4DCT), applied to patients with lung cancer treated by Stereotactic Body Radiation Therapy (SBRT). 77 patients who underwent SBRT followed by 4DCT scans were included in a retrospective study. The IGTV_DL was delineated using a novel deep learning algorithm with a linear exhaustive optimal combination framework; for comparison, three other IGTVs based on common methods were also delineated. We compared the relative volume difference (RVI), matching index (MI), and encompassment index (EI) for the above IGTVs. Multiple-parameter regression analysis then assessed tumor volume and motion range as clinical factors influencing MI variation. Experimental results demonstrated that the deep learning algorithm with the linear exhaustive optimal combination framework has a higher probability of achieving an optimal MI than other currently widely used methods. For patients who, after simple breathing training, kept a respiratory frequency of 10 BPM, the four-phase combination of 0%, 30%, 50%, and 90% can be considered a potential candidate for the optimal combination to synthesize the IGTV across all respiration amplitudes.
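The "linear exhaustive optimal combination" idea can be sketched as enumerating candidate subsets of the ten 4DCT respiratory phases and scoring the union of their tumor masks against a reference volume. The following is a minimal illustrative sketch, not the paper's algorithm; the Dice-style scoring, toy 1-D masks, and all names are assumptions.

```python
import numpy as np
from itertools import combinations

# Hypothetical sketch: given binary tumor masks for the ten 4DCT phases,
# exhaustively pick the k-phase subset whose union best matches a
# reference IGTV, scored by a Dice-style matching index.

def matching_index(candidate: np.ndarray, reference: np.ndarray) -> float:
    """Dice-style overlap between two binary masks."""
    inter = np.logical_and(candidate, reference).sum()
    return 2.0 * inter / (candidate.sum() + reference.sum())

def best_phase_combination(masks: dict, reference: np.ndarray, k: int = 4):
    """Exhaustively score every k-phase union against the reference."""
    best_score, best_combo = -1.0, None
    for combo in combinations(sorted(masks), k):
        union = np.logical_or.reduce([masks[p] for p in combo])
        score = matching_index(union, reference)
        if score > best_score:
            best_score, best_combo = score, combo
    return best_combo, best_score

# Toy 1-D "masks" standing in for 3-D segmentation volumes.
rng = np.random.default_rng(0)
phases = {f"{p}%": rng.random(200) > 0.6 for p in range(0, 100, 10)}
reference = np.logical_or.reduce(list(phases.values()))
combo, score = best_phase_combination(phases, reference, k=4)
```

With ten phases and four-phase subsets this is only C(10, 4) = 210 evaluations, so an exhaustive search is cheap even on full 3-D masks.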
Sign-changing solutions for a Schrödinger-Kirchhoff-Poisson system with 4-sublinear growth nonlinearity
In this paper we consider the following Schrödinger–Kirchhoff–Poisson-type system, where Ω is a bounded smooth domain of R^3, a > 0 and b ≥ 0 are constants, and λ is a positive parameter. Under suitable conditions on Q(x), and combining the method of invariant sets of descending flow, we establish the existence and multiplicity of sign-changing solutions to this problem for the case 2 < p < 4 with λ sufficiently small. Furthermore, for λ = 1 and the same assumptions on Q(x), we obtain the same conclusions for 2 < p < 12/5.
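The displayed system did not survive extraction in this listing. For orientation only, a standard Schrödinger–Kirchhoff–Poisson form consistent with the stated parameters a, b, λ, Q(x), and p (an assumption, not the paper's exact statement) reads:

```latex
\begin{cases}
-\left(a + b\displaystyle\int_{\Omega}|\nabla u|^{2}\,dx\right)\Delta u
  + \lambda\phi u = Q(x)|u|^{p-2}u, & x\in\Omega,\\[2pt]
-\Delta\phi = u^{2}, & x\in\Omega,\\[2pt]
u = \phi = 0, & x\in\partial\Omega.
\end{cases}
```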
CNN2GNN: How to Bridge CNN with GNN
Although the convolutional neural network (CNN) has achieved excellent
performance in vision tasks by extracting the intra-sample representation, it
incurs a high training expense because of its many stacked convolutional
layers. Recently, as bilinear models, graph neural networks (GNNs) have
succeeded in exploring the underlying topological relationships among graph
data with only a few graph neural layers. Unfortunately, a GNN cannot be
directly applied to non-graph data due to the lack of graph structure, and it
suffers high inference latency in large-scale scenarios. Inspired by these
complementary strengths and weaknesses, \textit{we discuss a natural question:
how can we bridge these two heterogeneous networks?} In this paper, we propose
a novel CNN2GNN framework to unify CNN and GNN via distillation. Firstly, to
break the limitations of GNN, a differentiable sparse graph learning module is
designed as the head of the network to dynamically learn the graph for
inductive learning. Then, response-based distillation is introduced to
transfer knowledge from the CNN to the GNN and bridge the two heterogeneous
networks. Notably, because it simultaneously extracts the intra-sample
representation of a single instance and the topological relationships across
the dataset, the distilled ``boosted'' two-layer GNN achieves much higher
performance on Mini-ImageNet than a CNN with dozens of layers, such as
ResNet152.
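Response-based distillation, the bridge between the two networks above, can be sketched generically as matching the student's temperature-softened class probabilities to the teacher's via a KL-divergence loss. This is a textbook sketch, not the paper's exact objective; the temperature value and toy logits are assumptions.

```python
import numpy as np

# Generic response-based knowledge distillation: a student (e.g. a
# two-layer GNN) is trained to match a teacher's (e.g. a CNN's)
# softened class probabilities via KL divergence.

def softmax(z: np.ndarray, T: float = 1.0) -> np.ndarray:
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T: float = 4.0) -> float:
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 as is conventional so gradient magnitudes are preserved."""
    p = softmax(teacher_logits, T)           # soft targets from the teacher
    q = softmax(student_logits, T)           # student predictions
    kl = np.sum(p * (np.log(p) - np.log(q)), axis=-1)
    return float(T * T * kl.mean())

teacher = np.array([[2.0, 0.5, -1.0]])
student_matched = teacher.copy()             # identical logits -> zero loss
student_off = np.array([[0.0, 2.0, 0.0]])

loss_zero = distillation_loss(student_matched, teacher)
loss_off = distillation_loss(student_off, teacher)
```

A matched student yields zero loss, while a mismatched one is penalized in proportion to how far its softened distribution is from the teacher's.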
Particle-based Variational Inference with Generalized Wasserstein Gradient Flow
Particle-based variational inference methods (ParVIs) such as Stein
variational gradient descent (SVGD) update the particles based on the
kernelized Wasserstein gradient flow for the Kullback-Leibler (KL) divergence.
However, the design of kernels is often non-trivial and can be restrictive for
the flexibility of the method. Recent works show that functional gradient flow
approximations with quadratic form regularization terms can improve
performance. In this paper, we propose a ParVI framework, called generalized
Wasserstein gradient descent (GWG), based on a generalized Wasserstein gradient
flow of the KL divergence, which can be viewed as a functional gradient method
with a broader class of regularizers induced by convex functions. We show that
GWG exhibits strong convergence guarantees. We also provide an adaptive version
that automatically chooses the Wasserstein metric to accelerate convergence. In
experiments, we demonstrate the effectiveness and efficiency of the proposed
framework on both simulated and real-data problems.
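As a concrete instance of the kernelized Wasserstein gradient flow that GWG generalizes, the baseline SVGD update on a toy target can be sketched as follows. This illustrates the SVGD baseline, not the paper's GWG method; the RBF bandwidth, step size, particle count, and 1-D Gaussian target are all illustrative choices.

```python
import numpy as np

# Stein variational gradient descent (SVGD) on a 1-D standard normal
# target: each particle moves along the kernelized Wasserstein gradient
# of the KL divergence, with an attraction term (score) and a repulsion
# term (kernel gradient) that keeps particles spread out.

def svgd_step(x: np.ndarray, score, step: float = 0.1, h: float = 1.0):
    """One SVGD update with an RBF kernel of bandwidth h."""
    diff = x[:, None] - x[None, :]              # diff[i, j] = x_i - x_j
    k = np.exp(-diff**2 / (2 * h**2))           # kernel matrix k(x_j, x_i)
    repulsion = (diff / h**2 * k).sum(axis=1)   # sum_j grad_{x_j} k(x_j, x_i)
    phi = (k @ score(x) + repulsion) / len(x)
    return x + step * phi

score = lambda x: -x                            # grad log density of N(0, 1)
x = np.linspace(-6.0, 6.0, 50) + 3.0            # particles start off-center
for _ in range(2000):
    x = svgd_step(x, score)
```

After enough iterations the particle cloud drifts toward the target's mean while the repulsion term prevents collapse to a single mode.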
Improving Biomedical Entity Linking with Retrieval-enhanced Learning
Biomedical entity linking (BioEL) has achieved remarkable progress with the
help of pre-trained language models. However, existing BioEL methods usually
struggle to handle rare and difficult entities due to the long-tailed entity distribution.
To address this limitation, we introduce a new scheme NN-BioEL, which
provides a BioEL model with the ability to reference similar instances from the
entire training corpus as clues for prediction, thus improving the
generalization capabilities. Moreover, we design a contrastive learning
objective with dynamic hard negative sampling (DHNS) that improves the quality
of the retrieved neighbors during inference. Extensive experimental results
show that NN-BioEL outperforms state-of-the-art baselines on several
datasets.

Comment: Accepted by ICASSP 202
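The retrieval-enhanced idea above, referencing similar training instances as clues for prediction, can be sketched with a plain embedding-space nearest-neighbor lookup whose neighbor labels vote on the entity. The embeddings, labels, similarity weighting, and all names below are illustrative assumptions, not the paper's model.

```python
import numpy as np

# Sketch of kNN-augmented entity linking: a mention embedding retrieves
# its nearest neighbors from the training corpus, and their entity labels
# vote, weighted by cosine similarity. Toy stand-ins for encoder outputs.

def knn_entity_votes(query, train_emb, train_labels, n_entities, k=3):
    """Distribution over entities from the k nearest training mentions."""
    sims = train_emb @ query / (
        np.linalg.norm(train_emb, axis=1) * np.linalg.norm(query) + 1e-12)
    top = np.argsort(-sims)[:k]                  # indices of k nearest mentions
    votes = np.zeros(n_entities)
    for i in top:
        votes[train_labels[i]] += max(sims[i], 0.0)  # non-negative weights
    return votes / votes.sum()

rng = np.random.default_rng(1)
train_emb = rng.normal(size=(100, 16))           # toy mention embeddings
train_labels = rng.integers(0, 5, size=100)      # toy entity ids
query = train_emb[0] + 0.01 * rng.normal(size=16)  # near training item 0
dist = knn_entity_votes(query, train_emb, train_labels, n_entities=5)
```

In a full system this neighbor distribution would typically be interpolated with the base model's scores rather than used alone.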
