Born-Infeld Black Holes in 4D Einstein-Gauss-Bonnet Gravity
A novel four-dimensional Einstein-Gauss-Bonnet gravity was formulated by D.
Glavan and C. Lin [Phys. Rev. Lett. 124, 081301 (2020)], which is intended to
bypass Lovelock's theorem and yield a non-trivial contribution to the
four-dimensional gravitational dynamics. However, the validity and consistency
of this theory have been called into question recently. We study a static and
spherically symmetric black hole charged by a Born-Infeld electric field in the
novel four-dimensional Einstein-Gauss-Bonnet gravity. It is found that the
black hole solution still suffers the singularity problem, since particles
incident from infinity can reach the singularity. It is also demonstrated that
the Born-Infeld charged black hole may be a better charged extension of the
Schwarzschild-AdS-like black hole in this new gravitational theory than its
Maxwell counterpart. Basic thermodynamics of the black hole solution is also
analyzed. In addition, we recover the black hole solution in the regularized
four-dimensional Einstein-Gauss-Bonnet gravity proposed by H. L\"u and Y. Pang
[arXiv:2003.11552]. Comment: 13 pages and 18 figures, published version
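For orientation, the Born-Infeld electrodynamics that charges such black holes is commonly written with the Lagrangian density below (a standard form; sign and normalization conventions vary across the literature, and the abstract does not specify the ones used):

```latex
\mathcal{L}_{\rm BI} = \beta^{2}\left(1 - \sqrt{1 + \frac{F_{\mu\nu}F^{\mu\nu}}{2\beta^{2}}}\right)
\;\xrightarrow{\;\beta\to\infty\;}\; -\frac{1}{4}F_{\mu\nu}F^{\mu\nu},
```

so the Maxwell Lagrangian is recovered in the large-$\beta$ limit, consistent with the abstract's comparison between the Born-Infeld and Maxwell charged black holes.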
Beyond MLE: Convex Learning for Text Generation
Maximum likelihood estimation (MLE) is a statistical method used to estimate
the parameters of a probability distribution that best explain the observed
data. In the context of text generation, MLE is often used to train generative
language models, which can then be used to generate new text. However, we argue
that MLE is not always necessary or optimal, especially for closed-ended text
generation tasks like machine translation. In these tasks, the goal of the model is
to generate the most appropriate response, which does not necessarily require
it to estimate the entire data distribution with MLE. To this end, we propose a
novel class of training objectives based on convex functions, which enables
text generation models to focus on highly probable outputs without having to
estimate the entire data distribution. We investigate the theoretical
properties of the optimal predicted distribution when applying convex functions
to the loss, demonstrating that convex functions can sharpen the optimal
distribution, thereby enabling the model to better capture outputs with high
probabilities. Experiments on various text generation tasks and models show the
effectiveness of our approach. It enables autoregressive models to bridge the
gap between greedy and beam search, and facilitates the learning of
non-autoregressive models with a maximum improvement of 9+ BLEU points.
Moreover, our approach also has a significant impact on large language
models (LLMs), substantially enhancing their generative capability on various
tasks. Source code is available at
\url{https://github.com/ictnlp/Convex-Learning}. Comment: NeurIPS 202
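The sharpening effect described above can be illustrated with a toy two-token example (a sketch with assumed numbers, not the paper's actual objective): under the MLE loss -log p, the risk-minimizing prediction reproduces the data distribution, whereas a convex loss such as the linear -p drives the optimum toward the most probable token.

```python
import numpy as np

# Toy setup (assumed numbers for illustration): a two-token vocabulary where
# the data distribution q places 60% of its mass on token 0.
q = np.array([0.6, 0.4])

# Candidate predicted probabilities for token 0, swept over a grid.
p1 = np.linspace(0.01, 0.99, 99)

# Risk under each loss: E_{y ~ q}[loss(p_y)].
mle_risk = -(q[0] * np.log(p1) + q[1] * np.log(1.0 - p1))  # loss(p) = -log p (MLE)
lin_risk = -(q[0] * p1 + q[1] * (1.0 - p1))                # loss(p) = -p (convex, linear)

p_mle = p1[np.argmin(mle_risk)]  # minimized near q[0] = 0.6: MLE recovers the data distribution
p_lin = p1[np.argmin(lin_risk)]  # minimized at the grid maximum 0.99: the optimum sharpens toward the mode
```

The linear loss is an extreme member of the convex family; it shows the direction of the effect claimed in the abstract, not the specific objectives studied in the paper.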
Morphing and Sampling Network for Dense Point Cloud Completion
3D point cloud completion, the task of inferring the complete geometric shape
from a partial point cloud, has been attracting attention in the community. To
acquire high-fidelity dense point clouds while avoiding the uneven distribution,
blurred details, and structural loss seen in the results of existing methods, we
propose a novel approach that completes the partial point cloud in two stages.
Specifically,
in the first stage, the approach predicts a complete but coarse-grained point
cloud with a collection of parametric surface elements. Then, in the second
stage, it merges the coarse-grained prediction with the input point cloud by a
novel sampling algorithm. Our method utilizes a joint loss function to guide
the distribution of the points. Extensive experiments verify the effectiveness
of our method and demonstrate that it outperforms the existing methods in both
the Earth Mover's Distance (EMD) and the Chamfer Distance (CD). Comment: 8 pages, 7 figures, AAAI202
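For reference, the Chamfer Distance used in the evaluation can be sketched as follows (a minimal NumPy version; the paper's exact averaging and squaring conventions are not specified in the abstract):

```python
import numpy as np

def chamfer_distance(A, B):
    """Symmetric Chamfer distance between point sets A (N, 3) and B (M, 3):
    mean squared distance from each point to its nearest neighbor in the
    other set, summed over both directions."""
    # (N, M) matrix of pairwise squared Euclidean distances via broadcasting.
    d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()
```

Identical point clouds give a distance of zero, and the measure penalizes both missing structure (points of B far from A) and spurious points (points of A far from B).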
Non-autoregressive Streaming Transformer for Simultaneous Translation
Simultaneous machine translation (SiMT) models are trained to strike a
balance between latency and translation quality. However, training these models
to achieve high quality while maintaining low latency often leads to a tendency
for aggressive anticipation. We argue that this issue stems from the
autoregressive architecture upon which most existing SiMT models are built. To
address this issue, we propose the non-autoregressive streaming Transformer
(NAST) which comprises a unidirectional encoder and a non-autoregressive
decoder with intra-chunk parallelism. We enable NAST to generate the blank
token or repetitive tokens to adjust its READ/WRITE strategy flexibly, and
train it to maximize the non-monotonic latent alignment with an alignment-based
latency loss. Experiments on various SiMT benchmarks demonstrate that NAST
outperforms previous strong autoregressive SiMT baselines. Comment: EMNLP 2023 main conference; Source code is available at
https://github.com/ictnlp/NAS
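NAST's blank and repetitive output tokens are reminiscent of CTC-style decoding, where consecutive repeats are collapsed and blanks removed to obtain the final translation. A minimal sketch of that standard collapse rule (an illustration of the general mechanism, not NAST's exact implementation):

```python
def ctc_collapse(tokens, blank="<blank>"):
    """Collapse consecutive repeats, then drop blank tokens (the standard
    CTC rule). A blank between two identical tokens keeps them as two
    distinct outputs, so the model can emit genuine repetitions."""
    out, prev = [], None
    for t in tokens:
        if t != prev and t != blank:
            out.append(t)
        prev = t
    return out
```

Under such a rule, emitting a blank corresponds to reading more source input without committing to a target token, which is how a WRITE/READ decision can be folded into the output vocabulary.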
WNT5A Signaling Contributes to Aβ-Induced Neuroinflammation and Neurotoxicity
Neurodegeneration is a pathological hallmark of Alzheimer's disease (AD), but the underlying molecular mechanism remains elusive. Here, we present evidence that reveals a crucial role of Wnt5a signaling in this process. We showed that Wnt5a and its receptor Frizzled-5 (Fz5) were up-regulated in the AD mouse brain, and that beta-amyloid peptide (Aβ), a major constituent of amyloid plaques, stimulated Wnt5a and Fz5 expression in primary cortical cultures; these observations indicate that Wnt5a signaling could be aberrantly activated during AD pathogenesis. In support of this possibility, we observed that inhibition of Wnt5a signaling attenuated, while activation of Wnt5a signaling enhanced, Aβ-evoked neurotoxicity, suggesting a role of Wnt5a signaling in AD-related neurodegeneration. Furthermore, we demonstrated that Aβ-induced neurotoxicity depends on inflammatory processes, and that activation of Wnt5a signaling elicited the expression of the proinflammatory cytokines IL-1β and TNF-α, whereas inhibition of Wnt5a signaling attenuated the Aβ-induced expression of these cytokines in cortical cultures. Our findings collectively suggest that aberrantly up-regulated Wnt5a signaling is a crucial pathological step that contributes to AD-related neurodegeneration by regulating neuroinflammation.
Scalar form-factor of the proton with light-cone QCD sum rules
In this article, we calculate the scalar form-factor of the proton in the
framework of the light-cone QCD sum rules approach with the three valence quark
light-cone distribution amplitudes up to twist-6. We observe that the scalar
form-factor at intermediate and large momentum transfers receives significant
contributions from the end-point (or soft) terms. The numerical values of the
scalar form-factor are compatible with calculations from the chiral quark model
and lattice QCD in the momentum-transfer region considered. Comment: 18 pages, 7 figures, revised version
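For context, the scalar form-factor of the proton is conventionally defined through the nucleon matrix element of the scalar quark current (a standard definition; the precise current and normalization used by the authors are not given in the abstract):

```latex
\sigma(t)\,\bar{u}(p')\,u(p) = \langle P(p') \,|\, \hat{m}\,(\bar{u}u + \bar{d}d) \,|\, P(p)\rangle,
\qquad \hat{m} = \tfrac{1}{2}(m_u + m_d),\quad t = (p' - p)^2,
```

where $\sigma(0)$ is the pion-nucleon sigma term.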