Tectorigenin monohydrate: an isoflavone from Belamcanda chinensis
The title compound [systematic name: 5,7-dihydroxy-3-(4-hydroxyphenyl)-6-methoxy-4H-chromen-4-one monohydrate], C16H12O6·H2O, is isolated from Belamcanda chinensis and is said to have antimicrobial and anti-inflammatory effects. The chromen-4-one system and the benzene ring are inclined at a dihedral angle of 36.79 (6)°. Molecules are linked by inter- and intramolecular O—H⋯O hydrogen bonds.
Generalized-Equiangular Geometry CT: Concept and Shift-Invariant FBP Algorithms
With advanced X-ray source and detector technologies being continuously
developed, non-traditional CT geometries have been widely explored.
The Generalized-Equiangular Geometry CT (GEGCT) architecture, in which the
X-ray source may be positioned radially far from the focus of an equiangularly
spaced arced detector array, is of importance in many novel CT systems
and designs. GEGCT, unfortunately, has no theoretically exact and
shift-invariant analytical image reconstruction algorithm in general. In this
study, to obtain fast and accurate reconstruction from GEGCT and to promote its
system design and optimization, an in-depth investigation of a group of
approximate Filtered BackProjection (FBP) algorithms with a variety of
weighting strategies has been conducted. The architecture of GEGCT is first
presented and characterized by using a normalized-radial-offset distance
(NROD). Next, shift-invariant weighted FBP-type algorithms are derived in a
unified framework, with pre-filtering, filtering, and post-filtering weights.
Three viable weighting strategies are then presented, including a classic one
developed by Besson in the literature and two new ones generated from a
curvature fitting and from an empirical formula; all three weights can be
expressed as certain functions of NROD. After that, an analysis of
reconstruction accuracy is conducted over a wide range of NROD. We further
extend the weighted FBP-type algorithms to GEGCT with dynamic NROD. Finally,
the weighted FBP algorithm for GEGCT is extended to a three-dimensional form
for the case of a cone-beam scan with a cylindrical detector array. Comment: 31 pages, 13 figures
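The abstract does not give the weight formulas themselves, but the pipeline it describes — a pre-filtering weight, a shift-invariant ramp filter, and a post-filtering weight applied during backprojection — can be sketched generically. The parallel-beam geometry and uniform weights below are illustrative assumptions, not the paper's GEGCT weights or NROD functions:

```python
import numpy as np

def ramp_filter(n):
    # Spatial-domain Ram-Lak (ramp) kernel for unit detector spacing:
    # h[0] = 1/4, h[k] = -1/(pi*k)^2 for odd k, 0 for even k.
    k = np.arange(n) - n // 2
    h = np.zeros(n)
    h[n // 2] = 0.25
    odd = (k % 2) != 0
    h[odd] = -1.0 / (np.pi * k[odd]) ** 2
    return h

def weighted_fbp(sinogram, pre_w, post_w):
    """Pre-weight -> shift-invariant filtering -> weighted backprojection.

    sinogram: (n_views, n_det); pre_w: (n_det,) pre-filtering weight per
    detector channel; post_w: (n_views,) post-filtering weight per view.
    The geometry here is plain parallel-beam over 180 degrees, used only
    to show where the three weighting stages enter the FBP chain."""
    n_views, n_det = sinogram.shape
    h = ramp_filter(n_det)
    filt = np.array([np.convolve(row * pre_w, h, mode="same")
                     for row in sinogram])
    grid = np.arange(n_det) - n_det // 2
    X, Y = np.meshgrid(grid, grid)
    angles = np.linspace(0.0, np.pi, n_views, endpoint=False)
    img = np.zeros_like(X, dtype=float)
    for i, th in enumerate(angles):
        # Detector coordinate of each pixel for this view.
        t = X * np.cos(th) + Y * np.sin(th) + n_det // 2
        img += post_w[i] * np.interp(t, np.arange(n_det), filt[i])
    return img * np.pi / n_views
```

In an exact weighted FBP of the kind the paper derives, the two weight arrays would become functions of NROD rather than the constants used here.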
GLT-T: Global-Local Transformer Voting for 3D Single Object Tracking in Point Clouds
Current 3D single object tracking methods are typically based on VoteNet, a
3D region proposal network. Despite the success, using a single seed point
feature as the cue for offset learning in VoteNet prevents high-quality 3D
proposals from being generated. Moreover, seed points with different importance
are treated equally in the voting process, aggravating this defect. To address
these issues, we propose a novel global-local transformer voting scheme to
provide more informative cues and guide the model to pay more attention to
potential seed points, promoting the generation of high-quality 3D proposals.
Technically, a global-local transformer (GLT) module is employed to integrate
object- and patch-aware priors into seed point features, effectively forming a
strong feature representation of the geometric positions of the seed points, thus
providing more robust and accurate cues for offset learning. Subsequently, a
simple yet effective training strategy is designed to train the GLT module. We
develop an importance prediction branch to learn the potential importance of
the seed points and treat the output weights vector as a training constraint
term. By incorporating the above components together, we present a superior
tracking method, GLT-T. Extensive experiments on the challenging KITTI and NuScenes
benchmarks demonstrate that GLT-T achieves state-of-the-art performance in the
3D single object tracking task. Besides, further ablation studies show the
advantages of the proposed global-local transformer voting scheme over the
original VoteNet. Code and models will be available at
https://github.com/haooozi/GLT-T. Comment: Accepted to AAAI 2023
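The abstract's core idea — weighting seed votes by predicted importance instead of treating all seeds equally — can be illustrated with a minimal NumPy sketch. The seed positions, offsets, and importance scores below are made-up stand-ins for the network's learned quantities, not GLT-T's actual outputs:

```python
import numpy as np

def weighted_vote(seeds, offsets, importance):
    # Each seed casts a vote (seed + predicted offset) toward the object
    # center. Plain VoteNet-style aggregation averages votes uniformly;
    # here each vote is scaled by its predicted importance, so unreliable
    # seeds contribute less to the proposal center.
    votes = seeds + offsets                  # (N, 3) vote positions
    w = importance / importance.sum()        # (N,) normalized weights
    return (w[:, None] * votes).sum(axis=0)  # (3,) weighted center
```

With two reliable seeds voting near the true center and one outlier down-weighted, the weighted estimate lands much closer to the target than the uniform mean — the defect the importance prediction branch is meant to fix.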
A Unitary Weights Based One-Iteration Quantum Perceptron Algorithm for Non-Ideal Training Sets
In order to solve the problem of non-ideal training sets (i.e.,
less-complete or over-complete sets) and to implement one-iteration learning, a
novel efficient quantum perceptron algorithm based on unitary weights is
proposed, in which the singular value decomposition of the total weight matrix
from the training set is calculated to make the weight matrix unitary.
The example validation of quantum gates {H, S, T, CNOT, Toffoli, Fredkin} shows
that our algorithm can accurately implement arbitrary quantum gates within one
iteration. The performance comparison between our algorithm and other quantum
perceptron algorithms demonstrates the advantages of our algorithm in terms of
applicability, accuracy, and availability. To further validate the
applicability of our algorithm, a quantum composite gate consisting of
several basic quantum gates is also illustrated. Comment: 12 pages, 5 figures
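The abstract does not spell out the construction in full, but the unitarization step it names — taking the SVD of the total weight matrix and discarding the singular values — is the standard polar-decomposition projection onto the unitary group, and can be sketched classically in NumPy. The Hadamard training pairs below are an illustrative assumption:

```python
import numpy as np

def nearest_unitary(W):
    # SVD W = U S V^dagger; dropping S gives U V^dagger, the closest
    # unitary matrix to W in the Frobenius norm (polar decomposition).
    U, _, Vh = np.linalg.svd(W)
    return U @ Vh

# One-shot "learning": sum the outer products |target><input| over the
# training pairs, then project onto the unitary group. For the Hadamard
# gate, the pairs (|0>, H|0>) and (|1>, H|1>) reproduce H exactly.
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
inputs = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
targets = [H @ x for x in inputs]
W = sum(np.outer(t, x.conj()) for t, x in zip(targets, inputs))
U = nearest_unitary(W)
```

An over-complete training set that merely rescales the weight matrix (e.g. 3H) is repaired by the same projection, which is the sense in which the SVD step handles non-ideal sets here.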
A study on the impact of pre-trained model on Just-In-Time defect prediction
Previous researchers conducting Just-In-Time (JIT) defect prediction tasks
have primarily focused on the performance of individual pre-trained models,
without exploring the relationship between different pre-trained models as
backbones. In this study, we build six models: RoBERTaJIT, CodeBERTJIT,
BARTJIT, PLBARTJIT, GPT2JIT, and CodeGPTJIT, each with a distinct pre-trained
model as its backbone. We systematically explore the differences and
connections between these models. Specifically, we investigate the performance
of the models when using Commit code and Commit message as inputs, as well as
the relationship between training efficiency and model distribution among these
six models. Additionally, we conduct an ablation experiment to explore the
sensitivity of each model to inputs. Furthermore, we investigate how the models
perform in zero-shot and few-shot scenarios. Our findings indicate that each
model based on a different backbone shows improvements, and when the backbones'
pre-trained models are similar, the required training resources
are much closer. We also observe that Commit code plays a significant role
in defect detection, and different pre-trained models demonstrate better defect
detection ability with a balanced dataset under few-shot scenarios. These
results provide new insights for optimizing JIT defect prediction tasks using
pre-trained models and highlight the factors that require more attention when
constructing such models. Additionally, CodeGPTJIT and GPT2JIT achieved better
performance than DeepJIT and CC2Vec, respectively, on the two datasets with
2,000 training samples. These findings emphasize the effectiveness of
transformer-based pre-trained models in JIT defect prediction tasks, especially
in scenarios with limited training data.
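As a minimal illustration of the input scheme the study varies — Commit message and Commit code fed jointly to a defect classifier — the sketch below replaces the pre-trained backbones (CodeBERT, GPT-2, etc.) with a toy hashing featurizer and a logistic-regression head; every function, token, and dataset here is a hypothetical stand-in, not the paper's setup:

```python
import zlib
import numpy as np

def featurize(commit_msg, commit_code, dim=64):
    # Toy stand-in for a pre-trained encoder: hash message tokens into the
    # first half of the vector and code tokens into the second half, so the
    # two input channels stay distinguishable, mirroring the study's
    # Commit-message vs. Commit-code comparison.
    v = np.zeros(2 * dim)
    for tok in commit_msg.split():
        v[zlib.crc32(tok.encode()) % dim] += 1.0
    for tok in commit_code.split():
        v[dim + zlib.crc32(tok.encode()) % dim] += 1.0
    return v

def train_head(X, y, lr=0.5, steps=500):
    # Logistic-regression defect head trained on the frozen features by
    # plain gradient descent; y holds 0/1 defect labels.
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w
```

Swapping the hashing featurizer for embeddings from an actual pre-trained model is what distinguishes the six backbones the study compares; the head and the two-channel input layout stay the same.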