Performance analysis of parallel gravitational N-body codes on large GPU clusters
We compare the performance of two very different parallel gravitational
N-body codes for astrophysical simulations on large GPU clusters, both
pioneers in their own fields as well as at certain mutual scales: NBODY6++ and
Bonsai. We carry out the benchmark of the two codes by analyzing their
performance, accuracy and efficiency through the modeling of structure
decomposition and timing measurements. We find that both codes are heavily
optimized to leverage the computational potential of GPUs as their performance
has approached half of the maximum single precision performance of the
underlying GPU cards. With such performance, we predict the speed-up that can
be achieved when up to 1k processors and GPUs are employed simultaneously. We
discuss quantitative comparisons of the two codes, finding that in the same
cases Bonsai adopts larger time steps as well as larger relative energy errors
than NBODY6++, by factors that depend on the chosen parameters of the codes.
While the two
codes are built for different astrophysical applications, in specified
conditions they may overlap in performance at certain physical scales, thus
allowing the user to choose either one with fine-tuned parameters accordingly.
Comment: 15 pages, 7 figures, 3 tables, accepted for publication in Research
in Astronomy and Astrophysics (RAA).
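The abstract's prediction of speed-up for up to 1k processors and GPUs can be illustrated with a simple strong-scaling estimate. The sketch below is not the paper's timing model; it is a generic Amdahl-style formula with an assumed serial fraction, purely to show the shape of such a prediction.

```python
# Hypothetical strong-scaling sketch: NOT the timing model from the paper.
# The serial fraction of 1% is an assumed value for illustration only.
def predicted_speedup(n_procs, serial_fraction=0.01):
    """Speed-up over a single processor, assuming a fixed serial fraction."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_procs)

for p in (1, 64, 1024):
    print(p, round(predicted_speedup(p), 1))
```

With these assumed numbers, the speed-up saturates well below the processor count at 1k processors, which is the qualitative behavior such scaling studies measure.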
A CONVERGENCE OF EXTRINSIC AND INTRINSIC SIGNALS FOR POSTMITOTIC DIFFERENTIATION OF NOCICEPTORS
Diverse neuronal subtypes are the building blocks of functional neural circuits that underlie behaviors. The generation of correct types of neurons at appropriate times and positions is therefore fundamental to the development of the nervous system. Specification of neuronal subtypes is a multistep process that extends beyond the initial specification of neural progenitors and continues as postmitotic neurons differentiate further. The postmitotic aspect of neuronal subtype specification, although important for generation of neuronal subtype diversity, remains understudied. Here, using nociceptors, a class of primary sensory neurons in the dorsal root ganglion (DRG) that detect painful stimuli, as a model system and a combination of in vivo and in vitro approaches, we uncover a novel mechanism by which NGF, the prototypic neurotrophic factor and Runx1, a Runx family transcription factor, coordinate the specification of nonpeptidergic nociceptors, a major, well-characterized nociceptor subtype. We show that NGF promotes Runx1-dependent transcription that confers molecular and morphological identity of nonpeptidergic nociceptors through transcriptional upregulation of Cbfb. The protein product of Cbfb, CBFβ, is an integral component of the heterodimeric Runx1/CBFβ complex in DRGs, since conditional deletion of Cbfb in DRGs produces the same spectrum of phenotypes in nonpeptidergic nociceptors as observed in Runx1 mutants. NGF is necessary for Cbfb expression prior to the onset of NGF dependence of Runx1, implicating CBFβ as a critical link between NGF signaling and Runx1 function. NGF activates Cbfb expression through a MEK/ERK pathway. On the other hand, transcriptional initiation of Runx1 requires Islet1, a LIM-homeodomain transcription factor, while Cbfb expression is largely Islet1-independent. 
These findings together reveal a novel NGF/TrkA–MEK/ERK–Runx1/CBFβ axis that promotes gene expression and maturation of nonpeptidergic nociceptors and provide a common principle by which a convergence of extrinsic and intrinsic signals instructs postmitotic neuronal subtype specification.
Yield improvement of exopolysaccharides by screening of the Lactobacillus acidophilus ATCC and optimization of the fermentation and extraction conditions
Exopolysaccharides (EPS) produced by Lactobacillus acidophilus play an important role in food processing owing to their well-recognized antioxidant activity. In this study, an L. acidophilus mutant strain with a high EPS yield (2.92±0.05 g/L) was screened by chemical mutagenesis (0.2 % diethyl sulfate). A Plackett-Burman (PB) design and response surface methodology (RSM) were applied to optimize the EPS fermentation parameters, and a central composite design (CCD) was used to optimize the EPS extraction parameters. It was revealed that three parameters (Tween 80, dipotassium hydrogen phosphate and trisodium citrate) had a significant influence (P < 0.05) on the EPS yield. The optimal culture conditions for EPS production were: Tween 80 0.6 mL, dipotassium hydrogen phosphate 3.6 g and trisodium citrate 4.1 g (per 1 L culture volume). Under these conditions, the maximum EPS yield was 3.96±0.08 g/L. The optimal extraction conditions determined by CCD were: alcohol concentration 70 %, material-to-liquid (M/L) ratio 1:3.6 and extraction time 31 h. Under these conditions, the maximum EPS extraction yield was 1.48±0.23 g/L. Verification experiments confirmed that the EPS yield from the L. acidophilus mutant strain reached 5.12±0.73 g/L under the optimized fermentation and extraction conditions, 3.8 times higher than that of the control (1.05±0.06 g/L). The results indicate that the screening of a high-yielding EPS strain was successful and that the optimized fermentation and extraction conditions significantly enhanced the EPS yield, making the process efficient and industrially promising.
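The response-surface idea behind the optimization above can be sketched for a single factor: fit a quadratic model to yield measurements and locate its stationary point. The factor levels and yields below are invented for illustration only; they are not the paper's data, and a real PB/RSM study fits a multi-factor model.

```python
import numpy as np

# Hypothetical single-factor response-surface sketch (invented data, not the
# paper's measurements): fit a quadratic and find the vertex as the optimum.
tween80_ml = np.array([0.2, 0.4, 0.6, 0.8, 1.0])   # assumed factor levels (mL)
eps_yield  = np.array([2.1, 3.1, 3.9, 3.2, 2.2])   # assumed responses (g/L)

b2, b1, b0 = np.polyfit(tween80_ml, eps_yield, 2)  # y = b2*x^2 + b1*x + b0
x_opt = -b1 / (2.0 * b2)                           # stationary point
print(round(x_opt, 2))
```

With these invented data the estimated optimum falls near 0.6 mL, mirroring how the study's CCD locates optima from fitted second-order models.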
SAR Ship Target Recognition via Selective Feature Discrimination and Multifeature Center Classifier
Maritime surveillance is not only necessary for every country, such as in
maritime safeguarding and fishing controls, but also plays an essential role in
international fields, such as in rescue support and illegal immigration
control. Most of the existing automatic target recognition (ATR) methods
directly send the extracted whole features of SAR ships into one classifier.
The classifiers of most methods only assign one feature center to each class.
However, the characteristics of SAR ship images, namely large inner-class
variance and small inter-class difference, mean that the whole features contain
useless partial features and that a single feature center per class fails under
the large inner-class variance. We propose a SAR ship target
recognition method via selective feature discrimination and a multifeature center
classifier. The selective feature discrimination automatically finds the
similar partial features from the most similar interclass image pairs and the
dissimilar partial features from the most dissimilar inner-class image pairs.
It then provides a loss to enhance these partial features with more interclass
separability. Motivated by divide and conquer, the multifeature center
classifier assigns multiple learnable feature centers for each ship class. In
this way, the multifeature centers divide the large inner-class variance into
several smaller variances, which are then conquered by combining all feature centers of one
ship class. Finally, the probability distribution over all feature centers is
considered comprehensively to achieve an accurate recognition of SAR ship
images. Ablation studies and experimental results on the OpenSARShip and
FUSAR-Ship datasets show that our method achieves superior recognition
performance as the number of training SAR ship samples decreases.
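The multifeature-center idea can be sketched numerically. The code below is an assumed minimal form, not the authors' implementation: each class owns several centers, a sample's class score pools its similarity to all centers of that class, and the pooled scores are turned into class probabilities.

```python
import numpy as np

# Minimal sketch (assumed form, not the paper's code) of a multi-feature-center
# classifier: each class has K centers; a class score pools the similarities
# to all K centers, so one class can cover several feature clusters.
rng = np.random.default_rng(0)
n_classes, k_centers, dim = 3, 4, 8
centers = rng.normal(size=(n_classes, k_centers, dim))  # learnable in training

def class_scores(feature):
    d2 = ((centers - feature) ** 2).sum(axis=-1)   # (n_classes, k_centers)
    sims = -d2                                     # similarity = -squared dist
    m = sims.max(axis=1, keepdims=True)
    # log-sum-exp pooling over the K centers of each class
    return m[:, 0] + np.log(np.exp(sims - m).sum(axis=1))

feature = rng.normal(size=dim)
scores = class_scores(feature)
probs = np.exp(scores - scores.max())
probs /= probs.sum()
print(probs.round(3), int(probs.argmax()))
```

The log-sum-exp pooling is one plausible way to "consider the probability distribution over all feature centers comprehensively"; the paper may use a different pooling rule.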
Crucial Feature Capture and Discrimination for Limited Training Data SAR ATR
Although deep learning-based methods have achieved excellent performance on
SAR ATR, the difficulty of acquiring and labeling large numbers of SAR images
makes these otherwise strong methods perform weakly. This may
be because most of them take the whole target image as input, yet
research finds that, under limited training data, deep learning models
cannot capture discriminative image regions in the whole image and instead
focus on useless or even harmful regions for recognition. The results are
therefore not satisfactory. In this paper, we design a SAR ATR framework for
limited training samples, which mainly consists of two branches, a global
assisted branch and a local enhanced branch, and two modules, a feature capture
module and a feature discrimination module. In each training step, the global
assisted branch first finishes the initial recognition based on the whole
image. Based on the initial recognition results, the feature capture module
automatically searches for and locks onto the image regions crucial for correct
recognition, which we name the golden key of the image. The local enhanced
branch then extracts local features from the captured crucial image regions. Finally, the
overall features and local features are input into the classifier and
dynamically weighted using the learnable voting parameters to collaboratively
complete the final recognition under limited training samples. The model
soundness experiments demonstrate the effectiveness of our method through the
improvement of feature distribution and recognition probability. The
experimental results and comparisons on MSTAR and OPENSAR show that our method
has achieved superior recognition performance.
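The final fusion step described above, where overall and local features are dynamically weighted by learnable voting parameters, can be sketched as follows. This is an assumed form of the fusion, with invented logits, not the framework's actual code.

```python
import numpy as np

# Sketch (assumed form) of dynamically weighted voting: logits from the global
# assisted branch and the local enhanced branch are fused with learnable
# voting weights before the final softmax. All numbers are invented.
def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

global_logits = np.array([2.0, 0.5, -1.0])   # from the whole image
local_logits  = np.array([1.5, 2.2, -0.5])   # from captured crucial regions
w = softmax(np.array([0.3, 0.7]))            # learnable voting parameters

fused = w[0] * global_logits + w[1] * local_logits
probs = softmax(fused)
print(int(probs.argmax()))
```

Normalizing the voting parameters with a softmax keeps the fused logits on the same scale as either branch alone; the paper's exact parameterization may differ.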
SAR ATR Method with Limited Training Data via an Embedded Feature Augmenter and Dynamic Hierarchical-Feature Refiner
Without sufficient data, the quantity of information available for supervised
training is constrained, as obtaining sufficient synthetic aperture radar (SAR)
training data in practice is frequently challenging. Therefore, current SAR
automatic target recognition (ATR) algorithms perform poorly with limited
training data availability, resulting in a critical need to increase SAR ATR
performance. In this study, a new method to improve SAR ATR when training data
are limited is proposed. First, an embedded feature augmenter is designed to
enhance the extracted virtual features located far away from the class center.
Based on the relative distribution of the features, the algorithm pulls the
corresponding virtual features with different strengths toward the
corresponding class center. The designed augmenter increases the amount of
information available for supervised training and improves the separability of
the extracted features. Second, a dynamic hierarchical-feature refiner is
proposed to capture the discriminative local features of the samples. Through
dynamically generated kernels, the proposed refiner integrates the
discriminative local features of different dimensions into the global features,
further enhancing the inner-class compactness and inter-class separability of
the extracted features. The proposed method not only increases the amount of
information available for supervised training but also extracts the
discriminative features from the samples, resulting in superior ATR performance
in problems with limited SAR training data. Experimental results on the moving
and stationary target acquisition and recognition (MSTAR), OpenSARShip, and
FUSAR-Ship benchmark datasets demonstrate the robustness and outstanding ATR
performance of the proposed method in response to limited SAR training data.
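The pulling behavior of the embedded feature augmenter, where virtual features far from the class center are pulled toward it with greater strength, can be sketched in a few lines. The pull-strength function below is an assumed monotone form, not the paper's formula.

```python
import numpy as np

# Sketch of the pulling idea (assumed form, not the paper's augmenter):
# virtual features farther from their class center are pulled toward it more
# strongly, so augmented features stay plausible while adding information.
rng = np.random.default_rng(1)
center = np.zeros(4)
virtual = rng.normal(size=(5, 4)) * 3.0        # hypothetical virtual features

dist = np.linalg.norm(virtual - center, axis=1, keepdims=True)
strength = dist / (dist + 1.0)                 # farther -> stronger pull, in [0, 1)
pulled = virtual + strength * (center - virtual)

new_dist = np.linalg.norm(pulled - center, axis=1)
print((new_dist <= dist[:, 0]).all())
```

Because the strength stays strictly below 1, every pulled feature lands closer to the center than it started without collapsing onto it, which matches the stated goal of improving separability while keeping the augmented features informative.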
Semi-Supervised SAR ATR Framework with Transductive Auxiliary Segmentation
Convolutional neural networks (CNNs) have achieved high performance in
synthetic aperture radar (SAR) automatic target recognition (ATR). However, the
performance of CNNs depends heavily on a large amount of training data. The
insufficiency of labeled training SAR images limits the recognition performance
and even invalidates some ATR methods. Furthermore, under few labeled training
data, many existing CNNs are even ineffective. To address these challenges, we
propose a Semi-supervised SAR ATR Framework with transductive Auxiliary
Segmentation (SFAS). The proposed framework focuses on exploiting the
transductive generalization on available unlabeled samples with an auxiliary
loss serving as a regularizer. Through auxiliary segmentation of unlabeled SAR
samples and information residue loss (IRL) in training, the framework can
employ the proposed training loop process and gradually exploit the information
compilation of recognition and segmentation to construct a helpful inductive
bias and achieve high performance. Experiments conducted on the MSTAR dataset
have shown the effectiveness of our proposed SFAS for few-shot learning. The
recognition performance of 94.18% can be achieved with 20 training samples per
class, together with accurate segmentation results. Under variances of extended
operating conditions (EOCs), the recognition ratios remain higher than 88.00%
with 10 training samples per class.
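The overall shape of such a semi-supervised objective, a supervised classification loss plus an auxiliary segmentation loss acting as a regularizer, can be sketched numerically. This is an assumed generic form with invented numbers, not SFAS's information residue loss.

```python
import numpy as np

# Sketch of a classification + auxiliary-segmentation objective (assumed
# generic form, not the SFAS loss). All values below are invented.
def cross_entropy(probs, label):
    return -np.log(probs[label])

cls_probs = np.array([0.7, 0.2, 0.1])       # labeled sample, true class 0
seg_pred  = np.array([0.9, 0.8, 0.1, 0.2])  # per-pixel foreground probability
seg_mask  = np.array([1.0, 1.0, 0.0, 0.0])  # auxiliary segmentation target

cls_loss = cross_entropy(cls_probs, 0)
seg_loss = np.mean(-(seg_mask * np.log(seg_pred)
                     + (1 - seg_mask) * np.log(1 - seg_pred)))
total = cls_loss + 0.5 * seg_loss            # 0.5: assumed auxiliary weight
print(round(float(total), 3))
```

The auxiliary term can be computed on unlabeled samples whenever a segmentation target is available, which is how such a loss regularizes the shared features without extra class labels.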
SAR Ship Target Recognition Via Multi-Scale Feature Attention and Adaptive-Weighed Classifier
Maritime surveillance is indispensable for civilian fields, including
national maritime safeguarding, channel monitoring, and so on, in which
synthetic aperture radar (SAR) ship target recognition is a crucial research
field. The core problem to realizing accurate SAR ship target recognition is
the large inner-class variance and inter-class overlap of SAR ship features,
which limits the recognition performance. Most existing methods plainly extract
multi-scale features of the network and utilize equally each feature scale in
the classification stage. However, the shallow multi-scale features are not
discriminative enough, and each scale feature is not equally effective for
recognition. These factors lead to the limitation of recognition performance.
Therefore, we propose a SAR ship recognition method via multi-scale feature
attention and an adaptive-weighted classifier to enhance the features at each
scale and adaptively choose the effective feature scales for accurate
recognition. We
first construct an in-network feature pyramid to extract multi-scale features
from SAR ship images. Then, the multi-scale feature attention can extract and
enhance the principal components from the multi-scale features with more
inner-class compactness and inter-class separability. Finally, the adaptive
weighted classifier chooses the effective feature scales in the feature pyramid
to achieve the final precise recognition. Through experiments and comparisons
on the OpenSARShip dataset, the proposed method is validated to achieve
state-of-the-art performance for SAR ship recognition.
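One simple way to picture adaptively weighting the pyramid levels is to weight each scale's classifier scores by how confident that scale is. The confidence proxy and numbers below are assumptions for illustration, not the paper's classifier.

```python
import numpy as np

# Sketch (assumed form) of adaptively weighting classifier scores from the
# levels of a feature pyramid: more confident scales get larger weights.
def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

scale_logits = np.array([[2.0, 0.1, 0.0],    # scale 1 (discriminative)
                         [0.3, 0.4, 0.2],    # scale 2 (less discriminative)
                         [1.5, 0.2, 0.1]])   # scale 3

# assumed confidence proxy: peak softmax probability of each scale
conf = np.array([softmax(l).max() for l in scale_logits])
weights = conf / conf.sum()
final = weights @ scale_logits
print(int(final.argmax()))
```

Here the ambiguous second scale contributes least to the final decision, which is the behavior the abstract attributes to the adaptive-weighted classifier; the actual weights in the method are learned rather than derived from this proxy.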
SAR ATR under Limited Training Data Via MobileNetV3
In recent years, deep learning has been widely used to solve the bottleneck
problem of synthetic aperture radar (SAR) automatic target recognition (ATR).
However, most current methods rely heavily on a large number of training
samples and have many parameters which lead to failure under limited training
samples. In practical applications, the SAR ATR method needs not only superior
performance under limited training data but also real-time performance.
Therefore, we try to use a lightweight network for SAR ATR under limited
training samples, which has fewer parameters, less computational effort, and
shorter inference time than normal networks. At the same time, the lightweight
network combines the advantages of existing lightweight networks and uses a
combination of MnasNet and NetAdapt algorithms to find the optimal neural
network architecture for a given problem. Through experiments and comparisons
under the moving and stationary target acquisition and recognition (MSTAR)
dataset, the lightweight network is validated to have excellent recognition
performance for SAR ATR on limited training samples and be very computationally
small, reflecting the great potential of this network structure for practical
applications. Comment: 6 pages, 3 figures, published in 2023 IEEE Radar
Conference (RadarConf23).
An Entropy-Awareness Meta-Learning Method for SAR Open-Set ATR
Existing synthetic aperture radar automatic target recognition (SAR ATR)
methods have been effective for the classification of seen target classes.
However, it is more meaningful and challenging to distinguish the unseen target
classes, i.e., open set recognition (OSR) problem, which is an urgent problem
for the practical SAR ATR. The key solution of OSR is to effectively establish
the exclusiveness of the feature distribution of known classes. In this
letter, we propose an entropy-awareness meta-learning method that improves this
exclusiveness, meaning our method is effective not only for classifying the
seen classes but also for rejecting the unseen classes. Through
meta-learning tasks, the proposed method
learns to construct a feature space of the dynamically assigned known classes. This
feature space is required by the tasks to reject all other classes not
belonging to the known classes. At the same time, the proposed
entropy-awareness loss helps the model to enhance the feature space with
effective and robust discrimination between the known and unknown classes.
Therefore, our method can construct a dynamic feature space that discriminates
between the known and unknown classes, simultaneously classifying the
dynamically assigned known classes and rejecting the unknown classes.
Experiments conducted on the moving and stationary target acquisition and
recognition (MSTAR) dataset have shown the effectiveness of our method for SAR
OSR.
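An entropy-based open-set criterion of the kind the letter builds on can be sketched as follows. This is an assumed generic form with invented logits and threshold, not the letter's exact entropy-awareness loss: known-class samples should yield low-entropy predictions, so a high-entropy prediction is rejected as unknown.

```python
import numpy as np

# Sketch of entropy-based open-set rejection (assumed generic form, not the
# letter's loss). Logits and the threshold below are invented.
def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def entropy(p):
    return float(-(p * np.log(p + 1e-12)).sum())

known   = entropy(softmax(np.array([6.0, 0.5, 0.2])))  # confident: low entropy
unknown = entropy(softmax(np.array([1.0, 1.1, 0.9])))  # ambiguous: high entropy
threshold = 0.5                                        # assumed rejection bound
print(known < threshold, unknown > threshold)
```

Training a loss that drives known-class predictions toward low entropy sharpens exactly this gap, which is what makes the entropy threshold usable as a reject rule at test time.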