Scalability Analysis of Parallel GMRES Implementations
Applications involving large sparse nonsymmetric linear systems encourage parallel implementations of robust iterative solution methods, such as GMRES(k). Two parallel versions of GMRES(k), based on different data distributions and using Householder reflections in the orthogonalization phase, together with variations that adapt the restart value k, are analyzed with respect to scalability (their ability to maintain fixed efficiency as problem size and number of processors increase). A theoretical algorithm-machine model for scalability is derived and validated by experiments on three parallel computers, each with different machine characteristics.
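As a point of reference, the following minimal sketch shows what a restarted GMRES(k) solve looks like in serial using SciPy; the restart value k is the parameter the adaptive variants adjust. This is an illustration only, not the parallel Householder-based implementations analyzed in the paper, and the test system is a made-up placeholder.

```python
# Minimal serial illustration of restarted GMRES(k); not the paper's parallel code.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import gmres

n = 1000
# Hypothetical sparse nonsymmetric test system (tridiagonal, slightly skewed).
A = sp.diags([-1.0, 2.5, -1.2], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

k = 30  # restart value: GMRES restarts after building a k-dimensional Krylov basis
x, info = gmres(A, b, restart=k, maxiter=500)

print("converged" if info == 0 else f"stopped with info={info}",
      "| residual norm:", np.linalg.norm(b - A @ x))
```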
N3LO NN interaction adjusted to light nuclei in ab exitu approach
We use phase-equivalent transformations to adjust off-shell properties of the similarity renormalization group evolved chiral effective field theory NN interaction (Idaho N3LO) to fit selected binding energies and spectra of light nuclei in an ab exitu approach. We then test the transformed interaction on a set of additional observables in light nuclei to verify that it provides reasonable descriptions of these observables with an apparently reduced need for three- and many-nucleon interactions.
Comment: Revised text due to journal referee comments. 6 pages, 2 figures.
Performance Modeling and Analysis of a Massively Parallel DIRECT— Part 2
Modeling and analysis techniques are used to investigate the performance of a massively parallel version of DIRECT, a global search algorithm widely used in multidisciplinary design optimization applications. Several high-dimensional benchmark functions and real-world problems are used to test the design effectiveness under various problem structures. In this second part of a two-part work, theoretical and experimental results are compared for two parallel clusters with different system scale and network connectivity. The first part studied performance sensitivity to important parameters for problem configurations and parallel schemes, using performance metrics such as memory usage, load balancing, and parallel efficiency. Here, linear regression models are used to characterize two major overhead sources, interprocessor communication and processor idleness, and are also applied to the isoefficiency functions in scalability analysis. For a variety of high-dimensional problems and large-scale systems, the massively parallel design has achieved reasonable performance. The results of the performance study provide guidance for efficient problem and scheme configuration. More importantly, the design considerations and analysis techniques generalize to the transformation of other global search algorithms into effective large-scale parallel optimization tools.
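To make the overhead-modeling idea concrete, here is a small illustrative sketch of fitting a linear regression for one parallel overhead source (e.g., communication time) as a function of problem size and processor count. The data points and the model form are hypothetical placeholders, not the paper's measurements or its exact regression.

```python
# Hypothetical sketch: linear regression of a parallel overhead source.
import numpy as np

# Placeholder measurements: (problem size N, processor count p, overhead seconds).
N = np.array([1e5, 1e5, 5e5, 5e5, 1e6, 1e6])
p = np.array([64, 256, 64, 256, 64, 256])
t_overhead = np.array([0.8, 2.9, 1.1, 3.4, 1.5, 4.0])

# Assumed model form: t_overhead ~ a0 + a1*log2(p) + a2*(N/p).
X = np.column_stack([np.ones_like(t_overhead), np.log2(p), N / p])
coef, *_ = np.linalg.lstsq(X, t_overhead, rcond=None)
print("fitted coefficients (a0, a1, a2):", coef)
```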
Performance Modeling and Analysis of a Massively Parallel DIRECT— Part 1
Modeling and analysis techniques are used to investigate the performance of a massively parallel version of DIRECT, a global search algorithm widely used in multidisciplinary design optimization applications. Several high-dimensional benchmark functions and real-world problems are used to test the design effectiveness under various problem structures. Theoretical and experimental results are compared for two parallel clusters with different system scale and network connectivity. The present work studies the performance sensitivity to important parameters for problem configurations, parallel schemes, and system settings. The performance metrics include memory usage, load balancing, parallel efficiency, and scalability. An analytical bounding model is constructed to measure the load balancing performance under different schemes. Additionally, linear regression models are used to characterize two major overhead sources, interprocessor communication and processor idleness, and are also applied to the isoefficiency functions in scalability analysis. For a variety of high-dimensional problems and large-scale systems, the massively parallel design has achieved reasonable performance. The results of the performance study provide guidance for efficient problem and scheme configuration. More importantly, the generalized design considerations and analysis techniques are beneficial for transforming many global search algorithms into effective large-scale parallel optimization tools.
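As a hedged illustration of the isoefficiency idea referenced in both parts, the snippet below computes parallel efficiency from total overhead, E = W / (W + To), and then searches for the problem size that sustains a target efficiency as the processor count grows. The overhead model is a made-up placeholder, not the paper's fitted functions.

```python
# Hedged isoefficiency sketch with a placeholder overhead model (not the paper's).
import numpy as np

def serial_work(n):
    # Assumed serial work: proportional to problem size n.
    return 1.0 * n

def total_overhead(n, p):
    # Placeholder overhead model: communication grows like p*log2(p),
    # processor idleness like sqrt(n*p). Purely illustrative.
    return 5.0 * p * np.log2(p) + 0.05 * np.sqrt(n * p)

def efficiency(n, p):
    w = serial_work(n)
    return w / (w + total_overhead(n, p))

def isoefficiency_size(p, target_e=0.8, n_grid=np.logspace(3, 9, 400)):
    # Smallest problem size on the grid that sustains the target efficiency at p processors.
    ok = n_grid[efficiency(n_grid, p) >= target_e]
    return ok[0] if ok.size else None

for p in (64, 256, 1024):
    print(p, "processors -> problem size for 80% efficiency:", isoefficiency_size(p))
```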
Collective Modes in Light Nuclei from First Principles
Results for ab initio no-core shell model calculations in a symmetry-adapted SU(3)-based coupling scheme demonstrate that collective modes in light nuclei emerge from first principles. The low-lying states of 6Li, 8Be, and 6He are shown to exhibit orderly patterns that favor spatial configurations with strong quadrupole deformation and complementary low intrinsic spin values, a picture that is consistent with the nuclear symplectic model. The results also suggest a pragmatic path forward to accommodate deformation-driven collective features in ab initio analyses when they dominate the nuclear landscape.
Comment: 5 pages, 3 figures; accepted to Physical Review Letters.
Efficacy of the SU(3) scheme for ab initio large-scale calculations beyond the lightest nuclei
We report on the computational characteristics of ab initio nuclear structure calculations in a symmetry-adapted no-core shell model (SA-NCSM) framework. We examine the computational complexity of the current implementation of the SA-NCSM approach, dubbed LSU3shell, by analyzing ab initio results for 6Li and 12C in large harmonic oscillator model spaces and SU(3)-selected subspaces. We demonstrate LSU3shell's strong-scaling properties achieved with highly parallel methods for computing the many-body matrix elements. Results compare favorably with complete model space calculations, and significant memory savings are achieved in physically important applications. In particular, a well-chosen symmetry-adapted basis affords memory savings in calculations of states with a fixed total angular momentum in large model spaces while exactly preserving translational invariance.
Comment: 11 pages, 8 figures.
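For readers unfamiliar with the strong-scaling language used here, the short sketch below computes speedup and parallel efficiency from wall-clock times at a fixed problem size; the timing numbers are invented placeholders, not LSU3shell measurements.

```python
# Illustrative strong-scaling analysis on invented timings (not LSU3shell data).
procs = [128, 256, 512, 1024]
wall_time = [812.0, 420.5, 224.1, 127.9]  # seconds, hypothetical, fixed problem size

base_p, base_t = procs[0], wall_time[0]
for p, t in zip(procs, wall_time):
    speedup = base_t / t                 # relative to the smallest run
    efficiency = speedup * base_p / p    # ideal speedup would be p / base_p
    print(f"p={p:5d}  speedup={speedup:5.2f}  efficiency={efficiency:5.1%}")
```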
Ab initio nuclear structure - the large sparse matrix eigenvalue problem
The structure and reactions of light nuclei represent fundamental and formidable challenges for microscopic theory based on realistic strong interaction potentials. Several ab initio methods have now emerged that provide nearly exact solutions for some nuclear properties. The ab initio no core shell model (NCSM) and the no core full configuration (NCFC) method frame this quantum many-particle problem as a large sparse matrix eigenvalue problem: one evaluates the Hamiltonian matrix in a basis space consisting of many-fermion Slater determinants and then solves for a set of the lowest eigenvalues and their associated eigenvectors. The resulting eigenvectors are employed to evaluate a set of experimental quantities to test the underlying potential. For fundamental problems of interest, the matrix dimension is extremely large and the number of nonzero matrix elements may saturate available storage on present-day leadership class facilities. We survey recent results and advances in solving this large sparse matrix eigenvalue problem. We also outline the challenges that lie ahead for achieving further breakthroughs in fundamental nuclear theory using these ab initio approaches.
Comment: SciDAC2009 invited paper; 10 pages and 10 figures.
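To illustrate the kind of computation described (lowest eigenpairs of a large sparse symmetric Hamiltonian), here is a minimal sketch using SciPy's Lanczos-based eigsh on a small random sparse symmetric matrix. The matrix is a toy stand-in; the real NCSM/NCFC matrices are vastly larger and are handled with specialized distributed-memory solvers.

```python
# Minimal sketch: lowest eigenpairs of a sparse symmetric "Hamiltonian"
# (toy stand-in for the NCSM/NCFC matrices discussed above).
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

rng = np.random.default_rng(0)
n = 20000
# Random sparse symmetric matrix with a dominant diagonal, as a stand-in Hamiltonian.
offdiag = sp.random(n, n, density=1e-4, random_state=42, format="csr")
H = (offdiag + offdiag.T) * 0.5 + sp.diags(rng.normal(loc=10.0, scale=2.0, size=n))

# Lanczos-type solve for the 5 lowest (smallest-algebraic) eigenvalues and eigenvectors.
vals, vecs = eigsh(H, k=5, which="SA")
print("lowest eigenvalues:", np.sort(vals))
```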
VarSight: prioritizing clinically reported variants with binary classification algorithms.
Background: When applying genomic medicine to a rare disease patient, the primary goal is to identify one or more genomic variants that may explain the patient's phenotypes. Typically, this is done through annotation, filtering, and then prioritization of variants for manual curation. However, prioritization of variants in rare disease patients remains a challenging task due to the high degree of variability in phenotype presentation and molecular source of disease. Thus, methods that can identify and/or prioritize variants to be clinically reported in the presence of such variability are of critical importance.
Methods: We tested the application of classification algorithms that ingest variant annotations along with phenotype information for predicting whether a variant will ultimately be clinically reported and returned to a patient. To test the classifiers, we performed a retrospective study on variants that were clinically reported to 237 patients in the Undiagnosed Diseases Network.
Results: We treated the classifiers as variant prioritization systems and compared them to four variant prioritization algorithms and two single-measure controls. We showed that the trained classifiers outperformed all other tested methods, with the best classifiers ranking 72% of all reported variants and 94% of reported pathogenic variants in the top 20.
Conclusions: We demonstrated how freely available binary classification algorithms can be used to prioritize variants even in the presence of real-world variability. Furthermore, these classifiers outperformed all other tested methods, suggesting that they may be well suited for working with real rare disease patient datasets.
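As a rough illustration of using an off-the-shelf binary classifier as a variant ranker, the sketch below trains a random forest on made-up annotation features and then sorts one patient's candidate variants by predicted probability of being reported. The features and data are hypothetical, not VarSight's actual feature set or pipeline.

```python
# Hypothetical sketch: a binary classifier used as a variant prioritizer.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

# Made-up training data: each row is a variant with a few illustrative annotations
# (e.g., allele frequency, deleteriousness score, phenotype-match score).
X_train = rng.random((5000, 3))
y_train = (0.7 * X_train[:, 1] + 0.3 * X_train[:, 2]
           + 0.1 * rng.standard_normal(5000) > 0.6).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Rank one patient's candidate variants by the predicted probability of being reported.
X_patient = rng.random((50, 3))
scores = clf.predict_proba(X_patient)[:, 1]
ranking = np.argsort(-scores)
print("top 5 candidate variant indices:", ranking[:5])
```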
Down-regulation of ABCC11 protein (MRP8) in human breast cancer
The aim of this article is to investigate the expression of ABCC11 (MRP8) protein in normal breast tissue and to examine the difference in ABCC11 mRNA and protein expression between normal breast and breast cancer tissues, taking into account ABCC11 genotype (a functional SNP, rs17822931) and estrogen receptor (ER) status.