
### Parallel Implementation of the PHOENIX Generalized Stellar Atmosphere Program. II: Wavelength Parallelization

We describe an important addition to the parallel implementation of our
generalized NLTE stellar atmosphere and radiative transfer computer program
PHOENIX. In a previous paper in this series we described data and task parallel
algorithms we have developed for radiative transfer, spectral line opacity, and
NLTE opacity and rate calculations. These algorithms divide the work spatially
or by spectral line, that is, they distribute the radial zones, individual
spectral lines, or characteristic rays among different processors, and in
addition employ task parallelism for logically independent functions (such as
atomic and molecular line opacities). For finite, monotonic velocity fields,
the radiative transfer equation is an initial value problem in wavelength, and
hence each wavelength point depends upon the previous one. However, for
sophisticated NLTE models of both static and moving atmospheres needed to
accurately describe, e.g., novae and supernovae, the number of wavelength
points is very large (200,000--300,000) and hence parallelization over
wavelength can lead both to considerable speedup in calculation time and the
ability to make use of the aggregate memory available on massively parallel
supercomputers. Here, we describe an implementation of a pipelined design for
the wavelength parallelization of PHOENIX, where the necessary data from the
processor working on a previous wavelength point is sent to the processor
working on the succeeding wavelength point as soon as it is known. Our
implementation uses a MIMD design based on a relatively small number of
standard MPI library calls and is fully portable between serial and parallel
computers.

Comment: AAS-TeX, 15 pages, full text with figures available at
ftp://calvin.physast.uga.edu/pub/preprints/Wavelength-Parallel.ps.gz. ApJ, in
press.
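The pipelined wavelength parallelization this abstract describes can be illustrated with a small sketch. This is not the PHOENIX implementation: it is a minimal Python toy that uses threads and blocking queues in place of MPI point-to-point messages, and `solve_wavelength_point` is a hypothetical stand-in for the actual radiative transfer solve at one wavelength point.

```python
import threading
import queue

# Toy "transfer" step: the solution at wavelength point k depends on the
# result at point k-1 (the initial value problem in wavelength).
def solve_wavelength_point(k, previous):
    return previous + k  # stand-in for the real per-wavelength solve

def pipeline(num_points, num_workers):
    # One queue per hand-off, playing the role of an MPI send/recv pair
    # between the processor on point k and the processor on point k+1.
    links = [queue.Queue(maxsize=1) for _ in range(num_points + 1)]
    links[0].put(0.0)  # boundary condition for the first wavelength point
    results = [None] * num_points

    def worker(rank):
        # Worker `rank` handles wavelength points rank, rank+num_workers, ...
        for k in range(rank, num_points, num_workers):
            prev = links[k].get()          # wait for the preceding point
            results[k] = solve_wavelength_point(k, prev)
            links[k + 1].put(results[k])   # forward as soon as it is known

    threads = [threading.Thread(target=worker, args=(r,))
               for r in range(num_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

The key property mirrored here is that each stage forwards its result the moment it is computed, so workers on later wavelength points are busy as soon as their predecessor finishes, rather than after the whole sweep.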

### A 3D radiative transfer framework: IV. spherical & cylindrical coordinate systems

We extend our framework for 3D radiative transfer calculations with a
non-local operator splitting method along (full) characteristics to spherical
and cylindrical coordinate systems. These coordinate systems are better suited
to a number of physical problems than Cartesian coordinates. The scattering
problem for line transfer is solved by means of an operator splitting (OS)
technique. The formal solution is based on a full characteristics method. The
approximate $\Lambda$ operator is constructed considering nearest neighbors
exactly. The code is parallelized over both wavelength and solid angle using
the MPI library. We present the results of several test cases with different
values of the thermalization parameter for the different coordinate systems.
The results are directly compared to 1D plane parallel tests. The 3D results
agree very well with the well-tested 1D calculations.

Comment: A&A, in press.
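The operator splitting (OS) scheme with a thermalization parameter that these test cases vary can be sketched as a toy accelerated Lambda iteration. This is an illustrative Python example, not the framework's code: the hand-made `LAM` matrix stands in for a formal solver, and the diagonal of `LAM` plays the role of the approximate $\Lambda$ operator built from nearest neighbors.

```python
EPS = 1e-3            # thermalization parameter (epsilon)
N = 5                 # number of spatial zones in the toy problem
B = [1.0] * N         # Planck function, constant here for simplicity

# Hand-made "Lambda" matrix: diagonal plus nearest neighbours, row sums < 1,
# standing in for the formal solution of the transfer equation.
LAM = [[0.4 if i == j else (0.2 if abs(i - j) == 1 else 0.0)
        for j in range(N)] for i in range(N)]

def formal_solution(S):
    """J = Lambda[S]: mean intensity from the current source function."""
    return [sum(LAM[i][j] * S[j] for j in range(N)) for i in range(N)]

def ali_solve(iterations=200):
    # Solve S = eps*B + (1 - eps)*Lambda[S] by operator splitting:
    # precondition the update with the (diagonal) approximate operator.
    S = list(B)  # start from LTE, S = B
    for _ in range(iterations):
        J = formal_solution(S)
        for i in range(N):
            S_fs = EPS * B[i] + (1.0 - EPS) * J[i]
            S[i] += (S_fs - S[i]) / (1.0 - (1.0 - EPS) * LAM[i][i])
    return S
```

Without the diagonal preconditioning (plain Lambda iteration), convergence stalls for small thermalization parameters; dividing by $1 - (1-\epsilon)\Lambda^*_{ii}$ is what makes the split iteration converge quickly even for $\epsilon \ll 1$.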

### Parallel Implementation of the PHOENIX Generalized Stellar Atmosphere Program

We describe the parallel implementation of our generalized stellar atmosphere
and NLTE radiative transfer computer program PHOENIX. We discuss the parallel
algorithms we have developed for radiative transfer, spectral line opacity, and
NLTE opacity and rate calculations. Our implementation uses a MIMD design based
on a relatively small number of MPI library calls. We report the results of
test calculations on a number of different parallel computers and discuss the
results of scalability tests.

Comment: To appear in ApJ, 1997, vol 483. LaTeX, 34 pages, 3 Figures, uses
AASTeX macros and styles natbib.sty, and psfig.st

### A 3D radiative transfer framework: XI. multi-level NLTE

Multi-level non-local thermodynamic equilibrium (NLTE) radiation transfer
calculations have become standard throughout the stellar atmospheres community
and are applied to all types of stars as well as dynamical systems such as
novae and supernovae. Even today, spherically symmetric 1D calculations with
full physics are computationally intensive. We show that full NLTE calculations
can be done with fully three-dimensional (3D) radiative transfer. With modern
computational techniques and current massively parallel computational
resources, the multi-level NLTE problem, coupled to the radiative transfer
scattering problem, can be solved in full detail without sacrificing the
micro-physics description. We extend the use of a rate operator developed
to solve the coupled NLTE problem in spherically symmetric 1D systems. In order
to spread memory among processors we have implemented the NLTE/3D module with a
hierarchical domain decomposition method that distributes the NLTE levels,
radiative rates, and rate operator data over a group of processes so that each
process only holds the data for a fraction of the voxels. Each process in a
group holds all the relevant data to participate in the solution of the 3DRT
problem so that the 3DRT solution is parallelized within a domain decomposition
group. We solve a spherically symmetric system in 3D spherical coordinates in
order to directly compare our well-tested 1D code to the 3D case. We compare
three levels of tests: a) a simple H+He test calculation, b) H+He+CNO+Mg, c)
H+He+Fe. The last test is computationally large and shows that realistic
astrophysical problems are solvable now, but they do require significant
computational resources. With presently available computational resources it is
possible to solve the full 3D multi-level problem with the same detailed
micro-physics as included in 1D modeling.

Comment: 20 pages, 14 figures, A&A, in press.
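The hierarchical domain decomposition this abstract describes, where each process in a group holds the NLTE data for only a fraction of the voxels, comes down to two pieces of bookkeeping: splitting the world ranks into groups, and splitting the voxels within a group. A minimal sketch (illustrative function names, not the NLTE/3D module's API):

```python
def group_of(world_rank, group_size):
    # Hierarchical layout: world ranks are split into domain
    # decomposition groups of `group_size` processes each.
    return divmod(world_rank, group_size)  # (group index, rank in group)

def voxel_slice(rank_in_group, group_size, n_voxels):
    # Block decomposition: each process in a group owns a contiguous
    # range of voxels; any remainder goes to the low-numbered ranks.
    base, extra = divmod(n_voxels, group_size)
    start = rank_in_group * base + min(rank_in_group, extra)
    stop = start + base + (1 if rank_in_group < extra else 0)
    return start, stop
```

Each process would then allocate level populations, radiative rates, and rate operator data only for its `[start, stop)` voxel range, while still participating in the group's full 3DRT solution.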

### A 3D radiative transfer framework: III. periodic boundary conditions

We present a general method to solve radiative transfer problems including
scattering in the continuum as well as in lines in 3D configurations with
periodic boundary conditions. The scattering problem for line transfer is
solved by means of an operator splitting (OS) technique. The formal solution
is based
on a full characteristics method. The approximate $\Lambda$ operator is
constructed considering nearest neighbors exactly. The code is parallelized
over both wavelength and solid angle using the MPI library. We present the
results of several test cases with different values of the thermalization
parameter and two choices for the temperature structure. The results are
directly compared to 1D plane parallel tests. The 3D results agree very well
with the well-tested 1D calculations.

Comment: A&A, in press, visualization figure omitted due to size, available at
ftp://phoenix.hs.uni-hamburg.de/preprints/3DRT_paper3.pd

### A 3D radiative transfer framework: VII. Arbitrary velocity fields in the Eulerian frame

A solution of the radiative-transfer problem in 3D with arbitrary velocity
fields in the Eulerian frame is presented. The method is implemented in our 3D
radiative transfer framework and used in the PHOENIX/3D code. It is tested by
comparison to our well-tested 1D co-moving frame radiative transfer code,
where the treatment of a monotonic velocity field is implemented in the
Lagrangian frame. The Eulerian formulation does not need much additional memory
and is usable on state-of-the-art computers; even large-scale applications
with thousands of wavelength points are feasible.
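In the Eulerian frame, the velocity field enters the transfer problem through the Doppler shift of line wavelengths along each ray, rather than through a frame transformation of the equation itself. A minimal, first-order sketch of that ingredient (illustrative only; the actual treatment of arbitrary velocity fields is more involved):

```python
C_KMS = 299792.458  # speed of light in km/s

def doppler_shift(lambda_rest, velocity, ray_dir):
    # First-order Doppler shift of a rest wavelength as seen along a ray
    # in the Eulerian (fixed) frame: lambda_obs = lambda_rest * (1 + v.n/c),
    # with `velocity` in km/s and `ray_dir` a unit direction vector.
    v_los = sum(v * n for v, n in zip(velocity, ray_dir))
    return lambda_rest * (1.0 + v_los / C_KMS)
```

Because each voxel can have an arbitrary velocity vector, the line-of-sight projection `v.n` differs for every ray direction, which is why the solver must track shifts per voxel and per solid angle.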

### Numerical Solution of the Expanding Stellar Atmosphere Problem

In this paper we discuss numerical methods and algorithms for the solution of
NLTE stellar atmosphere problems involving expanding atmospheres, e.g., those
found in novae, supernovae, and stellar winds. We show how a scheme of nested
iterations can be used to reduce the high dimension of the problem to a number
of problems with smaller dimensions. As examples of these sub-problems, we
discuss the numerical solution of the radiative transfer equation for
relativistically expanding media with spherical symmetry, the solution of the
multi-level non-LTE statistical equilibrium problem for extremely large model
atoms, and our temperature correction procedure. Although modern iteration
schemes are very efficient, parallel algorithms are essential in making
large-scale calculations feasible; we therefore discuss some parallelization
schemes that we have developed.

Comment: JCAM, in press. 28 pages, also available at
ftp://calvin.physast.uga.edu:/pub/preprints/CompAstro.ps.g
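The nested-iteration idea, reducing one high-dimensional problem to a hierarchy of smaller fixed-point problems, can be sketched abstractly. This toy Python example is not the paper's scheme: the inner solve stands in for the radiative transfer/statistical equilibrium sub-problems, and the outer loop stands in for the temperature correction, each iterated only against the other's converged result.

```python
def solve_inner(T, tol=1e-10):
    # Stand-in for an inner sub-problem (e.g. the transfer/NLTE solve at
    # fixed temperature): find S with S = 0.5 * (S + T), i.e. S -> T,
    # by simple fixed-point iteration.
    S = 0.0
    while abs(0.5 * (S + T) - S) > tol:
        S = 0.5 * (S + T)
    return S

def solve_outer(target_flux=2.0, tol=1e-8):
    # Stand-in for the outer temperature correction: adjust T until the
    # converged inner solution matches a target, never iterating the full
    # coupled system at once.
    T = 1.0
    while abs(solve_inner(T) - target_flux) > tol:
        T += 0.5 * (target_flux - solve_inner(T))
    return T
```

The point of the nesting is that each level only ever solves a problem of much smaller dimension than the fully coupled system, at the cost of repeating the inner solves as the outer variables are corrected.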
