Improving Performance of Iterative Methods by Lossy Checkpointing
Iterative methods are commonly used approaches to solve large, sparse linear
systems, which are fundamental operations for many modern scientific
simulations. When large-scale iterative methods run with a large number of
ranks in parallel, they must checkpoint their dynamic variables periodically
to guard against unavoidable fail-stop errors, requiring fast I/O systems and
large storage space. Accordingly, significantly reducing the
checkpointing overhead is critical to improving the overall performance of
iterative methods. Our contribution is fourfold. (1) We propose a novel lossy
checkpointing scheme that can significantly improve the checkpointing
performance of iterative methods by leveraging lossy compressors. (2) We
formulate a lossy checkpointing performance model and theoretically derive an
upper bound on the number of extra iterations caused by the distortion of data
in lossy checkpoints, in order to guarantee a performance improvement under
the lossy checkpointing scheme. (3) We analyze the impact of lossy
checkpointing (i.e., the extra number of iterations caused by restarting from
lossy checkpoint files) for multiple types of iterative methods. (4) We evaluate the lossy
checkpointing scheme with optimal checkpointing intervals on a high-performance
computing environment with 2,048 cores, using the well-known scientific
computation package PETSc and a state-of-the-art checkpoint/restart toolkit.
Experiments show that our optimized lossy checkpointing scheme can
significantly reduce the fault tolerance overhead for iterative methods by
23%-70% compared with traditional checkpointing and 20%-58% compared with
lossless-compressed checkpointing, in the presence of system failures.
Comment: 14 pages, 10 figures, HPDC'1
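The core of the scheme can be sketched with a uniform scalar quantizer standing in for a real error-bounded lossy compressor; the function names and the quantizer itself are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def lossy_checkpoint(x, eps):
    """Quantize the iterate x with absolute error bound eps.
    A uniform scalar quantizer stands in for an error-bounded lossy
    compressor; the integer codes it produces compress well losslessly."""
    return np.round(x / (2 * eps)).astype(np.int64)

def restore(q, eps):
    """Reconstruct the iterate; every entry is within eps of the original,
    so the solver restarts from a bounded perturbation of its state."""
    return q * (2 * eps)

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)          # the solver's dynamic variables
x_hat = restore(lossy_checkpoint(x, 1e-3), 1e-3)
assert np.max(np.abs(x - x_hat)) <= 1e-3
```

Restarting from the perturbed x_hat instead of x is what costs the extra iterations that the performance model above bounds.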
Gaia in-orbit realignment. Overview and data analysis
The ESA Gaia spacecraft has two Shack-Hartmann wavefront sensors (WFS) on its
focal plane. They are required to refocus the telescope in-orbit due to launch
settings and gravity release. They require bright stars to provide patterns
with a good signal-to-noise ratio. The achievable centroiding precision poses a limit on the
minimum stellar brightness required and, ultimately, on the observing time
required to reconstruct the wavefront. Maximum likelihood algorithms have been
developed at the Gaia SOC. They provide optimum performance according to the
Cramér-Rao lower bound. Detailed wavefront reconstruction procedures, dealing
with partial telescope pupil sampling and partial microlens illumination have
also been developed. In this work, a brief overview of the WFS and an in-depth
description of the centroiding and wavefront reconstruction algorithms are
provided.
Comment: 14 pages, 6 figures, 2 tables, proceedings of SPIE Astronomical
Telescopes + Instrumentation 2012 Conference 8442 (1-6 July 2012
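For reference, the simplest centroiding baseline, a moment (center-of-mass) estimate on a synthetic spot, can be sketched as follows; the maximum-likelihood estimators actually used at the Gaia SOC are more elaborate and approach the Cramér-Rao bound:

```python
import numpy as np

def centroid(img):
    """Center-of-mass centroid of a background-subtracted spot image
    (a simple baseline, not the maximum-likelihood estimator)."""
    ys, xs = np.indices(img.shape)
    total = img.sum()
    return (ys * img).sum() / total, (xs * img).sum() / total

# Synthetic Gaussian spot centered at (12.3, 8.7) with sigma = 2 pixels.
ys, xs = np.indices((25, 25))
img = np.exp(-((ys - 12.3) ** 2 + (xs - 8.7) ** 2) / (2 * 2.0 ** 2))
cy, cx = centroid(img)
assert abs(cy - 12.3) < 0.05 and abs(cx - 8.7) < 0.05
```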
The Global sphere reconstruction (GSR) - Demonstrating an independent implementation of the astrometric core solution for Gaia
Context. The Gaia ESA mission will estimate the astrometric and physical data
of more than one billion objects, providing the largest and most precise
catalog of absolute astrometry in the history of Astronomy. The core of this
process, the so-called global sphere reconstruction, is represented by the
reduction of a subset of these objects which will be used to define the
celestial reference frame. As the Hipparcos mission showed, and as is inherent
to all kinds of absolute measurements, possible errors in the data reduction
can hardly be identified from the catalog, thus potentially introducing
systematic errors in all derived work. Aims. Following up on the lessons
learned from Hipparcos, our aim is thus to develop an independent sphere
reconstruction method that helps guarantee the quality of the
astrometric results without fully reproducing the main processing chain.
Methods. Indeed, given the unfeasibility of a complete replica of the data
reduction pipeline, an astrometric verification unit (AVU) was instituted by
the Gaia Data Processing and Analysis Consortium (DPAC). One of its jobs is to
implement and operate an independent global sphere reconstruction (GSR),
parallel to the baseline one (AGIS, namely the Astrometric Global Iterative
Solution) but limited to the primary stars for validation purposes, to
compare the two results, and to report on any significant differences. Results.
Tests performed on simulated data show that GSR is able to reproduce at the
sub-μas level the results of the AGIS demonstration run presented in
Lindegren et al. (2012). Conclusions. Further development is ongoing to improve
on the treatment of real data and on the software modules that compare the AGIS
and GSR solutions to identify possible discrepancies above the tolerance level
set by the accuracy of the Gaia catalog.
Comment: Accepted for publication in Astronomy & Astrophysic
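The AGIS/GSR comparison step can be caricatured as flagging sources whose parameters differ beyond a tolerance; the function name, units, and tolerance below are illustrative assumptions, not the AVU module's actual interface:

```python
import numpy as np

def flag_discrepancies(agis, gsr, tol_uas):
    """Return indices of sources whose AGIS and GSR astrometric parameters
    differ by more than tol_uas micro-arcseconds in any component.
    Hypothetical interface: parameters are assumed to be in arcseconds."""
    diff_uas = np.abs(agis - gsr) * 1e6
    return np.where((diff_uas > tol_uas).any(axis=1))[0]

agis = np.array([[1.0, 2.0], [3.0, 4.0]])        # two sources, two parameters
gsr = np.array([[1.0, 2.0], [3.0, 4.0 + 5e-6]])  # 5 uas offset in source 1
idx = flag_discrepancies(agis, gsr, tol_uas=1.0)
assert list(idx) == [1]
```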
Asteroid Models from Multiple Data Sources
In the past decade, hundreds of asteroid shape models have been derived using
the lightcurve inversion method. At the same time, a new framework of 3-D shape
modeling based on the combined analysis of widely different data sources such
as optical lightcurves, disk-resolved images, stellar occultation timings,
mid-infrared thermal radiometry, optical interferometry, and radar
delay-Doppler data, has been developed. This multi-data approach allows the
determination of most of the physical and surface properties of asteroids in a
single, coherent inversion, with spectacular results. We review the main
results of asteroid lightcurve inversion and also recent advances in multi-data
modeling. We show that models based on remote sensing data were confirmed by
spacecraft encounters with asteroids, and we discuss how the growing number of
highly detailed 3-D models will help to refine our general knowledge of the
asteroid population. The physical and surface properties of asteroids, i.e.,
their spin, 3-D shape, density, thermal inertia, and surface roughness, are among
the least known of all asteroid properties. Apart from the albedo and diameter,
we have access to the whole picture for only a few hundred asteroids. These
quantities are nevertheless very important to understand as they affect the
non-gravitational Yarkovsky effect responsible for meteorite delivery to Earth,
or the bulk composition and internal structure of asteroids.
Comment: chapter that will appear in a Space Science Series book Asteroids I
The astrometric core solution for the Gaia mission. Overview of models, algorithms and software implementation
The Gaia satellite will observe about one billion stars and other point-like
sources. The astrometric core solution will determine the astrometric
parameters (position, parallax, and proper motion) for a subset of these
sources, using a global solution approach which must also include a large
number of parameters for the satellite attitude and optical instrument. The
accurate and efficient implementation of this solution is an extremely
demanding task, but crucial for the outcome of the mission. We provide a
comprehensive overview of the mathematical and physical models applicable to
this solution, as well as its numerical and algorithmic framework. The
astrometric core solution is a simultaneous least-squares estimation of about
half a billion parameters, including the astrometric parameters for some 100
million well-behaved so-called primary sources. The global nature of the
solution requires an iterative approach, which can be broken down into a small
number of distinct processing blocks (source, attitude, calibration and global
updating) and auxiliary processes (including the frame rotator and selection of
primary sources). We describe each of these processes in some detail, formulate
the underlying models, from which the observation equations are derived, and
outline the adopted numerical solution methods with due consideration of
robustness and the structure of the resulting system of equations. Appendices
provide brief introductions to some important mathematical tools (quaternions
and B-splines for the attitude representation, and a modified Cholesky
algorithm for positive semidefinite problems) and discuss some complications
expected in the real mission data.
Comment: 48 pages, 19 figures. Accepted for publication in Astronomy &
Astrophysic
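The block-iterative structure (update one parameter block by least squares while holding the others fixed, then cycle) can be illustrated on a toy two-block problem; the real solution involves some half a billion parameters and four blocks plus auxiliary processes:

```python
import numpy as np

# Toy observation model: A_s s + A_a a = b, with a "source" block s and an
# "attitude" block a, solved by alternating least-squares block updates.
rng = np.random.default_rng(1)
A_s = rng.standard_normal((200, 5))
A_a = rng.standard_normal((200, 3))
b = A_s @ np.ones(5) + A_a @ np.full(3, 2.0)   # true s = 1, true a = 2

s = np.zeros(5)
a = np.zeros(3)
for _ in range(50):
    s = np.linalg.lstsq(A_s, b - A_a @ a, rcond=None)[0]  # source update
    a = np.linalg.lstsq(A_a, b - A_s @ s, rcond=None)[0]  # attitude update

assert np.allclose(s, 1.0, atol=1e-6) and np.allclose(a, 2.0, atol=1e-6)
```

The near-orthogonality of the two random blocks makes this toy iteration converge quickly; the real system's block coupling is what makes the iterative scheme and its convergence analysis nontrivial.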
The MPI + CUDA Gaia AVU-GSR Parallel Solver Toward Next-generation Exascale Infrastructures
We ported to the GPU with CUDA the Astrometric Verification Unit-Global
Sphere Reconstruction (AVU-GSR) Parallel Solver developed for the ESA Gaia
mission, by optimizing a previous OpenACC porting of this application. The code
aims to find, with a [10,100] μas precision, the astrometric parameters of
stars, the attitude and instrumental settings of the Gaia
satellite, and the global parameter of the parametrized Post-Newtonian
formalism, by solving a system of linear equations, Ax = b, with the
LSQR iterative algorithm. The coefficient matrix A of the final Gaia dataset
is large and sparse, reaching a size of 10-100 TB, typical of Big Data
analysis, which requires efficient parallelization to obtain scientific
results on reasonable timescales. The speedup of the CUDA code over the
original AVU-GSR solver, parallelized on the CPU with MPI+OpenMP, increases
with the system size and the number of resources, reaching a maximum of 14x,
and >9x over the OpenACC
application. This result is obtained by comparing the two codes on the CINECA
cluster Marconi100, with 4 V100 GPUs per node. After verifying the agreement
between the solutions of a set of systems with different sizes computed with
the CUDA and the OpenMP codes and that the solutions showed the required
precision, the CUDA code was put into production on Marconi100, which is
essential for an optimal AVU-GSR pipeline and the successive Gaia Data
Releases. This analysis
represents a first step toward understanding the (pre-)Exascale behavior of a
class of applications that share the structure of this code. In the coming months,
we plan to run this code on the pre-Exascale platform Leonardo of CINECA, with
4 next-generation A200 GPUs per node, toward a porting on this infrastructure,
where we expect to obtain even higher performance.
Comment: 17 pages, 4 figures, 1 table, published on 1st August 2023 in
Publications of the Astronomical Society of the Pacific, 135, 07450
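LSQR itself is available in SciPy; a toy, single-node illustration of solving a sparse system Ax = b with it (nothing like the production MPI+CUDA solver) might look like:

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import lsqr

# Small sparse system A x = b with full column rank and a known solution.
rng = np.random.default_rng(2)
A = sparse.vstack([sparse.eye(50),
                   sparse.random(450, 50, density=0.05, random_state=2)]).tocsr()
x_true = rng.standard_normal(50)
b = A @ x_true

# LSQR iteratively minimizes ||A x - b||_2, the same algorithm the
# AVU-GSR solver parallelizes at scale.
x = lsqr(A, b, atol=1e-12, btol=1e-12)[0]
assert np.allclose(x, x_true, atol=1e-6)
```

The identity block stacked on top guarantees full column rank, so LSQR recovers the unique zero-residual solution.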
ELLC - a fast, flexible light curve model for detached eclipsing binary stars and transiting exoplanets
Very high quality light curves are now available for thousands of detached
eclipsing binary stars and transiting exoplanet systems as a result of surveys
for transiting exoplanets and other large-scale photometric surveys. I have
developed a binary star model (ELLC) that can be used to analyse the light
curves of detached eclipsing binary stars and transiting exoplanet systems that
is fast and accurate, and that can include the effects of star spots, Doppler
boosting and light-travel time within binaries with eccentric orbits. The model
represents the stars as triaxial ellipsoids. The apparent flux from the binary
is calculated using Gauss-Legendre integration over the ellipses that are the
projection of these ellipsoids on the sky. The model can also be used to
calculate the flux-weighted radial velocity of the stars during an eclipse
(Rossiter-McLaughlin effect). The main features of the model have been tested
by comparison to observed data and other light curve models. The model is found
to be accurate enough to analyse the very high quality photometry that is now
available from space-based instruments, flexible enough to model a wide range
of eclipsing binary stars and extrasolar planetary systems, and fast enough to
enable the use of modern Monte Carlo methods for data analysis and model
testing.
Comment: Accepted for publication in A&A. Source code available from
pypi.python.org/pypi/ellc. Definition of "third-light" changed from version
ellc-1.0.0 to ellc-1.1.0 - this preprint describes the definition used in the
later versio
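The Gauss-Legendre integration over projected ellipses can be illustrated on the simplest case, a uniform-brightness ellipse whose area is known in closed form; ELLC's actual integrand includes limb darkening, spots, and mutual occultation:

```python
import numpy as np

def ellipse_area_gl(a, b, n=16):
    """Area of an ellipse with semi-axes a, b by Gauss-Legendre quadrature,
    after the substitution x = a*sin(t) that makes the integrand smooth.
    A minimal stand-in for ELLC's integration over projected ellipses."""
    t, w = np.polynomial.legendre.leggauss(n)
    theta = t * (np.pi / 2)                     # map [-1, 1] -> [-pi/2, pi/2]
    return (w * 2 * a * b * np.cos(theta) ** 2).sum() * (np.pi / 2)

assert abs(ellipse_area_gl(1.5, 0.5) - np.pi * 1.5 * 0.5) < 1e-10
```

Because the substituted integrand is smooth, a handful of Gauss-Legendre nodes already reach near machine precision, which is what makes the model fast enough for Monte Carlo analysis.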