A short introduction to the aims and status of modern C++
C++20 is here. How does it reflect the aims and ideals of C++? What are the major
features of C++20? What is in the works for future standards? I will - necessarily briefly
- touch upon type-safety, resource-safety, type deduction, modularity, and concurrency.
After this introduction, the second half of the presentation will be a question and answer
session.
XLOOPS -- A Program Package calculating One- and Two-Loop Feynman Diagrams
The aim of XLOOPS is to calculate one-particle irreducible Feynman diagrams
with one or two closed loops for arbitrary processes in the Standard Model of
particles and related theories. Up to now, this aim has been realized for all
one-loop diagrams with at most three external lines and for two-loop diagrams
with two external lines.
Comment: 84 pages, Postscript; program package and this manual also available at http://wwwthep.physik.uni-mainz.de/~xloops/; minor changes and bug fixes are included
CMBEASY:: an Object Oriented Code for the Cosmic Microwave Background
We have ported the cmbfast package to the C++ programming language to produce
cmbeasy, an object oriented code for the cosmic microwave background. The code
is available at www.cmbeasy.org. We sketch the design of the new code,
emphasizing the benefits of object orientation in cosmology, which allow for
simple substitution of different cosmological models and gauges. Both gauge
invariant perturbations and quintessence support have been added to the code.
For ease of use, as well as for instruction, a graphical user interface is
available.
Comment: 7 pages, 5 figures, matches published version, code at http://www.cmbeasy.org
Improving performance and maintainability through refactoring in C++11
Abstraction-based programming has traditionally been seen as an approach that improves software quality at the cost of performance. In this paper, we explore the cost of abstraction by transforming the PARSEC benchmark fluidanimate application from low-level, hand-optimized C to a higher-level and more general C++ version that is a more direct representation of the algorithms. We eliminate global variables and constants, use vectors of a user-defined particle type rather than vectors of built-in types, and separate the concurrency model from the application model. The result is a C++ program that is smaller, less complex, and measurably faster than the original. The benchmark was chosen to be representative of many applications and our transformations are systematic and based on principles. Consequently, our techniques can be used to improve the performance, flexibility, and maintainability of a large class of programs. The handling of concurrency issues has been collected into a small new library, YAPL.
J. Daniel Garcia's work was partially supported by Fundación CajaMadrid through their grant programme for Madrid University Professors. Bjarne Stroustrup's work was partially supported by NSF grant #083319
Practical Sparse Matrices in C++ with Hybrid Storage and Template-Based Expression Optimisation
Despite the importance of sparse matrices in numerous fields of science,
software implementations remain difficult to use for non-expert users,
generally requiring the understanding of underlying details of the chosen
sparse matrix storage format. In addition, to achieve good performance, several
formats may need to be used in one program, requiring explicit selection and
conversion between the formats. This can be both tedious and error-prone,
especially for non-expert users. Motivated by these issues, we present a
user-friendly and open-source sparse matrix class for the C++ language, with a
high-level application programming interface deliberately similar to the widely
used MATLAB language. This facilitates prototyping directly in C++ and aids the
conversion of research code into production environments. The class internally
uses two main approaches to achieve efficient execution: (i) a hybrid storage
framework, which automatically and seamlessly switches between three underlying
storage formats (compressed sparse column, Red-Black tree, coordinate list)
depending on which format is best suited and/or available for specific
operations, and (ii) a template-based meta-programming framework to
automatically detect and optimise execution of common expression patterns.
Empirical evaluations on large sparse matrices with various densities of
non-zero elements demonstrate the advantages of the hybrid storage framework
and the expression optimisation mechanism.
Comment: extended and revised version of an earlier conference paper, arXiv:1805.0338
A Novel Generic Framework for Track Fitting in Complex Detector Systems
This paper presents a novel framework for track fitting which is usable in a
wide range of experiments, independent of the specific event topology, detector
setup, or magnetic field arrangement. This goal is achieved through a
completely modular design. Fitting algorithms are implemented as
interchangeable modules. At present, the framework contains a validated Kalman
filter. Track parameterizations and the routines required to extrapolate the
track parameters and their covariance matrices through the experiment are also
implemented as interchangeable modules. Different track parameterizations and
extrapolation routines can be used simultaneously for fitting of the same
physical track. Representations of detector hits are the third modular
ingredient to the framework. The hit dimensionality and orientation of planar
tracking detectors are not restricted. Tracking information from detectors
which do not measure the passage of particles in a fixed physical detector
plane, e.g. drift chambers or TPCs, is used without any simplifications. The
concept is implemented in a light-weight C++ library called GENFIT, which is
available as free software.
Development of an object-oriented finite element program: application to metal-forming and impact simulations
During the last 50 years, the development of better numerical methods and more powerful computers has been a major enterprise for the scientific community. Over the same period, the finite element method has become a widely used tool for researchers and engineers. Recent advances in computational software have made it possible to solve more physical and complex problems such as coupled problems, nonlinearities, and high-strain and high-strain-rate problems. In this field, an accurate analysis of the large-deformation inelastic problems occurring in metal-forming or impact simulations is extremely important as a consequence of the high amount of plastic flow. In this presentation, the object-oriented implementation, using the C++ language, of an explicit finite element code called DynELA is presented. Object-oriented programming (OOP) leads to better-structured codes for the finite element method and facilitates the development, maintainability, and expandability of such codes. The most significant advantage of OOP lies in the modeling of complex physical systems, such as deformation processing, where the overall complex problem is partitioned into individual sub-problems based on physical, mathematical, or geometric reasoning. We first focus on the advantages of OOP for the development of scientific programs. Specific aspects of OOP, such as the inheritance mechanism, operator overloading, and the use of template classes, are detailed. We then present the approach used for the development of our finite element code through a presentation of the kinematics, conservative and constitutive laws, and their respective implementation in C++. Finally, the efficiency and accuracy of our finite element program are investigated using a number of benchmark tests related to metal-forming and impact simulations.
Synthetic LISA: Simulating Time Delay Interferometry in a Model LISA
We report on three numerical experiments on the implementation of Time-Delay
Interferometry (TDI) for LISA, performed with Synthetic LISA, a C++/Python
package that we developed to simulate the LISA science process at the level of
scientific and technical requirements. Specifically, we study the laser-noise
residuals left by first-generation TDI when the LISA armlengths have a
realistic time dependence; we characterize the armlength-measurements
accuracies that are needed to have effective laser-noise cancellation in both
first- and second-generation TDI; and we estimate the quantization and
telemetry bitdepth needed for the phase measurements. Synthetic LISA generates
synthetic time series of the LISA fundamental noises, as filtered through all
the TDI observables; it also provides a streamlined module to compute the TDI
responses to gravitational waves according to a full model of TDI, including
the motion of the LISA array and the temporal and directional dependence of the
armlengths. We discuss the theoretical model that underlies the simulation, its
implementation, and its use in future investigations on system characterization
and data-analysis prototyping for LISA.
Comment: 18 pages, 14 EPS figures, REVTeX 4. Accepted PRD version. See http://www.vallis.org/syntheticlisa for information on the Synthetic LISA software package.
Spectral Line Removal in the LIGO Data Analysis System (LDAS)
High power in narrow frequency bands, spectral lines, are a feature of an
interferometric gravitational wave detector's output. Some lines are coherent
between interferometers, in particular, the 2 km and 4 km LIGO Hanford
instruments. This is of concern to data analysis techniques, such as the
stochastic background search, that use correlations between instruments to
detect gravitational radiation. Several techniques of `line removal' have been
proposed. Where a line is attributable to a measurable environmental
disturbance, a simple linear model may be fitted to predict, and subsequently
subtract away, that line. This technique has been implemented (as the command
oelslr) in the LIGO Data Analysis System (LDAS). We demonstrate its application
to LIGO S1 data.
Comment: 11 pages, 5 figures, to be published in CQG GWDAW02 proceedings
Logarithmic growth dynamics in software networks
In a recent paper, Krapivsky and Redner (Phys. Rev. E, 71 (2005) 036118)
proposed a new growing network model with new nodes being attached to a
randomly selected node, as well as to all ancestors of the target node. The model
leads to a sparse graph with an average degree growing logarithmically with the
system size. Here we present compelling evidence for software networks being the
result of a similar class of growing dynamics. Both the predicted pattern of
network growth and the stationary in- and out-degree distributions are
consistent with the model. Our results confirm the view of large-scale software
topology being generated through duplication-rewiring mechanisms. Implications
of these findings are outlined.
Comment: 7 pages, 3 figures, published in Europhysics Letters (2005)