Accelerating Cosmic Microwave Background map-making procedure through preconditioning
Estimation of the sky signal from sequences of time ordered data is one of
the key steps in Cosmic Microwave Background (CMB) data analysis, commonly
referred to as the map-making problem. Some of the most popular and general
methods proposed for this problem involve solving generalised least squares
(GLS) equations with non-diagonal noise weights given by a block-diagonal
matrix with Toeplitz blocks. In this work we study new map-making solvers
potentially suitable for applications to the largest anticipated data sets.
They are based on iterative conjugate gradient (CG) approaches enhanced with
novel, parallel, two-level preconditioners. We apply the proposed solvers to
examples of simulated non-polarised and polarised CMB observations, and a set
of idealised scanning strategies with sky coverage ranging from nearly a full
sky down to small sky patches. We discuss in detail their implementation for
massively parallel computational platforms and their performance for a broad
range of parameters characterising the simulated data sets. We find that our
best new solver can outperform carefully-optimised standard solvers used today
by a factor of as much as 5 in terms of the convergence rate and a factor of up
to in terms of the time to solution, and to do so without significantly
increasing the memory consumption and the volume of inter-processor
communication. The performance of the new algorithms is also found to be more
stable and robust, and less dependent on specific characteristics of the
analysed data set. We therefore conclude that the proposed approaches are well
suited to address successfully challenges posed by new and forthcoming CMB data
sets.
Comment: 19 pages. Final version submitted to A&A.
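The GLS map-making equation the abstract refers to can be illustrated with a toy preconditioned conjugate gradient (PCG) solver. This is only a minimal sketch, not the authors' two-level solver: the pointing matrix, noise weights, and problem sizes below are invented for illustration, and a simple Jacobi (diagonal) preconditioner stands in for the parallel two-level preconditioners studied in the paper. The inverse noise weights are taken tridiagonal Toeplitz, a drastically simplified stand-in for the Toeplitz-block structure mentioned in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy GLS map-making: data model d = P m + n, solve (P^T N^-1 P) m = P^T N^-1 d.
# Sizes and all matrices here are illustrative, not from the paper.
n_samples, n_pix = 1000, 50
P = np.zeros((n_samples, n_pix))
P[np.arange(n_samples), rng.integers(0, n_pix, n_samples)] = 1.0  # one pixel per sample
m_true = rng.normal(size=n_pix)

# Tridiagonal Toeplitz inverse-noise weights (SPD, correlates neighbouring samples).
Ninv = np.eye(n_samples) + 0.4 * (np.eye(n_samples, k=1) + np.eye(n_samples, k=-1))
d = P @ m_true + 0.01 * rng.normal(size=n_samples)

A = P.T @ Ninv @ P            # GLS normal matrix
b = P.T @ Ninv @ d
Minv = 1.0 / np.diag(A)       # Jacobi preconditioner (stand-in for two-level ones)

def pcg(A, b, Minv, tol=1e-10, maxit=500):
    """Preconditioned conjugate gradient for SPD systems A x = b."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = Minv * r
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = Minv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

m_hat = pcg(A, b, Minv)
```

A better preconditioner changes only the `Minv` application, which is why the paper can compare solvers while keeping the CG iteration fixed.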
Recent Advances in Graph Partitioning
We survey recent trends in practical algorithms for balanced graph
partitioning, together with applications and future research directions.
A survey on algorithmic aspects of modular decomposition
Modular decomposition is a technique that applies to, but is not restricted to, graphs. The notion of a module appears naturally in the proofs of many graph-theoretic theorems. Computing the modular decomposition tree is an important preprocessing step for solving a large number of combinatorial optimisation problems. Since the first polynomial-time algorithm in the early 1970s, the algorithmics of modular decomposition has developed considerably. This paper surveys the ideas and techniques that arose from this line of
research.
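The central notion here, a module, is easy to state: a vertex set M is a module if every vertex outside M is adjacent either to all of M or to none of it. A minimal illustration (the graph and helper below are invented for the example; real modular-decomposition algorithms compute the full tree, not single membership tests):

```python
def is_module(adj, M):
    """Return True if vertex set M is a module of the graph given by the
    adjacency dict `adj`: every vertex outside M must be adjacent to
    either all of M or none of M."""
    M = set(M)
    for v in adj:
        if v in M:
            continue
        hits = sum(1 for u in M if u in adj[v])
        if hits not in (0, len(M)):
            return False
    return True

# Example graph: path a-b-c with an extra vertex d adjacent to b and c.
adj = {
    "a": {"b"},
    "b": {"a", "c", "d"},
    "c": {"b", "d"},
    "d": {"b", "c"},
}
```

Here `{"c", "d"}` is a module (both a and b treat c and d identically), while `{"a", "c"}` is not (d sees c but not a).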
EPiK-a Workflow for Electron Tomography in Kepler.
Scientific workflows integrate data and computing interfaces as configurable, semi-automatic graphs to solve a scientific problem. Kepler is such a software system for designing, executing, reusing, evolving, archiving and sharing scientific workflows. Electron tomography (ET) enables high-resolution views of complex cellular structures, such as cytoskeletons, organelles, viruses and chromosomes. Imaging investigations produce large datasets: in electron tomography, a 16-fold image tilt series is about 65 gigabytes, with each projection image comprising 4096 by 4096 pixels. When serial sections or montage techniques are used for large-field ET, the dataset is even larger, and for higher-resolution images with multiple tilt series the data size may reach the terabyte range. The demands of mass data processing and complex algorithms require integrating diverse codes into flexible software structures. This paper describes a workflow for Electron Tomography Programs in Kepler (EPiK). The EPiK workflow embeds the tracking process of IMOD and realises the main reconstruction algorithms, including filtered backprojection (FBP) from TxBR and iterative reconstruction methods. We have tested the three-dimensional (3D) reconstruction process using EPiK on ET data. EPiK can be a potential toolkit for biology researchers, with the advantages of logical viewing, easy handling, convenient sharing and future extensibility.
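Filtered backprojection, the reconstruction algorithm named in the abstract, can be sketched in two dimensions: filter each parallel-beam projection with a ramp filter in Fourier space, then smear the filtered profiles back across the image. This toy NumPy version (nearest-neighbour sampling, tiny square phantom, sizes all invented) only illustrates the principle; it is unrelated to the actual TxBR or EPiK implementations, which handle real electron-tomography geometries.

```python
import numpy as np

def radon(img, thetas):
    """Toy parallel-beam projections by nearest-neighbour sampling."""
    n = img.shape[0]
    c = (n - 1) / 2.0
    s = np.arange(n) - c
    sino = np.zeros((len(thetas), n))
    for i, th in enumerate(thetas):
        ct, st = np.cos(th), np.sin(th)
        # Sample points along each ray: point = s*(ct, st) + t*(-st, ct).
        S, T = np.meshgrid(s, s, indexing="ij")
        x = np.rint(S * ct - T * st + c).astype(int)
        y = np.rint(S * st + T * ct + c).astype(int)
        ok = (x >= 0) & (x < n) & (y >= 0) & (y < n)
        vals = np.where(ok, img[np.clip(x, 0, n - 1), np.clip(y, 0, n - 1)], 0.0)
        sino[i] = vals.sum(axis=1)  # integrate along the ray direction t
    return sino

def fbp(sino, thetas):
    """Toy filtered backprojection with a ramp filter."""
    n = sino.shape[1]
    ramp = np.abs(np.fft.fftfreq(n))  # |frequency| ramp filter
    filtered = np.real(np.fft.ifft(np.fft.fft(sino, axis=1) * ramp, axis=1))
    c = (n - 1) / 2.0
    xs = np.arange(n) - c
    X, Y = np.meshgrid(xs, xs, indexing="ij")
    recon = np.zeros((n, n))
    for p, th in zip(filtered, thetas):
        # Each pixel reads the filtered projection at its own s-coordinate.
        s_idx = np.rint(X * np.cos(th) + Y * np.sin(th) + c).astype(int)
        inside = (s_idx >= 0) & (s_idx < n)
        recon += np.where(inside, p[np.clip(s_idx, 0, n - 1)], 0.0)
    return recon * np.pi / len(thetas)

n = 33
img = np.zeros((n, n))
img[12:20, 12:20] = 1.0  # square "phantom"
thetas = np.linspace(0, np.pi, 60, endpoint=False)
recon = fbp(radon(img, thetas), thetas)
```

The reconstruction is a blurry version of the phantom; real tilt series lack the full 180-degree angular coverage assumed here, which is one reason iterative methods also matter in ET.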
Tree-based Coarsening and Partitioning of Complex Networks
Many applications produce massive complex networks whose analysis would
benefit from parallel processing. Parallel algorithms, in turn, often require a
suitable network partition. For solving optimization tasks such as graph
partitioning on large networks, multilevel methods are preferred in practice.
Yet, complex networks pose challenges to established multilevel algorithms, in
particular to their coarsening phase.
One way to specify a (recursive) coarsening of a graph is to rate its edges
and then contract the edges as prioritized by the rating. In this paper we (i)
define weights for the edges of a network that express the edges' importance
for connectivity, (ii) compute a minimum-weight spanning tree with
respect to these weights, and (iii) rate the network edges based on the
conductance values of the fundamental cuts of this spanning tree. To this end, we also (iv)
develop the first optimal linear-time algorithm to compute the conductance
values of \emph{all} fundamental cuts of a given spanning tree. We integrate
the new edge rating into a leading multilevel graph partitioner and equip the
latter with a new greedy postprocessing for optimizing the maximum
communication volume (MCV). Experiments on bipartitioning frequently used
benchmark networks show that the postprocessing already reduces MCV by 11.3%.
Our new edge rating reduces MCV by a further 10.3% compared to the previously
best rating, with the postprocessing in place for both ratings. In total, with a
modest increase in running time, our new approach reduces the MCV of complex
network partitions by 20.4%.
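The quantity being rated can be made concrete. Removing one edge of a spanning tree splits the vertices into two sides; the fundamental cut is the set of graph edges crossing that split, and its conductance is the cut weight divided by the smaller side's volume (sum of weighted degrees). The sketch below evaluates this naively, one tree edge at a time; the paper's contribution is doing it for all fundamental cuts in linear time, which this illustration does not attempt. The example graph is invented.

```python
from collections import defaultdict

def conductance_of_fundamental_cuts(edges, tree_edges):
    """Naive evaluation of the conductance of every fundamental cut of a
    spanning tree. `edges` and `tree_edges` are lists of (u, v, weight)
    triples, with `tree_edges` a spanning tree of the graph `edges`."""
    vol = defaultdict(float)
    for u, v, w in edges:
        vol[u] += w
        vol[v] += w
    total_vol = sum(vol.values())

    tadj = defaultdict(list)
    for u, v, _ in tree_edges:
        tadj[u].append(v)
        tadj[v].append(u)

    phis = {}
    for a, b, _ in tree_edges:
        # Collect the vertices on a's side once tree edge (a, b) is removed.
        side = {a}
        stack = [a]
        while stack:
            x = stack.pop()
            for y in tadj[x]:
                if {x, y} == {a, b} or y in side:
                    continue  # skip the removed edge and visited vertices
                side.add(y)
                stack.append(y)
        cut = sum(w for u, v, w in edges if (u in side) != (v in side))
        side_vol = sum(vol[x] for x in side)
        phis[(a, b)] = cut / min(side_vol, total_vol - side_vol)
    return phis

# Example: a 4-cycle a-b-c-d with a path spanning tree.
edges = [("a", "b", 1.0), ("b", "c", 1.0), ("c", "d", 1.0), ("d", "a", 1.0)]
tree = [("a", "b", 1.0), ("b", "c", 1.0), ("c", "d", 1.0)]
phis = conductance_of_fundamental_cuts(edges, tree)
```

A low conductance marks a tree edge whose fundamental cut is a genuine bottleneck, which is why such edges make poor contraction candidates during coarsening.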