Parallel, distributed and GPU computing technologies in single-particle electron microscopy
An introduction to the current paradigm shift towards concurrency in software
High-performance blob-based iterative three-dimensional reconstruction in electron tomography using multi-GPUs
<p>Abstract</p> <p>Background</p> <p>Three-dimensional (3D) reconstruction in electron tomography (ET) has emerged as a leading technique to elucidate the molecular structures of complex biological specimens. Blob-based iterative methods are advantageous reconstruction methods for 3D reconstruction in ET, but demand huge computational costs. Multiple graphics processing units (multi-GPUs) offer an affordable platform to meet these demands. However, a synchronous communication scheme between multi-GPUs leads to idle GPU time, and the weighted matrix involved in iterative methods cannot be loaded into GPU memory, especially for large images, due to the limited memory available on GPUs.</p> <p>Results</p> <p>In this paper we propose a multilevel parallel strategy combined with an asynchronous communication scheme and a blob-ELLR data structure to efficiently perform blob-based iterative reconstructions on multi-GPUs. The asynchronous communication scheme minimizes idle GPU time by overlapping communication with computation. The blob-ELLR data structure needs only about 1/16 of the storage space of the ELLPACK-R (ELLR) data structure and yields significant acceleration.</p> <p>Conclusions</p> <p>Experimental results indicate that the multilevel parallel scheme combined with the asynchronous communication scheme and the blob-ELLR data structure allows efficient implementations of 3D reconstruction in ET on multi-GPUs.</p>
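The ELLPACK-R (ELLR) layout mentioned above stores each row's nonzeros in a packed 2D array together with an explicit row-length array, so each GPU thread can process one row and stop exactly at that row's end. A minimal CPU sketch of the format and its matrix-vector product (illustrative only; the paper's blob-ELLR variant additionally exploits the regularity of blob footprints to cut storage further):

```python
import numpy as np

def to_ellpack_r(dense):
    """Convert a dense matrix to ELLPACK-R (ELLR) storage:
    packed per-row values, column indices, and row lengths."""
    rows, _ = dense.shape
    rl = np.count_nonzero(dense, axis=1)           # row lengths
    width = int(rl.max())                          # widest row
    val = np.zeros((rows, width))
    col = np.zeros((rows, width), dtype=np.int64)
    for i in range(rows):
        nz = np.flatnonzero(dense[i])
        val[i, :rl[i]] = dense[i, nz]
        col[i, :rl[i]] = nz
    return val, col, rl

def ellr_spmv(val, col, rl, x):
    """y = A @ x over ELLR storage; the per-row inner loop runs
    only rl[i] iterations (the work one GPU thread would do)."""
    y = np.zeros(val.shape[0])
    for i in range(val.shape[0]):
        for j in range(rl[i]):
            y[i] += val[i, j] * x[col[i, j]]
    return y
```

The row-length array is what distinguishes ELLR from plain ELLPACK: rows shorter than the widest row skip their zero padding instead of multiplying through it.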
Acceleration of image-processing algorithms for single-particle analysis with electron microscopy
Unpublished doctoral thesis co-supervised by Masaryk University (Czech Republic) and the Universidad Autónoma de Madrid, Escuela Politécnica Superior, Departamento de Ingeniería Informática. Defence date: 24-10-2022.
Cryogenic Electron Microscopy (Cryo-EM) is a vital field in current structural biology. Unlike X-ray
crystallography and Nuclear Magnetic Resonance, it can be used to analyze membrane proteins and
other samples with overlapping spectral peaks. However, one of the significant limitations of Cryo-EM
is the computational complexity. Modern electron microscopes can produce terabytes of data per single
session, from which hundreds of thousands of particles must be extracted and processed to obtain a
near-atomic resolution of the original sample. Many existing software solutions use High-Performance
Computing (HPC) techniques to bring these computations to the realm of practical usability. The
common approach to acceleration is parallelization of the processing, but in practice we face many
complications, such as problem decomposition, data distribution, load scheduling, balancing, and
synchronization. Utilization of various accelerators further complicates the situation, as heterogeneous
hardware brings additional caveats, for example, limited portability, under-utilization due to synchronization,
and sub-optimal code performance due to missing specialization.
This dissertation, structured as a compendium of articles, aims to improve the algorithms used
in Cryo-EM, especially Single Particle Analysis (SPA). We focus on single-node performance
optimizations, using the techniques either available or developed in the HPC field, such as heterogeneous
computing or autotuning, which may require the formulation of novel algorithms. The
secondary goal of the dissertation is to identify the limitations of state-of-the-art HPC techniques. Since
the Cryo-EM pipeline consists of multiple distinct steps targeting different types of data, there is no
single bottleneck to be solved. As such, the presented articles show a holistic approach to performance
optimization.
First, we give details on the GPU acceleration of the specific programs. The achieved speedup is
due to the higher performance of the GPU, adjustments of the original algorithm to it, and application
of the novel algorithms. More specifically, we provide implementation details of programs for movie
alignment, 2D classification, and 3D reconstruction that have been sped up by an order of magnitude
compared to their original multi-CPU implementations, or sufficiently to be used on the fly. In addition
to these three programs, multiple other programs from an actively used, open-source software package
XMIPP have been accelerated and improved.
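Movie alignment of the kind accelerated here typically estimates inter-frame drift from the peak of an FFT-based cross-correlation. A minimal NumPy sketch of that core step (an illustration of the general technique, not the XMIPP implementation, which adds subpixel interpolation and filtering):

```python
import numpy as np

def estimate_shift(ref, frame):
    """Estimate the integer (dy, dx) drift of `frame` relative to
    `ref` from the peak of the circular cross-correlation,
    computed in Fourier space as IFFT(F(frame) * conj(F(ref)))."""
    xc = np.fft.ifft2(np.fft.fft2(frame) * np.conj(np.fft.fft2(ref))).real
    dy, dx = np.unravel_index(np.argmax(xc), xc.shape)
    # peaks past the midpoint correspond to negative shifts
    h, w = ref.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)
```

Because every frame pair needs two forward FFTs, one pointwise product, and one inverse FFT, this step maps naturally onto GPU FFT libraries, which is where much of the reported speedup comes from.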
Second, we discuss our contribution to HPC in the form of autotuning. Autotuning is the ability of
software to adapt to a changing environment, i.e., its input or the hardware it executes on. Towards that goal, we
present cuFFTAdvisor, a tool that proposes and, through autotuning, finds the best configuration of the
cuFFT library for given constraints of input size and plan settings. We also introduce a benchmark set
of ten autotunable kernels for important computational problems implemented in OpenCL or CUDA,
together with the introduction of complex dynamic autotuning to the KTT tool.
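At its simplest, the autotuning described above amounts to empirically benchmarking a space of configurations and keeping the fastest one. A toy exhaustive tuner (the `run`/`param_space` names are hypothetical and this is not the KTT or cuFFTAdvisor API; real tuners add search heuristics and caching):

```python
import time
from itertools import product

def timed(run, cfg):
    """Wall-clock one execution of `run` with configuration `cfg`."""
    start = time.perf_counter()
    run(**cfg)
    return time.perf_counter() - start

def autotune(run, param_space, repeats=3):
    """Benchmark every configuration in `param_space` (a dict of
    parameter name -> list of candidate values) and return the
    fastest configuration with its best-of-N runtime."""
    names = list(param_space)
    best_cfg, best_t = None, float("inf")
    for combo in product(*(param_space[n] for n in names)):
        cfg = dict(zip(names, combo))
        t = min(timed(run, cfg) for _ in range(repeats))
        if t < best_t:
            best_cfg, best_t = cfg, t
    return best_cfg, best_t
```

Dynamic autotuning, as introduced to KTT, extends this idea by re-running the search at runtime when the input or hardware changes, rather than tuning once offline.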
Third, we propose an image processing framework Umpalumpa, which combines a task-based
runtime system, data-centric architecture, and dynamic autotuning. The proposed framework allows for
writing complex workflows which automatically use available HW resources and adjust to different HW
and data, but at the same time are easy to maintain.
The project that gave rise to these results received the support of a fellowship from the “la Caixa”
Foundation (ID 100010434). The fellowship code is LCF/BQ/DI18/11660021.
This project has received funding from the European Union’s Horizon 2020 research and innovation
programme under the Marie Skłodowska-Curie grant agreement No. 71367
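The task-based, data-centric design described for Umpalumpa can be reduced to a small scheduling idea: tasks declare the named data they consume and produce, and a task becomes runnable once all of its inputs exist. A minimal sketch of such a task graph (illustrative only, not the Umpalumpa API, which additionally dispatches tasks to heterogeneous hardware and autotunes them):

```python
class TaskGraph:
    """Minimal data-centric task graph: each task names the data it
    consumes and produces; a task runs once all its inputs exist."""

    def __init__(self):
        self.tasks = []  # list of (fn, input names, output name)

    def add(self, fn, inputs, output):
        self.tasks.append((fn, tuple(inputs), output))

    def run(self, initial):
        """Execute tasks in dependency order, starting from the
        `initial` dict of name -> value; return all produced data."""
        data = dict(initial)
        pending = list(self.tasks)
        while pending:
            ready = [t for t in pending if all(i in data for i in t[1])]
            if not ready:
                raise RuntimeError("unsatisfiable dependencies")
            for fn, inputs, output in ready:
                data[output] = fn(*(data[i] for i in inputs))
                pending.remove((fn, inputs, output))
        return data
```

Keeping the dataflow explicit like this is what lets a runtime system decide placement and ordering on its own, instead of the workflow author hard-coding them.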
Characterising the Multi-Scale Properties of Flocculated Sediment by X-ray and Focused Ion Beam Nano-Tomography
PhD
The hydrodynamic behaviour of fine suspended aqueous sediments, and the stability of the
bedforms they create once settled, are governed by the physical properties (e.g., size, shape,
porosity and density) of the flocculated particles in suspension (flocs). Consequently,
accurate prediction of the transport and fate of sediments and of the nutrients and pollutants
they carry depends on our ability to characterise aqueous flocs. Current research primarily
focuses on characterising flocs based on their external gross-scale (>1 μm) properties (e.g.,
gross morphology, size and settling velocity) using in situ techniques such as photography
and videography. Whilst these techniques provide valuable information regarding the
outward behaviour of flocculated sediment (i.e. transport and settling), difficulties associated
with extracting 3D geometries from 2D projections raise concerns regarding their accuracy,
and key parameters such as density can only be estimated. In addition, they neglect to inform
on the internal micro- and nano-scale structure of flocs, responsible for much of their
behaviour and development. Transmission electron microscopy (TEM) and environmental
electron microscopy may be used to obtain nano-scale information in, essentially, 2D, but
there is a large scale gap between this information and the macro-scale of optical techniques.
To address this issue this study uses 3D tomographic imaging over a range of spatial
scales. Whilst commonly used in materials science and the life sciences, correlative
tomography has yet to be applied in the environmental sciences. Threading together 3D X-ray
micro-computed tomography (X-ray μCT) and focused ion beam nano-tomography (FIB-nt)
with 2D TEM makes material characterisation from the centimetre to the nanometre scale
possible. Here, this correlative imaging strategy is combined with a non-destructive
stabilisation procedure and applied to the investigation of flocculated estuarine sediment,
enabling the multi length-scale properties of flocs to be accurately described for the first time.
This work has demonstrated that delicate aqueous flocs can be successfully stabilised
via a resin embedding process and contrasted for both electron microscopy and X-ray
tomography imaging. The 3D information obtained can be correlated across all length-scales
from nm to mm revealing new information about the structure and morphology of flocs. A
new system of characterising floc structure can be defined based on the association of
particles and their stability in the structure rather than simply their size. This new model
refutes the postulate that floc structures are fractal in nature.
Engineering and Physical Sciences Research Council (EPSRC)
Queen Mary University London (through the Post Graduate Research Fund)
Environment Canada