Optical tomography using the SCIRun problem solving environment: Preliminary results for three-dimensional geometries and parallel processing
We present a 3D implementation of the UCL imaging package for absorption and scatter reconstruction from time-resolved data (TOAST), embedded in the SCIRun interactive simulation and visualization package developed at the University of Utah. SCIRun is a scientific programming environment that allows the interactive construction, debugging, and steering of large-scale scientific computations. While the capabilities of SCIRun's interactive approach are not yet fully exploited in the current TOAST implementation, an immediate benefit of the combined TOAST/SCIRun package is the availability of optimized parallel finite element forward solvers and the use of SCIRun's existing 3D visualization tools. A reconstruction of a segmented 3D head model is used as an example to demonstrate the ability of TOAST/SCIRun to handle anatomically shaped meshes.
AI-assisted Automated Workflow for Real-time X-ray Ptychography Data Analysis via Federated Resources
We present an end-to-end automated workflow that uses large-scale remote
compute resources and an embedded GPU platform at the edge to enable
AI/ML-accelerated real-time analysis of data collected for x-ray ptychography.
Ptychography is a lensless imaging method that recovers a sample image through the
simultaneous numerical inversion of a large number of diffraction patterns from
adjacent overlapping scan positions. This acquisition method can enable
nanoscale imaging with x-rays and electrons, but this often requires very large
experimental datasets and commensurately high turnaround times, which can limit
experimental capabilities such as real-time experimental steering and
low-latency monitoring. In this work, we introduce a software system that can
automate ptychography data analysis tasks. We accelerate the data analysis
pipeline by using a modified version of PtychoNN -- an ML-based approach to the
phase retrieval problem that shows a two-orders-of-magnitude speedup compared
to traditional iterative methods. Further, our system coordinates and
overlaps different data analysis tasks to minimize synchronization overhead
between different stages of the workflow. We evaluate our workflow system with
real-world experimental workloads from the 26-ID beamline at the Advanced
Photon Source and the ThetaGPU cluster at the Argonne Leadership Computing
Facility.
Comment: 7 pages, 1 figure; to be published in the High Performance Computing
for Imaging Conference, Electronic Imaging (HPCI 2023).
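The stage overlap the abstract describes can be illustrated with a bounded producer/consumer queue, so that phase retrieval on one frame runs concurrently with downstream assembly of earlier frames. This is a generic sketch, not the actual workflow code: the stage functions `retrieve` and `assemble` and the queue depth are hypothetical placeholders.

```python
import threading
import queue

def run_pipeline(frames, retrieve, assemble):
    """Overlap two workflow stages with a bounded queue: stage 1 (phase
    retrieval) feeds stage 2 (assembly/analysis) while both run
    concurrently, instead of finishing one stage before starting the next."""
    q = queue.Queue(maxsize=8)  # bounded: applies back-pressure between stages
    results = []

    def producer():
        for f in frames:
            q.put(retrieve(f))   # stage 1: per-frame (ML) phase retrieval
        q.put(None)              # sentinel: no more frames

    def consumer():
        while (item := q.get()) is not None:
            results.append(assemble(item))  # stage 2: downstream analysis

    t1 = threading.Thread(target=producer)
    t2 = threading.Thread(target=consumer)
    t1.start(); t2.start()
    t1.join(); t2.join()
    return results
```

In the real system the stages would live on different resources (edge GPU and remote cluster), but the synchronization pattern is the same: a bounded channel between stages keeps both busy while limiting buffering.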
Applications in GNSS water vapor tomography
Algebraic reconstruction algorithms are iterative algorithms used in many areas, including medicine, seismology, and meteorology. These algorithms are known to be highly computationally intensive, which can be especially troublesome for real-time applications or when they are run on conventional low-cost personal computers. One such real-time application
is the reconstruction of water vapor images from Global Navigation Satellite System (GNSS) observations. Parallelizing algebraic reconstruction algorithms has the potential to significantly reduce the required resources, making it possible to obtain valid solutions in time for use in nowcasting and weather forecasting models.
The main objective of this dissertation was to present and analyse diverse shared-memory
libraries and techniques on CPU and GPU for algebraic reconstruction algorithms. It was concluded that parallelization pays off over sequential implementations. Overall, the GPU implementations were found to be only slightly faster than the CPU implementations, depending on the size of the problem being studied.
A secondary objective was to develop software to perform GNSS water vapor reconstruction using the implemented parallel algorithms. This software was developed successfully, and various tests were carried out with both synthetic and real data; the preliminary results were satisfactory.
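The algebraic reconstruction family referred to above can be illustrated with the classic Kaczmarz/ART sweep. This is a textbook NumPy sketch, not the dissertation's CPU/GPU implementation; the function name and parameters are illustrative.

```python
import numpy as np

def kaczmarz(A, b, iters=50, relax=1.0, x0=None):
    """Algebraic reconstruction (Kaczmarz/ART): sweep over the rows of the
    system A x = b, projecting the current estimate onto each row's
    hyperplane. Rows within a block are independent, which is what makes
    block-parallel CPU/GPU variants of these algorithms attractive."""
    m, n = A.shape
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    row_norms = (A * A).sum(axis=1)          # ||a_i||^2 for each row
    for _ in range(iters):
        for i in range(m):
            if row_norms[i] == 0.0:
                continue
            r = b[i] - A[i] @ x              # residual for this observation
            x += relax * (r / row_norms[i]) * A[i]
    return x
```

For a consistent system the iterates converge to a solution; the relaxation factor `relax` trades convergence speed against noise sensitivity, which matters for noisy GNSS slant-delay observations.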
This dissertation was written at the Space & Earth Geodetic Analysis Laboratory (SEGAL) and was carried out in the framework of the Structure of Moist convection in high-resolution GNSS observations and models (SMOG) project (PTDC/CTE-ATM/119922/2010), funded by FCT (Fundação para a Ciência e a Tecnologia).
Real-time reconstruction and visualisation towards dynamic feedback control during time-resolved tomography experiments at TOMCAT
Tomographic X-ray microscopy beamlines at synchrotron light sources worldwide have pushed the achievable time-resolution for dynamic 3-dimensional structural investigations down to a fraction of a second, allowing the study of quickly evolving systems. The large data rates involved impose heavy demands on computational resources, making it difficult to readily process and interrogate the resulting volumes. The data acquisition is thus performed essentially blindly. Such a sequential process makes it hard to notice problems with the measurement protocol or sample conditions, potentially rendering the acquired data unusable, and it keeps the user from optimizing the experimental parameters of the imaging task at hand. We present an efficient approach to address this issue, based on real-time reconstruction, visualisation and on-the-fly analysis.
Accelerated iterative image reconstruction for cone-beam computed tomography through Big Data frameworks
One of the latest trends in Computed Tomography (CT) is the reduction of the radiation dose delivered to patients through a decrease in the amount of acquired data. This reduction results in artifacts in the final images if conventional reconstruction methods are used, making it advisable to employ iterative algorithms to enhance image quality. Most approaches are built around two main operators, backprojection and projection, which are computationally expensive. In this work, we present an implementation of those operators for iterative reconstruction methods exploiting the Big Data paradigm. We define an architecture based on Apache Spark that supports both Graphics Processing Unit (GPU) and CPU-based architectures. These operators are parallelized using a partitioning scheme based on the division of the volume and irregular data structures, in order to reduce the cost of communication and of computing the final images. Our solution accelerates the execution of the two most computationally expensive components with Apache Spark, improving the programming experience for new iterative reconstruction algorithms and the maintainability of the source code by increasing the level of abstraction for programmers without high-performance computing experience. Through an experimental evaluation, we show that we can obtain results up to 10× faster for projection and 21× faster for backprojection when using a GPU-based cluster compared to a traditional multi-core version. Although a linear speed-up was not reached, the proposed approach can be a good alternative for porting previous medical image reconstruction applications already implemented in C/C++ or even with the CUDA or OpenCL programming models.
Our solution enables the automatic detection of GPU devices and the execution of CPU and GPU tasks at the same time within the same system, using all the available resources.
This work was supported by the NIH, United States, under Grant R01-HL-098686 and Grant U01 EB018753; the Spanish Ministerio de Economía y Competitividad (projects TEC2013-47270-R, RTC-2014-3028, and TIN2016-79637-P); the Spanish Ministerio de Educación (grant FPU14/03875); and the Spanish Ministerio de Ciencia, Innovación y Universidades (Instituto de Salud Carlos III, project DTS17/00122; Agencia Estatal de Investigación, project DPI2016-79075-R-AEI/FEDER, UE), co-funded by the European Regional Development Fund (ERDF), "A way of making Europe". The CNIC is supported by the Ministerio de Ciencia, Innovación y Universidades and the Pro CNIC Foundation, and is a Severo Ochoa Center of Excellence (SEV-2015-0505). Finally, this research was partially supported by the Madrid regional government under the grant "Convergencia Big data-Hpc: de los sensores a las Aplicaciones (CABAHLA-CM)", Ref: S2018/TCS-4423.
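The volume-division partitioning described above can be sketched outside Spark: each slab of the reconstruction volume can be backprojected from the sinogram independently, so slabs map naturally onto Spark partitions (or, as here, thread-pool tasks). The nearest-neighbour parallel-beam backprojector and the slab boundaries below are illustrative simplifications, not the paper's cone-beam operators.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def backproject_slab(sino, angles, n, rows):
    """Backproject a parallel-beam sinogram into one horizontal slab of an
    n-by-n image. Each slab depends only on the sinogram, never on other
    slabs, which is what makes volume division a communication-free
    partitioning scheme."""
    ys, xs = np.mgrid[rows.start:rows.stop, 0:n]
    c = (n - 1) / 2.0
    slab = np.zeros((rows.stop - rows.start, n))
    for k, th in enumerate(angles):
        # detector coordinate of each pixel for this view (nearest bin)
        t = (xs - c) * np.cos(th) + (ys - c) * np.sin(th) + c
        idx = np.clip(np.rint(t).astype(int), 0, sino.shape[1] - 1)
        slab += sino[k, idx]
    return slab

def backproject(sino, angles, n, n_parts=4):
    """Split the volume into n_parts slabs and backproject them in
    parallel, then stitch the slabs back together."""
    bounds = np.linspace(0, n, n_parts + 1, dtype=int)
    slabs = [range(a, b) for a, b in zip(bounds[:-1], bounds[1:])]
    with ThreadPoolExecutor() as ex:
        parts = ex.map(lambda r: backproject_slab(sino, angles, n, r), slabs)
    return np.vstack(list(parts))
```

In a Spark deployment the per-slab work would be the body of a `mapPartitions`-style task, with the sinogram broadcast to executors; the stitching step corresponds to collecting the partition results.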
Adorym: A multi-platform generic x-ray image reconstruction framework based on automatic differentiation
We describe and demonstrate an optimization-based x-ray image reconstruction
framework called Adorym. Our framework provides a generic forward model,
allowing one code framework to be used for a wide range of imaging methods
ranging from near-field holography to fly-scan ptychographic tomography. By
using automatic differentiation for optimization, Adorym has the flexibility to
refine experimental parameters including probe positions, multiple hologram
alignment, and object tilts. It is written with strong support for parallel
processing, allowing large datasets to be processed on high-performance
computing systems. We demonstrate its use on several experimental datasets to
show improved image quality through parameter refinement.
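The optimization loop at the heart of such a framework can be sketched for a linear forward model. Adorym obtains gradients by automatic differentiation through arbitrary (nonlinear) forward models; here the least-squares gradient is written by hand to keep the sketch dependency-free, and the function name and step-size rule are illustrative, not Adorym's API.

```python
import numpy as np

def reconstruct(A, y, steps=5000):
    """Optimization-based reconstruction: minimise ||A x - y||^2 by gradient
    descent. The fixed step 1/L, with L = 2 * sigma_max(A)^2 the Lipschitz
    constant of the gradient, guarantees monotone descent for this
    quadratic objective."""
    L = 2.0 * np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        grad = 2.0 * A.T @ (A @ x - y)  # hand-coded here; an AD framework
        x -= grad / L                   # would derive this automatically
    return x
```

The appeal of the automatic-differentiation formulation is that extra unknowns (probe positions, alignment shifts, tilts) become additional optimization variables: one adds them to the forward model and lets the framework supply their gradients, rather than deriving update rules by hand as above.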