
    Fast algorithm for real-time rings reconstruction

    The GAP project is dedicated to studying the application of GPUs in several contexts in which real-time response is critical for decision making. The definition of real-time depends on the application under study, ranging from response times of microseconds up to several hours for very compute-intensive tasks. During this conference we presented our work on low-level triggers [1] [2] and high-level triggers [3] in high-energy physics experiments, and on specific applications in nuclear magnetic resonance (NMR) [4] [5] and cone-beam CT [6]. Apart from the study of dedicated solutions to decrease the latency due to data transport and preparation, the computing algorithms play an essential role in any GPU application. In this contribution, we show an original algorithm developed for trigger applications to accelerate ring reconstruction in RICH detectors when no seeds from external trackers are available.
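    As context for the seed-less setting, the sketch below shows one generic approach: a coarse Hough-style search that scores candidate (centre, radius) pairs against the photon hits. This is an illustrative NumPy toy, not the GAP algorithm itself; the candidate grid, tolerance, and hit format are assumptions.

```python
import numpy as np

def hough_rings(hits, centers, radii, tol=1.0):
    """Coarse Hough-style search: score each (center, radius) pair by the
    number of detector hits lying within `tol` of that candidate ring.
    `hits` is an (N, 2) array of photon hit coordinates."""
    best_score, best_ring = -1, None
    for cx, cy in centers:
        # Distance of every hit from this candidate centre.
        d = np.hypot(hits[:, 0] - cx, hits[:, 1] - cy)
        for r in radii:
            score = np.count_nonzero(np.abs(d - r) < tol)
            if score > best_score:
                best_score, best_ring = score, (cx, cy, r)
    return best_ring, best_score

# Example: recover a ring of radius 5 centred at (10, 12) from noisy hits.
rng = np.random.default_rng(0)
phi = rng.uniform(0, 2 * np.pi, 64)
hits = np.column_stack([10 + 5 * np.cos(phi), 12 + 5 * np.sin(phi)])
hits += rng.normal(scale=0.2, size=hits.shape)
grid = [(x, y) for x in range(8, 13) for y in range(10, 15)]
print(hough_rings(hits, grid, radii=np.arange(3.0, 8.0, 0.5)))
```

    On a GPU, the independent (centre, radius) scores are what map naturally onto parallel threads; the triple loop above is only the serial statement of the problem.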

    In-situ synchrotron X-ray imaging and tomography studies of the evolution of solidification microstructures under pulse electromagnetic fields

    This research studies the dynamic evolution of dendritic structures and intermetallic phases of four Al-based alloys during solidification under pulsed electromagnetic fields (PMFs). An advanced PMF solidification device was upgraded, built, and commissioned for the research. The alloys used were Al-15Cu, Al-35Cu, Al-15Ni and Al-5Cu-1.5Fe-1Si. Systematic in-situ, real-time observations and studies were carried out at the TOMCAT beamline of the Swiss Light Source, the I13-2 beamline of the Diamond Light Source and the ID19 beamline of the European Synchrotron Radiation Facility over the duration of this project. Synchrotron X-ray radiography and tomography were used primarily to observe and study the influence of PMFs on the nucleation and growth of primary dendritic structures and intermetallic phases under different magnetic flux densities and solidification conditions for the four alloys. More than 20 TB of image and tomography datasets were obtained throughout this research. Much effort and time were spent on segmenting, visualising and analysing these huge datasets using the Hull University supercomputer cluster, Viper, and software including Avizo and ImageJ (Fiji), to extract new insights and new science from those datasets. In particular, the skeletonisation function available in Avizo was customised and used to quantify the complex 3D microstructures and interconnected networks of the different phases in the alloys. The important new findings of the research are:
    (1) Fragmentation of primary Al dendrites in the Al-15%Cu alloy was found when the applied magnetic flux density of the PMF was above 0.75 T; similarly, fragmentation of Al3Ni intermetallic phases in the Al-15%Ni alloy was observed when the applied magnetic flux density was above 0.8 T. The clear, real-time observation of fragmentation events in both dendritic and intermetallic phases provides unambiguous evidence that PMFs play a dominant role in structure fragmentation and multiplication, which is one important mechanism for structure (grain) refinement.
    (2) PMFs also produce a pinch pressure gradient inside the semi-solid melt. Owing to the different magnetic anisotropic properties of the liquid and solid phases, shear stresses due to the pinch pressure gradient may be produced. In the case of the Al-15%Ni alloy, shear stresses of up to 30 MPa are created, which is sufficient to fracture the Al3Ni phases. This fragmentation mechanism for the Al3Ni phases in the Al-15%Ni alloy was revealed for the first time in this research.
    (3) The transition (or change of growth mode) from Al columnar dendrites to seaweed-type dendrites in the Al-15Cu alloy, and from faceted growth to dendritic growth of the Al3Ni phases in the Al-15%Ni alloy, was also observed in real time when the magnetic flux density was in the range of 0.75~0.8 T. Again, such dynamic changes in structure growth under PMFs are due to the enhanced melt flow caused by the applied fields.
    (4) In-situ tomography observation of PMF processing of the Al-5Cu-1.5Fe-1Si alloy also shows the effect of the PMF on the refinement of the Chinese-script-type Fe intermetallic phases. In addition, the true 3D morphologies of three different types of Fe intermetallic phases in this alloy were clarified, again for the first time, in this research.
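    The skeleton-based quantification step can be illustrated with open-source tools. The sketch below is a minimal stand-in for the customised Avizo workflow described above: it assumes a recent scikit-image whose skeletonize accepts 3-D input, and a binary segmentation of one phase; the voxel size and metrics are illustrative only.

```python
import numpy as np
from skimage.morphology import skeletonize

def skeleton_stats(phase_mask, voxel_size_um=1.0):
    """Skeletonize a binary 3D mask of one phase and report simple
    network metrics: skeleton voxel count and an approximate total
    length (voxel count * voxel size)."""
    skel = skeletonize(phase_mask)  # 3-D input uses Lee's method
    n = int(skel.sum())
    return {"skeleton_voxels": n,
            "approx_length_um": n * voxel_size_um}

# Toy example: a solid rod along z reduces to a 1-voxel-wide line.
vol = np.zeros((40, 16, 16), dtype=bool)
vol[:, 6:10, 6:10] = True
print(skeleton_stats(vol, voxel_size_um=1.625))
```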

    Applications in GNSS water vapor tomography

    Algebraic reconstruction algorithms are iterative algorithms used in many areas including medicine, seismology and meteorology. These algorithms are known to be computationally intensive, which can be especially troublesome for real-time applications or when they run on conventional low-cost personal computers. One such real-time application is the reconstruction of water vapor images from Global Navigation Satellite System (GNSS) observations. Parallelizing algebraic reconstruction algorithms has the potential to reduce the required resources significantly, making it possible to obtain valid solutions in time for nowcasting and forecasting weather models. The main objective of this dissertation was to present and analyse diverse shared-memory libraries and techniques for algebraic reconstruction algorithms on CPU and GPU. It was concluded that parallelization pays off over sequential implementations. Overall, the GPU implementations were found to be only slightly faster than the CPU implementations, depending on the size of the problem being studied. A secondary objective was to develop software to perform GNSS water vapor reconstruction using the implemented parallel algorithms. This software was developed successfully and tested with both synthetic and real data; the preliminary results were satisfactory. This dissertation was written at the Space & Earth Geodetic Analysis Laboratory (SEGAL) and was carried out in the framework of the Structure of Moist convection in high-resolution GNSS observations and models (SMOG) project (PTDC/CTE-ATM/119922/2010), funded by the Fundação para a Ciência e a Tecnologia (FCT).
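    For readers unfamiliar with the algorithm family, below is a minimal serial NumPy sketch of the classic Kaczmarz/ART update, where each row of the system matrix represents one GNSS slant-delay observation through a voxelized refractivity field. The dissertation's parallel CPU/GPU variants distribute these row projections; the toy geometry here is an assumption for illustration.

```python
import numpy as np

def kaczmarz(A, b, n_sweeps=50, relax=1.0):
    """Basic ART (Kaczmarz) iteration: project the current estimate onto
    the hyperplane of one observation row at a time.  Rows model the
    GNSS slant observations; x is the voxelized wet refractivity."""
    x = np.zeros(A.shape[1])
    row_norms = np.einsum('ij,ij->i', A, A)  # squared norm of each row
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            if row_norms[i] == 0:
                continue
            x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

# Toy 3-voxel problem with 3 consistent "slant path" rays.
A = np.array([[1.0, 1.0, 0.0], [0.0, 1.0, 1.0], [1.0, 0.0, 1.0]])
x_true = np.array([2.0, 3.0, 1.0])
print(kaczmarz(A, A @ x_true))  # converges toward [2, 3, 1]
```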

    GPGPU application in fusion science

    GPGPUs have firmly earned their reputation in HPC (High Performance Computing) as hardware for massively parallel computation. Their application in fusion science, however, is still marginal and not considered a mainstream approach to numerical problems. Computational power has increased immensely over the last decade and continues to grow. GPGPU boards were long an alternative, exotic approach to problem solving and scientific programming, cultivated only by enthusiasts and specialized programmers. It is now about ten years since the first fully programmable GPUs appeared on the market, and thanks to the exponential growth in processing power over that period, GPGPUs are no longer the alternative choice but have become the main choice for solving large problems. Originally developed for, and dominant in, fields such as image and media processing, image rendering, video encoding/decoding, image scaling, stereo vision and pattern recognition, GPGPUs are also becoming mainstream computation platforms in scientific fields such as signal processing, physics, finance and biology. This PhD thesis contains solutions and approaches to two problems relevant to fusion and plasma science using GPGPU processing. The first problem belongs to the realm of plasma and accelerator physics: I present a number of plasma simulations built on the PIC (Particle In Cell) method, namely a plasma sheath simulation, an electron beam simulation, a negative ion beam simulation and a space charge compensation simulation. The second problem belongs to the realm of tomography and real-time control: I present a number of simulated tomographic plasma reconstructions of the Fourier-Bessel type, together with their analysis, all in a real-time-oriented approach, i.e. the GPGPU-based implementations are integrated into the MARTe environment. MARTe is a framework for real-time applications developed at JET (Joint European Torus) and used in several European fusion labs. These two sets of problems span the full spectrum of GPGPU operating regimes: the PIC-based problems are large, complex simulations run as batch processes, with no time constraint and huge memory footprints, while the tomographic plasma reconstructions are online (real-time) processes with strict latency constraints, set by the time scales of real-time control, that operate on relatively small amounts of memory. This variety of problems covers a broad range of disciplines: plasma physics, NBI (Neutral Beam Injector) physics, tokamak physics, parallel computing, iterative/direct matrix solvers, the PIC method, tomography and so on. The thesis also includes an extended performance analysis of Nvidia GPU cards with respect to their applicability to real-time control and real-time performance. To approach these problems I had, as a PhD candidate, to gain knowledge in the relevant fields and build a broad range of practical skills, including parallel and sequential CPU programming, GPU programming, MARTe programming, and programming in MatLab, IDL and Python.
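    As a flavour of the batch-type workload, here is a minimal 1-D electrostatic PIC step in NumPy showing the method's structure (deposit, field solve, gather, push). It is not one of the thesis codes, whose geometries and physics are far richer; the normalized units and nearest-grid-point weighting are assumptions.

```python
import numpy as np

def pic_step(x, v, qm, dt, L, n_grid):
    """One explicit PIC step on a 1-D periodic domain: deposit charge
    (nearest-grid-point), solve Poisson via FFT, gather the field and
    leapfrog-push the particles.  Normalized units, eps0 = 1."""
    dx = L / n_grid
    cells = (x / dx).astype(int) % n_grid
    rho = np.bincount(cells, minlength=n_grid) / dx      # charge density
    rho = rho - rho.mean()                               # neutralizing background
    k = 2 * np.pi * np.fft.fftfreq(n_grid, d=dx)
    k[0] = 1.0                                           # avoid divide-by-zero
    phi_k = np.fft.fft(rho) / k**2                       # Poisson: -k^2 phi = -rho
    phi_k[0] = 0.0
    E = np.real(np.fft.ifft(-1j * k * phi_k))            # E = -dphi/dx
    v = v + qm * E[cells] * dt                           # accelerate
    x = (x + v * dt) % L                                 # move, periodic wrap
    return x, v

# Example: 10k cold particles drifting in a periodic box.
rng = np.random.default_rng(0)
x = rng.uniform(0, 1.0, 10_000)
v = np.full_like(x, 0.1)
for _ in range(100):
    x, v = pic_step(x, v, qm=-1.0, dt=0.01, L=1.0, n_grid=64)
```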

    Image-based Control and Automation of High-speed X-ray Imaging Experiments

    Modern X-ray imaging reveals the inner structure of objects made from a wide variety of materials. The success of such measurements depends critically on a suitable choice of acquisition conditions, on the mechanical instrumentation, and on the properties of the sample or of the process under investigation. To date there has been no known procedure for autonomous data acquisition that allows control via image-based feedback across very different X-ray imaging experiments. This thesis aims to close this gap by addressing and solving the problems that arise: the selection of the initial experimental parameters, fast processing of the acquired data, and automatic feedback to correct the running measurement procedure. To determine the most suitable experimental conditions, we start from the fundamentals of image formation and develop a framework for its simulation. This enables us to conduct a wide range of virtual X-ray imaging experiments, taking into account the decisive physical processes along the path of the X-rays from the source to the detector. In addition, we consider various sample shapes and motions, which allows us to simulate experiments such as 4D (time-resolved) tomography. We also develop an autonomous data-acquisition procedure that readjusts the experiment's initial settings during the already running measurement on the basis of fast image analysis and can control other parameters of the experiment. We pay particular attention to high-speed experiments, which place high demands on the speed of data processing, especially when the control is based on computationally intensive algorithms such as tomographic 3D reconstruction of the sample. To implement an efficient algorithm for this purpose, we use a highly parallelized framework. Its output can then be used to compute various image metrics that provide quantitative information about the acquired data. These form the basis for decision-making in a closed control loop in which the data-acquisition hardware is operated. We demonstrate the accuracy of the developed simulation framework by comparing virtual and real experiments based on grating interferometry, which employs special optical elements for contrast formation. We also investigate in detail the influence of the imaging conditions on the accuracy of the implemented filtered back-projection algorithm, and the extent to which the experimental conditions can be optimized with this in mind. We demonstrate the capabilities of our autonomous data-acquisition system with an in-situ tomography experiment in which, based on 3D reconstruction, it optimizes the camera frame rate and thus ensures that the recorded datasets can be reconstructed without artifacts.
    We also use our system to conduct a high-throughput tomography experiment in which many similar biological samples are scanned: for each of them, the tomographic rotation axis is determined automatically and, to assure quality, a complete 3D volume is reconstructed already during the measurement. Furthermore, we perform an in-situ laminography experiment investigating crack formation in a material sample; here our system performs the data acquisition and reconstructs a centrally located cross-section through the sample to verify its correct alignment and the quality of the data. Based on highly accurate simulations, our work enables the choice of the most suitable initial conditions of an experiment, their fine-tuning during a real experiment, and finally its automatic control based on fast analysis of the data just recorded. Such an approach to data acquisition enables novel in-vivo and in-situ high-speed experiments that, owing to the high data rates, could no longer be handled by a human operator.
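    To make the closed-loop idea concrete, the toy sketch below reconstructs slices with filtered back-projection and computes a simple image metric that a control loop could threshold. It uses scikit-image's radon/iradon as stand-ins for the thesis's parallelized framework, and the metric and phantom are assumptions rather than the actual control criteria.

```python
import numpy as np
from skimage.transform import radon, iradon

def sharpness(img):
    """Toy quality metric: total gradient magnitude of a reconstructed
    slice.  A real control loop would threshold a metric like this to
    accept or adapt the running acquisition."""
    gy, gx = np.gradient(img)
    return float(np.sqrt(gx**2 + gy**2).sum())

# Simulate acquisitions with increasing numbers of projections and
# report the metric a feedback loop could monitor.
phantom = np.zeros((128, 128))
phantom[40:90, 50:80] = 1.0
for n_proj in (20, 60, 180):
    theta = np.linspace(0.0, 180.0, n_proj, endpoint=False)
    sino = radon(phantom, theta=theta)                    # forward project
    rec = iradon(sino, theta=theta, filter_name='ramp')   # FBP reconstruction
    print(n_proj, round(sharpness(rec), 1))
```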

    Computational Methods and Graphical Processing Units for Real-time Control of Tomographic Adaptive Optics on Extremely Large Telescopes.

    Ground-based optical telescopes suffer from limited imaging resolution as a result of the effects of atmospheric turbulence on the incoming light. Adaptive optics technology has so far been very successful in correcting these effects, providing nearly diffraction-limited images. Extremely Large Telescopes will require more complex adaptive optics configurations, which introduce the need for new mathematical models and optimal solvers. In addition, the amount of data to be processed in real time is greatly increased, making conventional computational methods and hardware inefficient; this motivates the study of advanced computational algorithms and their implementation on parallel processors. Graphical Processing Units (GPUs) are massively parallel processors that have so far demonstrated very large speed-ups compared to CPUs and other devices, and they have high potential to meet the real-time constraints of adaptive optics systems. This thesis focuses on the study and evaluation of existing computational algorithms with respect to computational performance, and on their implementation on GPUs. Two basic methods, one direct and one iterative, are implemented and tested; the results provide an evaluation of the basic concept upon which other algorithms are built, and demonstrate the benefits of using GPUs for adaptive optics.
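    The iterative route can be sketched generically: conjugate gradient applied to a regularized, symmetric positive-definite system of the kind that arises in tomographic wavefront reconstruction. The NumPy toy below is an illustration of that concept under assumed dimensions, not the thesis implementation; on a GPU the dense matrix-vector products that dominate each iteration are what get parallelized.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-8, max_iter=200):
    """Plain conjugate gradient for a symmetric positive-definite
    system A x = b.  Each iteration is dominated by one matrix-vector
    product, which maps well onto massively parallel hardware."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Toy SPD system standing in for regularized normal equations.
rng = np.random.default_rng(1)
G = rng.normal(size=(300, 100))          # hypothetical interaction matrix
A = G.T @ G + 1e-3 * np.eye(100)         # regularized normal matrix
b = G.T @ rng.normal(size=300)           # back-projected measurements
x = conjugate_gradient(A, b)
print(np.linalg.norm(A @ x - b))         # residual ~ 0
```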

    Tomographic measurement of all orthogonal components of three-dimensional displacement fields within scattering materials using wavelength scanning interferometry

    Experimental mechanics currently faces tremendous opportunities for further advancement thanks to the combination of powerful computational techniques and full-field non-contact methods for measuring displacement and strain fields in a wide variety of materials. Identification techniques, aimed at evaluating material mechanical properties given known loads and measured displacement or strain fields, are bound to benefit from increased data availability (both in density and dimensionality) and efficient inversion methods such as finite element updating (FEU) and the virtual fields method (VFM). These work at their best when provided with dense, multi-component experimental displacement (or strain) data, i.e. when all orthogonal components of displacement (or all components of the strain tensor) are known at closely spaced points within the volume of the material under study. Although this is a very challenging requirement, an increasing number of techniques are emerging to provide such data. In this thesis, a novel wavelength scanning interferometry (WSI) system that provides three-dimensional (3-D) displacement fields inside the volume of semi-transparent scattering materials is proposed. Sequences of two-dimensional interferograms are recorded while the frequency of a laser is tuned at a constant rate. A new approach based on frequency multiplexing is used to encode the interference signals corresponding to multiple illumination directions in different spectral bands. Different optical paths along each illumination direction ensure that the signals corresponding to each sensitivity vector do not overlap in the frequency domain. All the information required to reconstruct the location and the 3-D displacement vector of scattering points within the material is thus recorded simultaneously in a single wavelength scan. By comparing the phase data volumes obtained in two successive scans, all orthogonal components of the three-dimensional displacement field introduced between the scans (e.g. by loading or moving the sample under study) are readily obtained with high displacement sensitivity. The fundamental principle behind the technique is presented in detail, including the correspondence between the interference signal frequency and its associated depth within the sample, the depth range, depth resolution, transverse resolution and displacement sensitivity. Data processing of the interference signal includes Fourier transformation, noise reduction, re-registration of data volumes, measurement of the illumination and sensitivity vectors from experimental data using a datum surface, phase difference evaluation, 3-D phase unwrapping and 3-D displacement field evaluation. Experiments consisting of controlled rigid-body rotations and translations of a phantom were performed to validate the results. Both the in-plane and the out-of-plane displacement components were measured for each voxel in the resulting data volume, showing excellent agreement with the expected 3-D displacement.
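    The core principle can be illustrated with a toy single-scatterer example: the scatterer's depth follows from the FFT peak of the signal recorded during the wavelength scan, and its displacement between two scans follows from the phase change of that peak. The NumPy sketch below uses made-up scan parameters and ignores the multi-directional illumination and sensitivity vectors of the real system.

```python
import numpy as np

# A scatterer at optical path z modulates the detected intensity at a
# frequency proportional to z during the wavenumber scan; its depth comes
# from an FFT peak, and the phase change between two scans gives
# sub-wavelength displacement along the sensitivity direction.
n = 2048
k = np.linspace(7.80e6, 7.85e6, n)        # scanned wavenumber (rad/m)
z = 1.5e-3                                 # scatterer optical path (m)
dz = 40e-9                                 # displacement between scans (m)

def scan(depth):
    return 1.0 + np.cos(2 * k * depth)     # interference signal vs. k

def peak_phase(signal):
    spec = np.fft.rfft(signal * np.hanning(n))
    i = np.argmax(np.abs(spec[1:])) + 1    # skip the DC term
    return i, np.angle(spec[i])

i1, ph1 = peak_phase(scan(z))
i2, ph2 = peak_phase(scan(z + dz))
dphi = np.angle(np.exp(1j * (ph2 - ph1)))  # wrapped phase difference
print("recovered displacement ~", dphi / (2 * k[0]))   # ~ 40 nm
```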