43 research outputs found

    PC-grade parallel processing and hardware acceleration for large-scale data analysis

    Arguably, the modern graphics processing unit (GPU) is the first commodity, desktop parallel processor. Although GPU programming originated in interactive rendering for graphical applications such as computer games, researchers in the field of general-purpose computation on GPUs (GPGPU) have shown that the power, ubiquity and low cost of GPUs make them an ideal alternative platform for high-performance computing. This has led to extensive exploration of the GPU for accelerating general-purpose computations in many engineering and mathematical domains outside graphics. However, owing to the development complexity introduced by the graphics-oriented concepts and tools for GPU programming, GPGPU has so far been discussed mainly in the academic domain and has not yet fulfilled its promise in the real world. This thesis aims to exploit GPGPU in a practical engineering domain and presents a novel contribution to GPGPU-driven linear time-invariant (LTI) systems, as employed by the signal processing techniques used in stylus-based and optical surface metrology and data processing. The core contributions of this project can be summarized as follows. Firstly, a thorough survey of the state of the art in GPGPU applications and their development approaches is carried out. In addition, the category of parallel architecture pattern to which GPGPU belongs is specified, forming the foundation of the GPGPU programming framework designed in the thesis. Following this specification, a GPGPU programming framework is derived as a general guideline for the various GPGPU programming models applied to a wide range of algorithms in scientific computing and engineering applications. Reflecting the evolution of GPU hardware architecture, the proposed framework covers both the graphics-originated concepts used for GPGPU programming on legacy GPUs and the stream-processing abstraction represented by the compute unified device architecture (CUDA), in which the GPU is treated not only as a graphics device but as a streaming coprocessor to the CPU. Secondly, the proposed GPGPU programming framework is applied to practical engineering applications, namely surface metrology data processing and image processing, to generate programming models that parallelize the corresponding algorithms. The acceleration performance of these models is evaluated in terms of speed-up factor and data accuracy, enabling quantifiable benchmarks for evaluating consumer-grade parallel processors. The GPGPU applications outperform the CPU solutions by up to 20 times without significant loss of data accuracy or any noticeable increase in source-code complexity, further validating the effectiveness of the proposed general GPGPU programming framework. Thirdly, the thesis devises methods for carrying out result visualization directly on the GPU, storing processed data in local GPU memory and exploiting the GPU's rendering features to achieve real-time interaction. The algorithms employed include various filtering techniques, the discrete wavelet transform and the fast Fourier transform, which cover the operations common to most LTI systems in the spatial and frequency domains.
Considering the hardware designs of the GPUs employed, especially the structure of their rendering pipelines, and the characteristics of the algorithms, the proposed series of GPGPU programming models has proven its feasibility, practicality and robustness in real engineering applications. The developed GPGPU programming framework and programming models are anticipated to be adaptable to future consumer-level computing devices and other computationally demanding applications. In addition, it is envisaged that the principles and methods devised in the framework design are likely to have significant benefits outside the sphere of surface metrology.
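To make the kind of operation being accelerated concrete, the following is a minimal CPU-side sketch in NumPy of a frequency-domain Gaussian profile filter, one of the LTI operations named above; the profile, sampling step and cutoff are illustrative values, and the GPU kernels themselves are not reproduced here.

```python
import numpy as np

def gaussian_mean_line(profile, dx, cutoff):
    """Long-wave (mean line) component of a 1-D surface profile via a Gaussian
    transmission characteristic applied in the frequency domain.

    profile : sampled heights (m), dx : sampling step (m), cutoff : lambda_c (m).
    Uses circular convolution via the FFT; end effects are ignored for brevity.
    """
    n = profile.size
    freq = np.fft.rfftfreq(n, d=dx)              # spatial frequencies (1/m)
    alpha = np.sqrt(np.log(2.0) / np.pi)         # ~0.4697, standard Gaussian filter constant
    transmission = np.exp(-np.pi * (alpha * cutoff * freq) ** 2)
    return np.fft.irfft(np.fft.rfft(profile) * transmission, n=n)

# Example: separate waviness and roughness of a synthetic profile.
x = np.arange(8000) * 0.5e-6                     # 0.5 um sampling
z = 1e-6 * np.sin(2 * np.pi * x / 2.5e-3) + 50e-9 * np.random.randn(x.size)
waviness = gaussian_mean_line(z, dx=0.5e-6, cutoff=0.8e-3)
roughness = z - waviness
```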

    Development of temporal phase unwrapping algorithms for depth-resolved measurements using an electronically tuned Ti:Sa laser

    This thesis is concerned with (a) the development of a full-field, multi-axis, phase-contrast wavelength scanning interferometer, using an electronically tuned CW Ti:Sa laser, for depth-resolved measurements in composite materials such as GFRPs, and (b) the development of temporal phase unwrapping algorithms for depth-resolved measurements. Item (a) was part of the ultimate goal of extracting the 3-D, depth-resolved constituent parameters (Young's modulus E, Poisson's ratio Îœ, etc.) that define the mechanical behaviour of composite materials like GFRPs. Given the success of OCT as an imaging modality, a wavelength scanning interferometer (WSI) capable of imaging both the intensity and the phase of the interference signal was proposed as the preferred technique to provide the volumetric displacement/strain fields (note that displacement/strain fields are analogous to phase fields, so a phase-contrast interferometer is of particular interest here). These would then be passed to the VFM to yield the sought parameters, provided the loading scheme is known. As a result, several key pieces of opto-mechanical hardware were developed. First, a multi-channel (×6) tomographic interferometer in a Mach-Zehnder arrangement was built. Three of its channels provide the information needed to extract the three orthogonal displacement/strain components, while the other three are complementary and were included in the design to maximize penetration depth (the sample is illuminated from both sides). Second, a miniature uniaxial (tensile and/or compression) loading machine was designed and built to introduce controlled, low-magnitude displacements. Last, a rotation stage for the experimental determination of the sensitivity vectors and the re-registration of the volumetric data from the six channels was also designed and built. Unfortunately, owing to a critical failure of the Ti:Sa laser, data collection with the last two items was not possible; however, preliminary results at a single wavelength suggest that they work as expected. Item (b) involved the development of an optical sensor for dynamically monitoring wavenumber changes during a full 100 nm scan. The sensor comprises a set of four wedges in a Fizeau interferometer arrangement that became part of the multi-axis interferometer (as a seventh channel). Its development became necessary because of the large number of mode hops present during a full scan of the Ti:Sa source. These are associated with the physics of the laser and have the undesirable effect of randomising the signal, thereby preventing successful depth reconstructions. The multi-wedge sensor was designed to provide both high resolution in wavenumber change and immunity to the large wavenumber jumps of the Ti:Sa. The analysis algorithms for extracting the sought wavenumber changes are based on the 2-D Fourier transform method followed by temporal phase unwrapping. First, the performance of the sensor was tested against that of a high-end commercial wavemeter over a limited 1 nm scan; a root-mean-square (rms) difference in measured wavenumber shift of ∌4 m⁻č was achieved between the two, equivalent to an rms wavelength-shift error of ∌0.4 pm. Second, by resampling the interference signal and the wavenumber-change axis onto a uniformly sampled k-space, depth resolutions close to the theoretical limits were achieved for scans of up to 37 nm.
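A minimal sketch of the resampling idea mentioned above, assuming a synthetic single-reflector signal: the interference intensity is interpolated onto a uniform wavenumber axis before the Fourier transform that yields the depth spectrum. The sampling values and noise level are illustrative only.

```python
import numpy as np

def depth_spectrum(intensity, k_measured):
    """Resample I(k) onto a uniform wavenumber grid and return its depth spectrum."""
    order = np.argsort(k_measured)
    k_sorted, i_sorted = k_measured[order], intensity[order]
    k_uniform = np.linspace(k_sorted[0], k_sorted[-1], k_sorted.size)
    i_uniform = np.interp(k_uniform, k_sorted, i_sorted)
    i_uniform -= i_uniform.mean()                              # suppress the DC background
    window = np.hanning(i_uniform.size)
    spectrum = np.abs(np.fft.rfft(i_uniform * window))
    dk = k_uniform[1] - k_uniform[0]
    depth_axis = np.pi * np.fft.rfftfreq(i_uniform.size, d=dk)  # optical path z for cos(2*k*z)
    return depth_axis, spectrum

# Synthetic example: a single reflector at 50 um optical depth, jittered k-sampling.
k = np.linspace(2 * np.pi / 800e-9, 2 * np.pi / 700e-9, 2048)
k += np.random.randn(k.size) * 1e2                             # mimic non-uniform tuning steps
signal = 1.0 + 0.5 * np.cos(2 * k * 50e-6)
z, spec = depth_spectrum(signal, k)
print(z[np.argmax(spec[1:]) + 1])                              # peak near 50e-6 m
```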
Access to the full 100 nm range, which is characterised by wavelength steps down to the picometre level, was achieved by introducing a number of improvements to the original temporal phase unwrapping algorithm reported in ref [1], tailored to depth-resolved measurements. These involved the estimation and suppression of intensity background artefacts, improvements to the 2-D Fourier transform phase detection based on a previously developed algorithm in ref [2], and finally the introduction of two modifications to the original TPU. Both approaches are adaptive and involve signal re-referencing at regular intervals throughout the scan. Their purpose is to compensate for systematic and non-systematic errors arising from a small error in the value of R (a scaling factor applied to the lower-sensitivity wedge phase-change signal used to unwrap the higher-sensitivity one), or from small changes in R with wavelength due to a possible mismatch in the refractive dispersion curves of the wedges and/or a mismatch in the wedge angles. A hybrid approach combining both methods was proposed and used to analyse the data from each of the four wedges. It was found to give the most robust results of all the techniques considered, with a clear Fourier peak at the expected frequency, significantly reduced spectral artefacts and identical depth resolutions of 2.2 Όm (measured at FWHM) for all four wedges. The ability of the phase unwrapping strategy to resolve the aforementioned issues was demonstrated by successfully measuring the absolute thickness of four fused silica glasses using real experimental data; the results showed excellent agreement with independent micrometer measurements. Finally, in the absence of additional experimental data and to justify the validity of the proposed temporal phase unwrapping strategy, termed the hybrid approach, a set of simulations closely matching the parameters of the real experimental data set was produced and analysed. The results of this final test confirm that the various fixes included in the hybrid approach are not tailored to the problems of a particular data set but are of a general nature, highlighting their importance for PC-WSI applications involving the processing and analysis of large scans.
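The role of the scaling factor R can be illustrated with a toy dual-sensitivity unwrapping sketch, in which the scaled low-sensitivity (coarse) phase is used to estimate the fringe order of the wrapped high-sensitivity phase; the phase signals and the value of R below are hypothetical stand-ins, not the thesis data.

```python
import numpy as np

def unwrap_with_coarse(phi_fine_wrapped, phi_coarse, R):
    """Unwrap a fine (high-sensitivity) phase using R * coarse phase as the estimate."""
    estimate = R * phi_coarse                                  # approximate unwrapped fine phase
    fringe_order = np.round((estimate - phi_fine_wrapped) / (2 * np.pi))
    return phi_fine_wrapped + 2 * np.pi * fringe_order

# Synthetic check: the true fine phase ramps over many fringes, coarse phase ~ fine / R.
R = 20.0
phi_true = np.linspace(0, 400 * np.pi, 5000)
phi_coarse = phi_true / R + 0.02 * np.random.randn(phi_true.size)
phi_wrapped = np.angle(np.exp(1j * phi_true))
recovered = unwrap_with_coarse(phi_wrapped, phi_coarse, R)
print(np.max(np.abs(recovered - phi_true)))                    # ~0 while R*noise stays below pi
```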

    A NEW METHOD OF WAVELENGTH SCANNING INTERFEROMETRY FOR INSPECTING SURFACES WITH MULTI-SIDE HIGH-SLOPED FACETS

    With the development of modern advanced manufacturing technologies, demand for ultra-precision structured surfaces is increasing rapidly in both high-value-added products and scientific research. Examples of components incorporating such structures include brightness enhancement film (BEF) and optical gratings. In addition, specially designed structured surfaces, namely metamaterials, can provide desirable coherence, angular or spatial characteristics that natural materials do not possess, and this promising field attracts substantial funding and investment. However, owing to a lack of effective means of inspecting structured surfaces, the manufacturing process relies heavily on the experience of fabrication operators and an expensive trial-and-error approach, resulting in scrap rates as high as 50-70% of the manufactured items. Overcoming this challenge is therefore increasingly valuable. This thesis proposes a novel methodology to tackle the challenge: an apparatus with multiple measurement probes acquires a dataset for each facet of the structured surface, and the acquired datasets are then merged on the basis of the relative locations of the probes, obtained through system calibration. The method relies on wavelength scanning interferometry (WSI), which can achieve areal measurement with axial resolutions approaching the nanometre without requiring mechanical scanning of either the sample or the optics, unlike comparable techniques such as coherence scanning interferometry (CSI). The absence of mechanical scanning makes it possible to use a multi-probe optical system providing simultaneous measurement over multiple adjacent fields of view. The thesis presents a proof-of-principle demonstration of a dual-probe wavelength scanning interferometry (DPWSI) system capable of measuring near-right-angle V-groove structures in a single measurement acquisition. The optical system comprises two probes with orthogonal measurement planes. For a given probe, a range of V-groove angles is measurable, limited by the acceptance angle of the objective lenses employed; this range can be expanded by designing equivalent probe heads with different angular separations, and more complex structured surfaces can be inspected by increasing the number of probes. The fringe analysis algorithms for WSI are discussed in detail, improvements are proposed, and experimental validation is conducted. The scheme for calibrating the DPWSI system and obtaining the relative location of the probes, so that the complete topography can be assembled, is implemented and presented in full. The DPWSI system is appraised using a multi-step diamond-turned specimen and a sawtooth brightness enhancement film (BEF). The results show that the proposed method can inspect near-right-angle V-groove structures with submicrometre vertical resolution and micrometre-level lateral resolution.
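A minimal sketch of the dataset-blending step described above, assuming a calibrated rigid transform between the two probes: each probe's point cloud is expressed in a common frame and the clouds are concatenated. The rotation, translation and point clouds below are placeholders, not calibration results from the thesis.

```python
import numpy as np

def to_common_frame(points, rotation, translation):
    """Apply a calibrated rigid transform (3x3 rotation, 3-vector translation) to an Nx3 cloud."""
    return points @ rotation.T + translation

# Hypothetical calibration result: probe 2 rotated ~90 deg about y relative to probe 1.
theta = np.deg2rad(90.0)
R_cal = np.array([[np.cos(theta), 0.0, np.sin(theta)],
                  [0.0,           1.0, 0.0          ],
                  [-np.sin(theta), 0.0, np.cos(theta)]])
t_cal = np.array([12.0e-3, 0.0, 3.0e-3])                 # metres, placeholder offset

probe1_points = np.random.rand(1000, 3) * 1e-3           # stand-in measurement data
probe2_points = np.random.rand(1000, 3) * 1e-3
merged = np.vstack([probe1_points, to_common_frame(probe2_points, R_cal, t_cal)])
```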

    Towards Intelligent Telerobotics: Visualization and Control of Remote Robot

    Human-machine cooperative robotics, or co-robotics, has been recognized as the next generation of robotics. In contrast to current systems, which use limited-reasoning strategies or address problems in narrow contexts, new co-robot systems will be characterized by their flexibility, resourcefulness, varied modeling and reasoning approaches, and use of real-world data in real time, demonstrating a level of intelligence and adaptability seen in humans and animals. This research focuses on two sub-fields of co-robotics: teleoperation and telepresence. We first explore teleoperation using mixed-reality techniques. I propose a new type of display, the hybrid-reality display (HRD), which uses a commodity projection device to project captured video frames onto a 3D replica of the actual target surface. It provides a direct alignment between the frame of reference of the human subject and that of the displayed image. The advantage of this approach is that users need not wear any device, minimizing intrusiveness and accommodating the eyes during focusing; the field of view is also significantly increased. From a user-centered design standpoint, the HRD is motivated by teleoperation accidents and incidents and by user research in military reconnaissance, where teleoperation is compromised by the keyhole effect resulting from a limited field of view. The technical contribution of the proposed HRD system is the multi-system calibration, which involves the motion sensor, projector, cameras and robotic arm; given the purpose of the system, the calibration accuracy must be within the millimeter level. Follow-up research on the HRD focuses on high-accuracy 3D reconstruction of the replica with commodity devices for better alignment of the video frames. Conventional 3D scanners either lack depth resolution or are very expensive; we therefore propose a structured-light-scanning 3D sensing system with accuracy within 1 millimeter that is robust to global illumination and surface reflection, and extensive user studies demonstrate the performance of the proposed algorithm. To compensate for the lack of synchronization between the local and remote stations caused by latency in data sensing and communication, a 1-step-ahead predictive control algorithm is presented. The latency between human control and robot movement can be formulated as a set of linear equations with a smoothing coefficient ranging from 0 to 1, and the predictive control algorithm can be further formulated as the optimization of a cost function. We then explore telepresence. Many hardware designs have been developed to place a camera optically directly behind the screen, the purpose being two-way video teleconferencing that maintains eye contact. However, the image from the see-through camera usually exhibits imaging artifacts such as a low signal-to-noise ratio, incorrect color balance and loss of detail. We therefore develop a novel image enhancement framework that uses an auxiliary color-plus-depth camera mounted on the side of the screen. By fusing the information from both cameras, we significantly improve the quality of the see-through image. Experimental results demonstrate that our fusion method compares favorably against traditional image enhancement/warping methods that use only a single image.
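As a rough illustration of a 1-step-ahead predictor with a smoothing coefficient in [0, 1], the sketch below blends each new command with the running prediction; the update rule, coefficient value and simulated commands are illustrative assumptions rather than the formulation used in this work.

```python
import numpy as np

def predict_next(history, alpha=0.6):
    """One-step-ahead estimate: blend the latest command with the running prediction.

    alpha in [0, 1] plays the role of the smoothing coefficient; alpha = 1 trusts the
    newest command entirely, alpha = 0 ignores it.
    """
    prediction = history[0]
    for command in history[1:]:
        prediction = alpha * command + (1.0 - alpha) * prediction
    return prediction

commands = np.cumsum(np.random.randn(50) * 0.01)   # simulated operator inputs
print(predict_next(commands))                       # value sent ahead to the remote robot
```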

    Reconstruction from Spatio-Spectrally Coded Multispectral Light Fields

    In this work, spectrally coded multispectral light fields, as captured by a light field camera with a spectrally coded microlens array, are investigated. Two methods are developed and evaluated in detail for the reconstruction of the coded light fields. First, a full reconstruction of the spectral light field is developed, based on the principles of compressed sensing. To represent the spectral light fields sparsely, 5D DCT bases as well as a dictionary learning approach are investigated. The conventional vectorized dictionary learning approach is generalized to a tensor notation in order to factorize the light field dictionary tensorially. Owing to the reduced number of parameters to be learned, this approach enables larger effective atom sizes. Second, a deep-learning-based reconstruction of the spectral central view and the associated disparity map from the coded light field is developed, estimating the desired information directly from the coded measurements. Different strategies for the corresponding multi-task training are compared. To further improve the reconstruction quality, a novel method for incorporating auxiliary loss functions based on their respective normalized gradient similarity is developed and shown to outperform previous adaptive methods. To train and evaluate the different reconstruction approaches, two datasets are created. First, a large synthetic spectral light field dataset with available ground-truth disparity is created using a ray tracer. This dataset, containing about 100k spectral light fields with associated disparity, is split into training, validation and test sets. To further assess quality, seven hand-crafted scenes, so-called dataset challenges, are created. Finally, a real spectral light field dataset is captured with a custom-built spectral light field reference camera. The radiometric and geometric calibration of the camera is discussed in detail. Using the new datasets, the proposed reconstruction approaches are evaluated in detail. Different coding masks are investigated: random, regular, and end-to-end optimized coding masks generated with a novel differentiable fractal generation. In addition, further investigations are carried out, for example regarding the dependence on noise, angular resolution or depth. Overall, the results are convincing and show a high reconstruction quality. The deep-learning-based reconstruction, especially when trained with adaptive multi-task and auxiliary loss strategies, outperforms the compressed-sensing-based reconstruction with subsequent state-of-the-art disparity estimation.
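As a small illustration of the compressed-sensing reconstruction principle described above, the sketch below recovers a DCT-sparse signal from incomplete coded samples with iterative soft thresholding (ISTA); a 2-D image stands in for the 5-D spectral light field, and all data, mask and parameters are synthetic assumptions.

```python
import numpy as np
from scipy.fft import dctn, idctn

def ista_dct(measured, mask, n_iter=200, lam=0.05, step=1.0):
    """ISTA for min 0.5*||mask*(x - measured)||^2 + lam*||DCT(x)||_1 (orthonormal DCT)."""
    x = np.zeros_like(measured)
    for _ in range(n_iter):
        grad = mask * (x - measured)                          # gradient of the data term
        coeffs = dctn(x - step * grad, norm="ortho")          # move to the sparse domain
        coeffs = np.sign(coeffs) * np.maximum(np.abs(coeffs) - lam * step, 0.0)
        x = idctn(coeffs, norm="ortho")                       # back to image space
    return x

# Synthetic smooth (hence DCT-sparse) scene observed through a random coding mask.
u, v = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
scene = np.cos(2 * np.pi * u) * np.cos(np.pi * v)
mask = (np.random.rand(64, 64) < 0.3).astype(float)           # 30% of samples observed
recovered = ista_dct(mask * scene, mask)
print(np.mean((recovered - scene) ** 2))                      # reconstruction error
```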

    Reconstruction from Spatio-Spectrally Coded Multispectral Light Fields

    In this work, spectrally coded multispectral light fields, as captured by a light field camera with a spectrally coded microlens array, are investigated. Two methods are developed for the reconstruction of the coded light fields, one based on the principles of compressed sensing and one deep learning approach. The proposed reconstruction approaches are evaluated in detail on novel synthetic as well as real-world datasets.

    Reconstruction from Spatio-Spectrally Coded Multispectral Light Fields

    In this work, spatio-spectrally coded multispectral light fields, as taken by a light field camera with a spectrally coded microlens array, are investigated. For the reconstruction of the coded light fields, two methods, one based on the principles of compressed sensing and one deep learning approach, are developed. Using novel synthetic as well as real-world datasets, the proposed reconstruction approaches are evaluated in detail.

    Multi-Modality Diffuse Fluorescence Imaging Applied to Preclinical Imaging in Mice

    This thesis aims to explore anatomical and functional information by developing new macroscopic multi-modality fluorescence imaging schemes. Adding anatomical imaging to functional modalities such as fluorescence enables better visualization and quantitative recovery of fluorescence images, in turn improving the monitoring and assessment of biological parameters in tissue. Based on this motivation, fluorescence was combined first with ultrasound (US) imaging and then with magnetic resonance imaging (MRI). In both cases, system characterization and reconstruction performance were evaluated through simulations and phantom experiments. The systems were then applied to in vivo molecular imaging in mouse models of cancer and atherosclerosis. The results were presented in three peer-reviewed articles, which are included in this thesis and briefly described below. The first article presented a dual-modality imaging system combining continuous-wave transmission fluorescence imaging with three-dimensional (3D) US imaging. Using motorized X-Y stages, the fluorescence-US imaging system collected boundary fluorescent emission and acoustic pulse-echoes delineating the 3D surface and the position of fluorescent inclusions within the sample. Validation in phantoms showed that using US anatomical priors significantly improved the quality of the fluorescence reconstruction. Furthermore, a pilot in vivo study using an Apo-E mouse evaluated the feasibility of this dual-modality imaging approach for future animal studies.
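A rough sketch of reconstruction with an anatomical prior, in the spirit of the US-guided recovery described above: a Tikhonov-regularized least-squares inverse in which the penalty is relaxed inside the US-segmented region. The Jacobian, measurements, segmentation and regularization weights are all synthetic placeholders, not the method or data of the thesis.

```python
import numpy as np

# Stand-in forward model: boundary measurements y = J @ x for a 1-D voxel vector x.
n_meas, n_vox = 120, 400
J = np.random.rand(n_meas, n_vox) * 1e-2              # hypothetical sensitivity (Jacobian) matrix
x_true = np.zeros(n_vox)
x_true[180:220] = 1.0                                  # fluorescent inclusion
y = J @ x_true + 1e-4 * np.random.randn(n_meas)        # simulated boundary measurements

# "US prior": a segmentation marking where the inclusion is expected to lie.
inside = np.zeros(n_vox, dtype=bool)
inside[170:230] = True
lam = np.where(inside, 1e-3, 1e-1)                     # weaker penalty inside the prior region

# Regularized least-squares solution: argmin ||J x - y||^2 + sum(lam_i * x_i^2).
x_rec = np.linalg.solve(J.T @ J + np.diag(lam), J.T @ y)
```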
In a second endeavor, building on the first work, we improved the fluorescence-US imaging system in terms of sampling precision and reconstruction algorithms. Specifically, by combining US imaging with profilometry, both the fluorescent target and the 3D surface of the sample could be obtained, enabling improved fluorescence reconstruction. Furthermore,