19 research outputs found

    PC-grade parallel processing and hardware acceleration for large-scale data analysis

    Get PDF
Modern graphics processing units (GPUs) are arguably the first commodity desktop parallel processors. Although GPU programming originated in the interactive rendering of graphical applications such as computer games, researchers in the field of general-purpose computation on GPU (GPGPU) have shown that the power, ubiquity and low cost of GPUs make them an ideal alternative platform for high-performance computing. This has led to extensive exploration of the GPU for accelerating general-purpose computations in many engineering and mathematical domains outside graphics. However, constrained by the development complexity of graphics-oriented concepts and tools for GPU programming, GPGPU has so far been discussed mainly in academia and has not yet fulfilled its promise in the real world. This thesis exploits GPGPU in the practical engineering domain and presents a novel contribution to GPGPU-driven linear time-invariant (LTI) systems, as employed by the signal processing techniques of stylus-based and optical surface metrology and data processing. The core contributions achieved in this project can be summarized as follows.

    Firstly, a thorough survey of the state of the art of GPGPU applications and their development approaches has been carried out. In addition, the category of parallel architecture pattern to which GPGPU belongs has been specified, forming the foundation of the GPGPU programming framework design in the thesis. Following this specification, a GPGPU programming framework is derived as a general guideline for the various GPGPU programming models applied to a wide diversity of algorithms in scientific computing and engineering applications. Reflecting the evolution of GPU hardware architecture, the proposed framework covers both the graphics-originated concepts for GPGPU programming on legacy GPUs and the stream-processing abstraction represented by the compute unified device architecture (CUDA), in which the GPU is treated not only as a graphics device but as a streaming coprocessor of the CPU.

    Secondly, the proposed GPGPU programming framework is applied to practical engineering applications, namely surface metrology data processing and image processing, to generate programming models that carry out parallel computing for the corresponding algorithms. The acceleration performance of these models is evaluated in terms of speed-up factor and data accuracy, yielding quantifiable benchmarks for evaluating consumer-grade parallel processors. The GPGPU applications outperform the CPU solutions by up to 20 times without significant loss of data accuracy or any noticeable increase in source-code complexity, further validating the effectiveness of the proposed framework.

    Thirdly, the thesis devises methods for visualizing results directly on the GPU by storing processed data in local GPU memory and exploiting the GPU's rendering features to achieve real-time interaction. The algorithms employed include various filtering techniques, the discrete wavelet transform and the fast Fourier transform, which cover the common operations implemented in most LTI systems in the spatial and frequency domains.

    Considering the employed GPUs' hardware designs, especially the structure of the rendering pipelines, and the characteristics of the algorithms, the series of proposed GPGPU programming models has proven its feasibility, practicality and robustness in real engineering applications. The developed GPGPU programming framework and programming models are anticipated to be adaptable to future consumer-level computing devices and other computationally demanding applications. In addition, it is envisaged that the principles and methods devised in the framework design are likely to have significant benefits outside the sphere of surface metrology.
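
    As a concrete illustration of the LTI operations the thesis accelerates, the sketch below applies the standard metrological Gaussian low-pass filter to a 1-D profile in the frequency domain via the FFT. It is a minimal CPU-side sketch in Python/NumPy, not the thesis's GPU implementation; the function name and the sampling parameters are illustrative assumptions.

        import numpy as np

        def gaussian_filter_fft(profile, dx, cutoff):
            """Metrological Gaussian low-pass filter (as in ISO 16610-21)
            applied to a 1-D surface profile in the frequency domain,
            i.e. an LTI operation realized with the FFT."""
            n = profile.size
            freqs = np.fft.rfftfreq(n, d=dx)         # spatial frequencies (1/m)
            alpha = np.sqrt(np.log(2.0) / np.pi)     # Gaussian filter constant
            H = np.exp(-np.pi * (alpha * cutoff * freqs) ** 2)  # transmission
            return np.fft.irfft(np.fft.rfft(profile) * H, n)

        # Example: extract the mean line of a noisy profile sampled at 1 um
        # with a 0.8 mm cutoff (illustrative values).
        x = np.arange(4000) * 1e-6
        z = np.sin(2 * np.pi * x / 8e-4) + 0.05 * np.random.randn(x.size)
        mean_line = gaussian_filter_fft(z, dx=1e-6, cutoff=8e-4)

    Note that FFT-based filtering implies circular convolution, so the profile boundaries wrap around; this is precisely the kind of boundary behaviour addressed by the extrapolation work described in the following entries.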

    Gaussian process machine learning-based surface extrapolation method for improvement of the edge effect in surface filtering

    Get PDF
Filtering of signals and data is an important technique for reducing or removing noise so that the desired information can be extracted. However, it is well known that significant distortions may occur in the boundary areas of the filtered data, because there are insufficient data to be processed there. This drawback largely affects the accuracy of topographic measurement and characterization of precision freeform surfaces, such as freeform optics. To address this issue, a Gaussian process machine learning-based method is presented that extrapolates the measured surface to an extended measurement area with high accuracy prior to filtering. With the extrapolated data, the edge distortion can be effectively reduced. The effectiveness of this method was evaluated using both simulated and experimental data. Successful implementation of the proposed method not only addresses the issue in surface filtering but also provides a promising solution for numerous applications involving filtering processes.
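
    The extrapolation step can be sketched in a few lines with scikit-learn's Gaussian process regressor. This is a minimal 1-D version under assumed kernel and extension-width choices, not the paper's implementation.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        def extend_profile(x, z, dx, n_ext):
            """Extrapolate a measured 1-D profile by n_ext samples on each
            side with GP regression, so that a subsequent convolution
            filter has real support at the original boundaries."""
            gp = GaussianProcessRegressor(
                kernel=RBF(length_scale=50 * dx) + WhiteKernel(noise_level=1e-6),
                normalize_y=True)
            gp.fit(x.reshape(-1, 1), z)
            left = x[0] - dx * np.arange(n_ext, 0, -1)    # new abscissae, left
            right = x[-1] + dx * np.arange(1, n_ext + 1)  # new abscissae, right
            z_new = gp.predict(np.concatenate([left, right]).reshape(-1, 1))
            return (np.concatenate([left, x, right]),
                    np.concatenate([z_new[:n_ext], z, z_new[n_ext:]]))

    After filtering the extended profile, the added margins are cropped so that only the originally measured area is characterized. For large point sets the GP would typically be fitted only to data near each boundary, to keep the cubic cost of GP regression tractable.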

    A study of extrapolation of freeform surfaces to improve the edge effect in surface filtering

    Get PDF
Surface filtering is an active research topic, especially in freeform surface metrology, since filtering is an essential data processing step before further characterization of measured surfaces. Researchers have developed a large number of surface filtering algorithms to improve the robustness and accuracy of filtering results. However, the results are still far from ideal, particularly in the edge area, where large distortions are routinely found. This so-called edge effect is mainly caused by a lack of data when the filtering algorithm performs its convolution near the boundary. In this paper, a Gaussian process machine learning-based extrapolation method is presented that extends the measured surface before filtering: a Gaussian process data model is used for the surface extrapolation, and a Gaussian filter is then applied to the extended surface. A series of simulated and practical measurement experiments was conducted to evaluate the performance of the proposed method, and its accuracy and efficiency are demonstrated and analyzed. The results show that the edge effect can be significantly reduced, and the efficiency also improved, by introducing the extrapolation step. The proposed method provides a new route to surface filtering, and thus surface characterization, for complex freeform surfaces.
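
    The edge effect itself is easy to reproduce: filter the same data once with synthetic boundary padding and once with genuine data beyond the evaluation window, then compare. A small NumPy/SciPy sketch with illustrative values, not the paper's experiment:

        import numpy as np
        from scipy.ndimage import gaussian_filter1d

        rng = np.random.default_rng(0)
        n, pad, sigma = 1000, 160, 40
        z_wide = np.sin(2 * np.pi * np.arange(n + 2 * pad) / n)
        z_wide += 0.01 * rng.standard_normal(z_wide.size)
        z = z_wide[pad:-pad]                 # the "measured" window only

        # Naive filtering pads the boundary with mirrored data.
        naive = gaussian_filter1d(z, sigma, mode='reflect')
        # Reference: filter with genuine data beyond the window, then crop.
        reference = gaussian_filter1d(z_wide, sigma)[pad:-pad]

        err = np.abs(naive - reference)
        print(f"edge error {err[:sigma].mean():.4f}  "
              f"centre error {err[n//2 - sigma//2:n//2 + sigma//2].mean():.4f}")

    The error is concentrated at the boundaries; GP extrapolation aims to approximate the reference case when no data beyond the measurement window exist.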

    A fuzzy neural network based dynamic data allocation model on heterogeneous multi-GPUs for large-scale computations

    Get PDF
The parallel computation capabilities of modern GPU (graphics processing unit) processors have attracted increasing attention from researchers and engineers conducting studies that demand high computational throughput. However, current single-GPU engineering solutions often struggle to fulfil real-time requirements, so multi-GPU approaches have become a popular and cost-effective choice for meeting these demands. In such systems, balancing the computational load over multiple GPU "nodes" is often the key bottleneck affecting the quality and performance of the runtime system. Existing load balancing approaches are mainly based on the assumption that all GPU nodes in the same computing framework have equal computational performance, which is often not the case owing to cluster design and other legacy issues. This paper presents a novel dynamic load balancing (DLB) model for rapid data division and allocation on heterogeneous GPU nodes based on an innovative fuzzy neural network (FNN). A 5-state-parameter feedback mechanism defining the overall cluster and node performance is proposed, and the corresponding FNN-based DLB model is capable of monitoring and predicting individual node performance under different workload scenarios. A real-time adaptive scheduler has been devised to reorganize the data input to each node when necessary to maintain runtime computational performance. The devised model has been implemented on two-dimensional (2D) discrete wavelet transform (DWT) tasks for evaluation. Experimental results show that the DLB model enables high computational throughput while meeting the real-time and precision requirements of complex computational tasks.
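
    The scheduling idea can be pictured as a feedback loop that re-partitions work in proportion to each node's observed throughput. The sketch below is a deliberately simplified proportional scheduler in Python: process_on_device is a hypothetical stand-in for dispatching a batch of 2-D DWT rows to one GPU, and the exponential smoothing stands in for the paper's FNN-based performance predictor.

        import time
        import numpy as np

        def balanced_splits(n_rows, throughput):
            """Divide n_rows of a 2-D task across devices in proportion
            to their estimated throughput (rows per second)."""
            shares = np.asarray(throughput) / np.sum(throughput)
            counts = np.floor(shares * n_rows).astype(int)
            counts[-1] += n_rows - counts.sum()      # absorb rounding error
            return np.split(np.arange(n_rows), np.cumsum(counts)[:-1])

        throughput = [1.0, 1.0, 1.0]   # initial guess for three GPU nodes
        for batch in range(5):
            for dev, rows in enumerate(balanced_splits(4096, throughput)):
                t0 = time.perf_counter()
                process_on_device(dev, rows)   # hypothetical: run DWT rows on GPU `dev`
                dt = time.perf_counter() - t0
                # Feedback: blend the new observation into the estimate so
                # the next partition reflects actual node performance.
                throughput[dev] = 0.7 * throughput[dev] + 0.3 * len(rows) / dt

    Faster nodes accumulate larger shares over successive batches, which is the behaviour the FNN model learns and predicts rather than tracking reactively.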

    Real Time Structured Light and Applications

    Get PDF

    Raspberry Pi Technology

    Get PDF

    Advanced Knowledge Application in Practice

    Get PDF
The integration and interdependency of the world economy are leading to the creation of a global market that offers more opportunities, but is also more complex and competitive than ever before. Widespread research activity is therefore necessary to remain successful in the market. This book is the result of research and development activities by a number of researchers worldwide, covering concrete fields of research.

    Modern Applications in Optics and Photonics: From Sensing and Analytics to Communication

    Get PDF
Optics and photonics are among the key technologies of the 21st century and offer potential for novel applications in areas such as sensing and spectroscopy, analytics, monitoring, biomedical imaging/diagnostics, and optical communication technology. The high degree of control over light fields, together with the capabilities of modern processing and integration technology, enables new optical measurement systems with enhanced functionality and sensitivity. These are attractive for a range of applications that were previously inaccessible. This Special Issue aims to provide an overview of some of the most advanced application areas in optics and photonics and to indicate their broad potential for the future.