70 research outputs found

    Radial Basis Functions: Biomedical Applications and Parallelization

    Radial basis function (RBF) is a real-valued function whose value depends only on the distance between an interpolation point and a set of user-specified points called centers. RBF interpolation is one of the primary methods for reconstructing functions from multi-dimensional scattered data. Its ability to generalize to arbitrary space dimensions and to provide spectral accuracy has made it particularly popular in application areas including, but not limited to, numerical solution of partial differential equations (PDEs), image processing, computer vision and graphics, and deep learning and neural networks. The present thesis discusses three applications of RBF interpolation in biomedical engineering: (1) calcium dynamics modeling, in which we numerically solve a set of PDEs using meshless numerical methods and RBF-based interpolation techniques; (2) image restoration and transformation, where an image is restored from its triangular mesh representation or transformed (under translation, rotation, scaling, etc.) from its original form; and (3) porous structure design, in which RBF interpolation is used to reconstruct a 3D volume containing porous structures from a set of regularly or randomly placed points inside a user-provided surface shape. All three applications have been investigated, and their effectiveness is supported by numerous experimental results. In particular, we innovatively use anisotropic distance metrics to define the distance in RBF interpolation and apply them to the second and third applications, where they show significant improvement over the isotropic distance-based RBF method in preserving image features and capturing connected porous structures. Besides the algorithm designs and their biomedical applications, we also explore several common parallelization techniques (including OpenMP and CUDA-based GPU programming) to accelerate the present algorithms. In particular, we analyze how parallel programming can speed up the RBF-based meshless PDE solver as well as the image processing methods. While RBF interpolation has been widely used in various science and engineering fields, this thesis is expected to spark further interest among computational scientists and students in this fast-growing area, and specifically in applying these techniques to biomedical problems such as the ones investigated in the present work.
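As a concrete illustration of the interpolation scheme this abstract describes, the following is a minimal NumPy sketch of isotropic RBF interpolation with a multiquadric basis (the basis choice, shape parameter `epsilon`, and test function are illustrative, not taken from the thesis; the thesis's anisotropic metrics and GPU kernels are not reproduced):

```python
import numpy as np

def rbf_interpolate(centers, values, query, epsilon=1.0):
    """Interpolate scattered data as s(x) = sum_i w_i * phi(||x - c_i||),
    using the multiquadric basis phi(r) = sqrt(1 + (epsilon * r)^2)."""
    # Interpolation matrix A_ij = phi(||c_i - c_j||)
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    A = np.sqrt(1.0 + (epsilon * d) ** 2)
    w = np.linalg.solve(A, values)  # weights enforcing s(c_i) = f(c_i)
    dq = np.linalg.norm(query[:, None, :] - centers[None, :, :], axis=-1)
    return np.sqrt(1.0 + (epsilon * dq) ** 2) @ w

# Reconstruct the smooth test function f(x, y) = sin(pi*x) + y
# from 50 scattered samples in the unit square.
rng = np.random.default_rng(0)
centers = rng.uniform(0.0, 1.0, size=(50, 2))
values = np.sin(np.pi * centers[:, 0]) + centers[:, 1]
query = rng.uniform(0.0, 1.0, size=(5, 2))
approx = rbf_interpolate(centers, values, query)
exact = np.sin(np.pi * query[:, 0]) + query[:, 1]
print(np.max(np.abs(approx - exact)))  # small reconstruction error
```

An anisotropic variant, as used in the thesis's second and third applications, would replace the Euclidean norm above with a direction-dependent metric.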

    Diffeomorphic Transformations for Time Series Analysis: An Efficient Approach to Nonlinear Warping

    The proliferation and ubiquity of temporal data across many disciplines has sparked interest in similarity, classification, and clustering methods specifically designed to handle time series. A core issue when dealing with time series is determining their pairwise similarity, i.e., the degree to which a given time series resembles another. Traditional distance measures such as the Euclidean distance are not well suited due to the time-dependent nature of the data. Elastic metrics such as dynamic time warping (DTW) offer a promising approach, but are limited by their computational complexity, non-differentiability, and sensitivity to noise and outliers. This thesis proposes novel elastic alignment methods that use parametric, diffeomorphic warping transformations to overcome the shortcomings of DTW-based metrics. The proposed method is differentiable and invertible, well suited for deep learning architectures, robust to noise and outliers, computationally efficient, and expressive and flexible enough to capture complex patterns. Furthermore, a closed-form solution was developed for the gradient of these diffeomorphic transformations, which allows an efficient search in the parameter space, leading to better solutions at convergence. Leveraging the benefits of these closed-form diffeomorphic transformations, this thesis proposes a suite of advancements that include: (a) an enhanced temporal transformer network for time series alignment and averaging; (b) a deep-learning-based time series classification model that simultaneously aligns and classifies signals with high accuracy; (c) an incremental time series clustering algorithm that is warping-invariant, scalable, and able to operate under limited computational and time resources; and (d) a normalizing flow model that enhances the flexibility of affine transformations in coupling and autoregressive layers.
Comment: PhD thesis, defended at the University of Navarra on July 17, 2023. 277 pages, 8 chapters, 1 appendix.
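For reference, the quadratic-time DTW baseline that the diffeomorphic warpings aim to improve upon can be sketched as follows (a textbook dynamic-programming implementation, not code from the thesis):

```python
import numpy as np

def dtw_distance(x, y):
    """Classic dynamic time warping between two 1-D series.
    Runs in O(len(x) * len(y)) time -- one of the costs the thesis avoids."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            # Best of: insertion, deletion, diagonal match
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

t = np.linspace(0.0, 2.0 * np.pi, 100)
a = np.sin(t)
b = np.sin(t + 0.5)  # the same signal, shifted in time
# DTW absorbs the shift, so it is far below the pointwise L1 distance
print(dtw_distance(a, b), np.abs(a - b).sum())
```

Note the `min` in the recursion: it makes the distance non-differentiable, which is exactly what motivates replacing the discrete alignment with a smooth parametric warping when the metric must sit inside a gradient-trained model.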

    Towards Efficient Risk Quantification - Using GPUs and Variance Reduction Technique

    Value-at-Risk (VaR) provides information about global risk in trading. Demand for fast VaR calculation is rising because financial institutions need to measure risk in real time, and researchers in HPC have also recently turned their attention to this kind of demanding application. In this master's thesis, we introduce two complementary strategies to improve VaR calculation: one comes directly from financial mathematics (variance reduction), while the other takes advantage of recently available high-performance computing devices: GPUs. Our aim is to study the potential of these two approaches on well-chosen examples in order to evaluate how much computing time can be spared. Finally, we discuss alternative approaches worth studying in future work.
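As a minimal illustration of the Monte Carlo side of such a computation, the sketch below estimates one-day VaR for a single asset under geometric Brownian motion, using antithetic variates as a simple variance-reduction technique (all model parameters are illustrative assumptions; the thesis's actual portfolio models and GPU kernels are not reproduced):

```python
import numpy as np

def var_monte_carlo(s0, mu, sigma, horizon, alpha, n, rng, antithetic=True):
    """Monte Carlo Value-at-Risk for one asset under geometric Brownian motion.
    With antithetic=True, each normal draw z is paired with -z, which reduces
    the variance of the estimator at no extra sampling cost."""
    z = rng.standard_normal(n // 2 if antithetic else n)
    if antithetic:
        z = np.concatenate([z, -z])  # antithetic pairs
    s_t = s0 * np.exp((mu - 0.5 * sigma**2) * horizon
                      + sigma * np.sqrt(horizon) * z)
    losses = s0 - s_t
    return np.quantile(losses, alpha)  # loss not exceeded with probability alpha

rng = np.random.default_rng(42)
var95 = var_monte_carlo(s0=100.0, mu=0.05, sigma=0.2, horizon=1 / 252,
                        alpha=0.95, n=100_000, rng=rng)
print(var95)  # one-day 95% VaR, roughly 2% of the position for these parameters
```

The same sampling loop is embarrassingly parallel across scenarios, which is what makes it a natural fit for the GPU strategy the thesis investigates.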

    Efficient architectures of heterogeneous fpga-gpu for 3-d medical image compression

    The advent of three-dimensional (3-D) imaging modalities has generated a massive amount of volumetric data in 3-D images such as magnetic resonance imaging (MRI), computed tomography (CT), positron emission tomography (PET), and ultrasound (US). An existing survey reveals a large gap for further research in exploiting reconfigurable computing for 3-D medical image compression. This research proposes an FPGA-based co-processing solution to accelerate such medical imaging systems. The Haar wavelet transform (HWT) block is implemented on the sbRIO-9632 prototyping board, which carries a Spartan-3 (XC3S2000) FPGA chip, and analysis and performance evaluation of the 3-D images were conducted. Furthermore, a novel architecture is proposed for the context-based adaptive binary arithmetic coder (CABAC), the advanced entropy coding tool employed by the main and higher profiles of H.264/AVC. This research focuses on a GPU implementation of CABAC and a comparative study of 3-D medical image compression systems with and without the discrete wavelet transform (DWT). Implementation results on MRI and CT images show the GPU significantly outperforming a single-threaded CPU implementation. Overall, CT and MRI modalities with DWT outperform images without the DWT process in terms of compression ratio, peak signal-to-noise ratio (PSNR), and latency. For heterogeneous computing, MRI images of various sizes and formats, such as JPEG and DICOM, were used. Evaluation results show that, for each memory iteration, transfers from GPU to CPU consume more bandwidth (throughput); for a 786,486-byte JPEG image, the bandwidth consumed in the two directions tends to balance. Bandwidth is relative to the transfer size: larger transfers incur higher latency and throughput. Finally, OpenCL is used to implement concurrent tasks on a dedicated FPGA.
The implementation reveals that OpenCL in batch-processing mode with AOC techniques offers substantial results: the amounts of logic, area, registers, and memory increase proportionally with the number of batches, because the kernel block is replicated once per batch, so memory-bank usage grows accordingly. A comparative study found that the balanced-tree and unrolled-loop architecture provides better results in terms of local memory, latency, and throughput.
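To make the HWT/DWT stage concrete, here is a minimal software sketch of one level of the 2-D Haar wavelet transform (a standard formulation for illustration, not the thesis's FPGA design):

```python
import numpy as np

def haar_dwt_2d(img):
    """One level of the 2-D Haar wavelet transform.
    Returns the approximation band LL and the detail bands LH, HL, HH."""
    a = img.astype(float)
    # Row transform: pairwise averages (low-pass) and differences (high-pass)
    lo = (a[:, 0::2] + a[:, 1::2]) / 2
    hi = (a[:, 0::2] - a[:, 1::2]) / 2
    # Column transform of each half
    ll = (lo[0::2, :] + lo[1::2, :]) / 2
    lh = (lo[0::2, :] - lo[1::2, :]) / 2
    hl = (hi[0::2, :] + hi[1::2, :]) / 2
    hh = (hi[0::2, :] - hi[1::2, :]) / 2
    return ll, lh, hl, hh

# On a smooth ramp image, almost all energy lands in the LL band; the
# near-zero detail bands are what quantization and an entropy coder
# (e.g. CABAC) then compress efficiently.
img = np.arange(64, dtype=float).reshape(8, 8)
ll, lh, hl, hh = haar_dwt_2d(img)
print(np.abs(ll).sum(), np.abs(lh).sum() + np.abs(hl).sum() + np.abs(hh).sum())
```

Each output pixel depends only on a 2x2 input block, which is why the transform maps naturally onto the parallel FPGA and GPU pipelines the abstract describes.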

    Object Tracking

    Object tracking consists in estimating the trajectories of moving objects in a sequence of images. Automating computer object tracking is a difficult task: the dynamics of the many changing parameters that represent the features and motion of the objects, as well as temporary partial or full occlusion of the tracked objects, have to be considered. This monograph presents the development of object tracking algorithms, methods, and systems. Both the state of the art in object tracking methods and new research trends are described in this book. Fourteen chapters are split into two sections: Section 1 presents new theoretical ideas, whereas Section 2 presents real-life applications. Despite the variety of topics contained in this monograph, it constitutes a consistent body of knowledge in the field of computer object tracking. The editor's intention was to follow the very rapid progress in the development of these methods as well as the extension of their applications.

    Probabilistic data-driven methods for forecasting, identification and control

    This dissertation presents contributions mainly in three different fields: system identification, probabilistic forecasting, and stochastic control. Thanks to the concept of dissimilarity, and by defining an appropriate dissimilarity function, it is shown that a family of predictors can be obtained. First, a predictor that computes nominal forecasts of a time series or a dynamical system is presented. The effectiveness of the predictor is shown by means of a numerical example in which daily predictions of a stock index are computed; the obtained results turn out to be better than those obtained with popular machine learning techniques like neural networks. Similarly, the aforementioned dissimilarity function can be used to compute conditioned probability distributions. By means of the obtained distributions, interval predictions can be made using the concept of quantiles. However, doing so requires integrating the distribution over all possible values of the output. As this numerical integration process is computationally expensive, an alternative method that bypasses the computation of the probability distribution is also proposed. Not only is it computationally cheaper, but it also allows the computation of prediction regions, which are the multivariate version of interval predictions. Both methods present better results than other baseline approaches in a set of examples, including a stock forecasting example and the prediction of the Lorenz attractor. Furthermore, new methods to obtain models of nonlinear systems from input-output data are proposed. Two different model approaches are presented: a local-data approach and a kernel-based approach. A Kalman filter can be added to improve the quality of the predictions. It is shown that the forecasting performance of the proposed models is better than that of other machine learning methods in several examples, such as the forecasting of the sunspot number and the Rössler attractor.
Also, as these models are suitable for Model Predictive Control (MPC), new MPC formulations are proposed. Thanks to the distinctive features of the proposed models, the nonlinear MPC problem can be posed as a simple quadratic programming problem. Finally, by means of a simulation example and a real experiment, it is shown that the controller performs adequately. On the other hand, in the field of stochastic control, several methods are presented to bound the constraint-violation rate of any controller under bounded or unbounded disturbances. These can be used, for example, to tune hyperparameters of the controller. Simulation examples are provided to show the functioning of the algorithms. One of these examples considers the management of a data center, where an energy-efficient MPC-inspired policy is developed to reduce electricity consumption while keeping the quality of service at acceptable levels.
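The quantile-based interval prediction idea can be sketched with a simple kernel-weighted empirical quantile predictor. This is a hypothetical minimal illustration: the Gaussian weight and the `bandwidth` parameter stand in for the thesis's dissimilarity function, which is not reproduced here.

```python
import numpy as np

def interval_predict(X, y, x_query, q_lo=0.05, q_hi=0.95, bandwidth=0.5):
    """Prediction interval from kernel-weighted empirical quantiles.
    Past outputs whose regressors resemble x_query receive more weight;
    the Gaussian weight is a stand-in for a dissimilarity function."""
    d = np.linalg.norm(X - x_query, axis=1)  # dissimilarity to the query point
    w = np.exp(-((d / bandwidth) ** 2))      # similarity weights
    order = np.argsort(y)
    cdf = np.cumsum(w[order]) / w.sum()      # weighted empirical CDF of y
    lo = y[order][np.searchsorted(cdf, q_lo)]
    hi = y[order][np.searchsorted(cdf, q_hi)]
    return lo, hi

# Noisy samples of y = sin(x); ask for a 90% interval at x = 1
rng = np.random.default_rng(1)
X = rng.uniform(-2.0, 2.0, size=(2000, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(2000)
lo, hi = interval_predict(X, y, np.array([1.0]))
print(lo, hi)  # the interval should bracket the true value sin(1) ≈ 0.84
```

Note how the interval comes straight from sorted outputs and cumulative weights, with no numerical integration of a probability density, in the spirit of the integration-free method described above.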

    Ultrasound Imaging Innovations for Visualization and Quantification of Vascular Biomarkers

    The existence of plaque in the carotid arteries, which supply circulation to the brain, is a known risk factor for stroke and dementia, and it is present in 25% of the adult population. Proper assessment of carotid plaque may play a significant role in preventing and managing stroke and dementia. However, current plaque assessment routines have known limitations in assessing individual risk for future cardiovascular events. There is a practical need to derive new vascular biomarkers that are indicative of cardiovascular risk based on hemodynamic information. Nonetheless, deriving these biomarkers is not a trivial technical task, because none of the existing clinical imaging modalities has adequate time resolution to track the spatiotemporal dynamics of arterial blood flow, which is pulsatile in nature. The goal of this dissertation is to devise a new ultrasound imaging framework to measure vascular biomarkers related to turbulent flow, intra-plaque microvasculature, and blood flow rate. Central to the proposed framework is the use of high-frame-rate ultrasound (HiFRUS) imaging principles to track hemodynamic events at fine temporal resolution (frame rates greater than 1,000 frames per second). The existence of turbulent flow and intra-plaque microvessels, as well as anomalous blood flow rate, are all closely related to the formation and progression of carotid plaque. Therefore, quantifying these biomarkers can improve the identification of individuals with carotid plaque who are at risk for future cardiovascular events. To facilitate the testing and implementation of the proposed imaging algorithms, this dissertation includes the development of new experimental models (in the form of flow phantoms) and a new HiFRUS imaging platform with live scanning and on-demand playback functionalities. Pilot studies were also carried out on rats and human volunteers.
Results generally demonstrated the real-time performance and practical efficacy of the proposed algorithms. The proposed ultrasound imaging framework is expected to improve carotid plaque risk classification and, in turn, facilitate timely identification of at-risk individuals. It may also be used to derive new insights into carotid plaque formation and progression to aid disease management and the development of personalized treatment strategies.

    Connected Attribute Filtering Based on Contour Smoothness
