
    A Data-Driven Approximation of the Koopman Operator: Extending Dynamic Mode Decomposition

    The Koopman operator is a linear but infinite-dimensional operator that governs the evolution of scalar observables defined on the state space of an autonomous dynamical system, and is a powerful tool for the analysis and decomposition of nonlinear dynamical systems. In this manuscript, we present a data-driven method for approximating the leading eigenvalues, eigenfunctions, and modes of the Koopman operator. The method requires a data set of snapshot pairs and a dictionary of scalar observables, but does not require explicit governing equations or interaction with a "black box" integrator. We will show that this approach is, in effect, an extension of Dynamic Mode Decomposition (DMD), which has been used to approximate the Koopman eigenvalues and modes. Furthermore, if the data provided to the method are generated by a Markov process instead of a deterministic dynamical system, the algorithm approximates the eigenfunctions of the Kolmogorov backward equation, which could be considered the "stochastic Koopman operator" [1]. Finally, four illustrative examples are presented: two that highlight the quantitative performance of the method when presented with either deterministic or stochastic data, and two that show potential applications of the Koopman eigenfunctions.
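
    As a rough illustration of the data-driven approximation described above, the sketch below follows the standard EDMD recipe: build dictionary matrices from the snapshot pairs, solve a least-squares problem for a finite-dimensional approximation of the Koopman operator, and read off its eigenvalues and eigenvectors. The monomial dictionary and the logistic-map data are illustrative assumptions, not choices taken from the manuscript.

```python
import numpy as np

def monomials(x, degree=3):
    """Illustrative dictionary: monomials 1, x, x^2, ... of a scalar state."""
    x = np.atleast_2d(x)
    return np.hstack([x ** d for d in range(degree + 1)])

def edmd(X, Y, dictionary=monomials):
    """Minimal EDMD sketch.  X, Y are (n_snapshots, n_states) arrays of
    snapshot pairs, with Y[i] the image of X[i] under the dynamics."""
    Psi_X, Psi_Y = dictionary(X), dictionary(Y)
    G = Psi_X.T @ Psi_X / len(X)           # Gram matrix
    A = Psi_X.T @ Psi_Y / len(X)
    K = np.linalg.pinv(G) @ A              # finite-dimensional Koopman approximation
    eigvals, eigvecs = np.linalg.eig(K)    # Koopman eigenvalues / eigenfunction coefficients
    return eigvals, eigvecs, K

# Hypothetical usage on logistic-map data x_{k+1} = 2 x_k (1 - x_k):
X = np.random.rand(500, 1)
Y = 2.0 * X * (1.0 - X)
eigvals, eigvecs, K = edmd(X, Y)
```

    Approximate Koopman eigenfunctions are then evaluated as dictionary(x) @ eigvecs, and Koopman modes follow from projecting the full state onto the dictionary.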

    Highly Parallel Geometric Characterization and Visualization of Volumetric Data Sets

    Volumetric 3D data sets are being generated in many different application areas. Some examples are CAT scans and MRI data, 3D models of protein molecules represented by implicit surfaces, multi-dimensional numeric simulations of plasma turbulence, and stacks of confocal microscopy images of cells. The size of these data sets has been increasing, requiring the speed of analysis and visualization techniques to increase as well. Recent advances in processor technology have stopped increasing clock speed and instead begun increasing parallelism, resulting in multi-core CPUs and many-core GPUs. To take advantage of these new parallel architectures, algorithms must be explicitly written to exploit parallelism. In this thesis we describe several algorithms and techniques for volumetric data set analysis and visualization that are amenable to these modern parallel architectures. We first discuss modeling volumetric data with Gaussian Radial Basis Functions (RBFs). RBF representation of a data set has several advantages, including lossy compression, analytic differentiability, and analytic application of Gaussian blur. We also describe a parallel volume rendering algorithm that can create images of the data directly from the RBF representation. Next we discuss a parallel, stochastic algorithm for measuring the surface area of volumetric representations of molecules. The algorithm is suitable for implementation on a GPU and is also progressive, allowing it to return a rough answer almost immediately and refine the answer over time to the desired level of accuracy. After this we discuss the concept of Confluent Visualization, which allows the visualization of the interaction between a pair of volumetric data sets. The interaction is visualized through volume rendering, which is well suited to implementation on parallel architectures. Finally we discuss a parallel, stochastic algorithm for classifying stem cells as having been grown on a surface that induces differentiation or on a surface that does not. The algorithm takes as input 3D volumetric models of the cells generated from confocal microscopy. This algorithm builds on our algorithm for surface area measurement and, like that algorithm, is suitable for implementation on a GPU and is progressive.
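
    As a minimal illustration of the RBF representation discussed above, the sketch below evaluates a volumetric scalar field stored as a weighted sum of isotropic Gaussian RBFs, together with its analytic gradient (one of the advantages cited). The function names and NumPy vectorization are assumptions for illustration; a GPU implementation would map the per-point evaluation onto threads.

```python
import numpy as np

def gaussian_rbf_field(points, centers, weights, sigmas):
    """Evaluate f(p) = sum_i w_i * exp(-|p - c_i|^2 / (2 sigma_i^2)).
    points: (P, 3) sample positions; centers: (C, 3); weights, sigmas: (C,)."""
    d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)  # (P, C)
    return (weights * np.exp(-d2 / (2.0 * sigmas ** 2))).sum(axis=1)     # (P,)

def gaussian_rbf_gradient(points, centers, weights, sigmas):
    """Analytic gradient of the same field with respect to the sample positions."""
    diff = points[:, None, :] - centers[None, :, :]                      # (P, C, 3)
    d2 = (diff ** 2).sum(axis=-1)
    g = weights * np.exp(-d2 / (2.0 * sigmas ** 2)) / sigmas ** 2        # (P, C)
    return -(g[:, :, None] * diff).sum(axis=1)                           # (P, 3)
```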

    Dynamic reconstruction of sea clutter

    Master's thesis (Master of Engineering)

    2nd International Conference on Numerical and Symbolic Computation

    The Organizing Committee of SYMCOMP2015 – 2nd International Conference on Numerical and Symbolic Computation: Developments and Applications welcomes all the participants and acknowledges the contribution of the authors to the success of this event. This Second International Conference on Numerical and Symbolic Computation is promoted by APMTAC - Associação Portuguesa de Mecânica Teórica, Aplicada e Computacional and was organized in the context of IDMEC/IST - Instituto de Engenharia Mecânica. With this ECCOMAS Thematic Conference it is intended to bring together academic and scientific communities involved with Numerical and Symbolic Computation in the most varied scientific areas.

    Radial Basis Function Neural Network Based on an Improved Exponential Decreasing Inertia Weight-Particle Swarm Optimization Algorithm for AQI Prediction

    This paper proposes a novel radial basis function (RBF) neural network model optimized by exponential decreasing inertia weight particle swarm optimization (EDIW-PSO). Based on the inertia weight decreasing strategy, we propose a new Exponential Decreasing Inertia Weight (EDIW) to improve the PSO algorithm. We use the modified EDIW-PSO algorithm to determine the centers, widths, and connection weights of the RBF neural network. To assess the performance of the proposed EDIW-PSO-RBF model, we choose the daily air quality index (AQI) of Xi’an for prediction and obtain improved results.
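
    The sketch below shows one way such an optimizer can be wired up: a plain PSO loop whose inertia weight decays exponentially over the iterations, applied to a flattened vector of RBF parameters (centers, widths and connection weights). The decay schedule w(t) = w_min + (w_max - w_min) * exp(-4 t / T) and all constants are illustrative assumptions; the paper's exact EDIW formula may differ.

```python
import numpy as np

def pso_ediw(cost, dim, n_particles=30, iters=200,
             w_max=0.9, w_min=0.4, c1=2.0, c2=2.0, bounds=(-1.0, 1.0)):
    """PSO with an exponentially decreasing inertia weight (illustrative schedule)."""
    lo, hi = bounds
    x = np.random.uniform(lo, hi, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                                 # particle velocities
    pbest = x.copy()
    pbest_val = np.array([cost(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for t in range(iters):
        w = w_min + (w_max - w_min) * np.exp(-4.0 * t / iters)  # EDIW-style decay
        r1, r2 = np.random.rand(*x.shape), np.random.rand(*x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([cost(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()
```

    Here `cost` would unpack each particle vector into RBF centers, widths and connection weights and return the prediction error of the resulting network on the training data.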

    High-dimensional data driven parameterized macromodeling

    The abstract is in the attachment.

    Quasi-optimization of Neuro-fuzzy Expert Systems using Asymptotic Least-squares and Modified Radial Basis Function Models: Intelligent Planning of Operational Research Problems

    The uncertainty found in many industrialization systems poses a significant challenge, particularly in modelling production planning and optimizing manufacturing flow. In aggregate production planning, a key requirement is an ability to accurately predict demand from a range of influencing factors, such as consumption. Accurately building such causal models can be problematic if significant uncertainties are present, such as when the data are fuzzy, uncertain, fluctuating and non-linear. AI models, such as Adaptive Neuro-Fuzzy Inference Systems (ANFIS), can cope with this better than most, but even these well-established approaches fail if the data are scarce, poorly scaled and noisy. ANFIS is a combination of two approaches: a Sugeno-type Fuzzy Inference System (FIS) and Artificial Neural Networks (ANN). Two sets of parameters are required to define the model: premise parameters and consequent parameters. Together, they ensure that the correct number and shape of membership functions are used and combined to produce reliable outputs. However, optimally determining values for these parameters can only happen if there are enough data samples representing the problem space to ensure that the method can converge. Mitigation strategies are suggested in the literature, such as fixing the premise parameters to avoid over-fitting, but, for many practitioners, this is not an adequate solution, as their expertise lies in the application domain, not in the AI domain. The work presented here is motivated by a real-world challenge in modelling and predicting demand for the gasoline industry in Iraq, an application where both the quality and quantity of the training data can significantly affect prediction accuracy. To overcome data scarcity, we propose novel data expansion algorithms that are able to augment the original data with new samples drawn from the same distribution. By using a combination of carefully chosen and suitably modified radial basis function models, we show how robust methods can overcome problems of over-smoothing at boundary values and turning points. We further show how a transformed least-squares (TLS) approximation of the data can be constructed to asymptotically bound the effect of outliers and enable accurate data expansion to take place. Though the problem of scaling/normalization is well understood in some AI applications, we assess the impact on model accuracy of two specific scaling techniques. By comparing and contrasting a range of data scaling and data expansion methods, we can evaluate their effectiveness in reducing prediction error. Throughout this work, the various methods are explained and expanded upon using the case study drawn from the oil and gas industry in Iraq, which focuses on the accurate prediction of yearly gasoline consumption. This case study, and others, are used to demonstrate, empirically, the effectiveness of the approaches presented when compared to the current state of the art. Finally, we present a tool developed in Matlab to allow practitioners to experiment with all methods and options presented in this work.
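
    As a simplified illustration of the data-expansion idea, the sketch below fits a Gaussian RBF interpolant to a scarce 1-D series and resamples it at intermediate points. It is a generic RBF interpolation under assumed shape and regularization parameters, not the modified-RBF / transformed least-squares scheme developed in the thesis, and the sample figures are made up purely for illustration.

```python
import numpy as np

def rbf_expand(x, y, n_new=50, eps=1.0, reg=1e-8):
    """Expand a scarce 1-D data set (x, y) by sampling a Gaussian RBF
    interpolant at n_new points spanning the original range (illustrative)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    K = np.exp(-(eps * (x[:, None] - x[None, :])) ** 2)      # kernel matrix
    w = np.linalg.solve(K + reg * np.eye(len(x)), y)         # RBF weights
    x_new = np.linspace(x.min(), x.max(), n_new)
    K_new = np.exp(-(eps * (x_new[:, None] - x[None, :])) ** 2)
    return x_new, K_new @ w

# Hypothetical usage: yearly consumption figures (made-up numbers) expanded to a denser grid
years = np.array([2010.0, 2012.0, 2014.0, 2016.0, 2018.0])
demand = np.array([10.2, 11.5, 13.1, 12.4, 14.0])
years_new, demand_new = rbf_expand(years, demand, n_new=40, eps=0.5)
```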

    3D-3D Deformable Registration and Deep Learning Segmentation based Neck Diseases Analysis in MRI

    Whiplash, cervical dystonia (CD), neck pain and work-related upper limb disorder (WRULD) are the most common diseases in the cervical region. Headaches, stiffness, sensory disturbance to the legs and arms, optical problems, aching in the back and shoulder, and auditory and visual problems are common symptoms seen in patients with these diseases. CD patients may also suffer tormenting spasticity in some neck muscles, with the symptoms possibly being acute and persisting for a long time, sometimes a lifetime. Whiplash-associated disorders (WADs) may occur due to sudden forward and backward movements of the head and neck occurring during a sporting activity or a vehicle or domestic accident. These diseases affect private industries, insurance companies and governments, with the socio-economic costs significantly related to work absences, long-term sick leave, early disability and disability support pensions, health care expenses, reduced productivity and insurance claims. Therefore, diagnosing and treating neck-related diseases are important issues in clinical practice. The reason for these afflictions resulting from accidents is the impairment of the cervical muscles, which undergo atrophy or pseudo-hypertrophy due to fat infiltrating into them. These morphological changes have to be determined by identifying and quantifying their bio-markers before applying any medical intervention. Volumetric studies of neck muscles are reliable indicators of the proper treatments to apply. Radiation therapy, chemotherapy, injection of a toxin or surgery could be possible ways of treating these diseases. However, the dosages required should be precise because the neck region contains some sensitive organs, such as nerves, blood vessels, the trachea and the spinal cord. Image registration and deep learning-based segmentation can help to determine appropriate treatments by analyzing the neck muscles. However, this is a challenging task for medical images due to complexities such as many muscles crossing multiple joints and attaching to many bones. Also, their shapes and sizes vary greatly across populations, whereas their cross-sectional areas (CSAs) do not change in proportion to the heights and weights of individuals, and their sizes vary more significantly between males and females than with age. Therefore, the neck's anatomical variabilities are much greater than those of other parts of the human body. Some other challenges which make analyzing neck muscles very difficult are their compactness, similar gray-level appearances, intra-muscular fat, sliding due to cardiac and respiratory motions, false boundaries created by intramuscular fat, low resolution and contrast in medical images, noise, inhomogeneity and background clutter with the same composition and intensity. Furthermore, a patient's mode, position and neck movements during the capture of an image create variability. However, very little significant research work has been conducted on analyzing neck muscles. Although previous image registration efforts form a strong basis for many medical applications, none can satisfy the requirements of all of them because of the challenges associated with their implementation and low accuracy, which could be due to anatomical complexities and variabilities or the artefacts of imaging devices. Among existing methods, multi-resolution- and heuristic-based approaches are popular.
However, the above issues cause conventional multi-resolution-based registration methods to be trapped in local minima due to their low degrees of freedom in their geometrical transforms. Although heuristic-based methods are good at handling large mismatches, they require pre-segmentation and are computationally expensive. Also, current deformable methods often face statistical instability problems and many local optima when dealing with small mismatches. On the other hand, deep learning-based methods have achieved significant success over the last few years. Although a deeper network can learn more complex features and yield better performance, its depth cannot be increased indefinitely as this would cause the gradient to vanish during training and result in training difficulties. Recently, researchers have focused on attention mechanisms for deep learning, but current attention models face challenges for applications with multiple compact and similar small classes, large variability, low contrast and noise. The focus of this dissertation is on the design of 3D-3D image registration approaches as well as deep learning-based semantic segmentation methods for analyzing neck muscles. In the first part of this thesis, a novel object-constrained hierarchical registration framework for aligning inter-subject neck muscles is proposed. Firstly, to handle large-scale local minima, it uses a coarse registration technique which optimizes a new edge position difference (EPD) similarity measure to align large mismatches. Also, a new transformation based on the discrete periodic spline wavelet (DPSW), affine and free-form-deformation (FFD) transformations is exploited. Secondly, to avoid the monotonous nature of using transformations in multiple stages, a fine registration technique, which uses a double-pushing system by changing the edges in the EPD and switching the transformation's resolutions, is designed to align small mismatches. The EPD helps in both the coarse and fine techniques to implement object-constrained registration via controlling edges, which is not possible using traditional similarity measures. Experiments are performed on clinical 3D magnetic resonance imaging (MRI) scans of the neck, with the results showing that the EPD is more effective than the mutual information (MI) and the sum of squared differences (SSD) measures in terms of the volumetric Dice similarity coefficient (DSC). Also, the proposed method is compared with two state-of-the-art approaches with ablation studies of inter-subject deformable registration and achieves better accuracy, robustness and consistency. However, as this method is computationally complex and has a problem handling large-scale anatomical variabilities, another 3D-3D registration framework with two novel contributions is proposed in the second part of this thesis. Firstly, a two-stage heuristic search optimization technique for handling large mismatches, which uses a minimal user hypothesis regarding these mismatches and is computationally fast, is introduced. It brings a moving image hierarchically closer to a fixed one using MI and EPD similarity measures in the coarse and fine stages, respectively, while the images do not require pre-segmentation as is necessary in traditional heuristic optimization-based techniques.
Secondly, a region of interest (ROI) EPD-based registration framework for handling small mismatches using salient anatomical information (AI), in which a convex objective function is formed through a unique shape created from the desired objects in the ROI, is proposed. It is compared with two state-of-the-art methods on a neck dataset, with the results showing that it is superior in terms of accuracy and is computationally fast. In the last part of this thesis, an evaluation study of recent U-Net-based convolutional neural networks (CNNs) is performed on a neck dataset. It comprises 6 recent models (the U-Net, U-Net with a conditional random field (CRF-Unet), attention U-Net (A-Unet), nested U-Net or U-Net++, multi-feature pyramid (MFP)-Unet and recurrent residual U-Net (R2Unet)) and 4 with more comprehensive modifications (the multi-scale U-Net (MS-Unet), parallel multi-scale U-Net (PMSUnet), recurrent residual attention U-Net (R2A-Unet) and R2A-Unet++), applied to neck muscle segmentation, with analyses of the numerical results indicating that the R2Unet architecture achieves the best accuracy. Also, two deep learning-based semantic segmentation approaches are proposed. In the first, a new two-stage U-Net++ (TS-UNet++) uses two different types of deep CNNs (DCNNs) rather than one type, as in the traditional multi-stage method, with the U-Net++ in the first stage and the U-Net in the second. More convolutional blocks are added after the input and before the output layers of the multi-stage approach to better extract the low- and high-level features. A new concatenation-based fusion structure, which is incorporated in the architecture to allow deep supervision, helps to increase the depth of the network without aggravating the gradient-vanishing problem. Then, more convolutional layers are added after each concatenation of the fusion structure to extract more representative features. The proposed network is compared with the U-Net, U-Net++ and two-stage U-Net (TS-UNet) on the neck dataset, with the results indicating that it outperforms the others. In the second approach, an explicit attention method, in which the attention is performed through a ROI evolved from ground truth via dilation, is proposed. It does not require any additional CNN to localize the ROI, as a cascaded approach does. Attention in a CNN is sensitive to the area of the ROI. This dilated ROI is more capable of capturing relevant regions and suppressing irrelevant ones than a bounding box or region-level coarse annotation, and is used during training of any CNN. Coarse annotation, which does not require any detailed pixel-wise delineation and can be performed by any novice person, is used during testing. This proposed ROI-based attention method, which can handle multiple compact and similar small classes with large object variabilities, is compared with the automatic A-Unet and U-Net, and performs best.
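
    Since the registration and segmentation results above are all evaluated with the volumetric Dice similarity coefficient (DSC), a minimal sketch of that metric is shown below; the label-volume representation and function name are assumptions for illustration.

```python
import numpy as np

def dice_coefficient(seg_a, seg_b, label=1):
    """Volumetric Dice similarity coefficient between two 3D label volumes
    for a given muscle label: DSC = 2|A ∩ B| / (|A| + |B|)."""
    a = (np.asarray(seg_a) == label)
    b = (np.asarray(seg_b) == label)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```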