
    Robust computational intelligence techniques for visual information processing

    This Ph.D. thesis is about image processing by computational intelligence techniques. Firstly, a general overview of this book is given, describing the motivation, the hypothesis, the objectives, and the methodology employed. The use and analysis of different mathematical norms is our goal. After that, the state of the art for the proposed image processing applications is presented. In addition, the fundamentals of the image modalities, with particular attention to magnetic resonance, and the learning techniques used in this research, mainly based on neural networks, are summarized. To end up, the mathematical framework on which this work is based, the ℓp-norms, is defined. Three parts associated with image processing techniques follow. The first non-introductory part of this book collects the developments concerning image segmentation. Two of them are applications for video surveillance tasks and model the background of a scenario using a specific camera. The other work is centered on the medical field, where the goal of segmenting diabetic wounds in a very heterogeneous dataset is addressed. The second part is focused on the optimization and implementation of new models for curve and surface fitting in two and three dimensions, respectively. The first work presents a parabola fitting algorithm based on measuring the distances of the interior and exterior points to the focus and the directrix. The second work changes to an ellipse shape and ensembles the information of multiple fitting methods. Last, the ellipsoid problem is addressed in a similar way to the parabola. The third part is exclusively dedicated to the super-resolution of Magnetic Resonance Images. In one of these works, an algorithm based on the random shifting technique is developed. In addition, we study noise removal and resolution enhancement simultaneously. To end, the cost function of deep networks is modified by different combinations of norms in order to improve their training. Finally, the general conclusions of the research are presented and discussed, as well as possible future research lines that can build on the results obtained in this Ph.D. thesis.
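
    To make the ℓp-norm machinery concrete, a minimal sketch follows of a generic ℓp cost and a combination of norms of the kind the thesis applies to deep network training; the function names and the mixing weight are illustrative assumptions, not the thesis's exact formulation.

    import numpy as np

    def lp_cost(residual, p):
        # Generic l_p cost: (sum_i |r_i|^p)^(1/p); p = 2 recovers least squares.
        return np.sum(np.abs(residual) ** p) ** (1.0 / p)

    def mixed_norm_cost(residual, alpha=0.5):
        # One way to combine norms in a cost function: a convex mix of
        # l_1 and l_2 terms (the weight alpha is purely illustrative).
        return alpha * lp_cost(residual, 1) + (1 - alpha) * lp_cost(residual, 2)

    residual = np.array([0.5, -1.2, 0.1, 2.0])  # predictions minus targets
    print(lp_cost(residual, 1), lp_cost(residual, 2), mixed_norm_cost(residual))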

    Doctor of Philosophy

    Diffusion magnetic resonance imaging (dMRI) has become a popular technique to detect brain white matter structure. However, imaging noise, imaging artifacts, and modeling techniques create many uncertainties, which may generate misleading information for further analysis or applications such as surgical planning. Therefore, how to analyze, effectively visualize, and reduce these uncertainties are very important research questions. In this dissertation, we present both rank-k decomposition and direct decomposition approaches based on spherical deconvolution to decompose the fiber directions more accurately for high angular resolution diffusion imaging (HARDI) data, which reduces the uncertainty of the fiber directions. By applying volume rendering techniques to an ensemble of 3D orientation distribution function (ODF) glyphs, which we call shape inclusion probability (SIP) functions of diffusion shapes, one can elucidate the complex heteroscedastic structural variation in these local diffusion shapes. Furthermore, we quantify the extent of this variation by measuring the fraction of the volume of these shapes that is consistent across all noise levels, the certain volume ratio. To better understand the uncertainties in white matter fiber tracks, we propose three metrics to quantify the differences between the results of diffusion tensor magnetic resonance imaging (DT-MRI) fiber tracking algorithms: the area between corresponding fibers of each bundle, the Earth Mover's Distance (EMD) between two fiber bundle volumes, and the current distance between two fiber bundle volumes. Based on these metrics, we discuss an interactive fiber track comparison visualization toolkit we have developed to visualize these uncertainties more efficiently. Physical phantoms, with high repeatability and reproducibility, are also designed with the hope of validating the dMRI techniques. In summary, this dissertation provides a better understanding of the uncertainties in diffusion magnetic resonance imaging: where are the uncertainties, and how large are they? How do we reduce them? How can we validate our algorithms?
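
    The first of the three metrics, the area between corresponding fibers, can be approximated by triangulating the strip between two polyline fibers; the sketch below is an illustrative discretization of that idea, not the authors' exact definition.

    import numpy as np

    def tri_area(a, b, c):
        # Area of the 3D triangle (a, b, c) via the cross product.
        return 0.5 * np.linalg.norm(np.cross(b - a, c - a))

    def area_between_fibers(f1, f2):
        # Approximate area of the strip between two fibers given as (n, 3)
        # polylines with corresponding points (illustrative sketch).
        total = 0.0
        for i in range(len(f1) - 1):
            total += tri_area(f1[i], f2[i], f1[i + 1])
            total += tri_area(f2[i], f1[i + 1], f2[i + 1])
        return total

    t = np.linspace(0.0, 1.0, 50)[:, None]
    f1 = np.hstack([t, np.sin(3.0 * t), np.zeros_like(t)])  # toy fiber
    f2 = f1 + np.array([0.0, 0.1, 0.0])                     # shifted copy
    print(area_between_fibers(f1, f2))  # roughly 0.1 x the fiber's length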

    Algebraic Multigrid for Markov Chains and Tensor Decomposition

    The majority of this thesis is concerned with the development of efficient and robust numerical methods based on adaptive algebraic multigrid to compute the stationary distribution of Markov chains. It is shown that classical algebraic multigrid techniques can be applied in an exact interpolation scheme framework to compute the stationary distribution of irreducible, homogeneous Markov chains. A quantitative analysis shows that algebraically smooth multiplicative error is locally constant along strong connections in a scaled system operator, which suggests that classical algebraic multigrid coarsening and interpolation can be applied to the class of nonsymmetric irreducible singular M-matrices with zero column sums. Acceleration schemes based on fine-level iterant recombination and over-correction of the coarse-grid correction are developed to improve the rate of convergence and scalability of simple adaptive aggregation multigrid methods for Markov chains. Numerical tests over a wide range of challenging nonsymmetric test problems demonstrate the effectiveness of the proposed multilevel method and the acceleration schemes. This thesis also investigates the application of adaptive algebraic multigrid techniques for computing the canonical decomposition of higher-order tensors. The canonical decomposition is formulated as a least squares optimization problem, for which local minimizers are computed by solving the first-order optimality equations. The proposed multilevel method consists of two phases: an adaptive setup phase that uses a multiplicative correction scheme in conjunction with bootstrap algebraic multigrid interpolation to build the necessary operators on each level, and a solve phase that uses additive correction cycles based on the full approximation scheme to efficiently obtain an accurate solution. The alternating least squares method, which is a standard one-level iterative method for computing the canonical decomposition, is used as the relaxation scheme. Numerical tests show that, for certain test problems arising from the discretization of high-dimensional partial differential equations on regular lattices, the proposed multilevel method significantly outperforms the standard alternating least squares method when a high level of accuracy is required.
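
    For context, the one-level baseline that such multigrid methods accelerate is plain power iteration on the column-stochastic matrix B, seeking the stationary vector with B pi = pi; equivalently A pi = 0 for the singular M-matrix A = I - B with zero column sums mentioned above. A minimal sketch on an illustrative 3-state chain:

    import numpy as np

    # Column-stochastic transition matrix of an irreducible 3-state chain
    # (illustrative); each column sums to 1, so A = I - B has zero column sums.
    B = np.array([[0.5, 0.2, 0.3],
                  [0.3, 0.5, 0.3],
                  [0.2, 0.3, 0.4]])

    pi = np.full(3, 1.0 / 3.0)      # initial guess
    for _ in range(500):            # one-level relaxation: slow convergence
        pi = B @ pi
        pi /= pi.sum()              # keep it a probability vector

    print(pi, np.linalg.norm(B @ pi - pi))  # stationary distribution, residual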

    Scaling Multidimensional Inference for Big Structured Data

    In information technology, big data is a collection of data sets so large and complex that it becomes difficult to process using traditional data processing applications [151]. In a world of increasing sensor modalities, cheaper storage, and more data-oriented questions, we are quickly passing the limits of tractable computation using traditional statistical analysis methods. Methods which often show great results on simple data have difficulty processing complicated multidimensional data. Accuracy alone can no longer justify unwarranted memory use and computational complexity. Improving the scaling properties of these methods for multidimensional data is the only way to keep them relevant. In this work we explore methods for improving the scaling properties of parametric and nonparametric models. Namely, we focus on the structure of the data to lower the complexity of a specific family of problems. The two types of structure considered in this work are distributed optimization with separable constraints (Chapters 2-3) and scaling Gaussian processes for multidimensional lattice input (Chapters 4-5). By improving the scaling of these methods, we can expand their use to a wide range of applications which were previously intractable and open the door to new research questions.
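
    A standard ingredient in scaling Gaussian processes over multidimensional lattice input is that a product kernel on an m x n grid factors as K = K1 ⊗ K2, so matrix-vector products never require the full kernel matrix. The sketch below shows the underlying Kronecker identity; it is a general illustration of the structure being exploited, not this thesis's specific algorithm.

    import numpy as np

    # On an m x n lattice with a product kernel, K = kron(K1, K2), and
    # K @ v equals (K1 @ V @ K2.T) on the reshaped vector: O(mn(m+n)) work
    # instead of materializing the O(m^2 n^2) matrix.
    rng = np.random.default_rng(0)
    m, n = 4, 5
    K1 = rng.standard_normal((m, m))
    K2 = rng.standard_normal((n, n))
    v = rng.standard_normal(m * n)

    full = np.kron(K1, K2) @ v                        # explicit, large
    fast = (K1 @ v.reshape(m, n) @ K2.T).reshape(-1)  # structured, small
    assert np.allclose(full, fast)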

    Probabilistic graphical models for mobile pedestrian localization in 3D environments

    This PhD thesis considers the problem of locating wireless nodes in indoor GPS-denied environments using probabilistic graphical models. Time-of-arrival (ToA) distance observations are assumed, with Non-Line-of-Sight (NLoS) communications and a lack of adequate anchors. As a solution, cooperative localization is developed using Probabilistic Graphical Models (PGMs). The nodes infer their positions in an iterative message-passing algorithm, in a distributed manner, given a set of noisy distance observations and a few anchors. The focus of this thesis is to develop algorithms that decrease computational complexity while maintaining or improving accuracy. Firstly, we develop the Hybrid Ellipsoid Variational Algorithm (HEVA), which extends probabilistic inference to 3D localization, combining NLoS mitigation for ToA. Simulation results illustrate that HEVA significantly outperforms traditional Non-parametric Belief Propagation (NBP) methods in localization while requiring only 50% of their complexity. In addition, we present a novel parametric Belief Propagation (BP) algorithm. The proposed Grid Belief Propagation (Grid-BP) approach allows extremely fast calculations and works nicely with existing grid-based coordinate systems, e.g. the NATO Military Grid Reference System (MGRS). This allows localization using a Global Coordinate System (GCS). Simulation results demonstrate that Grid-BP achieves similar accuracy at much reduced complexity when compared to common techniques. We also present an algorithm that combines an Inertial Navigation System (INS) and Pedestrian Dead Reckoning (PDR), namely the Probabilistic Hybrid INS/PDR Mobility Tracking Algorithm (PHIMTA), which provides high-accuracy tracking for mobile nodes. We combine it with Grid-BP and stop-and-go (SnG) algorithms, showcasing improved accuracy at very low computational cost. Finally, we present Stochastic Residual Belief Propagation (SR-BP), which extends the use of Residual Belief Propagation (R-BP) to distributed networks, improving the accuracy, convergence rate, and communication cost. We prove that SR-BP converges to a unique fixed point under conditions similar to those ensuring convergence of asynchronous BP. Numerical results showcase the improvements in convergence speed, message overhead, and detection accuracy of SR-BP.
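
    To illustrate the grid-based belief representation that Grid-BP passes between nodes, the sketch below performs a single measurement update for one node: a belief over grid cells multiplied by the likelihood of a ToA range observation from an anchor. It is a toy single-node step under assumed Gaussian range noise, not the Grid-BP algorithm itself.

    import numpy as np

    # Discretize candidate positions on a 2D grid (cells of a coordinate grid).
    xs = np.linspace(0.0, 20.0, 81)
    grid = np.stack(np.meshgrid(xs, xs), axis=-1)    # (81, 81, 2) positions
    anchor = np.array([5.0, 12.0])
    measured_range, sigma = 7.5, 0.8                 # ToA distance, noise std

    prior = np.ones(grid.shape[:2])                  # uninformative belief
    dist = np.linalg.norm(grid - anchor, axis=-1)    # range to each cell
    likelihood = np.exp(-0.5 * ((dist - measured_range) / sigma) ** 2)
    belief = prior * likelihood
    belief /= belief.sum()                           # posterior over cells
    estimate = grid[np.unravel_index(belief.argmax(), belief.shape)]
    print(estimate)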

    Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2 Applications and Future Perspectives

    Part 2 of this monograph builds on the introduction to tensor networks and their operations presented in Part 1. It focuses on tensor network models for super-compressed higher-order representation of data/parameters and related cost functions, while providing an outline of their applications in machine learning and data analytics. A particular emphasis is on the tensor train (TT) and Hierarchical Tucker (HT) decompositions, and their physically meaningful interpretations which reflect the scalability of the tensor network approach. Through a graphical approach, we also elucidate how, by virtue of the underlying low-rank tensor approximations and sophisticated contractions of core tensors, tensor networks have the ability to perform distributed computations on otherwise prohibitively large volumes of data/parameters, thereby alleviating or even eliminating the curse of dimensionality. The usefulness of this concept is illustrated over a number of applied areas, including generalized regression and classification (support tensor machines, canonical correlation analysis, higher-order partial least squares), generalized eigenvalue decomposition, Riemannian optimization, and in the optimization of deep neural networks. Part 1 and Part 2 of this work can be used either as stand-alone separate texts, or indeed as a conjoint comprehensive review of the exciting field of low-rank tensor networks and tensor decompositions. Comment: 232 pages.
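
    A minimal sketch of the tensor train idea follows: the classic TT-SVD construction splits a dense tensor into a chain of third-order cores by sequential truncated SVDs. The rank cap and tolerance are illustrative assumptions.

    import numpy as np

    def tt_svd(T, rmax=16, tol=1e-12):
        # TT-SVD sketch: peel off one TT core per mode via truncated SVD.
        dims, cores, r = T.shape, [], 1
        M = np.asarray(T)
        for n in dims[:-1]:
            M = M.reshape(r * n, -1)
            U, s, Vt = np.linalg.svd(M, full_matrices=False)
            rk = min(rmax, int(np.sum(s > tol * s[0])))  # truncated TT rank
            cores.append(U[:, :rk].reshape(r, n, rk))
            M, r = s[:rk, None] * Vt[:rk], rk
        cores.append(M.reshape(r, dims[-1], 1))
        return cores

    T = np.random.default_rng(1).standard_normal((4, 5, 6))
    cores = tt_svd(T)
    full = cores[0]
    for G in cores[1:]:                    # contract the train back together
        full = np.tensordot(full, G, axes=1)
    assert np.allclose(full.reshape(T.shape), T)  # exact at full rank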

    Preserving Measured Structure During Generation and Reduction of Multivariate Point Configurations

    Inherent in any multivariate data is structure, which describes the general shape and distribution of the underlying point configuration. While there are potentially many types of structure that could be of interest, we restrict interest to two general types: geometric structure, the general shape of a point configuration, and probabilistic structure, the general distribution of points within the configuration. The ability to quantify geometric structure is an important step in many common statistical analyses. For instance, general neighbourhood structure is captured using a k-nearest-neighbour graph in dimension reduction techniques such as isomap and locally linear embedding. Neighbourhood graphs are also used in sensor network localization, which has applications in fields such as environmental habitat monitoring and wildlife monitoring. Another geometric graph, the convex hull, is also used in wildlife monitoring as a rough estimate of an animal's home range. The identification of areas of high and low density is one example of measuring the probabilistic structure of a configuration, which can be done using a wide variety of methods. One such method is kernel density estimation, which can be viewed as a weighted sum of nearby points. Kernel density estimation has widely varying applications, including in regression analysis, and is used in general to assess certain features of the data (modality, skewness, etc.). Related to the idea of measuring structure is the concept of "Cognostics", which has been formalized as scatterplot diagnostics (or scagnostics). Scagnostics provides a framework through which interesting structure can be measured in a configuration. The central idea is to numerically summarize the structure of a large number of two-dimensional point configurations via measures calculated on geometric graphs. This allows the interesting views to be quickly identified, and ultimately examined visually, while the views deemed uninteresting are simply discarded. While a good starting point, several issues in the current framework need to be addressed. For instance, while each measure is designed to lie in [0,1], some, when measured over tens of thousands of configurations, fail to achieve this range. In addition, much structure that could be considered interesting is not captured by the current framework. These issues, among others, will be addressed and rectified so that the current scagnostic framework can continue to be built upon. With tools to measure structure, attention turns to making use of the structural information contained in the configuration. Consider the problem of preserving measured structure during data aggregation, more commonly known as binning. Existing methods of data aggregation tend to sit at two ends of the structure-retention spectrum. Through experimentation, methods such as equal-width and hexagonal binning will be shown to retain the shape of the configuration at the expense of the density, while methods such as equal-frequency binning and random sampling tend to retain relative density at the expense of overall shape. Tree-based binning, a general binning framework inspired by classification and regression trees, is proposed to bridge the gap between these sets of specialist algorithms.
    GapBin, a specially designed tree-based binning algorithm, will be shown through experimentation to provide a trade-off in low-dimensional space between geometric structure retention and probabilistic structure retention. In higher dimensions, it will be shown to be the superior algorithm in terms of structure retention among those considered. Next, the general problem of constructing a configuration with a given underlying structure is considered. For example, the minimal spanning tree is known to carry important clustering information. Of interest, then, is the generation of configurations with a given minimal spanning tree structure. The problem of generating a configuration with a known minimal spanning tree is equivalent to completing a Euclidean distance matrix where the only known entries are those in the minimal spanning tree. For this problem, there are several solutions, including those of Alfakih et al., Fang & O'Leary, and Trosset. None of these algorithms, however, is designed to retain the structure of the minimal spanning tree. In addition, the sparsity of a Euclidean distance matrix containing only the minimal spanning tree results in completions that are not accurate compared to the known completion, which leads to issues in the point configurations of the resulting completions. To resolve these issues, two new algorithms are proposed which are designed to retain the structure of the minimal spanning tree, leading to more accurate completions of these sparse matrices. To complement the algorithms presented, implementations of these algorithms in the statistical programming language R will also be discussed; in particular, the R package treebinr, for tree-based binning, and edmcr, for Euclidean distance matrix completion, will be presented.
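
    To make the completion setup concrete, the sketch below builds the partial Euclidean distance matrix whose only known entries are the minimal spanning tree edges, i.e. the input such completion algorithms receive; the SciPy-based construction is illustrative.

    import numpy as np
    from scipy.sparse.csgraph import minimum_spanning_tree
    from scipy.spatial.distance import pdist, squareform

    pts = np.random.default_rng(2).standard_normal((30, 3))  # toy configuration
    D = squareform(pdist(pts))                # full Euclidean distance matrix
    mst = minimum_spanning_tree(D).toarray()  # n - 1 tree edges, rest zero
    known = (mst > 0) | (mst.T > 0)           # symmetric mask of known entries
    partial = np.where(known, D, np.nan)      # NaN marks entries to complete
    print(int(known.sum() // 2), "known distances out of", 30 * 29 // 2)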

    Integrality and cutting planes in semidefinite programming approaches for combinatorial optimization

    Many real-life decision problems are discrete in nature. To solve such problems as mathematical optimization problems, integrality constraints are commonly incorporated in the model to reflect the choice of finitely many alternatives. At the same time, it is known that semidefinite programming is very suitable for obtaining strong relaxations of combinatorial optimization problems. In this dissertation, we study the interplay between semidefinite programming and integrality, with a special focus on the use of cutting-plane methods. Although the notions of integrality and cutting planes are well studied in linear programming, integer semidefinite programs (ISDPs) have been considered only recently. We show that many combinatorial optimization problems can be modeled as ISDPs. Several theoretical concepts, such as the Chvátal-Gomory closure, total dual integrality, and integer Lagrangian duality, are studied for the case of integer semidefinite programming. On the practical side, we introduce an improved branch-and-cut approach for ISDPs and a cutting-plane augmented Lagrangian method for solving semidefinite programs with a large number of cutting planes. Throughout the thesis, we apply our results to a wide range of combinatorial optimization problems, among which the quadratic cycle cover problem, the quadratic traveling salesman problem, and the graph partition problem. Our approaches lead to novel, strong, and efficient solution strategies for these problems, with the potential to be extended to other problem classes.
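
    As a small illustration of an SDP relaxation of a discrete problem (not one of the thesis's specific models), the sketch below writes the classic max-cut relaxation: the integer constraint x_i in {-1, +1} on X = xx^T is relaxed to X being positive semidefinite with diag(X) = 1; integer SDPs keep integrality on top of such a formulation. It assumes the cvxpy modeling package.

    import cvxpy as cp
    import numpy as np

    W = np.array([[0, 1, 1, 0],
                  [1, 0, 1, 1],
                  [1, 1, 0, 1],
                  [0, 1, 1, 0]], dtype=float)  # toy graph edge weights
    L = np.diag(W.sum(axis=1)) - W             # graph Laplacian
    n = W.shape[0]

    X = cp.Variable((n, n), symmetric=True)    # relaxation of x x^T
    prob = cp.Problem(cp.Maximize(cp.trace(L @ X) / 4),
                      [X >> 0, cp.diag(X) == 1])
    prob.solve()                               # upper bound on the max cut
    print(prob.value)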