82 research outputs found

    Multimodal Three-Dimensional Scene Reconstruction: The Gaussian Fields Framework

    The focus of this research is on building 3D representations of real-world scenes and objects from different imaging sensors: primarily range acquisition devices (such as laser scanners and stereo systems) that recover 3D geometry, and multi-spectral image sequences, including visual and thermal-IR images, that provide additional scene characteristics. The crucial technical challenge we addressed is the automatic registration of point-sets. In this context, our main contribution is an optimization-based method built around a unified criterion that solves simultaneously for dense point correspondence and transformation recovery. The criterion has a straightforward expression in terms of the datasets and the alignment parameters; it was used primarily for 3D rigid registration of point-sets, but it also proved useful for feature-based multimodal image alignment. We derived the method from simple Boolean matching principles by approximation and relaxation. One of its main advantages over the widely used class of Iterative Closest Point (ICP) algorithms is convexity in a neighborhood of the registration parameters together with continuous differentiability, which allows the use of standard gradient-based optimization techniques. Physically, the criterion is interpreted as a Gaussian force field exerted by one point-set on the other. This formulation proved useful for controlling and enlarging the region of convergence, and hence allows more autonomy in correspondence tasks. Furthermore, the criterion can be computed with linear complexity using recently developed Fast Gauss Transform numerical techniques. We also introduced a new local feature descriptor, derived from visual saliency principles, that significantly enhanced the performance of the registration algorithm.
The resulting technique was subjected to a thorough experimental analysis that highlighted its strengths and showed its limitations. Our current applications are in 3D modeling for inspection, surveillance, and biometrics. However, since this matching framework can be applied to any data that can be represented as N-dimensional point-sets, the method reaches many more pattern analysis applications.
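The criterion itself is not written out in the abstract; a minimal sketch of a Gaussian-fields energy for 2D rigid registration, maximised by gradient ascent, might look as follows. The function names, the 2D parameterisation and the finite-difference gradients are illustrative assumptions, not the thesis's implementation (which also evaluates the sums in linear time with the Fast Gauss Transform):

```python
import numpy as np

def gaussian_fields_energy(theta, model, scene, sigma=1.0):
    """Gaussian fields criterion for 2D rigid registration: the sum of
    Gaussian affinities between every transformed model point and every
    scene point. Unlike the ICP objective it is continuously
    differentiable in theta = (angle, tx, ty)."""
    angle, tx, ty = theta
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s], [s, c]])
    moved = model @ R.T + np.array([tx, ty])
    # Pairwise squared distances between transformed model and scene points.
    d2 = ((moved[:, None, :] - scene[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / sigma**2).sum()

def register(model, scene, sigma=1.0, lr=0.01, steps=400):
    """Maximise the criterion by normalised finite-difference gradient
    ascent (a naive stand-in for the standard gradient-based optimisers
    the text refers to)."""
    theta, eps = np.zeros(3), 1e-5
    for _ in range(steps):
        grad = np.zeros(3)
        for k in range(3):
            dp, dm = theta.copy(), theta.copy()
            dp[k] += eps
            dm[k] -= eps
            grad[k] = (gaussian_fields_energy(dp, model, scene, sigma)
                       - gaussian_fields_energy(dm, model, scene, sigma)) / (2 * eps)
        theta += lr * grad / (np.linalg.norm(grad) + 1e-12)
    return theta
```

Because the energy is smooth in the transformation parameters, any standard gradient-based optimiser can replace the naive ascent used here, and a larger sigma widens the basin of convergence at the cost of a flatter optimum.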

    Enhanced SPH modeling of free-surface flows with large deformations

    The subject of the present thesis is the development of a numerical solver to study the violent interaction of marine flows with rigid structures. Among the many numerical models available, Smoothed Particle Hydrodynamics (SPH) has been chosen because it has proved appropriate for violent free-surface flows. Thanks to its Lagrangian, meshless character, it naturally handles breaking waves and fragmentation, which are generally not easily treated by standard methods. On the other hand, some consolidated features of mesh-based methods, such as the solid boundary treatment, remain unsolved issues in the SPH context. In the present work, a large part of the research activity has been devoted to tackling some of these bottlenecks. Firstly, an enhanced SPH model, called delta-SPH, has been proposed. In this model, a suitable numerical diffusive term is added to the continuity equation in order to remove the spurious numerical noise in the pressure field that typically affects weakly-compressible SPH models. Particular attention has then been paid to techniques for enforcing boundary conditions. As for the free surface, a specific algorithm has been designed to detect free-surface particles and to define a related level-set function, with two main aims: to allow the imposition of particular conditions on the free surface, and to ease the analysis and visualization of the simulation outcome (especially in 3D cases). Concerning the solid boundary treatment, much effort has been spent devising new techniques for handling generic body geometries with adequate accuracy in both 2D and 3D problems. Two different techniques are described: in the first, the standard ghost-fluid method is extended to treat complex solid geometries. Both free-slip and no-slip boundary conditions have been implemented, the latter being a rather complex matter in the SPH context.
The proposed boundary treatment proved robust and accurate in evaluating local and global loads, though it is not easy to extend to generic 3D surfaces. For these cases a second technique has been adopted, originally developed in the context of Riemann-SPH methods and reformulated here within the standard SPH scheme. It proved robust in treating complex 3D solid surfaces, though less accurate than the former. Finally, an algorithm to correctly initialize the SPH simulation for generic geometries has been described. It forces a resettlement of the fluid particles to achieve a regular, uniform spacing even in complex configurations. This pre-processing avoids the generation of spurious currents caused by local defects in the particle distribution at the start of the simulation. The delta-SPH model has been validated against several fluid-structure interaction problems. Firstly, the capability of the solver to deal with water impacts has been tested by simulating a jet impinging on a flat plate and a dam-break flow against a vertical wall. In these cases, the main focus has been the accuracy of the predicted local loads and pressure field. Then, the viscous flow around a cylinder, in both steady and unsteady conditions, has been simulated and compared with reference solutions. Finally, the generation and propagation of 2D gravity waves has been simulated; several propagation regimes have been tested and the results compared against a potential-flow solver. The developed solver has been applied to several cases of free-surface flows striking rigid structures and to the generation and evolution of ship-generated waves. In the former case, the robustness of the solver has been challenged by simulating 2D and 3D water impacts against complex solid surfaces.
The numerical outcomes have been compared with analytical solutions, experimental data and other numerical results, and the limits of the model have been discussed. As for the ship-generated waves, the problem has first been studied within the 2D+t approximation, focusing on the occurrence and features of breaking bow waves. Then, a dedicated 3D parallel SPH solver has been developed to tackle the simulation of an entire ship in constant forward motion, a simulation that is quite demanding in terms of the complexity of the boundary geometry and the computational resources required. The wave pattern obtained has been compared against experimental data and results from other numerical methods, showing in both cases a fair and promising agreement.
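As an illustration of the delta-SPH idea, the sketch below adds the commonly published diffusive term to a 1D continuity equation with a Gaussian kernel. The simplified psi_ij (omitting the renormalised density-gradient correction of the full scheme), the kernel choice and all parameter values are assumptions, not the thesis's formulation:

```python
import numpy as np

def delta_diffusive_rate(x, rho, h=0.1, delta=0.1, c0=10.0, vol=0.05):
    """Diffusive term of the delta-SPH continuity equation in 1D, with a
    Gaussian kernel and the simplified psi_ij = 2 (rho_j - rho_i)
    (x_j - x_i) / |x_ij|^2. Velocities are taken as zero here, so only
    this term contributes to drho/dt."""
    n = len(x)
    drho = np.zeros(n)
    for i in range(n):
        for j in range(n):
            r = x[i] - x[j]
            if i == j or abs(r) > 3 * h:
                continue
            # d/dx_i of the Gaussian kernel W(x_i - x_j)
            w_prime = -2 * r / h**2 * np.exp(-(r / h) ** 2) / (h * np.sqrt(np.pi))
            psi = 2 * (rho[j] - rho[i]) * (x[j] - x[i]) / r**2
            drho[i] += delta * h * c0 * psi * w_prime * vol
    return drho
```

Applied to a noisy density field, the term behaves like a conservative Laplacian smoother: the mean density is preserved while spurious oscillations, and hence the associated pressure noise, decay.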

    Extraction of buildings from high-resolution satellite data and airborne LIDAR

    Automatic building extraction is a difficult object recognition problem, owing to the high complexity of the scene content and of the object representation. There is a dilemma in selecting appropriate building models for reconstruction: the models have to be generic in order to represent a variety of building shapes, yet specific enough to differentiate buildings from other objects in the scene. A scientific challenge of building extraction therefore lies in constructing a framework for modelling building objects with an appropriate balance between generic and specific models. This thesis investigates a synergy of IKONOS satellite imagery and airborne LIDAR data, which have recently emerged as powerful remote sensing tools, and aims to develop an automatic system that delineates building outlines of more complex shape while using fewer geometric constraints. The method described in this thesis is a two-step procedure: building detection and building description. A method of automatic building detection that can separate individual buildings from surrounding features is presented. The process is realized in a hierarchical strategy, in which terrain, trees, and building objects are sequentially detected. Major research effort is devoted to the development of a LIDAR filtering technique that automatically detects terrain surfaces from a cloud of 3D laser points. The thesis also proposes a method of building description to automatically reconstruct building boundaries. A building object is represented as a mosaic of convex polygons. The first stage generates polygonal cues by a recursive intersection of both data-driven and model-driven linear features extracted from the IKONOS imagery and LIDAR data. The second stage collects the relevant polygons comprising the building object and merges them to reconstruct the building outlines.
The developed LIDAR filter was tested on a range of different landforms and showed good results, meeting most of the requirements of DTM generation and building detection. The implemented building extraction system was also able to successfully reconstruct the building outlines, and the accuracy of the extraction is good enough for mapping purposes.
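The abstract does not detail the filtering technique itself; as a toy stand-in for the idea of detecting terrain from a cloud of laser points, the sketch below labels a point as ground when it rises no more than a tolerance above the lowest return in its 3x3-cell neighbourhood. All names, cell sizes and thresholds are hypothetical; practical filters (e.g. progressive morphological ones) grow the search window to cope with wide roofs:

```python
import numpy as np

def classify_ground(points, cell=1.0, height_tol=0.5):
    """Toy LIDAR ground filter: rasterize the points into a grid, keep
    the lowest return per cell as a local terrain estimate, and label a
    point as non-ground (building/vegetation) when it rises more than
    height_tol above the minimum of its 3x3 cell neighbourhood."""
    xy = np.floor(points[:, :2] / cell).astype(int)
    xy -= xy.min(axis=0)
    nx, ny = xy.max(axis=0) + 1
    zmin = np.full((nx, ny), np.inf)
    for (ix, iy), z in zip(xy, points[:, 2]):
        zmin[ix, iy] = min(zmin[ix, iy], z)
    ground = np.zeros(len(points), dtype=bool)
    for k, ((ix, iy), z) in enumerate(zip(xy, points[:, 2])):
        lo = zmin[max(ix - 1, 0):ix + 2, max(iy - 1, 0):iy + 2].min()
        ground[k] = z - lo <= height_tol
    return ground
```

A filter of this kind misses the centre of roofs wider than its window, which is exactly why hierarchical or progressive-window strategies, like the one the thesis develops, are needed in practice.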

    Frameworks to Investigate Robustness and Disease Characterization/Prediction Utility of Time-Varying Functional Connectivity State Profiles of the Human Brain at Rest

    Neuroimaging technologies aim at delineating the highly complex structural and functional organization of the human brain. In recent years, several unimodal and multimodal analyses of structural MRI (sMRI) and functional MRI (fMRI), leveraging advanced signal processing and machine-learning-based feature extraction algorithms, have opened new avenues in the diagnosis of complex brain syndromes and neurocognitive disorders. Regarding these neuroimaging modalities as filtered, complementary views of the brain's anatomical and functional organization, multimodal data fusion could enable a more comprehensive mapping of brain structure and function. The large-scale functional organization of the brain is often studied by viewing the brain as a complex, integrative network composed of spatially distributed but functionally interacting sub-networks that continually share and process information. Such whole-brain functional interactions, referred to as patterns of functional connectivity (FC), are typically examined as levels of synchronous co-activation in the different functional networks of the brain. More recently, there has been a major paradigm shift from measuring whole-brain FC in an oversimplified, time-averaged manner to additionally exploring time-varying mechanisms that identify recurring, transient brain configurations or brain states, referred to as time-varying FC state profiles in this dissertation. Notably, prior studies based on time-varying FC approaches have used these relatively low-dimensional fMRI features to characterize pathophysiology, and these features have also been reported to relate to demographic characteristics, consciousness levels and cognition.
In this dissertation, we corroborate the efficacy of time-varying FC state profiles of the human brain at rest by implementing statistical frameworks that evaluate their robustness and statistical significance through an in-depth evaluation on multiple independent partitions of a very large rest-fMRI dataset, as well as extensive validation testing on surrogate rest-fMRI datasets. We then present a novel data-driven, blind-source-separation-based multimodal (sMRI-fMRI) data fusion framework that uses the time-varying FC state profiles as fMRI features to characterize diseased brain conditions and substantiate brain structure-function relationships. Finally, we present a novel data-driven, deep-learning-based multimodal (sMRI-fMRI) data fusion framework that examines the degree of diagnostic and prognostic performance improvement afforded by time-varying FC state profiles as fMRI features. The approaches developed and tested in this dissertation evince high levels of robustness and highlight the utility of time-varying FC state profiles as potential biomarkers to characterize, diagnose and predict diseased brain conditions. As such, the findings argue in favor of FC investigations of the brain centered on time-varying approaches, and also highlight the benefits of combining multiple neuroimaging modalities via data fusion.
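The dissertation's exact pipeline is not given in this abstract, but time-varying FC state profiles are commonly estimated with sliding-window correlations followed by k-means clustering. The sketch below follows that standard recipe; every function name and parameter in it is an assumption:

```python
import numpy as np

def fc_states(ts, win=30, step=5, k=2, iters=50):
    """Sliding-window functional connectivity followed by k-means:
    every window's correlation matrix (vectorised upper triangle) is
    one sample, and the cluster centroids are the recurring, transient
    'time-varying FC state profiles'. Centroids are seeded with evenly
    spaced windows to keep the sketch deterministic."""
    t, n = ts.shape
    iu = np.triu_indices(n, 1)
    X = np.array([np.corrcoef(ts[s:s + win].T)[iu]
                  for s in range(0, t - win + 1, step)])
    cent = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(iters):
        # Assign each windowed FC vector to its nearest centroid.
        lab = np.argmin(((X[:, None] - cent[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(lab == j):
                cent[j] = X[lab == j].mean(axis=0)
    return cent, lab
```

The state labels over time then support the usual follow-up statistics (dwell times, transition counts), which is where the robustness and group-difference analyses described above would plug in.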

    Advanced machine learning methods for oncological image analysis

    Cancer is a major public health problem, accounting for an estimated 10 million deaths worldwide in 2020 alone. Rapid advances in image acquisition and hardware development over the past three decades have produced modern medical imaging modalities that can capture high-resolution anatomical, physiological, functional, and metabolic quantitative information from cancerous organs. The applications of medical imaging have therefore become increasingly crucial in clinical oncology routines, providing screening, diagnosis, treatment monitoring, and non- or minimally-invasive evaluation of disease prognosis. The essential need for medical images, however, has resulted in the acquisition of a tremendous number of imaging scans. Considering the growing role of medical imaging data on one side, and the challenge of manually examining such an abundance of data on the other, the development of computerized tools to automatically or semi-automatically examine image data has attracted considerable interest. Hence, a variety of machine learning tools have been developed for oncological image analysis, aiming to assist clinicians with repetitive tasks in their workflow. This thesis contributes to the field by proposing new ways of quantifying tumor characteristics from medical image data. Specifically, it comprises six studies: the first two introduce novel methods for tumor segmentation, while the last four develop quantitative imaging biomarkers for cancer diagnosis and prognosis. The main objective of Study I is to develop a deep learning pipeline capable of capturing the appearance of lung pathologies, including lung tumors, and to integrate this pipeline into segmentation networks to improve segmentation accuracy.
The proposed pipeline was tested on several comprehensive datasets, and the numerical results show the superiority of the proposed prior-aware deep learning framework over the state of the art. Study II addresses a crucial challenge faced by supervised segmentation models: dependency on large-scale labeled datasets. In this study, an unsupervised segmentation approach based on image inpainting is proposed to segment lung and head-neck tumors in images from single and multiple modalities. The proposed auto-inpainting pipeline shows great potential in synthesizing high-quality tumor-free images and outperforms a family of well-established unsupervised models in terms of segmentation accuracy. Studies III and IV aim to automatically discriminate benign from malignant pulmonary nodules by analyzing low-dose computed tomography (LDCT) scans. In Study III, a dual-pathway deep classification framework is proposed that simultaneously takes into account local intra-nodule heterogeneities and global contextual information. Study IV compares the discriminative power of a series of carefully selected conventional radiomics methods, end-to-end deep learning (DL) models, and deep-feature-based radiomics analysis on the same dataset. The numerical analyses show the potential of fusing learned deep features into radiomic features for boosting classification power. Study V focuses on the early assessment of lung tumor response to treatment by proposing a novel, physiologically interpretable feature set. This feature set is employed to quantify changes in tumor characteristics from longitudinal PET-CT scans in order to predict the patients' overall survival status two years after the last treatment session.
The discriminative power of the introduced imaging biomarkers was compared against conventional radiomics, and the quantitative evaluations verified the superiority of the proposed feature set. Whereas Study V addresses a binary survival prediction task, Study VI addresses the prediction of survival rate in patients diagnosed with lung and head-neck cancer by investigating the potential of spherical convolutional neural networks and comparing their performance against other types of features, including radiomics. While comparable results were achieved in intra-dataset analyses, the proposed spherical features show more predictive power in inter-dataset analyses. In summary, the six studies incorporate different imaging modalities and a wide range of image processing and machine learning techniques in methods developed for the quantitative assessment of tumor characteristics, contributing to the essential procedures of cancer diagnosis and prognosis.
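The abstract speaks of "segmentation accuracy" without naming a metric; the Dice similarity coefficient is the usual overlap score in this setting and can serve as a reference point (an illustration of the standard metric, not necessarily the one used in the thesis):

```python
import numpy as np

def dice(pred, truth, eps=1e-8):
    """Dice similarity coefficient, 2|A∩B|/(|A|+|B|): the common overlap
    score for comparing a predicted tumour mask against a reference
    delineation (1.0 = perfect overlap, 0.0 = disjoint)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)
```

The small eps guards against the degenerate case where both masks are empty.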

    Discriminant feature pursuit: from statistical learning to informative learning.

    Lin Dahua. Thesis (M.Phil.), Chinese University of Hong Kong, 2006. Includes bibliographical references (leaves 233-250). Abstracts in English and Chinese.

    Table of contents:

    Chapter 1. Introduction: the problem we are facing; generative vs. discriminative models; statistical feature extraction: success and challenge; overview of our works (new linear discriminant methods: generalized LDA formulation and performance-driven subspace learning; coupled learning models: coupled space learning and inter-modality recognition; informative learning approaches: conditional infomax learning and information channel model); organization of the thesis.

    Part I. History and Background
    Chapter 2. Statistical Pattern Recognition: patterns and classifiers; Bayes theory; statistical modeling (maximum likelihood estimation, Gaussian model, expectation-maximization, finite mixture model, a nonparametric technique: Parzen windows).
    Chapter 3. Statistical Learning Theory: formulation of the learning model (functional estimation model, representative learning problems, empirical risk minimization); consistency and convergence of learning (the key theorem of learning theory, VC entropy, bounds on convergence, VC dimension).
    Chapter 4. History of Statistical Feature Extraction: linear feature extraction (principal component analysis (PCA), linear discriminant analysis (LDA), other linear methods, comparison of methods); enhanced models (stochastic discrimination and random subspace, hierarchical feature extraction, multilinear analysis and tensor-based representation); nonlinear feature extraction (kernelization, dimension reduction by manifold embedding).
    Chapter 5. Related Works in Feature Extraction: dimension reduction (feature selection, feature extraction); kernel learning (basic concepts, the reproducing kernel map, the Mercer kernel map, the empirical kernel map, the kernel trick and kernelized feature extraction); subspace analysis (basis and subspace, orthogonal projection, orthonormal basis, subspace decomposition); principal component analysis (formulation, solution, energy structure, probabilistic PCA, kernel PCA); independent component analysis (formulation, measurement of statistical independence); linear discriminant analysis (Fisher's LDA, improved algorithms for the small sample size problem, kernel discriminant analysis).

    Part II. Improvement in Linear Discriminant Analysis
    Chapter 6. Generalized LDA: regularized LDA (implementation procedure, optimal nonsingular approximation, regularized LDA algorithm); a statistical view: when is LDA optimal? (two-class Gaussian case, multi-class cases); generalized LDA formulation.
    Chapter 7. Dynamic Feedback Generalized LDA: basic principle; dynamic feedback framework (initialization by K-nearest construction, dynamic procedure); experiments on training and testing performance.
    Chapter 8. Performance-Driven Subspace Learning: motivation and principle; performance-based criteria (the verification problem and generalized average margin, performance-driven criteria based on the generalized average margin); optimal subspace pursuit (optimal threshold, optimal projection matrix, overall procedure, discussion); optimal classifier fusion; experiments (performance measurement, settings, results, discussion).

    Part III. Coupled Learning of Feature Transforms
    Chapter 9. Coupled Space Learning: introduction (what is image style transform, overview of the framework); coupled space learning (framework of coupled modelling, correlative component analysis, coupled bidirectional transform, procedure); generalization to a mixture model (coupled Gaussian mixture model, optimization by the EM algorithm); integrated framework for image style transform; experiments (face super-resolution, portrait style transforms).
    Chapter 10. Inter-Modality Recognition: introduction (what is inter-modality recognition, overview of the feature extraction framework); common discriminant feature extraction (formulation of the learning problem, matrix form of the objective, solving the linear transforms); kernelized common discriminant feature extraction; multi-mode framework (formulation, optimization scheme); experiments (settings, results).

    Part IV. A New Perspective: Informative Learning
    Chapter 11. Toward Information Theory: entropy and relative entropy (Kullback-Leibler divergence); mutual information (definition, chain rules, information in data processing); differential entropy and mutual information of continuous random variables.
    Chapter 12. Conditional Infomax Learning: overview; conditional informative feature extraction (problem formulation and features, the information maximization principle, the information decomposition and the conditional objective); efficient optimization (discrete approximation based on the AEP, analysis of terms and their derivatives, local active region method); Bayesian feature fusion with a sparse prior; the integrated framework for feature learning; experiments (a toy problem, face recognition).
    Chapter 13. Channel-based Maximum Effective Information: motivation and overview; maximizing effective information (relation between mutual information and classification, linear projection and metric, channel model and effective information, Parzen window approximation); parameter optimization on the Grassmann manifold (the Grassmann manifold, conjugate gradient optimization, computation of the gradient); experiments (a toy problem, face recognition).
    Chapter 14. Conclusion.
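As a pointer to the material this thesis builds on, here is a minimal sketch of Fisher's two-class linear discriminant (cf. Section 5.6.1 of the outline), with the small ridge term that motivates the regularized LDA variants discussed for the small-sample-size problem; the function name and the regularisation constant are illustrative:

```python
import numpy as np

def fisher_lda(X0, X1, reg=1e-6):
    """Two-class Fisher linear discriminant: the direction maximising
    between-class over within-class scatter is w ∝ Sw^{-1} (m1 - m0).
    The ridge `reg` keeps Sw invertible when samples are scarce, the
    small-sample-size regime the thesis discusses."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # Within-class scatter = sum of per-class scatter matrices.
    Sw = (np.cov(X0.T, bias=True) * len(X0)
          + np.cov(X1.T, bias=True) * len(X1))
    w = np.linalg.solve(Sw + reg * np.eye(len(m0)), m1 - m0)
    return w / np.linalg.norm(w)
```

Projecting both classes onto w and thresholding at the midpoint of the projected means gives a simple linear classifier along the most discriminative direction.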

    Fusion of computed point clouds and integral-imaging concepts for full-parallax 3D display

    During the last century, various technologies for 3D image capture and visualization have come into the spotlight, owing both to their pioneering nature and to the aspiration to extend conventional 2D imaging to 3D scenes. Moreover, thanks to advances in opto-electronic imaging, the possibilities of capturing and transmitting 2D images in real time have progressed significantly, boosting the growth of 3D image capture, processing, transmission and display techniques. Among the latter, integral imaging is considered one of the most promising for reproducing real 3D scenes, through a multi-view visualization system that provides observers with a sense of immersive depth. Many research groups and companies have investigated this technique with different approaches and complementary aims. In this work we follow the same trend, but proceed through our own strategies and algorithms, so our approach is innovative compared with conventional proposals. The main objective of our research is to develop techniques that record and simulate natural scenes in 3D using several cameras of different types and characteristics. We then compose a dense 3D scene from the computed 3D data using various methods and techniques. Finally, we provide a volumetric scene, restored with great similarity to the original shape, through a comprehensive 3D monitor and/or display system. Our proposed integral-imaging monitor offers an immersive experience to multiple observers. In this thesis we address the challenges of integral-image production based on computed 3D information, focusing in particular on the implementation of a full-parallax 3D display system. We have also made progress in overcoming the limitations of the conventional integral-imaging technique.
In addition, we have developed different refinement methodologies and restoration strategies for the composed depth information. Finally, we have applied a solution that significantly reduces the computation time associated with the repetitive calculation phase in the generation of an integral image. All these results are presented through the corresponding images and the proposed display experiments.
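The thesis's generation algorithms are not specified in this abstract; the toy sketch below illustrates the underlying integral-imaging principle by projecting a computed point cloud through a pinhole array (a crude lenslet stand-in) onto a sensor plane, producing one elemental image per lens. The geometry, names and parameter values are all assumptions:

```python
import numpy as np

def elemental_images(points, colors, lenses=4, pitch=1.0, gap=2.0, res=16):
    """Toy computer-generated integral imaging: project a 3D point cloud
    through a lenses x lenses array of pinholes (spacing `pitch`, array
    plane z = 0) onto a sensor at distance `gap` behind the array. Each
    pinhole yields one res x res elemental image; together they encode
    full-parallax views of the scene."""
    out = np.zeros((lenses, lenses, res, res, 3))
    half = pitch * lenses / 2
    for li in range(lenses):
        for lj in range(lenses):
            # Pinhole centre in the array plane.
            cx = -half + (li + 0.5) * pitch
            cy = -half + (lj + 0.5) * pitch
            for p, col in zip(points, colors):
                x, y, z = p
                if z <= 0:
                    continue  # points must lie in front of the array
                # Central projection through the pinhole onto z = -gap.
                u = cx + (cx - x) * gap / z
                v = cy + (cy - y) * gap / z
                iu = int((u - (cx - pitch / 2)) / pitch * res)
                iv = int((v - (cy - pitch / 2)) / pitch * res)
                if 0 <= iu < res and 0 <= iv < res:
                    out[li, lj, iu, iv] = col
    return out
```

Each elemental image sees the scene from a slightly different position; it is this disparity between neighbouring elemental images that gives the reconstructed display its full parallax.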

    Cortical Surface Registration and Shape Analysis

    A population analysis of human cortical morphometry is critical for insights into brain development or degeneration. Such an analysis allows for investigating sulcal and gyral folding patterns. In general, it requires both a well-established cortical correspondence and a well-defined quantification of cortical morphometry. The highly folded and convoluted cortical structures render a reliable and consistent population analysis challenging. Three key challenges have been identified: 1) consistent sulcal landmark extraction from the cortical surface to guide better cortical correspondence, 2) correspondence establishment for a reliable and stable population analysis, and 3) quantification of cortical folding in a more reliable and biologically meaningful fashion. The main focus of this dissertation is to develop a fully automatic pipeline that supports a population analysis of local cortical folding changes. My proposed pipeline consists of three novel components developed to overcome these challenges: 1) automatic sulcal curve extraction for stable, reliable anatomical landmark selection, 2) group-wise registration for establishing cortical shape correspondence across a population with no template selection bias, and 3) quantification of local cortical folding using a novel cortical-shape-adaptive kernel. To evaluate these methodological contributions, I applied all of them in an application to early postnatal brain development, studying human cortical morphological development from the neonate stage to 1 and 2 years of age using the proposed quantification of local cortical folding, together with quantitative developmental assessments. This study revealed a novel pattern of associations between cortical gyrification and cognitive development.
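The cortical-shape-adaptive kernel itself is not described in this abstract; as a crude illustration of kernel-based local folding quantification, the sketch below averages absolute curvature over a k-hop mesh neighbourhood, a hop-count approximation standing in for a true geodesic, shape-adaptive kernel. All names and the hop radius are assumptions:

```python
import numpy as np
from collections import deque

def local_folding(curvature, edges, n_vertices, hops=2):
    """Illustrative local folding quantification: average the absolute
    curvature of each vertex over its k-hop mesh neighbourhood, found
    by breadth-first search over the surface mesh's edge graph."""
    adj = [[] for _ in range(n_vertices)]
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    out = np.zeros(n_vertices)
    for v in range(n_vertices):
        seen = {v}
        frontier = deque([(v, 0)])
        acc = []
        while frontier:
            u, d = frontier.popleft()
            acc.append(abs(curvature[u]))
            if d < hops:
                for w in adj[u]:
                    if w not in seen:
                        seen.add(w)
                        frontier.append((w, d + 1))
        out[v] = np.mean(acc)
    return out
```

A shape-adaptive kernel would additionally weight neighbours by geodesic distance and local geometry rather than treating all vertices within the hop radius equally.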