
    Sampling—50 Years After Shannon

    This paper presents an account of the current state of sampling, 50 years after Shannon's formulation of the sampling theorem. The emphasis is on regular sampling, where the grid is uniform. This topic has benefited from a strong research revival during the past few years, thanks in part to the mathematical connections that were made with wavelet theory. To introduce the reader to the modern, Hilbert-space formulation, we reinterpret Shannon's sampling procedure as an orthogonal projection onto the subspace of bandlimited functions. We then extend the standard sampling paradigm to the representation of functions in the more general class of "shift-invariant" function spaces, including splines and wavelets. Practically, this allows for simpler—and possibly more realistic—interpolation models, which can be used in conjunction with a much wider class of (anti-aliasing) prefilters that are not necessarily ideal lowpass. We summarize and discuss the results available for the determination of the approximation error and of the sampling rate when the input of the system is essentially arbitrary, e.g., non-bandlimited. We also review variations of sampling that can be understood from the same unifying perspective. These include wavelets, multiwavelets, Papoulis generalized sampling, finite elements, and frames. Irregular sampling and radial basis functions are briefly mentioned.
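    To make the classical result concrete alongside the Hilbert-space view described above, the following is a minimal numerical sketch of Shannon reconstruction (ideal sinc interpolation from uniform samples); the test signal, sampling rate and evaluation grid are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def sinc_reconstruct(samples, T, t):
    """Shannon reconstruction: x(t) = sum_n x[n] * sinc((t - n*T) / T).

    `samples` holds the uniform samples x(n*T); `t` are the points at which
    the bandlimited interpolant is evaluated (np.sinc is the normalized sinc).
    """
    n = np.arange(len(samples))
    # Matrix of shifted sinc basis functions, shape (len(t), len(samples)).
    basis = np.sinc((t[:, None] - n[None, :] * T) / T)
    return basis @ samples

# Illustrative test: a 5 Hz sine sampled at 50 Hz, well above the Nyquist rate.
T = 1.0 / 50.0
n = np.arange(50)
x_n = np.sin(2 * np.pi * 5 * n * T)
t = np.linspace(0.1, 0.9, 500)          # stay away from the edges of the finite record
x_hat = sinc_reconstruct(x_n, T, t)
print("max reconstruction error:", np.max(np.abs(x_hat - np.sin(2 * np.pi * 5 * t))))
```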

    Wavelet domain inversion and joint deconvolution/interpolation of geophysical data

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Earth, Atmospheric, and Planetary Sciences, 2003. Includes bibliographical references (leaves 168-174). This thesis presents two innovations to geophysical inversion. The first provides a framework and an algorithm for combining linear deconvolution methods with geostatistical interpolation techniques. This allows sparsely sampled data to aid in image deblurring problems or, conversely, noisy and blurred data to aid in sample interpolation. In order to overcome difficulties arising from high dimensionality, the solution must be derived in the correct framework and the structure of the problem must be exploited by an iterative solution algorithm. The effectiveness of the method is demonstrated first on a synthetic problem involving satellite remotely sensed data, and then on a real 3-D seismic data set combined with well logs. The second innovation addresses how to use wavelets in a linear geophysical inverse problem. Wavelets have led to great successes in image compression and denoising, so it is interesting to see what, if anything, they can do for a general linear inverse problem. It is shown that a simple nonlinear operation of weighting and thresholding wavelet coefficients can consistently outperform classical linear inverse methods in terms of mean-square error across a broad range of noise magnitudes in the data. Wavelets allow for an adaptively smoothed solution: smoothed more in uninteresting regions, less at geologically important transitions. A third issue is also addressed, somewhat separate from the first two: the correct manipulation of discrete geophysical data. The theory of fractional splines is introduced, which allows for optimal approximation of real signals on a digital computer. Using splines, it can be shown that a linear operation on the spline can be equivalently represented by a matrix operating on the coefficients of a certain spline basis function. The form of the matrix, however, depends completely on the spline basis, and incorrect discretization of the operator into a matrix can lead to large errors in the resulting matrix/vector product. By Jonathan A. Kane. Ph.D.
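    As a hedged illustration of the wavelet weighting/thresholding idea described above (not the author's algorithm), the sketch below soft-thresholds the detail coefficients of a single-level Haar transform of a noisy piecewise-constant signal; the signal, noise level and threshold are assumed for the example.

```python
import numpy as np

def haar_forward(x):
    """One level of the orthonormal Haar wavelet transform (len(x) must be even)."""
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def haar_inverse(approx, detail):
    """Inverse of haar_forward."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2)
    x[1::2] = (approx - detail) / np.sqrt(2)
    return x

def soft_threshold(c, t):
    """Shrink coefficients toward zero; small, noise-dominated ones vanish."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

# Piecewise-constant "geological" profile with one sharp transition, observed in noise.
rng = np.random.default_rng(0)
x_true = np.concatenate([np.zeros(64), np.ones(64)])
y = x_true + 0.1 * rng.standard_normal(x_true.size)

approx, detail = haar_forward(y)
x_hat = haar_inverse(approx, soft_threshold(detail, 0.15))  # 0.15: assumed threshold

print("MSE noisy:   ", np.mean((y - x_true) ** 2))
print("MSE denoised:", np.mean((x_hat - x_true) ** 2))
```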

    Intelligent numerical software for MIMD computers

    For most scientific and engineering problems simulated on computers, solving problems of computational mathematics with approximately given initial data constitutes an intermediate or final stage. Basic problems of computational mathematics include the analysis and solution of linear algebraic systems, the evaluation of eigenvalues and eigenvectors of matrices, the solution of systems of non-linear equations, and the numerical integration of initial-value problems for systems of ordinary differential equations.

    Dynamic Thermal Imaging for Intraoperative Monitoring of Neuronal Activity and Cortical Perfusion

    Neurosurgery is a demanding medical discipline that requires a complex interplay of several neuroimaging techniques. This allows structural as well as functional information to be recovered and then visualized for the surgeon. In the case of tumor resections, this approach allows a more fine-grained differentiation of healthy and pathological tissue, which positively influences the postoperative outcome as well as the patient's quality of life. In this work, we discuss several approaches to establish thermal imaging as a novel neuroimaging technique, primarily to visualize neural activity and the perfusion state in the case of ischaemic stroke. Both applications require novel methods for data preprocessing, visualization, pattern recognition and regression analysis of intraoperative thermal imaging. Online multimodal integration of preoperative and intraoperative data is accomplished by a 2D-3D image registration and image fusion framework with an average accuracy of 2.46 mm. In navigated surgeries, the proposed framework provides all necessary tools to project intraoperative 2D imaging data onto preoperative 3D volumetric datasets such as 3D MR or CT imaging. Additionally, a fast machine learning framework for the recognition of cortical NaCl rinsings is discussed throughout this thesis. It enables the standardized quantification of tissue perfusion by means of an approximated heating model. Classifying the parameters of these models yields a map of connected areas, and we have shown that these areas correlate with the demarcation caused by an ischaemic stroke segmented in postoperative CT datasets. Finally, a semiparametric regression model has been developed for intraoperative neural activity monitoring of the somatosensory cortex by means of somatosensory evoked potentials. These results were correlated with neural activity measured by optical imaging. We found that thermal imaging yields comparable results, yet does not share the limitations of optical imaging. In this thesis we emphasize that thermal imaging is a novel and valid tool for both intraoperative functional and structural neuroimaging.
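    The abstract does not spell out the approximated heating model whose parameters are classified; purely as a hedged illustration, the sketch below fits an assumed exponential re-warming model, T(t) = T_inf - dT * exp(-t / tau), to the temperature time course of a single pixel after a cold NaCl rinse. The model form and all values are placeholders, not the thesis's method.

```python
import numpy as np
from scipy.optimize import curve_fit

def rewarming(t, T_inf, dT, tau):
    """Assumed exponential re-warming after a cold rinse: T(t) = T_inf - dT*exp(-t/tau)."""
    return T_inf - dT * np.exp(-t / tau)

# Synthetic temperature time course for one pixel (illustrative values only).
rng = np.random.default_rng(1)
t = np.linspace(0.0, 30.0, 120)                      # seconds after the rinse
measured = rewarming(t, 36.5, 4.0, 6.0) + 0.05 * rng.standard_normal(t.size)

params, _ = curve_fit(rewarming, t, measured, p0=(36.0, 3.0, 5.0))
T_inf, dT, tau = params
print(f"baseline {T_inf:.2f} degC, drop {dT:.2f} K, recovery time constant {tau:.2f} s")
# Maps of such fitted parameters, computed per pixel, are what a classifier would operate on.
```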

    Development of a Feasible Elastography Framework for Portable Ultrasound

    Portable wireless ultrasound is emerging as a new class of ultrasound device owing to advantages such as small size, light weight and affordable price. Its high portability allows practitioners to make diagnostic and therapeutic decisions in real time without having to take patients out of their environment. Recent portable ultrasound devices are equipped with sophisticated processors and image processing algorithms providing high image quality. Some of them are able to deliver multiple ultrasound modes including color Doppler, echocardiography, and endovaginal examination. Nevertheless, they still lack elastography functions due to limitations in computational performance and in data transfer speed via wireless communication. In order to implement an elastography function in wireless portable ultrasound devices, this thesis proposes a new strain estimation method that significantly reduces computation time and a compressive sensing framework that minimizes the data transfer size. Firstly, a robust phase-based strain estimator (RPSE) is developed to overcome the limited hardware performance of portable ultrasound. The RPSE is not only computationally efficient but also robust to variations in the speed of sound, sampling frequency and pulse repetition. The RPSE has been compared with other representative strain estimators, including time-delay, displacement-gradient, and conventional phase-based strain estimators (TSE, DSE and PSE, respectively). It is shown that the RPSE is superior in several elastographic image quality measures, including the elastographic signal-to-noise ratio (SNRe) and contrast-to-noise ratio (CNRe), as well as in computational efficiency. The study indicates that the RPSE method can deliver an acceptable level of elastographic quality and fast computation for ultrasound echo data sets from numerical and experimental phantoms. According to the results from the numerical phantom experiment, the RPSE achieves the highest values of SNRe and CNRe (around 5.22 and 47.62 dB) among all strain estimators tested, and almost 100 times higher computational efficiency than TSE and DSE (around 0.06 vs. 5.76 seconds per frame for RPSE and TSE, respectively). Secondly, as a means to reduce the large amount of ultrasound measurement data that has to be transmitted via wireless communication, a compressive sensing (CS) framework has been applied to elastography. The performance of CS is highly dependent on the selection of the model basis used to represent the sparse expansion as well as on the reconstruction algorithm used to recover the original data from the compressed signal. Therefore, it is essential to find the optimal combination of model basis and reconstruction algorithm for the CS framework to achieve the best performance in terms of image quality and maximum data reduction. In this thesis, three model bases, the discrete Fourier transform (FT), the discrete cosine transform (DCT), and wave atoms (WA), along with two reconstruction algorithms, L1 minimization (L1) and block sparse Bayesian learning (BSBL), are tested. Using B-mode and elastogram images of simulated numerical phantoms, the quality of CS reconstruction is assessed in terms of three image quality measures, mean absolute error (MAE), SNRe, and CNRe, at varying data reduction (subsampling) rates. The results illustrate that BSBL-based CS frameworks generally deliver much higher image quality and subsampling rates than L1-based ones. In particular, the CS frameworks adopting DCT and BSBL offer the best CS performance. The results also suggest that the maximum subsampling rates without causing image degradation are 40% for the L1-based framework and 60% for the BSBL-based framework. The contributions of this thesis help realize elastography functionality in portable ultrasound, thereby significantly expanding its utility. For example, the diagnosis of malignant lesions becomes possible with portable ultrasound even when a patient cannot be moved to a hospital immediately. Furthermore, the RPSE method and the CS framework can be individually employed in conventional ultrasound devices as well as in other telemedicine applications to enhance computational efficiency and image quality.
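    The elastographic quality measures quoted above (SNRe and CNRe) follow standard definitions; a minimal NumPy sketch of computing them from a strain image with assumed target/background masks is given below (a generic illustration, not the thesis's evaluation code).

```python
import numpy as np

def snre(strain, mask):
    """Elastographic SNR of a uniform region: mean strain / standard deviation."""
    region = strain[mask]
    return np.mean(region) / np.std(region)

def cnre_db(strain, target_mask, background_mask):
    """Elastographic CNR in dB: 10*log10( 2*(mu_t - mu_b)^2 / (var_t + var_b) )."""
    t, b = strain[target_mask], strain[background_mask]
    cnr = 2.0 * (np.mean(t) - np.mean(b)) ** 2 / (np.var(t) + np.var(b))
    return 10.0 * np.log10(cnr)

# Toy strain image: a stiff (low-strain) circular inclusion in a softer background.
rng = np.random.default_rng(2)
strain = 0.02 + 0.001 * rng.standard_normal((128, 128))
yy, xx = np.mgrid[:128, :128]
inclusion = (yy - 64) ** 2 + (xx - 64) ** 2 < 20 ** 2
strain[inclusion] *= 0.5

print("SNRe (background):", snre(strain, ~inclusion))
print("CNRe:", cnre_db(strain, inclusion, ~inclusion), "dB")
```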

    Geometric and photometric affine invariant image registration

    This thesis aims to present a solution to the correspondence problem for the registration of wide-baseline images taken from uncalibrated cameras. We propose an affine invariant descriptor that combines the geometry and photometry of the scene to find correspondences between both views. The geometric affine invariant component of the descriptor is based on the affine arc-length metric, whereas the photometry is analysed by invariant colour moments. A graph structure represents the spatial distribution of the primitive features; i.e. nodes correspond to detected high-curvature points, whereas arcs represent connectivities established by extracted contours. After matching, we refine the search for correspondences by using a maximum likelihood robust algorithm. We have evaluated the system on synthetic and real data. Propagation of errors introduced by approximations in the system is inherent to the method. BAE Systems; Selex Sensors and Airborne Systems
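    The affine arc-length metric on which the geometric component relies can be sketched in a few lines; the code below uses the textbook affine arc-length element ds = |x'y'' - y'x''|^(1/3) dt with finite-difference derivatives, and is a generic illustration of the invariant rather than the thesis's descriptor. The contour and the affine map are assumed test data.

```python
import numpy as np

def affine_arc_length(x, y):
    """Cumulative affine arc length of a sampled planar curve.

    Uses ds = |x'y'' - y'x''|**(1/3) dt with finite-difference derivatives
    (unit parameter step assumed between consecutive samples).
    """
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    ds = np.abs(dx * ddy - dy * ddx) ** (1.0 / 3.0)
    return np.cumsum(ds)

# An ellipse and an affinely mapped copy: their normalised affine arc lengths coincide.
t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
x, y = 2.0 * np.cos(t), 1.0 * np.sin(t)
A = np.array([[1.3, 0.4], [0.2, 0.9]])               # assumed affine map
xa, ya = A @ np.vstack([x, y])
s, sa = affine_arc_length(x, y), affine_arc_length(xa, ya)
print(np.allclose(s / s[-1], sa / sa[-1]))            # True: normalised arc length is affine invariant
```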

    Adaptive Scattered Data Fitting with Tensor Product Spline-Wavelets

    The core of the work we present here is an algorithm that constructs a least squares approximation to a given set of unorganized points. The approximation is expressed as a linear combination of particular B-spline wavelets. It implies a multiresolution setting which constructs a hierarchy of approximations to the data with increasing level of detail, proceeding from coarsest to finest scales. It allows for an efficient selection of the degrees of freedom of the problem and avoids the introduction of an artificial uniform grid. In fact, an analysis of the data can be done at each of the scales of the hierarchy, which can be used to adaptively select a set of wavelets that can economically represent the characteristics of the cloud of points at the next level of detail. The data adaptation of our method is twofold, as it takes into account both the horizontal distribution and the vertical irregularities of the data. This strategy can lead to a striking reduction of the problem complexity. Furthermore, among the possible ways to achieve a multiscale formulation, the wavelet approach shows additional advantages, based on good conditioning properties and level-wise orthogonality. We exploit these features to enhance the efficiency of iterative solution methods for the system of normal equations of the problem. The combination of multiresolution adaptivity with the numerical properties of the wavelet basis gives rise to an algorithm well suited to cope with problems requiring fast solution methods. We illustrate this by means of numerical experiments that compare the performance of the method on various data sets working with different multiresolution bases. Afterwards, we use the equivalence relation between wavelets and Besov spaces to formulate the problem of data fitting with regularization. We find that the multiscale formulation allows for a flexible and efficient treatment of some aspects of this problem. Moreover, we study the problem known as robust fitting, in which the data are assumed to be corrupted by wrong measurements or outliers. We compare classical methods based on re-weighting of residuals to our setting, in which the wavelet representation of the data computed by our algorithm is used to locate the outliers. As a final application that couples two of the main applications of wavelets (data analysis and operator equations), we propose the use of this least squares data fitting method to evaluate the non-linear term in the wavelet-Galerkin formulation of non-linear PDE problems. At the end of this thesis we discuss efficient implementation issues, with a special interest in the interplay between solution methods and data structures.
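    As a hedged, single-scale illustration of the least squares step that underlies the algorithm (without the wavelet hierarchy, adaptivity or tensor-product structure), the sketch below fits scattered 1-D data in a cubic B-spline basis by solving the normal equations; the knot layout and test data are assumed.

```python
import numpy as np

def bspline_design_matrix(x, knots, degree):
    """Collocation matrix B[i, j] = N_{j,degree}(x[i]) via the Cox-de Boor recursion."""
    x = np.asarray(x, dtype=float)
    t = np.asarray(knots, dtype=float)
    # Degree-0 bases: indicators of the knot spans.
    B = np.zeros((x.size, t.size - 1))
    for j in range(t.size - 1):
        B[:, j] = (t[j] <= x) & (x < t[j + 1])
    # Points sitting exactly on the last knot go into the last non-empty span.
    last = np.nonzero(t[:-1] < t[1:])[0][-1]
    B[x == t[-1], last] = 1.0
    # Raise the degree one step at a time.
    for p in range(1, degree + 1):
        new = np.zeros((x.size, t.size - 1 - p))
        for j in range(t.size - 1 - p):
            left = (x - t[j]) / (t[j + p] - t[j]) * B[:, j] if t[j + p] > t[j] else 0.0
            right = (t[j + p + 1] - x) / (t[j + p + 1] - t[j + 1]) * B[:, j + 1] if t[j + p + 1] > t[j + 1] else 0.0
            new[:, j] = left + right
        B = new
    return B

# Noisy scattered samples of a smooth function on [0, 1] (illustrative data).
rng = np.random.default_rng(3)
x = np.sort(rng.uniform(0.0, 1.0, 200))
z = np.sin(2 * np.pi * x) + 0.05 * rng.standard_normal(x.size)

degree = 3
knots = np.concatenate([np.zeros(degree), np.linspace(0.0, 1.0, 9), np.ones(degree)])  # clamped knots
A = bspline_design_matrix(x, knots, degree)

# Least squares fit min ||A c - z||^2 via the normal equations A^T A c = A^T z.
c = np.linalg.solve(A.T @ A, A.T @ z)
print("residual RMS:", np.sqrt(np.mean((A @ c - z) ** 2)))
```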

    Glottal-synchronous speech processing

    Glottal-synchronous speech processing is a field of speech science in which the pseudoperiodicity of voiced speech is exploited. Traditionally, speech processing involves segmenting and processing short speech frames of predefined length; this may fail to exploit the inherent periodic structure of voiced speech which glottal-synchronous speech frames have the potential to harness. Glottal-synchronous frames are often derived from the glottal closure instants (GCIs) and glottal opening instants (GOIs). The SIGMA algorithm was developed for the detection of GCIs and GOIs from the electroglottograph (EGG) signal with a measured accuracy of up to 99.59%. For GCI and GOI detection from speech signals, the YAGA algorithm provides a measured accuracy of up to 99.84%. Multichannel speech-based approaches are shown to be more robust to reverberation than single-channel algorithms. The GCIs are applied to real-world applications including speech dereverberation, where SNR is improved by up to 5 dB, and to prosodic manipulation, where the importance of voicing detection in glottal-synchronous algorithms is demonstrated by subjective testing. The GCIs are further exploited in a new area of data-driven speech modelling, providing new insights into speech production and a set of tools to aid deployment into real-world applications. The technique is shown to be applicable in areas of speech coding, identification and artificial bandwidth extension of telephone speech.
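    SIGMA and YAGA are not reproduced here; as a much simpler hedged baseline, the sketch below marks candidate GCIs at strong positive peaks of the differentiated EGG (dEGG) signal, where vocal-fold contact increases fastest, and GOIs at strong negative peaks. The synthetic EGG waveform, polarity convention and thresholds are assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

def detect_gci_goi(egg, fs, f0_max=400.0):
    """Baseline GCI/GOI detection from an EGG signal via peaks of its derivative.

    Assumes EGG amplitude grows with vocal-fold contact, so closures show up as
    strong positive peaks of dEGG and openings as negative peaks. A minimum peak
    distance of 1/f0_max seconds avoids multiple detections within one cycle.
    """
    degg = np.diff(egg) * fs
    min_dist = int(fs / f0_max)
    gci, _ = find_peaks(degg, height=0.3 * degg.max(), distance=min_dist)
    goi, _ = find_peaks(-degg, height=0.3 * (-degg).max(), distance=min_dist)
    return gci, goi

# Synthetic EGG-like waveform: 100 Hz voicing with a fast closing and slower opening phase.
fs = 16000
t = np.arange(0.0, 0.1, 1.0 / fs)
phase = (100.0 * t) % 1.0
egg = np.where(phase < 0.3, phase / 0.3, (1.0 - phase) / 0.7)

gci, goi = detect_gci_goi(egg, fs)
print(len(gci), "closures and", len(goi), "openings detected in 100 ms of voicing")
```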

    Analysis of the human corneal shape with machine learning

    This thesis aims to investigate the best conditions in which the anterior corneal surface of normal corneas can be preprocessed, classified and predicted using geometric modeling (GM) and machine learning (ML) techniques. The focus is on the anterior corneal surface, which is mainly responsible for the refractive power of the cornea. Dealing with preprocessing, the first study (Chapter 2) examines the conditions in which GM can best be applied to reduce the dimensionality of a dataset of corneal surfaces to be used in ML projects. Four types of geometric models of corneal shape were tested regarding their accuracy and processing time: two polynomial (P) models – Zernike polynomial (ZP) and spherical harmonic polynomial (SHP) models – and two corresponding rational function (R) models – Zernike rational function (ZR) and spherical harmonic rational function (SHR) models. SHP and ZR are both known to be more accurate than ZP as corneal shape models for the same number of coefficients, but which type of model is the most accurate between SHP and ZR? And is an SHR model, which is both an SH model and an R model, even more accurate? Also, does modeling accuracy come at the cost of processing time, an important issue for testing large datasets as required in ML projects? Focusing on low J values (number of model coefficients) to address these issues in consideration of the dimensionality constraints that apply in ML tasks, it was found, based on a number of evaluation tools, that SH models were both more accurate than their Z counterparts, that R models were both more accurate than their P counterparts, and that the SH advantage was more important than the R advantage. Processing time curves as a function of J showed that P models were processed in quasilinear time, R models in polynomial time, and that Z models were faster than SH models. Therefore, while SHR was the most accurate geometric model, it was the slowest (a problem that can partly be remedied by applying a preoptimization procedure). ZP was the fastest model, and with normal corneas it remains an interesting option for testing and development, especially for clustering tasks due to its transparent interpretability. 
The best compromise between accuracy and speed for ML preprocessing is SHP. The classification of corneal shapes with clinical parameters has a long tradition, but the visualization of their effects on the corneal shape with group maps (average elevation maps, standard deviation maps, average difference maps, etc.) is relatively recent. In the second study (Chapter 3), we constructed an atlas of average elevation maps for different clinical variables (including geometric, refraction and demographic variables) that can be instrumental in the evaluation of ML task inputs (datasets) and outputs (predictions, clusters, etc.). A large dataset of normal adult anterior corneal surface topographies recorded in the form of 101×101 elevation matrices was first preprocessed by geometric modeling to reduce the dimensionality of the dataset to a small number of Zernike coefficients found to be optimal for ML tasks. The modeled corneal surfaces of the dataset were then grouped in accordance with the clinical variables available in the dataset, transformed into categorical variables. An average elevation map was constructed for each group of corneal surfaces of each clinical variable, in their natural (non-normalized) state and in their normalized state, by averaging their modeling coefficients to get an average surface and by representing this average surface, in reference to its best-fit sphere, in a topographic elevation map. To validate the atlas thus constructed in both its natural and normalized modalities, ANOVA tests were conducted for each clinical variable of the dataset to verify its statistical consistency with the literature before verifying whether the corneal shape transformations displayed in the maps were themselves visually consistent. This was the case. The possible uses of such an atlas are discussed. The third study (Chapter 4) is concerned with the use of a dataset of geometrically modeled corneal surfaces in an ML clustering task. The unsupervised classification of corneal surfaces is recent in ophthalmology. Most of the few existing studies on corneal clustering resort to feature extraction (as opposed to geometric modeling) to achieve the dimensionality reduction of the dataset. The goal is usually to automate the process of corneal diagnosis, for instance by distinguishing irregular corneal surfaces (keratoconus, Fuchs' dystrophy, etc.) from normal surfaces and, in some cases, by classifying irregular surfaces into subtypes. Complementary to these corneal clustering studies, the proposed study resorts mainly to geometric modeling to achieve dimensionality reduction and focuses on normal adult corneas in an attempt to identify their natural groupings, possibly in combination with feature extraction methods. Geometric modeling was based on Zernike polynomials, known for their interpretative transparency and sufficiently accurate for normal corneas. Different types of clustering methods were evaluated in pretests to identify the most effective at producing neatly delimited clusters that are clearly interpretable. Their evaluation was based on clustering scores (to identify the best number of clusters), polar charts and scatter plots (to visualize the modeling coefficients involved in each cluster), average elevation maps and average profile cuts (to visualize the average corneal surface of each cluster), and statistical cluster comparisons on different clinical parameters (to validate the findings in reference to the clinical literature). K-means, applied to geometrically modeled surfaces without feature extraction, produced the best clusters, both for natural and normalized surfaces. While the clusters produced with natural corneal surfaces were based on corneal curvature, those produced with normalized surfaces were based on the corneal axis. In each case, the best number of clusters was four. The importance of curvature and axis as grouping criteria in corneal data distribution is discussed. The fourth study presented in this thesis (Chapter 5) explores the ML paradigm to verify whether accurate predictions of normal corneal shapes can be made from clinical data, and how. The database of normal adult corneal surfaces was first preprocessed by geometric modeling to reduce its dimensionality into short vectors of 12 to 20 Zernike coefficients, found to be in the range of appropriate numbers to achieve optimal predictions. The nonlinear regression methods examined from the scikit-learn library were gradient boosting, Gaussian process, kernel ridge, random forest, k-nearest neighbors, bagging, and multilayer perceptron. The predictors were based on the clinical variables available in the database, including geometric variables (best-fit sphere radius, white-to-white diameter, anterior chamber depth, corneal side), refraction variables (sphere, cylinder, axis) and demographic variables (age, gender). Each possible combination of regression method, set of clinical variables (used as predictors) and number of Zernike coefficients (used as targets) defined a regression model in a prediction test. All the regression models were evaluated based on their mean RMSE score (establishing the distance between the predicted corneal surfaces and the raw topographic true surfaces). The best model identified was further qualitatively assessed based on an atlas of predicted and true average elevation maps, by which the predicted surfaces could be visually compared to the true surfaces for each of the clinical variables used as predictors. It was found that the best regression model was gradient boosting using all available clinical variables as predictors and 16 Zernike coefficients as targets. The most explanatory predictor was the best-fit sphere radius, followed by the eye side and the refraction variables. The average elevation maps of the true anterior corneal surfaces and of the predicted surfaces based on this model were remarkably similar for each clinical variable.
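    A minimal sketch of the kind of model the fourth study settles on (gradient boosting predicting a short vector of Zernike coefficients from clinical variables, using scikit-learn as stated above) is shown below. The synthetic data, feature list and hyperparameters are placeholders, not the thesis's dataset or tuned model.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.multioutput import MultiOutputRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(4)
n = 500

# Placeholder clinical predictors, mimicking the kinds of variables listed above.
X = np.column_stack([
    rng.normal(7.8, 0.25, n),    # best-fit sphere radius (mm)
    rng.normal(11.8, 0.4, n),    # white-to-white diameter (mm)
    rng.normal(3.1, 0.3, n),     # anterior chamber depth (mm)
    rng.integers(0, 2, n),       # eye side (0 = left, 1 = right)
    rng.normal(-1.0, 2.0, n),    # sphere (D)
    rng.normal(-0.5, 0.5, n),    # cylinder (D)
    rng.uniform(0.0, 180.0, n),  # axis (deg)
    rng.uniform(20.0, 70.0, n),  # age (years)
    rng.integers(0, 2, n),       # gender
])
# Placeholder targets: 16 Zernike coefficients loosely driven by the predictors.
W = rng.normal(0.0, 0.05, (X.shape[1], 16))
Y = (X - X.mean(0)) / X.std(0) @ W + 0.01 * rng.standard_normal((n, 16))

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)

# GradientBoostingRegressor handles a single target, so wrap it for the 16 coefficients.
model = MultiOutputRegressor(GradientBoostingRegressor(n_estimators=200, max_depth=3))
model.fit(X_tr, Y_tr)
rmse = np.sqrt(mean_squared_error(Y_te, model.predict(X_te)))
print(f"coefficient-space RMSE on held-out corneas: {rmse:.4f}")
```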