
    Models and methods for computationally efficient analysis of large spatial and spatio-temporal data

    With the development of technology, massive amounts of data are often observed at a large number of spatial locations (n). However, statistical analysis is usually not feasible or not computationally efficient for such large datasets. This is the so-called "big n" problem. The goal of this dissertation is to contribute solutions to the big n problem. The dissertation is devoted to computationally efficient methods and models for large spatial and spatio-temporal data. Several approximation methods for the big n problem are reviewed, and an extended autoregressive model, called the EAR model, is proposed as a parsimonious model that accounts for the smoothness of a process collected over space. It extends the parameterizations of the spatial conditional autoregressive (CAR) model due to Pettitt et al. and to Czado and Prokopenko. To complement the computational advantage, a structure-removing orthonormal transformation named pre-whitening is described. This transformation is based on a singular value decomposition and results in the removal of spatial structure from the data. A circulant embedding technique further simplifies the calculation of eigenvalues and eigenvectors for the pre-whitening procedure. The EAR model is shown to have connections to the Matérn class covariance structure in geostatistics as well as to the integrated nested Laplace approximation (INLA) approach based on a stochastic partial differential equation (SPDE) framework. To model geostatistical data, a latent spatial Gaussian Markov random field (GMRF) with an EAR model prior is applied. The GMRF is defined on a fine grid, which enables the posterior precision matrix to be diagonal through the introduction of a missing data scheme. This allows parameter estimation and spatial interpolation to be carried out simultaneously under the Bayesian Markov chain Monte Carlo (MCMC) framework. The EAR model is naturally extended to spatio-temporal models. 
In particular, a spatio-temporal model with spatially varying temporal trend parameters is discussed
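The circulant-embedding step mentioned above can be sketched in a few lines. This is a generic illustration, not the dissertation's EAR construction: it assumes a first-order CAR-type precision matrix on a circular 1D lattice with made-up parameters, so the precision matrix is circulant and its eigenvalues are just the discrete Fourier transform of its first row.

```python
import numpy as np

# Toy sketch (not the dissertation's exact EAR construction): on a circular
# 1D lattice a CAR-type precision matrix is circulant, so its eigenvalues are
# the DFT of its first row -- the idea behind using circulant embedding to
# accelerate the pre-whitening transformation.
n = 64
kappa, rho = 1.5, 0.4                  # hypothetical precision parameters
first_row = np.zeros(n)
first_row[0] = kappa
first_row[1] = first_row[-1] = -rho    # nearest-neighbour coupling on the torus

Q = np.array([np.roll(first_row, k) for k in range(n)])  # full circulant matrix

eig_fft = np.fft.fft(first_row).real   # O(n log n) eigenvalues via FFT
eig_dense = np.linalg.eigvalsh(Q)      # O(n^3) dense reference computation

assert np.allclose(np.sort(eig_fft), np.sort(eig_dense))
```

The FFT route costs O(n log n) against O(n³) for a dense eigendecomposition, which is the computational point of the embedding.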

    Vision-based navigation for autonomous underwater vehicles

    This thesis investigates the use of vision sensors in Autonomous Underwater Vehicle (AUV) navigation, which is typically performed using a combination of dead-reckoning and external acoustic positioning systems. Traditional dead-reckoning sensors such as Doppler Velocity Logs (DVLs) or inertial systems are expensive and produce drifting trajectory estimates. Acoustic positioning systems can be used to correct dead-reckoning drift; however, they are time-consuming to deploy and have a limited range of operation. Occlusion and multipath problems may also occur when a vehicle operates near the seafloor, particularly in environments such as reefs, ridges and canyons, which are the focus of many AUV applications. Vision-based navigation approaches have the potential to improve the availability and performance of AUVs in a wide range of applications. Visual odometry may replace expensive dead-reckoning sensors in small and low-cost vehicles. Using onboard cameras to correct dead-reckoning drift will allow AUVs to navigate accurately over long distances, without the limitations of acoustic positioning systems. This thesis contains three principal contributions. The first is an algorithm to estimate the trajectory of a vehicle by fusing observations from sonar and monocular vision sensors. The second is a stereo-vision motion estimation approach that can be used on its own to provide odometry estimation, or fused with additional sensors in a Simultaneous Localisation And Mapping (SLAM) framework. The third is an efficient SLAM algorithm that uses visual observations to correct drifting trajectory estimates. Results of this work are presented in simulation and using data collected during several deployments of underwater vehicles in coral reef environments. Trajectory estimation is demonstrated for short transects using the sonar and vision fusion and stereo-vision approaches. 
Navigation over several kilometres is demonstrated using the SLAM algorithm, where stereo-vision is shown to improve the estimated trajectory produced by a DVL
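One core building block of stereo-vision motion estimation is recovering the rigid motion between matched 3D feature points seen in consecutive frames. The sketch below uses the standard SVD-based Kabsch/Procrustes method on synthetic points; it is a generic illustration, not the thesis's implementation.

```python
import numpy as np

# Hypothetical sketch of one visual-odometry step (not the thesis code):
# estimate the rigid motion between two sets of matched 3D feature points
# with the SVD-based Kabsch/Procrustes method.
def estimate_rigid_motion(p, q):
    """Return R, t minimising sum ||R @ p_i + t - q_i||^2 over matched points."""
    mp, mq = p.mean(axis=0), q.mean(axis=0)
    H = (p - mp).T @ (q - mq)               # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, mq - R @ mp

rng = np.random.default_rng(0)
p = rng.normal(size=(30, 3))                # features seen in the first frame
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -0.2, 0.1])
q = p @ R_true.T + t_true                   # the same features one frame later

R, t = estimate_rigid_motion(p, q)
assert np.allclose(R, R_true) and np.allclose(t, t_true)
```

In a real pipeline the matches come from feature detection and stereo triangulation, and a robust loop (e.g. RANSAC) rejects outlier correspondences before this least-squares step.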

    The Role of Computers in Research and Development at Langley Research Center

    This document is a compilation of presentations given at a workshop on the role of computers in research and development at the Langley Research Center. The objectives of the workshop were to inform the Langley Research Center community of the current software systems and software practices in use at Langley. The workshop was organized into 10 sessions: Software Engineering; Software Engineering Standards, Methods, and CASE Tools; Solutions of Equations; Automatic Differentiation; Mosaic and the World Wide Web; Graphics and Image Processing; System Design Integration; CAE Tools; Languages; and Advanced Topics

    Biclustering: Methods, Software and Application

    Over the past 10 years, biclustering has become popular not only in the field of biological data analysis but also in other applications with high-dimensional two-way datasets. This technique clusters both rows and columns simultaneously, as opposed to clustering only rows or only columns. Biclustering retrieves subgroups of objects that are similar in one subgroup of variables and different in the remaining variables. This dissertation focuses on improving and advancing biclustering methods. Since most existing methods are extremely sensitive to variations in parameters and data, we developed an ensemble method to overcome these limitations. It is possible to retrieve more stable and reliable biclusters in two ways: either by running algorithms with different parameter settings or by running them on sub- or bootstrap samples of the data and combining the results. To this end, we designed a software package containing a collection of bicluster algorithms for different clustering tasks and data scales, developed several new ways of visualizing bicluster solutions, and adapted traditional cluster validation indices (e.g. the Jaccard index) to the biclustering framework. Finally, we applied biclustering to marketing data. Well-established algorithms were adjusted to slightly different data situations, and a new method specially adapted to ordinal data was developed. In order to test this method on artificial data, we generated correlated ordinal random values. This dissertation introduces two methods for generating such values given a probability vector and a correlation structure. All the methods outlined in this dissertation are freely available in the R packages biclust and orddata. Numerous examples in this work illustrate how to use the methods and software.
Over the last 10 years, biclustering has become increasingly popular, above all in biological data analysis but also in every field with high-dimensional data. Biclustering refers to the simultaneous clustering of two-way data in order to find subsets of objects that behave similarly on subsets of variables. This work deals with the further development and optimization of biclustering methods. In addition to a software package for computing, post-processing, and graphically displaying bicluster results, an ensemble method for bicluster algorithms was developed. Since most algorithms are very sensitive to small changes in their starting parameters, more robust results can be obtained in this way. The new method also encompasses the combination of bicluster results obtained on subsamples and bootstrap samples. To validate the results, existing measures from traditional clustering (e.g. the Jaccard index) were adapted for biclustering, and new graphical tools for interpreting the results were developed. A further part of the work deals with the application of bicluster algorithms to data from the marketing domain. For this purpose, existing algorithms had to be modified, and a new algorithm designed specifically for ordinal data was developed. To make it possible to test these methods on artificial data, the work also develops a procedure for drawing ordinal random numbers with given probabilities and correlation structure. The methods presented in this work are freely available through the two R packages biclust and orddata. Their usability is demonstrated through numerous examples
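The adapted Jaccard index mentioned above can be illustrated with a minimal sketch. Representing a bicluster as a (rows, columns) pair of index sets is an assumption made here for illustration; it is not the biclust package API.

```python
# Minimal sketch (not the biclust package API) of a Jaccard index adapted to
# biclusters: a bicluster is taken to be a (rows, columns) pair, and two
# biclusters are compared via the overlap of the data cells they cover.
def bicluster_cells(rows, cols):
    """The set of (row, column) cells covered by a bicluster."""
    return {(r, c) for r in rows for c in cols}

def jaccard(b1, b2):
    """|intersection| / |union| of the covered cells, in [0, 1]."""
    a, b = bicluster_cells(*b1), bicluster_cells(*b2)
    return len(a & b) / len(a | b)

b1 = ({0, 1, 2}, {0, 1})         # covers 6 cells
b2 = ({1, 2, 3}, {0, 1})         # covers 6 cells, 4 of them shared with b1
assert jaccard(b1, b2) == 4 / 8  # union has 6 + 6 - 4 = 8 cells
assert jaccard(b1, b1) == 1.0
```

Such a similarity score is what an ensemble procedure can use to match and merge biclusters found across parameter settings or bootstrap samples.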

    Topographic maps of semantic space


    Optimising the NAOMI adaptive optics real-time control system

    This thesis describes the author's research in the field of Real-Time Control (RTC) for Adaptive Optics (AO) instrumentation. The research encompasses experiences and knowledge gained working in the area of RTC on astronomical instrumentation projects whilst at the Optical Science Laboratories (OSL), University College London (UCL), the Isaac Newton Group of Telescopes (ING) and the Centre for Advanced Instrumentation (CfAI), Durham University. It begins by providing an extensive introduction to the field of Astronomical Adaptive Optics, covering Image Correction Theory, Atmospheric Theory, Control Theory and Adaptive Optics Component Theory. The following chapter contains a review of the current state of worldwide AO instruments and facilities. The Nasmyth Adaptive Optics Multi-purpose Instrument (NAOMI), the common-user AO facility at the 4.2 m William Herschel Telescope (WHT), is subsequently described. Results of NAOMI component characterisation experiments are detailed to provide a system understanding of the improvement optimisation could offer. The final chapter investigates how upgrading the RTCS could increase NAOMI's spatial and temporal performance and examines the RTCS in the context of Extremely Large Telescope (ELT) class telescopes
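The control law at the heart of such an AO real-time control system is classically a leaky integrator driving the deformable-mirror command from wavefront sensor measurements. The sketch below is a toy version, not NAOMI's actual RTCS: the gain, leak, and modal aberration values are invented, and a static aberration stands in for turbulence.

```python
import numpy as np

# Toy sketch of the classic AO control law (not NAOMI's actual RTCS): a leaky
# integrator updates the deformable-mirror command from each wavefront sensor
# frame. For a static aberration the residual decays geometrically.
gain, leak = 0.4, 0.01
aberration = np.array([0.8, -0.3, 0.5])    # hypothetical modal wavefront error
command = np.zeros_like(aberration)

residuals = []
for _ in range(50):                        # one loop iteration per WFS frame
    measurement = aberration - command     # residual error seen by the sensor
    command = (1.0 - leak) * command + gain * measurement
    residuals.append(np.abs(measurement).max())

assert residuals[-1] < 0.05 * residuals[0]  # the loop has flattened the wavefront
```

In a real system the loop must run at kilohertz rates over hundreds of modes, which is why RTCS latency and throughput, the subject of the optimisation above, directly limit temporal performance.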

    Modelling, simulation and control of photovoltaic converter systems

    The thesis follows the development of an advanced solar photovoltaic power conversion system from first principles. It is divided into five parts. The first section shows the development of a circuit-based simulation model of a photovoltaic (PV) cell within the 'SABER' simulator environment. Although simulation models for photovoltaic cells are available, these are usually application-specific, mathematically intensive and not suited to the development of power electronics. The model derived within the thesis is a circuit-based model that makes use of a series of current/voltage data sets taken from an actual cell in order to define the relationships between the cell double-exponential model parameters and the environmental parameters of temperature and irradiance. The resulting expressions define a 'black box' model: the power electronics designer may simply specify values of temperature and irradiance to the model, and the simulated electrical connections to the cell provide the appropriate I/V characteristic. The second section deals with the development of a simulation model of an advanced PV-aware DC-DC converter system. This differs from conventional converters in that, by embedding a maximum power tracking system within a conventional linear feedback control arrangement, it addresses the problem of loads which may not require the level of power available at the maximum power point, but is also able to drive loads which consistently require a maximum power feed, such as a grid-coupled inverter. The third section details a low-power implementation of the above system in hardware. This shows the viability of the new, fast embedded maximum power tracking system and also the advantages of the system in terms of speed and response time over conventional systems. The fourth section builds upon the simulation model developed in the second section by adding an inverter, allowing AC loads (including a utility) to be driven. 
The complete system is simulated and a set of results obtained showing that the system is a usable one. The final section describes the construction and analysis of a complete system in hardware (c. 500 W) and identifies the suitability of the system to appropriate applications
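The double-exponential cell model referred to above can be sketched as an implicit equation solved numerically. The parameter values below are illustrative placeholders, not values fitted from the thesis's measured I/V data, and bisection stands in for whatever solver the SABER environment uses internally.

```python
import math

# Hedged sketch of the double-exponential (two-diode) PV cell model; all
# parameter values are illustrative, not fitted from the thesis's I/V data.
I_PH, I01, I02 = 5.0, 1e-9, 1e-6   # photocurrent and diode saturation currents (A)
N1, N2, VT = 1.0, 2.0, 0.02585     # diode ideality factors and thermal voltage (V)
RS, RSH = 0.01, 100.0              # series and shunt resistance (ohm)

def cell_current(v):
    """Solve the implicit double-exponential equation for terminal current (A)."""
    def f(i):
        vj = v + i * RS            # junction voltage behind the series resistance
        return (I_PH
                - I01 * (math.exp(vj / (N1 * VT)) - 1.0)
                - I02 * (math.exp(vj / (N2 * VT)) - 1.0)
                - vj / RSH
                - i)
    lo, hi = -2.0, I_PH + 1.0      # f(lo) > 0 > f(hi): bisect the bracket
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

i_sc = cell_current(0.0)                 # short-circuit current, close to I_PH
assert abs(i_sc - I_PH) < 0.01
assert 0.0 < cell_current(0.55) < i_sc   # current falls as voltage rises
```

Sweeping `cell_current` over voltage traces out the I/V characteristic whose knee the maximum power tracking system hunts for.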

    New algorithm for efficient prediction of Casimir interactions among arbitrary materials in arbitrary geometries

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Physics, 2011. Cataloged from PDF version of thesis. Includes bibliographical references (p. 161-163). For most of its 60-year history, the Casimir effect was an obscure theoretical backwater, but technological advances over the past decade have promoted this curious manifestation of quantum and thermal fluctuations to a position of central importance in modern experimental physics. Dramatic progress in the measurement of Casimir forces since 1997 has created a demand for theoretical tools that can predict Casimir interactions in realistic experimental geometries and in materials with realistic frequency-dependent electrical properties. This work presents a new paradigm for efficient numerical computation of Casimir interactions. Our new technique, which we term the fluctuating-surface-current (FSC) approach to computational Casimir physics, borrows ideas from the boundary-element method of computational electromagnetism to express Casimir energies, forces, and torques between bodies of arbitrary shapes and materials in terms of interactions among effective electric and magnetic surface currents flowing on the surfaces of the objects. We demonstrate that the master equations of the FSC approach arise as logical consequences of either of two seemingly disparate Casimir paradigms, the stress-tensor approach and the path-integral (or scattering) approach, and this work thus achieves an unexpected unification of these two otherwise quite distinct theoretical frameworks. But a theoretical technique is only as relevant as its practical implementations are useful, and for this reason we present three distinct numerical implementations of the FSC formulae, each of which poses a series of unique technical challenges. 
Finally, using our new theoretical paradigm and our practical implementations of it, we obtain new predictions of Casimir interactions in a number of experimentally relevant geometric and material configurations that would be difficult or impossible to treat with any other existing Casimir method. By M. T. Homer Reid. Ph.D
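For scale, the textbook perfect-conductor parallel-plate result is the benchmark against which numerical Casimir codes are commonly checked (this is not the FSC algorithm itself, just the closed-form limit any such code should reproduce):

```python
import math

# Not the FSC algorithm -- the textbook perfect-conductor parallel-plate
# Casimir result: attractive pressure P(d) = pi^2 * hbar * c / (240 * d^4).
HBAR = 1.054571817e-34   # reduced Planck constant, J*s
C = 2.99792458e8         # speed of light, m/s

def casimir_pressure(d):
    """Magnitude of the Casimir pressure between ideal plates at separation d (m)."""
    return math.pi ** 2 * HBAR * C / (240.0 * d ** 4)

# ~1.3 mPa at 1 micron; the d**-4 scaling makes the force dominant at
# sub-micron separations, which is what drives the modern experiments.
assert abs(casimir_pressure(1e-6) - 1.3e-3) / 1.3e-3 < 0.01
assert abs(casimir_pressure(1e-7) / casimir_pressure(1e-6) - 1e4) < 1e-6
```

Real experiments deviate from this ideal limit through geometry and frequency-dependent material response, which is exactly the regime the FSC approach targets.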

    Spatio-temporal statistical models for glaciology

    The purpose of this thesis is to develop spatio-temporal statistical models for glaciology, using the Bayesian hierarchical framework. Specifically, the process level is modeled as a time series of computer simulator outputs (i.e., from a numerical partial differential equation solver or an emulator) added to an error-correcting statistical process, closely related to the concept of model discrepancy. This error-correcting process accounts for spatial variability in simulator inaccuracies, as well as the accumulation of simulator inaccuracies forward in time. For computational efficiency, linear algebra for bandwidth-limited matrices is used for evaluating the likelihood of the model, and first-order emulator inference allows for the fast approximation of numerical solvers. Additionally, a computationally efficient approximation for the likelihood is derived. Analytical solutions to the shallow ice approximation (SIA) of the full Stokes equation system for stress balance of ice are used to examine the speed and accuracy of the computational methods used, in addition to the validity of modeling assumptions. Moreover, the modeling and methodology within this thesis are tested on data sets collected by the University of Iceland Institute of Earth Science (UI-IES) glaciology team, including bi-yearly mass balance measurements at 25 fixed sites at Langjökull (a glacier) over 19 years, in addition to 100 meter resolution digital elevation maps. As a byproduct of the construction of the Bayesian hierarchical model, a novel finite difference method is derived for solving the SIA partial differential equation (PDE). Although the application domain of this work is glaciology, the model and methods developed in this thesis can be applied to other geophysical domains. The thesis is structured around three papers. 
The first of these papers reviews dynamical modeling of glacial flow, introduces a second-order finite difference method for solving the SIA PDE, presents a Bayesian hierarchical model involving this numerical solver, and validates the model with analytical solutions to the SIA PDE. The second of these papers generalizes the statistical model of the first paper, probes higher-order random walks for representing model discrepancy, incorporates first-order emulators, and analyzes methods for efficient log-likelihood evaluation. The third of these papers applies the model framework of the first two papers to mass balance and surface elevation data at Langjökull. The major contributions of the thesis are the derivation of a new numerical method for solving the SIA PDE in two spatial dimensions and time, the use of a random walk to represent model discrepancy (i.e., an error-correcting process), efficient methods for log-likelihood evaluation, and the application of spatio-temporal statistical modeling to Langjökull, one of Iceland's main glaciers.
The aim of this thesis is to develop spatio-temporal statistical models for glaciers using hierarchical Bayesian models. The part of the hierarchical Bayesian model describing the underlying process is constructed so that time series from a numerical simulator (i.e., a numerical solution of a partial differential equation, or an approximation of such a solution) are added to a stochastic process whose role is to correct for the difference between the numerical simulator and the true process, closely related to the concept of model discrepancy. This stochastic process corrects for spatially varying deviations and accounts for their accumulation over time. To speed up computation, linear algebra for sparse matrices is used to evaluate the likelihood of the model, and first-order emulators are used to accelerate the computation of numerical solutions of partial differential equations or other systems. In addition, a new computationally efficient approximation of the likelihood is derived. Analytical solutions of the shallow ice approximation, based on the Stokes equations for stresses in glaciers, are used to assess the computational speed and accuracy of the numerical approximations and how well the model fits the data. The models and methodology of the thesis are also applied to real data sets assembled by the Institute of Earth Sciences of the University of Iceland, including mass balance measurements taken twice a year at 22-25 fixed sites on Langjökull over a 19-year period, together with an elevation map at 100 meter resolution. A by-product of the construction of the hierarchical Bayesian model is a new finite difference method for numerically solving the partial differential equation of the shallow ice approximation. Although the methods presented here are developed for glaciology, they can be adapted to other geophysical data and corresponding models. The thesis is based on three scientific papers. The first paper reviews models developed to describe glacier flow, introduces a second-order finite difference method for numerically solving the shallow ice approximation PDE, presents a hierarchical Bayesian model that uses the numerical solution, and compares estimates from the hierarchical Bayesian model with analytical solutions of the shallow ice approximation. The second paper elaborates on the statistics and computation of the hierarchical Bayesian model: random walks of order higher than one are examined as stochastic models for the difference between the numerical simulator and reality, the use of first-order emulators is demonstrated, and a new computationally efficient approximation of the likelihood is introduced. The third paper applies the methodology of the first two papers to mass balance and elevation data from Langjökull. The contributions of the thesis are: (i) a new second-order finite difference method for numerically solving the shallow ice approximation PDE in two spatial dimensions and time, (ii) the use of a random walk to describe the difference between the numerical simulator and the true process, (iii) a new computationally efficient approximation of the likelihood, (iv) the application of a new spatio-temporal statistical model for glaciers to the analysis of data from Langjökull, one of the largest glaciers in Iceland. Icelandic Research Fund
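The error-correcting random-walk discrepancy described above can be illustrated with a small Monte Carlo simulation. This is a generic sketch, not the thesis code: a first-order random walk accumulates simulator error forward in time, so its variance grows linearly with the number of time steps.

```python
import numpy as np

# Illustrative sketch (not the thesis code) of an error-correcting process:
# the discrepancy between simulator and truth is modeled as a first-order
# random walk, delta_t = delta_{t-1} + w_t, so simulator inaccuracies
# accumulate and Var(delta_t) = t * sigma_w^2 grows linearly in time.
rng = np.random.default_rng(1)
T, n_rep, sigma_w = 200, 4000, 0.1

steps = rng.normal(0.0, sigma_w, size=(n_rep, T))
discrepancy = np.cumsum(steps, axis=1)        # RW(1) paths, one per replicate

var_t = discrepancy.var(axis=0)               # Monte Carlo variance at each time
expected = sigma_w ** 2 * np.arange(1, T + 1)
assert np.allclose(var_t, expected, rtol=0.15)
```

In the hierarchical model this process is added to the simulator output at the process level; the higher-order random walks probed in the second paper generalize the same accumulation idea.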

    Tomography applied to Lamb wave contact scanning nondestructive evaluation

    The aging worldwide aviation fleet requires methods for accurately predicting the presence of structural flaws that compromise airworthiness in aircraft structures. Nondestructive Evaluation (NDE) provides the means to assess these structures quickly, quantitatively, and noninvasively. Ultrasonic guided waves, Lamb waves, are useful for evaluating the plate and shell structures common in aerospace applications. The amplitude and time-of-flight of Lamb waves depend on the material properties and thickness of a medium, and so they can be used to detect any areas of differing thickness or material properties which indicate flaws. By scanning sending and receiving transducers over an aircraft, large sections can be evaluated after a single pass. However, while this technique enables the detection of areas of structural deterioration, it does not allow for the quantification of the extent of that deterioration. Tomographic reconstruction with Lamb waves allows for the accurate reconstruction of the variation of quantities of interest, such as thickness, throughout the investigated region, and it presents the data as a quantitative map. The location, shape, and extent of any flaw region can then be easily extracted from this tomographic image. Two Lamb wave tomography techniques, Parallel Projection Tomography (PPT) and Cross Borehole Tomography (CBT), are shown to accurately reconstruct flaws of interest to the aircraft industry. A comparison of the quality of reconstruction and practicality is then made between these two methods, and their limitations are discussed and shown experimentally. Higher-order plate theory is used to derive analytical solutions for the scattering of the lowest order symmetric Lamb wave from a circular inclusion, and these solutions are used to explain the scattering effects seen in the tomographic reconstructions. 
Finally, the means by which this scattering theory can be used to develop Lamb wave tomographic algorithms that are more generally applicable in the field is presented
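The reconstruction idea can be sketched with a toy ART-style solver: straight rays accumulate travel time in proportion to the local slowness (1/velocity), and each ray's time residual is spread back along its path. The grid geometry, ray pattern, and slowness values below are invented for illustration; this is not the dissertation's PPT or CBT code.

```python
import numpy as np

# Illustrative sketch (not the dissertation's PPT/CBT code): time-of-flight
# along a straight ray is the sum of cell slownesses it crosses; an iterative
# ART-style update distributes each ray's residual evenly over its cells.
# Rays here are simply the rows and columns of the grid.
n = 8
true_slowness = np.full((n, n), 1.0)
true_slowness[3:5, 4:6] = 1.5            # a "flaw": a locally slower region

rays = [("row", i) for i in range(n)] + [("col", j) for j in range(n)]

def project(s, ray):
    """Travel time of one ray: sum of slowness along a row or column."""
    kind, k = ray
    return s[k, :].sum() if kind == "row" else s[:, k].sum()

measured = np.array([project(true_slowness, r) for r in rays])

recon = np.ones((n, n))                  # start from the nominal plate slowness
for _ in range(50):                      # ART sweeps over all rays
    for ray, t in zip(rays, measured):
        kind, k = ray
        residual = (t - project(recon, ray)) / n
        if kind == "row":
            recon[k, :] += residual
        else:
            recon[:, k] += residual

recon_proj = np.array([project(recon, r) for r in rays])
assert np.allclose(recon_proj, measured, atol=1e-6)
```

With only row and column rays the map itself is underdetermined, so the check verifies consistency with the measured projections rather than exact recovery of the flaw; denser fan or crosshole ray coverage, as in the PPT and CBT geometries, is what localizes the flaw in practice.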