
    A statistical approach to the inverse problem in magnetoencephalography

    Magnetoencephalography (MEG) is an imaging technique used to measure the magnetic field outside the human head produced by the electrical activity inside the brain. The MEG inverse problem, identifying the location of the electrical sources from the magnetic signal measurements, is ill-posed: there are infinitely many mathematically correct solutions. Common source localization methods assume the source does not vary with time and do not provide estimates of the variability of the fitted model. Here, we reformulate the MEG inverse problem by considering time-varying locations for the sources and their electrical moments, and we model their time evolution using a state space model. Based on our predictive model, we investigate the inverse problem by finding the posterior source distribution given the multiple channels of observations at each time, rather than fitting fixed source parameters. Our new model is more realistic than common models and allows us to estimate the variation of the source strength, orientation and position over time. We propose two new Monte Carlo methods based on sequential importance sampling. Unlike the usual MCMC sampling scheme, our new methods work in this situation without the need to tune a high-dimensional transition kernel, which would be very costly. The dimensionality of the unknown parameters is extremely large and the size of the data is even larger. We use Parallel Virtual Machine (PVM) to speed up the computation.
    Comment: Published at http://dx.doi.org/10.1214/14-AOAS716 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org).
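    The state-space reformulation described above lends itself to particle methods. Below is a minimal, self-contained sketch of a bootstrap filter built on sequential importance sampling for a generic linear-Gaussian state-space model; the state dimension, sensor count, noise levels, and the random matrix standing in for the MEG forward model are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not the paper's setup): a 6-dimensional
# source state observed through 32 sensor channels over 50 time steps.
DIM_STATE, DIM_OBS, N_PARTICLES, T = 6, 32, 2000, 50
H = rng.normal(size=(DIM_OBS, DIM_STATE))   # linear stand-in for the MEG forward model
PROC_STD, OBS_STD = 0.1, 0.5                # process / observation noise levels

def simulate():
    """Synthetic random-walk source trajectory and noisy multichannel data."""
    x = rng.normal(size=DIM_STATE)
    xs, ys = [], []
    for _ in range(T):
        x = x + PROC_STD * rng.normal(size=DIM_STATE)
        xs.append(x)
        ys.append(H @ x + OBS_STD * rng.normal(size=DIM_OBS))
    return np.array(xs), np.array(ys)

def particle_filter(ys):
    """Bootstrap filter: propagate particles, reweight by likelihood, resample."""
    particles = rng.normal(size=(N_PARTICLES, DIM_STATE))
    means = []
    for y in ys:
        # Propagate every particle through the state-transition model.
        particles = particles + PROC_STD * rng.normal(size=particles.shape)
        # Importance weights from the Gaussian observation likelihood.
        resid = particles @ H.T - y
        log_w = -0.5 * np.sum((resid / OBS_STD) ** 2, axis=1)
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        # Posterior-mean estimate of the source state at this time step.
        means.append(w @ particles)
        # Multinomial resampling to counter weight degeneracy.
        particles = particles[rng.choice(N_PARTICLES, size=N_PARTICLES, p=w)]
    return np.array(means)

truth, data = simulate()
estimate = particle_filter(data)
print("mean absolute tracking error:", np.mean(np.abs(estimate - truth)))
```

    The paper's methods additionally handle the nonlinear dipole forward model and parallelize the computation with PVM; the sketch only shows the propagate, reweight, resample cycle on which sequential importance sampling is built.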

    EEG Based Inference of Spatio-Temporal Brain Dynamics


    Functional Brain Imaging by EEG: A Window to the Human Mind


    Approximate inference in astronomy

    This thesis utilizes the rules of probability theory and Bayesian reasoning to perform inference about astrophysical quantities from observational data, with a main focus on the inference of dynamical systems extended in space and time. The assumptions necessary to successfully solve such inference problems in practice are discussed and the resulting methods are applied to real-world data. These assumptions range from the simplifying prior assumptions that enter the inference process up to the development of a novel approximation method for the resulting posterior distributions. The prior models developed in this work follow a maximum entropy principle by solely constraining those physical properties of a system that appear most relevant to inference, while remaining uninformative regarding all other properties. To this end, prior models that only constrain the statistically homogeneous space-time correlation structure of a physical observable are developed. The constraints placed on these correlations are based on generic physical principles, which makes the resulting models quite flexible and allows for a wide range of applications. This flexibility is verified and explored using multiple numerical examples, as well as an application to data provided by the Event Horizon Telescope about the center of the galaxy M87. Furthermore, as an advanced and extended form of application, a variant of these priors is utilized within the context of simulating partial differential equations. Here, the prior is used to quantify the physical plausibility of an associated numerical solution, which in turn improves the accuracy of the simulation. The applicability and implications of this probabilistic approach to simulation are discussed and studied using numerical examples. Finally, utilizing such prior models paired with the vast amount of observational data provided by modern telescopes results in Bayesian inference problems that are typically too complex to be fully solvable analytically. Specifically, most resulting posterior probability distributions become too complex and therefore require a numerical approximation via a simplified distribution. To improve upon existing methods, this work proposes a novel approximation method for posterior probability distributions: the geometric Variational Inference (geoVI) method. The approximation capabilities of geoVI are theoretically established and demonstrated using numerous numerical examples. These results suggest a broad range of applicability, as the method provides a decrease in approximation errors compared to state-of-the-art methods at a moderate level of computational cost.
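    As a rough illustration of the kind of prior described above, the following sketch draws samples of a statistically homogeneous random field on a periodic one-dimensional grid: homogeneity makes the prior covariance diagonal in Fourier space, so a sample is white noise coloured by the square root of a power spectrum. The grid size and the power-law spectrum are arbitrary assumptions; the thesis' actual space-time models and the geoVI approximation itself are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative setup (assumptions): a periodic 1D grid of 512 pixels and a
# power-law power spectrum P(k) = (1 + k^2)^(-2) defining the correlations.
n = 512
k = np.fft.rfftfreq(n, d=1.0 / n)            # harmonic modes of the grid
power = (1.0 + k**2) ** -2.0

def sample_homogeneous_field():
    """Draw one field whose statistics are invariant under translations.

    Statistical homogeneity makes the prior covariance diagonal in Fourier
    space, so a sample is white noise coloured by sqrt(P(k))."""
    xi = rng.normal(size=k.size) + 1j * rng.normal(size=k.size)
    return np.fft.irfft(np.sqrt(power) * xi, n=n)

samples = np.array([sample_homogeneous_field() for _ in range(500)])

# Empirical two-point function: it depends (approximately) only on the lag,
# which is the defining property such a prior encodes.
lag = 10
corr_at_lag = np.mean(samples * np.roll(samples, lag, axis=1))
print("variance:", samples.var(), "  correlation at lag 10:", corr_at_lag)
```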

    Advanced methods for earth observation data synergy for geophysical parameter retrieval

    The first part of the thesis focuses on the analysis of relevant factors to estimate the response time between satellite-based and in-situ soil moisture (SM) using Dynamic Time Warping (DTW). DTW was applied to the SMOS L4 SM product and compared to in-situ root-zone SM in the REMEDHUS network in Western Spain. The method was customized to track the evolution of the time lag during wetting and drying conditions. Climate factors in combination with crop growing seasons were studied to reveal SM-related processes. The heterogeneity of land use was analyzed using high-resolution NDVI images from Sentinel-2 to provide information about the level of spatial representativity of SMOS data at each in-situ station. The comparison of long-term precipitation records and potential evapotranspiration allowed estimation of SM seasons describing different SM conditions depending on climate and soil properties. The second part of the thesis focuses on data-driven methods for sea ice segmentation and parameter retrieval. A Bayesian framework is employed to segment sets of multi-source satellite data. The Bayesian unsupervised learning algorithm makes it possible to investigate the 'hidden link' between multiple data sources. The statistical properties are accounted for by a Gaussian Mixture Model, and the spatial interactions are captured using Hidden Markov Random Fields. The algorithm segments spatial data into a number of classes, which are represented as a latent field in physical space and as clusters in feature space. In a first application, a two-step probabilistic approach based on Expectation-Maximization and the Bayesian segmentation algorithm was used to segment SAR images to discriminate surface water from sea ice types. Information on surface roughness is contained in the radar backscattering images, which can, in principle, be used to detect melt ponds and to estimate high-resolution sea ice concentration (SIC). In a second study, the algorithm was applied to multi-incidence-angle TB data from the SMOS L1C product to harness its sensitivity to thin ice. The spatial patterns clearly discriminate well-determined areas of open water, old sea ice, and a transition zone that is sensitive to thin sea ice thickness (SIT) and SIC. In a third application, SMOS and AMSR2 data are used to examine the joint effect of CIMR-like observations. The information contained in the low-frequency channels reveals ranges of thin sea ice, and thicker ice can be determined from the relationship between the high-frequency channels and changing conditions as the sea ice ages. The proposed approach is suitable for merging large data sets, provides metrics for class analysis, and supports informed choices about integrating data from future missions into sea ice products. A regression neural network approach was investigated with the goal of inferring SIT using TB data from the Flexible Microwave Payload 2 (FMPL-2) of the FSSCat mission. Two models, covering thin ice up to 0.6 m and the full range of SIT, were trained on Arctic data using ground truth derived from SMOS and CryoSat-2. This work demonstrates that moderate-cost CubeSat missions can provide valuable data for applications in Earth observation.
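    As a rough illustration of the alignment step mentioned at the start of this abstract, the sketch below computes a classic dynamic-programming DTW cost between two one-dimensional series. The toy signals, the absolute-difference local cost, and the unconstrained warping path are assumptions for illustration; the thesis' customized handling of wetting and drying phases is not reproduced.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-programming DTW between two 1D series.

    D[i, j] holds the cost of the best alignment of a[:i] with b[:j]."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Best of: diagonal match, insertion, deletion.
            D[i, j] = cost + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    return D[n, m]

# Toy example: an in-situ-like series that lags a satellite-like series.
t = np.linspace(0, 4 * np.pi, 200)
satellite = np.sin(t)
in_situ = np.sin(t - 0.6) + 0.05 * np.random.default_rng(1).normal(size=t.size)

print("DTW cost (lagged copy):", round(dtw_distance(satellite, in_situ), 2))
print("DTW cost (identical):  ", round(dtw_distance(satellite, satellite), 2))
```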

    Multisource and Multitemporal Data Fusion in Remote Sensing

    The sharp and recent increase in the availability of data captured by different sensors, combined with their considerably heterogeneous natures, poses a serious challenge for the effective and efficient processing of remotely sensed data. Such an increase in remote sensing and ancillary datasets, however, opens up the possibility of utilizing multimodal datasets jointly to further improve the performance of processing approaches with respect to the application at hand. Multisource data fusion has therefore received enormous attention from researchers worldwide for a wide variety of applications. Moreover, thanks to the revisit capability of several spaceborne sensors, the temporal information can be integrated with the spatial and/or spectral/backscattering information of the remotely sensed data, moving from a representation of 2D/3D data to 4D data structures, where the time variable adds new information as well as new challenges for information extraction algorithms. There is a huge number of research works dedicated to multisource and multitemporal data fusion, but the methods for fusing different modalities have evolved along different paths within each research community. This paper brings together the advances of multisource and multitemporal data fusion approaches across different research communities and provides a thorough, discipline-specific starting point for researchers at different levels (i.e., students, researchers, and senior researchers) willing to conduct novel investigations on this challenging topic by supplying sufficient detail and references.
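    As a small, purely illustrative sketch of the 2D/3D-to-4D shift described above, the snippet below stacks co-registered scenes from two hypothetical sensors acquired on several revisit dates into one data cube indexed by (time, feature, y, x); the sensor names, band counts, grid size, and random data are all placeholders.

```python
import numpy as np

rng = np.random.default_rng(7)

# Placeholder setup: two sensors with different band counts, re-gridded to a
# common 100 x 100 raster and acquired on 12 revisit dates.
dates, height, width = 12, 100, 100
optical = rng.random((dates, 4, height, width))   # e.g. 4 optical bands
radar = rng.random((dates, 2, height, width))     # e.g. 2 backscatter channels

# Fusion at the data level: one 4D cube indexed by (time, feature, y, x),
# so the time axis is directly available to information-extraction algorithms.
cube = np.concatenate([optical, radar], axis=1)
print(cube.shape)       # (12, 6, 100, 100)

# A per-pixel feature matrix for a classical learner: flatten the time and
# feature axes into one feature vector per pixel.
features = cube.transpose(2, 3, 0, 1).reshape(height * width, dates * cube.shape[1])
print(features.shape)   # (10000, 72)
```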

    Advanced Image Acquisition, Processing Techniques and Applications

    "Advanced Image Acquisition, Processing Techniques and Applications" is the first book of a series that provides image processing principles and practical software implementation on a broad range of applications. The book integrates material from leading researchers on Applied Digital Image Acquisition and Processing. An important feature of the book is its emphasis on software tools and scientific computing in order to enhance results and arrive at problem solution