
    EEG Based Inference of Spatio-Temporal Brain Dynamics


    Data Assimilation in high resolution Numerical Weather Prediction models to improve forecast skill of extreme hydrometeorological events.

    The complex orography typical of the Mediterranean area supports the formation, mainly during the fall season, of so-called back-building Mesoscale Convective Systems (MCSs), which produce torrential rainfall often resulting in flash floods. These events are hardly predictable from a hydrometeorological standpoint and can cause fatalities and significant socio-economic damage. The Liguria region is characterized by small catchments with very short hydrological response times and has proven to be highly exposed to back-building MCS occurrence. Indeed, between 2011 and 2014 this region was hit by three intense back-building MCSs, causing a total death toll of 20 people and several hundred million euros of damage. Building on the known relationship between significant lightning activity, deep convection, and precipitation, the first part of this work assesses the performance of the Lightning Potential Index, a measure of the potential for the charge generation and separation that leads to lightning occurrence in clouds, for the back-building Mesoscale Convective System that hit the city of Genoa (Italy) in 2014. An ensemble of Weather Research and Forecasting (WRF) simulations at cloud-permitting grid spacing (1 km) with different microphysical parameterizations is performed and compared to the available radar and lightning observations. The results provide a deeper understanding of the role of lightning phenomena in the predictability of back-building Mesoscale Convective Systems, which often produce flash floods over the complex topography of the western Mediterranean. Despite these promising outcomes for the understanding of high-impact MCSs, the main forecasting issue, namely the uncertainty in correctly reproducing the convective field (location, timing, and intensity) for these events, remains open.
Thus, the second part of the work assesses the predictive capability, for a set of back-building Liguria MCS episodes (including Genoa 2014), of a hydrometeorological forecasting chain composed of a km-scale cloud-resolving WRF model, including 6-hour cycling 3DVAR assimilation of radar reflectivity and conventional ground-sensor data, the Rainfall Filtered Autoregressive Model (RainFARM), and the fully distributed hydrological model Continuum. A rich portfolio of WRF 3DVAR direct and indirect reflectivity operators has been explored to drive the meteorological component of the proposed forecasting chain. The results confirm the importance of rapidly refreshing, data-intensive 3DVAR for improving first the quantitative precipitation forecast and, subsequently, the prediction of flash-flood occurrence in the case of back-building MCS events. The third part of this work, devoted to improving the prediction of severe hydrometeorological events, was undertaken in the framework of the European Space Agency (ESA) STEAM (SaTellite Earth observation for Atmospheric Modelling) project, which aims to investigate new areas of synergy between high-resolution numerical atmosphere models and data from spaceborne remote sensing sensors, with a focus on the Copernicus Sentinel 1, 2, and 3 satellites and Global Positioning System (GPS) stations. In this context, the Copernicus Sentinel satellites represent an important source of data, because they provide a set of high-resolution observations of physical variables (e.g. soil moisture, land/sea surface temperature, wind speed, columnar water vapor) to be used in NWP model runs at cloud-resolving grid spacing. For this project two different use cases are analyzed: the Livorno flash flood of 9 September 2017, with a death toll of 9 people, and the Silvi Marina flood of 15 November 2017.
Overall, the results show an improvement in forecast accuracy when assimilating the Sentinel-1-derived wind and soil moisture products, as well as Zenith Total Delay estimates both from GPS stations and from SAR interferometry applied to Sentinel-1 data.
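The variational update at the heart of such a 3DVAR cycle can be illustrated in its simplest form. The following is a minimal sketch only, assuming a scalar state and a direct (identity) observation operator; it is not the WRF 3DVAR implementation, and the numbers are hypothetical:

```python
def analysis_3dvar_scalar(x_b, y, var_b, var_o):
    """Minimizer of the scalar 3DVAR cost function
    J(x) = (x - x_b)^2 / (2 * var_b) + (y - x)^2 / (2 * var_o),
    i.e. the optimal blend of background x_b and observation y,
    weighted by their error variances."""
    gain = var_b / (var_b + var_o)  # weight given to the observation
    return x_b + gain * (y - x_b)

# Hypothetical example: background columnar water vapor of 20.0 mm,
# a GPS-derived observation of 26.0 mm, background error variance 4.0,
# observation error variance 1.0.
analysis = analysis_3dvar_scalar(20.0, 26.0, 4.0, 1.0)
print(analysis)  # 24.8: the analysis leans toward the more accurate observation
```

In the full system the state, the error covariances, and the (possibly indirect) reflectivity operators are high-dimensional, but the same cost-function trade-off between background and observations drives the analysis.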

    Interpretable Machine Learning for Electroencephalography

    While behavioral, genetic, and psychological markers can provide important information about brain health, research in this area over the last decades has largely focused on imaging devices such as magnetic resonance imaging (MRI) to provide non-invasive information about cognitive processes. Unfortunately, MRI-based approaches, which capture slow changes in blood oxygenation levels, cannot capture electrical brain activity, which plays out on a time scale up to three orders of magnitude faster. Electroencephalography (EEG), which has been available in clinical settings for over 60 years, measures brain activity through rapidly changing electrical potentials recorded non-invasively on the scalp. Compared to MRI-based research into neurodegeneration, EEG-based research has, over the last decade, received much less interest from the machine learning community. Yet EEG in combination with sophisticated machine learning offers great potential, and neglecting this source of information relative to MRI or genetics is not warranted. When collaborating with clinical experts, the ability to link any results provided by machine learning to the existing body of research is especially important, as it ultimately provides an intuitive, or interpretable, understanding. Here, interpretable means the possibility for medical experts to translate the insights provided by a statistical model into a working hypothesis relating to brain function. To this end, our first contribution proposes a method for ultra-sparse regression, applied to EEG data in order to identify a small subset of important diagnostic markers highlighting the main differences between healthy brains and brains affected by Parkinson's disease. Our second contribution builds on the idea that in Parkinson's disease impaired functioning of the thalamus causes changes in the complexity of the EEG waveforms.
The thalamus is a small region in the center of the brain affected early in the course of the disease. Furthermore, it is believed that the thalamus functions as a pacemaker, akin to the conductor of an orchestra, such that changes in complexity are expressed in, and quantifiable from, the EEG. We use these changes in complexity to show their association with future cognitive decline. In our third contribution we propose an extension of archetypal analysis embedded into a deep neural network. This generative version of archetypal analysis learns a representation in which every sample of a data set can be decomposed into a weighted sum of extreme representatives, the so-called archetypes. This opens up the interesting possibility of interpreting a data set relative to its most extreme representatives, whereas clustering algorithms describe a data set relative to its most average representatives. For Parkinson's disease, we show, based on deep archetypal analysis, that healthy brains produce archetypes different from those produced by brains affected by neurodegeneration.
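The core decomposition behind archetypal analysis can be sketched in a few lines. This is only an illustration of the reconstruction step with made-up 2-D archetypes, not the deep generative model proposed in the thesis:

```python
def reconstruct_from_archetypes(archetypes, weights):
    """Express a sample as a convex combination (non-negative weights
    summing to one) of extreme representatives, the archetypes."""
    assert all(w >= 0 for w in weights) and abs(sum(weights) - 1.0) < 1e-9
    dim = len(archetypes[0])
    return [sum(w * a[i] for w, a in zip(weights, archetypes))
            for i in range(dim)]

# Hypothetical 2-D archetypes at the corners of the data; a sample is
# described by its mixing weights rather than by distance to a cluster mean.
archetypes = [[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]]
sample = reconstruct_from_archetypes(archetypes, [0.5, 0.25, 0.25])
print(sample)  # [1.0, 1.0]
```

The interpretability claim rests on this form: each weight says how strongly a subject's EEG resembles one of the extreme prototypes, in contrast to cluster assignments relative to average prototypes.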

    Denoising and enhancement of digital images : variational methods, integrodifferential equations, and wavelets

    The topics of this thesis are methods for the denoising, enhancement, and simplification of digital image data. Special emphasis lies on the relations and structural similarities between several classes of methods motivated by different contexts. In particular, the methods treated in this thesis fall into three classes. For variational approaches and partial differential equations, the notion of the derivative is the tool of choice for modelling the regularity of the data and of the desired result. A general framework for such approaches is proposed that involves all partial derivatives of a prescribed order and, experimentally, is capable of producing piecewise polynomial approximations of the given data. The second class of methods uses wavelets to represent the data, which makes it possible to understand the filtering as a very simple pointwise application of a nonlinear function. Viewing these wavelets as derivatives of smoothing kernels is the basis for relating these methods to the integrodifferential equations investigated here. In the third case, values of the image in a neighbourhood are averaged, with weights that can be adapted according to different criteria. By refining the pixel grid and passing to scaling limits, connections to partial differential equations become visible here too; they are described within the framework introduced before. Numerical aspects of the simplification of images are presented with respect to the NDS energy function, a unifying approach that allows many of the aforementioned methods to be modelled. The behaviour of the filtering methods is documented with numerical examples.
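The idea that wavelet filtering amounts to a simple pointwise nonlinearity on the coefficients can be made concrete with a one-level Haar transform and soft shrinkage. This is a minimal 1-D sketch under that assumption, not one of the thesis's actual schemes:

```python
SQRT2 = 2 ** 0.5

def haar_step(signal):
    """One level of the orthonormal Haar transform: pairwise
    averages (approximation) and differences (detail)."""
    approx = [(a + b) / SQRT2 for a, b in zip(signal[::2], signal[1::2])]
    detail = [(a - b) / SQRT2 for a, b in zip(signal[::2], signal[1::2])]
    return approx, detail

def inverse_haar_step(approx, detail):
    """Exact inverse of haar_step."""
    out = []
    for a, d in zip(approx, detail):
        out.extend([(a + d) / SQRT2, (a - d) / SQRT2])
    return out

def soft_shrink(x, lam):
    """The pointwise nonlinearity: coefficients smaller than lam
    are set exactly to zero, larger ones are shrunk toward zero."""
    return max(abs(x) - lam, 0.0) * (1.0 if x >= 0 else -1.0)

def denoise(signal, lam):
    """Transform, shrink only the detail coefficients, transform back."""
    approx, detail = haar_step(signal)
    return inverse_haar_step(approx, [soft_shrink(d, lam) for d in detail])
```

Shrinking only the detail coefficients suppresses small, noise-like oscillations while leaving the local averages intact; with `lam = 0` the signal is reconstructed exactly.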

    Wavelet Theory

    The wavelet is a powerful mathematical tool that plays an important role in science and technology. This book looks at some of the most creative and popular applications of wavelets, including biomedical signal processing, image processing, communication signal processing, the Internet of Things (IoT), acoustical signal processing, financial market data analysis, energy and power management, and COVID-19 pandemic measurements and calculations. The editor's personal interests are the application of the wavelet transform to identify time-domain changes in signals and their corresponding frequency components, and the improvement of power amplifier behavior.
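The use of wavelets to localize time-domain changes mentioned above can be illustrated with Haar detail coefficients, which are large exactly where a signal jumps. This sketch and its step signal are illustrative assumptions, not an example from the book:

```python
def haar_details(signal):
    """Pairwise Haar differences: each coefficient measures the local
    change between two adjacent samples."""
    s = 2 ** 0.5
    return [(a - b) / s for a, b in zip(signal[::2], signal[1::2])]

# Hypothetical signal with a step change falling inside the fourth
# sample pair (samples 6 and 7).
signal = [0.0] * 7 + [1.0] * 9
details = haar_details(signal)
change_at = max(range(len(details)), key=lambda i: abs(details[i]))
print(change_at)  # 3: the pair straddling the jump
```

Because the detail coefficients are indexed in time, the position of the largest coefficient localizes the change, which is the property a Fourier transform alone does not provide.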