
    On the potential of empirical mode decomposition for RFI mitigation in microwave radiometry

    Radio-frequency interference (RFI) is an increasing problem, particularly for Earth observation using microwave radiometry. RFI has been observed, for example, at L-band by the European Space Agency's (ESA's) Soil Moisture and Ocean Salinity (SMOS) Earth Explorer and by the National Aeronautics and Space Administration's (NASA's) Soil Moisture Active Passive (SMAP) and Aquarius missions, as well as at C-band by the Advanced Microwave Scanning Radiometer (AMSR)-E and AMSR-2, and at 10.7 and 18.7 GHz by AMSR-E, AMSR-2, WindSat, and the GPM Microwave Imager (GMI). Systems dedicated to interference detection and removal of contaminated measurements are therefore now a must to improve radiometric accuracy and reduce the loss of spatial coverage caused by interference. In this work, the feasibility of using the empirical mode decomposition (EMD) technique for RFI mitigation is explored. EMD, the core step of the Hilbert–Huang transform (HHT), is an algorithm that decomposes a signal into intrinsic mode functions (IMFs). The achieved performance is analyzed, and the opportunities and caveats of this type of method are described. EMD is found to be a practical RFI mitigation method, albeit one with some limitations and considerable complexity. Nevertheless, under some conditions EMD performs better than other commonly used methods, such as frequency binning. In particular, EMD has been found to perform well for RFI affecting the lower <25% of the intermediate frequency (IF) bandwidth. This work was supported in part by the Sensing With Pioneering Opportunistic Techniques (SPOT) project under Grant RTI2018-099008-B-C21/AEI/10.13039/501100011033, in part by Grant RYC-2016-20918 (MCIN/AEI/10.13039/501100011033), and in part by the European Social Fund (ESF), "Investing in your future."
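
    A minimal sketch of the decomposition step, assuming the open-source PyEMD package; the paper's own processing chain is not reproduced here, and the rule of dropping the two fastest IMFs is purely an illustrative placeholder, not the paper's selection criterion:

        import numpy as np
        from PyEMD import EMD

        t = np.linspace(0, 1, 2048)
        geo = np.sin(2 * np.pi * 5 * t)            # slowly varying geophysical signal
        rfi = 0.5 * np.sin(2 * np.pi * 400 * t)    # narrowband interferer (assumed)
        x = geo + rfi

        imfs = EMD()(x)                            # rows: IMFs, highest frequency first

        # Crude mitigation: drop the two fastest modes (assumed to carry the RFI)
        # and rebuild the signal from the remaining rows.
        mitigated = imfs[2:].sum(axis=0)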

    Data-Driven Forecasting of High-Dimensional Chaotic Systems with Long Short-Term Memory Networks

    We introduce a data-driven forecasting method for high-dimensional chaotic systems using long short-term memory (LSTM) recurrent neural networks. The proposed LSTM networks perform inference of high-dimensional dynamical systems in their reduced-order space and are shown to be an effective set of nonlinear approximators of their attractor. We demonstrate the forecasting performance of the LSTM and compare it with Gaussian processes (GPs) on time series obtained from the Lorenz 96 system, the Kuramoto-Sivashinsky equation, and a prototype climate model. The LSTM networks outperform the GPs in short-term forecasting accuracy in all applications considered. A hybrid architecture, extending the LSTM with a mean stochastic model (MSM-LSTM), is proposed to ensure convergence to the invariant measure. This novel hybrid method is fully data-driven and extends the forecasting capabilities of LSTM networks. Comment: 31 pages
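
    The general recipe, sliding-window next-step prediction in a reduced-order space, can be sketched as follows, assuming PyTorch; the layer sizes, window length, and random stand-in trajectory are placeholders, not the authors' configuration:

        import torch
        import torch.nn as nn

        class Forecaster(nn.Module):
            """LSTM mapping a window of reduced-order states to the next state."""
            def __init__(self, dim, hidden=64):
                super().__init__()
                self.lstm = nn.LSTM(dim, hidden, batch_first=True)
                self.head = nn.Linear(hidden, dim)

            def forward(self, x):                 # x: (batch, window, dim)
                out, _ = self.lstm(x)
                return self.head(out[:, -1])      # predict the next state

        dim, window = 8, 32
        model = Forecaster(dim)
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        series = torch.randn(1000, dim)           # stand-in for a reduced-order trajectory
        for step in range(100):
            idx = torch.randint(0, len(series) - window, (16,))
            x = torch.stack([series[j:j + window] for j in idx.tolist()])
            y = torch.stack([series[j + window] for j in idx.tolist()])
            loss = nn.functional.mse_loss(model(x), y)
            opt.zero_grad()
            loss.backward()
            opt.step()

    At prediction time the trained model is iterated on its own outputs to produce a multi-step forecast, which is where the short-term accuracy comparison against GPs applies.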

    Resilience for large ensemble computations

    With the increasing power of supercomputers, ever more detailed models of physical systems can be simulated, and ever larger problem sizes can be considered for any kind of numerical system. During the last twenty years the performance of the fastest clusters went from the teraFLOPS domain (ASCI Red: 2.3 teraFLOPS) to the pre-exaFLOPS domain (Fugaku: 442 petaFLOPS), and we will soon have the first supercomputer with a peak performance cracking the exaFLOPS barrier (El Capitan: 1.5 exaFLOPS). Ensemble techniques are experiencing a renaissance with the availability of these extreme scales, and recent techniques such as particle filters will especially benefit from them. Current ensemble methods in climate science, such as ensemble Kalman filters, exhibit a linear dependency between the problem size and the ensemble size, while particle filters show an exponential dependency. Nevertheless, with the prospect of massive computing power come challenges such as power consumption and fault tolerance. The mean time between failures shrinks with the number of components in the system, and failures are expected every few hours at exascale. In this thesis, we explore and develop techniques to protect large ensemble computations from failures. We present novel approaches in differential checkpointing, elastic recovery, fully asynchronous checkpointing, and checkpoint compression. Furthermore, we design and implement a fault-tolerant particle filter with pre-emptive particle prefetching and caching. Finally, we design and implement a framework for the automatic validation and application of lossy compression in ensemble data assimilation. Altogether, we present five contributions in this thesis: the first two improve state-of-the-art checkpointing techniques, and the last three address the resilience of ensemble computations. The contributions are stand-alone fault-tolerance techniques; however, they can also be combined to improve one another's properties. For instance, we utilize elastic recovery (2nd contribution) to improve resilience in an online ensemble data assimilation framework (3rd contribution), and we build our validation framework (5th contribution) on top of our particle filter implementation (4th contribution). We further demonstrate that our contributions improve resilience and performance with experiments on various architectures such as Intel, IBM, and ARM processors.
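
    The thesis builds on production checkpointing libraries; the toy sketch below only illustrates the idea behind differential checkpointing (hash fixed-size blocks of state, persist only the dirty ones), and every name in it is hypothetical:

        import hashlib

        class DiffCheckpointer:
            """Write a checkpoint as the set of blocks whose content changed."""
            def __init__(self, block_size=4096):
                self.block_size = block_size
                self.hashes = {}                    # block offset -> last written hash

            def checkpoint(self, data: bytes, store: dict):
                for off in range(0, len(data), self.block_size):
                    block = data[off:off + self.block_size]
                    h = hashlib.sha256(block).digest()
                    if self.hashes.get(off) != h:   # block is new or dirty
                        store[off] = block          # persist only the delta
                        self.hashes[off] = h

        ckpt, store = DiffCheckpointer(), {}
        state = bytearray(10 * 4096)
        ckpt.checkpoint(bytes(state), store)        # first checkpoint: all 10 blocks
        state[0] = 1
        ckpt.checkpoint(bytes(state), store)        # second: only block 0 rewritten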

    Data Assimilation for Wildland Fires: Ensemble Kalman filters in coupled atmosphere-surface models

    Two wildland fire models are described, one based on reaction-diffusion-convection partial differential equations, and one based on semi-empirical fire spread by the level set method. The level set method model is coupled with the Weather Research and Forecasting (WRF) atmospheric model. The regularized and the morphing ensemble Kalman filter are used for data assimilation. Comment: Minor revision, except description of the model expanded. 29 pages, 9 figures, 53 references
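
    For reference, the textbook stochastic EnKF analysis step that the regularized and morphing variants build on can be written in a few lines of NumPy; the observation operator, covariances, and ensemble here are generic placeholders, not the paper's fire-model setup:

        import numpy as np

        def enkf_update(X, y, H, R, rng):
            """One stochastic EnKF analysis step.
            X: (n, N) forecast ensemble, y: (m,) observation,
            H: (m, n) observation operator, R: (m, m) obs-error covariance."""
            N = X.shape[1]
            A = X - X.mean(axis=1, keepdims=True)    # ensemble anomalies
            HA = H @ A
            P_yy = HA @ HA.T / (N - 1) + R           # innovation covariance
            P_xy = A @ HA.T / (N - 1)                # state-obs cross covariance
            K = np.linalg.solve(P_yy, P_xy.T).T      # Kalman gain (P_yy symmetric)
            Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, N).T
            return X + K @ (Y - H @ X)               # analysis ensemble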

    The Challenge of Machine Learning in Space Weather Nowcasting and Forecasting

    The numerous recent breakthroughs in machine learning (ML) make it imperative to carefully ponder how the scientific community can benefit from a technology that, although not necessarily new, is today in its golden age. This Grand Challenge review paper is focused on the present and future role of machine learning in space weather. The purpose is twofold. On the one hand, we discuss previous works that use ML for space weather forecasting, focusing in particular on the few areas that have seen the most activity: the forecasting of geomagnetic indices, of relativistic electrons at geosynchronous orbit, of solar flare occurrence, of coronal mass ejection propagation time, and of solar wind speed. On the other hand, this paper serves as a gentle introduction to the field of machine learning tailored to the space weather community and as a pointer to a number of open challenges that we believe the community should undertake in the next decade. The recurring themes throughout the review are the need to shift our forecasting paradigm to a probabilistic approach focused on the reliable assessment of uncertainties, and the combination of physics-based and machine learning approaches, known as the gray-box approach. Comment: under review
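
    As one concrete instance of the probabilistic paradigm the review advocates, the continuous ranked probability score (CRPS) grades a forecast on both sharpness and calibration; for a Gaussian forecast it has a closed form. The geomagnetic-index-like numbers below are invented for illustration:

        import numpy as np
        from scipy.stats import norm

        def crps_gaussian(mu, sigma, x):
            """Closed-form CRPS of a Gaussian forecast N(mu, sigma^2) against the
            observed value x; lower is better."""
            z = (x - mu) / sigma
            return sigma * (z * (2 * norm.cdf(z) - 1)
                            + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi))

        crps_gaussian(-50.0, 5.0, -80.0)    # sharp but overconfident: ~27.2
        crps_gaussian(-50.0, 30.0, -80.0)   # wider but better calibrated: ~18.1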

    New Tools for Decomposition of Sea Floor Pressure Data - A Practical Comparison of Modern and Classical Approaches

    In recent years, more and more long-term broadband data sets have been collected in the geosciences. There is therefore an urgent need for algorithms that semi-automatically analyse and decompose these data into separate periods associated with different processes. The Fourier and wavelet transforms are often used to decompose the data into short- and long-period effects, but they frequently fail because of their simplicity. In this paper we investigate the novel approaches of Empirical Mode Decomposition and Sparse Decomposition for long-term sea floor pressure data analysis and compare them with the classical ones. Our results indicate that none of the methods fulfils all the requirements, but Sparse Decomposition performed best in every respect except computing efficiency. Comment: 20 pages, 2 tables, 7 figures
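
    To illustrate what a sparse decomposition of a pressure-like record looks like, the sketch below uses a DCT dictionary and scikit-learn's orthogonal matching pursuit; the synthetic signal and the dictionary choice are assumptions, not the paper's setup:

        import numpy as np
        from scipy.fft import idct
        from sklearn.linear_model import OrthogonalMatchingPursuit

        n = 512
        t = np.arange(n)
        # Synthetic record: a slow tide-like oscillation plus an offset jump
        x = np.sin(2 * np.pi * t / 128.0) + 0.3 * (t > 300)

        D = idct(np.eye(n), axis=0, norm="ortho")   # columns are DCT atoms
        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=10, fit_intercept=False)
        omp.fit(D, x)
        approx = D @ omp.coef_                      # sparse 10-atom reconstruction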