
    Hilbert-Huang Transform: biosignal analysis and practical implementation

    Any system, however trivial, produces signals that can be subjected to data analysis. Over the last 50 years, the influx of new techniques and extensions of older ones has allowed applications in a variety of fields to be analysed and, to some degree, understood. Medicine is one field benefiting from this growth, propelled further by interdisciplinary collaboration. From a signal processing perspective, the challenge lies in the complex and sometimes chaotic nature of the signals measured from the body, such as those from the brain and, to some degree, the heart. This work contributes to the analysis of such systems through a recent time-frequency data analysis method, the Hilbert-Huang Transform (HHT), and extensions to it. The thesis first analyses the state of the art in seizure and heart-arrhythmia detection and prediction methods. We then present a novel real-time implementation of the algorithm in both software and hardware, together with the motivations for doing so. The software implementation encompasses real-time capabilities and identifies the elements that must be considered for practical use; this software is then translated into hardware to aid real-time operation and integration. With these implementations in place, we apply the HHT method to epilepsy (seizures) and additionally contribute to the study of heart arrhythmias and neonatal brain dynamics. The HHT and several additional algorithms are used to quantify features associated with each application for detection and prediction, and the significance of activity is quantified in a way that merges prediction and detection into one framework. Finally, we assess the real-time capabilities of our methods for practical use as a biosignal analysis tool.
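    As a point of reference for readers unfamiliar with the method, the sketch below shows the sifting loop at the heart of the HHT's first stage, the empirical mode decomposition. It is a minimal Python illustration, not the thesis's real-time implementation; the function names and the fixed-sift-count stopping rule are our own simplifications.

        import numpy as np
        from scipy.signal import argrelextrema
        from scipy.interpolate import CubicSpline

        def sift_once(x, t):
            """One sifting pass: subtract the mean of the two extrema envelopes."""
            maxima = argrelextrema(x, np.greater)[0]
            minima = argrelextrema(x, np.less)[0]
            if len(maxima) < 2 or len(minima) < 2:
                return None                        # residual is monotonic: stop
            upper = CubicSpline(t[maxima], x[maxima])(t)   # end effects ignored here
            lower = CubicSpline(t[minima], x[minima])(t)
            return x - (upper + lower) / 2.0

        def emd(x, t, n_imfs=4, n_sifts=10):
            """Peel off up to n_imfs intrinsic mode functions (IMFs)."""
            imfs, residual = [], x.astype(float).copy()
            for _ in range(n_imfs):
                h = residual
                for _ in range(n_sifts):           # fixed sift count as stopping rule
                    h_next = sift_once(h, t)
                    if h_next is None:
                        return imfs, residual
                    h = h_next
                imfs.append(h)
                residual = residual - h
            return imfs, residual

        # Instantaneous frequency of each IMF then follows from the phase of
        # scipy.signal.hilbert(imf), completing the Hilbert-Huang transform.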

    Learning-Based Modeling of Weather and Climate Events Related To El Niño Phenomenon via Differentiable Programming and Empirical Decompositions

    This dissertation is the culmination of applying adaptive, empirical learning-based methods to the study and characterization of the El Niño Southern Oscillation (ENSO). Specifically, it focuses on ENSO's effects on rainfall and drought conditions in two major regions whose climates are strongly linked through their dependence on ENSO: 1) the southern Pacific Coast of the United States and 2) the Nile River Basin. In both regions, drought and rainfall are tied to deep economic and social factors. The principal aim of this dissertation is to establish, with scientific rigor, an epistemological and foundational justification for adaptive learning models and their utility in both modeling and understanding a wide-reaching climate phenomenon such as ENSO, and to explore a scientific justification for their proven predictive accuracy and their value as an aid in deriving a deeper understanding of climate phenomena. Applied to drought forecasting for Southern California, adaptive learning methods forecast the drought severity of the 2015-2016 winter with greater accuracy than established models, and expanding this analysis yields novel ways to analyze and understand the underlying processes driving California drought. The pursuit of adaptive learning as a guiding tool also led to the discovery of significant extractable components of ENSO strength variation, which are used in the analysis of Nile River Basin precipitation and Nile River flow, and in the prediction of Nile River yield to significance p = 0.038. The dissertation explores the duality of modeling and understanding, and discusses why adaptive learning methods are suited to the study of climate phenomena like ENSO in ways that traditional methods are not. The main methods explored are 1) differentiable programming, a means of constructing novel self-learning models in which the meaningfulness of parameters arises as an emergent phenomenon, and 2) empirical decompositions, driven by an adaptive rather than rigid component-extraction principle, explored both as predictive tools and as tools for gaining insight and constructing models.
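    To make the differentiable-programming idea concrete, the toy Python sketch below fits a small parameterized rainfall-anomaly model to a synthetic ENSO index by gradient descent. The data, model form and parameter values are hypothetical, not the dissertation's; in practice an autodiff framework (e.g., JAX or PyTorch) would supply the gradients that are written out by hand here to keep the sketch dependency-free.

        import numpy as np

        rng = np.random.default_rng(0)
        enso = rng.normal(size=200)                        # stand-in ENSO index
        rain = 1.5 * np.tanh(0.8 * enso) + 0.2 + 0.1 * rng.normal(size=200)

        a, w, b, lr = 1.0, 1.0, 0.0, 0.1                   # parameters + step size
        for step in range(500):
            s = np.tanh(w * enso)
            err = a * s + b - rain                         # model residuals
            g = 2.0 * err / err.size                       # dL/dprediction (MSE loss)
            ga = np.sum(g * s)                             # dL/da
            gw = np.sum(g * a * enso * (1.0 - s**2))       # dL/dw via the chain rule
            gb = np.sum(g)                                 # dL/db
            a, w, b = a - lr * ga, w - lr * gw, b - lr * gb
        print(a, w, b)                                     # should approach ~1.5, 0.8, 0.2

    The point of the exercise is that every parameter is learned by differentiating through the program itself, which is what lets such models scale to far richer climate parameterizations than this toy regression.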

    A Statistical Perspective of the Empirical Mode Decomposition

    This research focuses on non-stationary basis decomposition methods in time-frequency analysis. Classical methodologies in this field, such as Fourier analysis and wavelet transforms, rely on strong assumptions about the underlying moment-generating process, which may not be valid in real data scenarios or modern machine learning applications. The literature on non-stationary methods is still in its infancy, and the research in this thesis aims to address challenges arising in this area. Among several alternatives, this work is based on the Empirical Mode Decomposition (EMD), a non-parametric time-series decomposition technique that produces a set of time-series functions, denoted Intrinsic Mode Functions (IMFs), which carry specific statistical properties. The main focus is providing a general and flexible family of basis-extraction methods with minimal requirements compared to Fourier or wavelet techniques. This is important for two main reasons: first, more universal applications can be taken into account; second, the EMD requires very little a priori knowledge of the process it is applied to, and as such can generalise better in statistical applications across a wide array of domains and data types. The contributions of this work deal with several aspects of the decomposition. The first set regards the construction of an IMF: (1) achieving a semi-parametric representation of each basis; (2) extracting such semi-parametric functional forms in a computationally efficient and statistically robust framework. The EMD belongs to the class of path-based decompositions and is therefore often not treated as a stochastic representation; (3) a major contribution involves embedding the deterministic pathwise decomposition framework into a formal stochastic-process setting. One assumption inherent in the EMD construction is that the decomposition is applied to a continuous function, which may not hold in many applications. (4) Various multi-kernel Gaussian process formulations of the EMD are proposed through the introduced stochastic embedding; in particular, two models are proposed, one modelling the temporal mode of oscillation of the EMD and the other capturing instantaneous-frequency locations in specific frequency regions or bandwidths. (5) The construction of the second stochastic embedding is achieved with an optimisation method called the cross-entropy method, for which two formulations are provided and explored. Applications to speech time series, which are non-stationary, are explored to study these methodological extensions.
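    For concreteness, the fragment below checks the two defining IMF properties that a sifting procedure enforces: the number of extrema and of zero crossings differ by at most one, and the mean of the upper and lower envelopes is approximately zero everywhere. It is an illustrative Python sketch under our own tolerance convention, not the thesis's implementation.

        import numpy as np
        from scipy.signal import argrelextrema
        from scipy.interpolate import CubicSpline

        def is_imf(h, t, tol=1e-2):
            """Check the two IMF properties on a candidate component h(t)."""
            maxima = argrelextrema(h, np.greater)[0]
            minima = argrelextrema(h, np.less)[0]
            if len(maxima) < 2 or len(minima) < 2:
                return False                          # too few extrema to test
            zero_crossings = int(np.sum(h[:-1] * h[1:] < 0))
            n_extrema = len(maxima) + len(minima)
            if abs(n_extrema - zero_crossings) > 1:
                return False                          # property (1) violated
            upper = CubicSpline(t[maxima], h[maxima])(t)
            lower = CubicSpline(t[minima], h[minima])(t)
            mean_env = (upper + lower) / 2.0          # should be ~0 for an IMF
            return bool(np.max(np.abs(mean_env)) < tol * np.max(np.abs(h)))

    It is precisely because these properties are enforced pathwise, rather than through a fixed basis, that a stochastic (e.g., Gaussian process) embedding of the decomposition is a non-trivial contribution.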

    BEMDEC: An Adaptive and Robust Methodology for Digital Image Feature Extraction

    The intriguing study of feature extraction, and edge detection in particular, has, as a result of the increased use of imagery, drawn growing attention not just from computer science but from a variety of scientific fields. However, challenges persist in formulating a feature extraction operator, particularly for edges, that satisfies the necessary properties of a low probability of error (i.e., failure to mark true edges), accuracy, and a consistent response to a single edge. Moreover, most work on feature extraction has focused on improving existing approaches rather than devising or adopting new ones. In image processing, where the needs constantly change, we must equally change the way we think: in a digital world where the use of images for a variety of purposes continues to increase, researchers serious about addressing the aforementioned limitations must think outside the box and step away from the usual in order to overcome these challenges. In this dissertation, we propose an adaptive and robust, yet simple, methodology for detecting digital image features using the bidimensional empirical mode decomposition (BEMD), a sifting process that decomposes a two-dimensional (2D) signal into its bidimensional intrinsic mode functions (BIMFs). The method is further extended to detect corners and curves, and is accordingly dubbed BEMDEC, indicating its ability to detect edges, corners and curves. In addition to BEMD, a unique combination of a flexible envelope-estimation algorithm, stopping criteria and boundary adjustment made the realization of this multi-feature detector possible, and the further application of two morphological operators, binarization and thinning, adds to the quality of the operator.
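    The following Python fragment is a deliberately crude sketch of the BEMDEC idea: isolate the highest-frequency bidimensional component of an image and binarize it into an edge map. True BEMD interpolates envelopes over 2D extrema (e.g., with radial basis functions); the max/min filters here are only a cheap stand-in, and all sizes and thresholds are illustrative, not the dissertation's.

        import numpy as np
        from scipy.ndimage import maximum_filter, minimum_filter

        def first_bimf_proxy(img, size=5, n_sifts=3):
            """Crude high-frequency component via envelope-mean subtraction."""
            h = img.astype(float)
            for _ in range(n_sifts):
                upper = maximum_filter(h, size=size)       # upper-envelope proxy
                lower = minimum_filter(h, size=size)       # lower-envelope proxy
                h = h - (upper + lower) / 2.0              # one 2D sifting step
            return h

        def edge_map(img, k=1.5):
            """Binarize the detail component; a thinning pass would follow."""
            detail = np.abs(first_bimf_proxy(img))
            return detail > detail.mean() + k * detail.std()

    A morphological thinning pass (e.g., skimage.morphology.thin) over the binary map would complete the pipeline the abstract describes.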

    Flood Forecasting Using Machine Learning Methods

    This book is a printed edition of the Special Issue "Flood Forecasting Using Machine Learning Methods" that was published in Water.

    Load forecasting on the user‐side by means of computational intelligence algorithms

    Nowadays, it would be very difficult to deny the need to prioritize sustainable development through energy efficiency at all consumption levels. In this context, an energy management system (EMS) is a suitable option for continuously improving energy efficiency, particularly on the user side. An EMS is a set of technological tools that manages energy-consumption information and allows its analysis. EMSs combined with information technologies have given rise to intelligent EMSs (iEMSs), which, besides supporting the monitoring and reporting functions of an EMS, can model, forecast, control and diagnose energy consumption predictively. The main objective of an iEMS is to improve energy efficiency continuously (on-line) and as automatically as possible. The core of an iEMS is its load modeling and forecasting system (LMFS), which uses historical information on energy consumption and energy-related variables to model and forecast load profiles and, if available, generator profiles. These models and forecasts are the main information used by iEMS applications for control and diagnosis, which is why this thesis focuses on the study, analysis and development of LMFSs on the user side. Applying an LMFS on the user side to support an iEMS imposes requirements that other areas of load forecasting do not. First, user-side load profiles (LPs) behave more randomly than those in, for example, power-system distribution or generation, which makes modeling and forecasting more difficult. Second, on the user side (an industrial user, for example) there is a large number and variety of measurement points that can be monitored, modeled and forecasted, differing in provenance and nature. Thus, an LMFS requires a high degree of autonomy, to generate the demanded models automatically or autonomously, and a high level of adaptability, to model and forecast different types of loads and different types of energy. The LMFSs addressed here are therefore those that pursue not only accuracy, but also adaptability and autonomy. Seeking these objectives, this thesis proposes three novel LMFS schemes based on hybrid algorithms from computational intelligence, signal processing and statistical theory. The first aims to improve adaptability while keeping accuracy and autonomy in mind. Called the evolutionary training algorithm (ETA), it is based on an adaptive-network-based fuzzy inference system (ANFIS) trained by a multi-objective genetic algorithm (MOGA) instead of its traditional training algorithm. This hybrid improves generalization capacity (avoiding overfitting) and yields a training algorithm that is easily adapted to new adaptive networks based on traditional ANFIS. The second scheme addresses LMFS autonomy, building models of multiple loads automatically. As in the first proposal, an ANFIS and a MOGA are used, but here the MOGA finds a near-optimal configuration for the ANFIS rather than training it; the LMFS relies on this configuration to work properly and to maintain accuracy and generalization. Real data from an industrial scenario were used to test the proposed scheme, and the multi-site modeling and self-configuration results were satisfactory.
Furthermore, other algorithms were designed and tested satisfactorily for processing raw data, detecting outliers and padding gaps. The last of the proposed approaches seeks to improve accuracy while keeping autonomy and adaptability. It exploits dominant patterns (DPs), which have a lower time resolution than the target LP and are therefore easier to model and forecast. The Hilbert-Huang transform and Hilbert spectral analysis are used to detect and select the DPs; the selected DPs feed a proposed scheme of partial models (PMs), based on parallel ANFIS or artificial neural networks (ANNs), that extract the information and pass it to the main PM. This improves LMFS accuracy and mitigates the noisiness of user-side LPs. Additionally, to compensate for the added complexity, self-configured sub-LMFSs were used for each PM. This point is fundamental: the better the configuration, the better the accuracy of each partial model, and hence the better the information provided to the main partial model. Finally, to close the thesis, an outlook on iEMS trends and an outline of several hybrid algorithms pending study and testing are presented.
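    As an illustration of the dominant-pattern scheme, the Python sketch below decomposes a load profile with EMD, treats the slow components as DPs, forecasts each with a simple partial model, and sums the results. It assumes the third-party PyEMD package, and the AR(7) partial model is merely a stand-in for the ANFIS/ANN partial models of the thesis.

        import numpy as np
        from PyEMD import EMD                    # third-party: pip install EMD-signal

        def ar_forecast(series, order=7):
            """Least-squares AR(order) one-step-ahead forecast of a component."""
            X = np.column_stack([series[i:len(series) - order + i]
                                 for i in range(order)])
            y = series[order:]
            coef, *_ = np.linalg.lstsq(X, y, rcond=None)
            return series[-order:] @ coef

        def dp_forecast(load, n_fast_imfs=2):
            """Sum partial forecasts of the slow IMFs (the DPs), the trend,
            and the pooled fast (noisy) IMFs."""
            imfs = EMD()(load)                   # rows: fastest IMF first, trend last
            slow = [ar_forecast(c) for c in imfs[n_fast_imfs:]]
            noise = imfs[:n_fast_imfs].sum(axis=0)
            return sum(slow) + ar_forecast(noise)

    Forecasting each slow component separately is what lets the scheme sidestep the noisiness of the raw user-side LP, at the cost of configuring one sub-model per component.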