312 research outputs found

    Employing data fusion & diversity in the applications of adaptive signal processing

    The paradigm of adaptive signal processing is a simple yet powerful method for the class of system identification problems. Classical approaches consider standard one-dimensional signals, for which the model can be formulated in a flat-view matrix/vector framework. However, the rapidly increasing availability of large-scale multisensor/multinode measurement technology has rendered this traditional way of representing data insufficient. To this end, the author (referred to from this point onward as 'we', 'us', and 'our', to acknowledge the supporting contributors: the supervisor, colleagues, and overseas academics specializing in specific pieces of the research in this thesis) applies the adaptive filtering framework to problems that employ the techniques of data diversity and fusion, including quaternions, tensors, and graphs. At first glance, all these structures share one important feature: an invertible isomorphism; in other words, each is algebraically one-to-one related to a real vector space. Furthermore, our continuing course of research affords a natural segue between these three data types. First, we propose novel quaternion-valued adaptive algorithms, the n-moment widely linear quaternion least mean squares (WL-QLMS) and the c-moment WL-LMS. Both are as fast as the recursive least squares method but more numerically robust, thanks to the absence of matrix inversion. Second, the adaptive filtering method is applied to a more complex task, online tensor dictionary learning, through the proposed online multilinear dictionary learning (OMDL) algorithm. OMDL is partly inspired by the derivation of the c-moment WL-LMS, owing to its parsimonious formulae. In addition, a sequential higher-order compressed sensing (HO-CS) scheme is developed to couple with OMDL and maximally utilize the learned dictionary for the best possible compression.
Lastly, we consider graph random processes, which are in fact multivariate random processes with a spatiotemporal (or vertex-time) relationship. As with tensor dictionaries, one of the main challenges in graph signal processing is the sparsity constraint on the graph topology, a challenging issue for online methods. We introduce a novel splitting gradient projection into this adaptive graph filtering to successfully achieve a sparse topology. Extensive experiments were conducted to support the analysis of all the algorithms proposed in this thesis, as well as to point out the potential, limitations, and as-yet-unaddressed issues in these research endeavors.
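The classical real-valued LMS recursion that the quaternion-valued algorithms above generalize can be sketched in a few lines. This is a generic illustration of adaptive system identification, not the thesis's WL-QLMS or c-moment WL-LMS; the 2-tap system and step size are hypothetical.

```python
# Minimal real-valued LMS adaptive filter for system identification.
# Generic baseline sketch; the thesis's quaternion/widely linear
# variants generalize this same recursion.
import random

def lms_identify(x, d, order, mu):
    """Adapt FIR weights w so that w . x[n] tracks the desired signal d."""
    w = [0.0] * order
    for n in range(order - 1, len(x)):
        tap = x[n - order + 1:n + 1][::-1]  # [x[n], x[n-1], ...]
        y = sum(wi * xi for wi, xi in zip(w, tap))
        e = d[n] - y                        # a-priori estimation error
        w = [wi + mu * e * xi for wi, xi in zip(w, tap)]
    return w

# Identify a hypothetical unknown 2-tap system h = [0.5, -0.3].
random.seed(0)
x = [random.gauss(0, 1) for _ in range(2000)]
h = [0.5, -0.3]
d = [0.0] + [h[0] * x[n] + h[1] * x[n - 1] for n in range(1, len(x))]
w_hat = lms_identify(x, d, 2, 0.05)
```

Without measurement noise the weights converge to the true taps; note that, unlike recursive least squares, no matrix inversion appears anywhere in the update.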

    Prediction of nonlinear nonstationary time series data using a digital filter and support vector regression

    Volatility is a key parameter when measuring the size of the errors made in modelling returns and other nonlinear, nonstationary time series data. The Autoregressive Integrated Moving Average (ARIMA) model is a linear time-series process, whereas for nonlinear systems the Generalised Autoregressive Conditional Heteroskedasticity (GARCH) and Markov Switching GARCH (MS-GARCH) models have been widely applied. In statistical learning theory, Support Vector Regression (SVR) plays an important role in predicting nonlinear and nonstationary time series data. We propose a new model class combining a novel derivative of Empirical Mode Decomposition (EMD), the averaging intrinsic mode function (aIMF), with a novel multiclass SVR using mean reversion and the coefficient of variation (CV) to predict financial data, i.e. EUR-USD exchange rates. The proposed aIMF is capable of smoothing and reducing noise, whereas the novel multiclass SVR model predicts the exchange rates. Our simulation results show that our model significantly outperforms the state-of-the-art ARIMA, GARCH, MS-GARCH, Markov Switching Regression (MSR), and Markov chain Monte Carlo (MCMC) regression models.
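The coefficient of variation that drives the multiclass SVR above is straightforward to compute on a rolling window. A minimal sketch follows; the paper's aIMF smoothing and SVR model are not reproduced, and the window length and rates are illustrative.

```python
# Rolling coefficient of variation (CV = std / mean) over a rate series,
# one of the statistics the abstract feeds into its multiclass SVR.
# Illustrative sketch only; window and data are hypothetical.
from statistics import mean, pstdev

def rolling_cv(prices, window):
    """CV of each trailing window; quantifies relative dispersion."""
    out = []
    for i in range(window, len(prices) + 1):
        chunk = prices[i - window:i]
        out.append(pstdev(chunk) / mean(chunk))
    return out

# Hypothetical EUR-USD closes; a 4-sample window yields 5 CV values.
rates = [1.10, 1.11, 1.12, 1.10, 1.09, 1.11, 1.13, 1.12]
cvs = rolling_cv(rates, 4)
```

A rising CV signals growing relative volatility, which is exactly the kind of regime information a mean-reversion classifier can exploit.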

    Persistence in complex systems

    Persistence is an important characteristic of many complex systems in nature, related to how long a system remains in a certain state before changing to a different one. The study of persistence in complex systems involves different definitions and different techniques, depending on whether short-term or long-term persistence is considered. In this paper we discuss the most important definitions, concepts, methods, literature and latest results on persistence in complex systems. Firstly, the most used definitions of persistence in the short-term and long-term cases are presented. The most relevant methods to characterize persistence are then discussed in both cases. A complete literature review is also carried out. We also present and discuss some relevant results on persistence, and give empirical evidence of performance in different detailed case studies, for both short-term and long-term persistence. A perspective on the future of persistence concludes the work. This research has been partially supported by the project PID2020-115454GB-C21 of the Spanish Ministry of Science and Innovation (MICINN). This research has also been partially supported by Comunidad de Madrid, PROMINT-CM project (grant ref: P2018/EMT-4366). J. Del Ser would like to thank the Basque Government for its funding support through the EMAITEK and ELKARTEK programs (3KIA project, KK-2020/00049), as well as the consolidated research group MATHMODE (ref. T1294-19). The work of GCV is supported by the European Research Council (ERC) under the ERC-CoG-2014 SEDAL Consolidator grant (grant agreement 647423).
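Short-term persistence is commonly quantified through the autocorrelation function of the system's time series: values near one mean the system tends to stay in its current state. A minimal sketch, using illustrative series rather than data from the paper:

```python
# Lag-k autocorrelation as a simple short-term persistence measure.
# Positive values at small lags indicate a persistent system; negative
# values indicate anti-persistence. Toy series, not from the paper.
from statistics import mean

def autocorr(x, k):
    m = mean(x)
    var = sum((v - m) ** 2 for v in x)
    cov = sum((x[i] - m) * (x[i + k] - m) for i in range(len(x) - k))
    return cov / var

persistent = [0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1]   # stays in states
alternating = [0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1]  # flips every step
```

`autocorr(persistent, 1)` is positive while `autocorr(alternating, 1)` is strongly negative; long-term persistence requires different tools (e.g. scaling exponents) that examine how such correlations decay with lag.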

    STOCHASTIC SEASONAL MODELS FOR GLUCOSE PREDICTION IN TYPE 1 DIABETES

    Diabetes is a significant global health problem, one of the most serious noncommunicable diseases after cardiovascular diseases, cancer and chronic respiratory diseases. Diabetes prevalence has been steadily increasing over the past decades, especially in low- and middle-income countries. It is estimated that 425 million people worldwide had diabetes in 2017, and by 2045 this number may rise to 629 million. About 10% of people with diabetes suffer from type 1 diabetes, characterized by autoimmune destruction of the beta-cells in the pancreas, responsible for the secretion of the hormone insulin. Without insulin, plasma glucose rises to deleterious levels, provoking long-term vascular complications.
Until a cure is found, the management of diabetes relies on technological developments for insulin replacement therapies. With the advent of continuous glucose monitors, technology has been evolving towards automated systems. Coined the "artificial pancreas", closed-loop glucose control devices are nowadays a game-changer in diabetes management. Research in the last decades has been intense, yielding a first commercial system in late 2017, with many more in the pipeline of the main medical device companies. However, as first-generation devices, many issues remain open, and new technological advancements will lead to system improvements with better glycemic control outcomes and a reduced patient burden, significantly improving the quality of life of people with type 1 diabetes. At the core of any artificial pancreas system is glucose prediction, the topic addressed in this thesis. The ability to predict glucose along a given prediction horizon, and to estimate future glucose trends, is the most important feature of any artificial pancreas system, enabling preventive actions that entirely avoid risk to the patient. Glucose prediction can appear as part of the control algorithm itself, such as in systems based on model predictive control (MPC) techniques, or as part of a monitoring system to avoid hypoglycemic episodes. However, predicting glucose is a very challenging problem due to the large inter- and intra-subject variability that patients exhibit, whose sources are only partially understood. This limits the models' forecasting performance, imposing relatively short prediction horizons regardless of the modeling technique used (physiological, data-driven or hybrid approaches). The starting hypothesis of this thesis is that the complexity of glucose dynamics requires the ability to characterize clusters of behaviors in the patient's historical data, naturally leading to the concept of local modeling. Besides, the similarity of responses within a cluster can be further exploited to introduce the classical concept of seasonality into glucose prediction. As a result, seasonal local models are at the core of this thesis. Several clinical databases including mixed meals and exercise are used to demonstrate the feasibility and superior performance of this approach. This work has been supported by the Spanish Ministry of Economy and Competitiveness (MINECO) under the FPI grant BES-2014-069253 and projects DPI2013-46982-C2-1-R and DPI2016-78831-C2-1-R. In relation to this grant, a four-month stay was undertaken at the end of 2017 (01/09/2017 to 29/12/2017) at the Illinois Institute of Technology, Chicago, United States of America, under the supervision of Prof. Ali Cinar. Montaser Roushdi Ali, E. (2020). STOCHASTIC SEASONAL MODELS FOR GLUCOSE PREDICTION IN TYPE 1 DIABETES [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/136574
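The seasonality idea underlying the thesis can be illustrated with a seasonal-mean forecaster: predict each time-of-day slot from past days at the same slot. This toy sketch is not the thesis's stochastic seasonal local models, and the glucose readings are hypothetical.

```python
# Seasonal-naive sketch: forecast one day of glucose readings as the
# slot-wise mean of previous days. Illustrates only the seasonality
# concept; readings (mg/dL) and the 4-slot "day" are hypothetical.
from statistics import mean

def seasonal_mean_forecast(history, period):
    """history: flat list of readings; period: samples per day.
    Returns one full period of forecasts, slot by slot."""
    return [mean(history[slot::period]) for slot in range(period)]

# Two days of readings, 4 slots per day.
history = [110, 140, 125, 100,
           114, 150, 131, 104]
forecast = seasonal_mean_forecast(history, 4)  # one forecast per slot
```

Grouping similar days into clusters before averaging, as the thesis proposes, turns this global seasonal mean into a local model per behavioral cluster.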

    Extracting heart rate dependent electrocardiogram templates for a body emulator environment

    Abstract. Medical device and analysis method development often includes tests on humans, which are expensive, time consuming, and sometimes even dangerous. To perform human tests, special safety conditions and ethical and legal requirements must be taken into account. Emulators that can reproduce the physiological functions of the human body could resolve these difficulties. In this study, heart rate dependent electrocardiogram templates for such an emulator were extracted. Preprocessing of the real-life electrocardiograms included a high-pass filter and a Savitzky-Golay filter. A beat detection algorithm was developed to detect QRS complexes in the signals and to classify beat artefacts based on the RR interval sequences and two adaptive thresholds. Heart rate levels were detected using the K-means clustering technique. Vectorcardiogram signals were obtained from the electrocardiogram signals using the inverse Dower transformation matrix, and vectorcardiogram templates were extracted for the respective heart rate levels. Finally, a graphical user interface was created for these methods. The developed beat detection algorithm was tested on the MIT-BIH Arrhythmia Database and compared with state-of-the-art algorithms, achieving a sensitivity of 99.77%, a precision of 99.65%, and a detection error rate of 0.58%. Based on the results, the proposed methods and extracted vectorcardiogram templates were successful.
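The RR-interval artefact screening with adaptive thresholds can be sketched as follows. The band limits, window length, and RR values are illustrative assumptions, not the thesis's tuned parameters.

```python
# Toy RR-interval screening with two adaptive thresholds: a beat is
# flagged as an artefact when its RR interval falls outside a band
# around the running median of recent clean beats. Parameters are
# hypothetical; the thesis algorithm is more elaborate.
from statistics import median

def screen_rr(rr_intervals, low=0.7, high=1.4, window=8):
    flags, recent = [], []
    for rr in rr_intervals:
        ref = median(recent) if recent else rr
        is_artefact = not (low * ref <= rr <= high * ref)
        flags.append(is_artefact)
        if not is_artefact:
            recent = (recent + [rr])[-window:]  # adapt on clean beats only
    return flags

# 800 ms rhythm with one missed beat (1.60 s) and one false detection (0.30 s).
rr = [0.80, 0.79, 0.81, 1.60, 0.80, 0.30, 0.82]
flags = screen_rr(rr)
```

Updating the reference only from beats judged clean keeps the thresholds from being dragged toward the very artefacts they are meant to reject.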

    Geometric Learning on Graph Structured Data

    Graphs provide a ubiquitous and universal data structure that can be applied in many domains such as social networks, biology, chemistry, physics, and computer science. In this thesis we focus on two fundamental paradigms in graph learning: representation learning and similarity learning over graph-structured data. Graph representation learning aims to learn embeddings for nodes by integrating the topological and feature information of a graph. Graph similarity learning employs similarity functions that compute the similarity between pairs of graphs in a vector space. We address several challenging issues in these two paradigms, designing powerful yet efficient, theoretically guaranteed machine learning models that can leverage the rich topological structural properties of real-world graphs. This thesis is structured in two parts. In the first part, we present how to develop powerful Graph Neural Networks (GNNs) for graph representation learning from three different perspectives: (1) spatial GNNs, (2) spectral GNNs, and (3) diffusion GNNs. We discuss the model architecture, representational power, and convergence properties of these GNN models. Specifically, we first study how to develop expressive yet efficient and simple message-passing aggregation schemes that can go beyond the Weisfeiler-Leman test (1-WL). We propose a generalized message-passing framework that incorporates graph structural properties into the aggregation scheme. Then, we introduce a new local isomorphism hierarchy on neighborhood subgraphs. We further develop a novel neural model, GraphSNN, and theoretically prove that this model is more expressive than the 1-WL test. After that, we study how to build an effective and efficient graph convolution model with spectral graph filters. Here we propose a spectral GNN model, called DFNets, which incorporates a novel class of spectral graph filters, namely feedback-looped filters. As a result, this model provides better localization over neighborhoods while achieving fast convergence and linear memory requirements. Finally, we study how to capture the rich topological information of a graph using graph diffusion. We propose a novel GNN architecture with dynamic PageRank, based on a learnable transition matrix, and explore two variants of this architecture: a forward-Euler solution and an invariable-feature solution. We theoretically prove that our forward-Euler GNN architecture is guaranteed to converge to a stationary distribution. In the second part of this thesis, we introduce a new optimal transport distance metric on graphs in a regularized learning framework for graph kernels. This optimal transport distance preserves both the local and global structure of graphs during the transport, in addition to preserving features and their local variations. Furthermore, we propose two strongly convex regularization terms that theoretically guarantee convergence and numerical stability in finding an optimal assignment between graphs. One regularization term regularizes a Wasserstein distance between graphs in the same ground space; this helps preserve the local clustering structure of graphs by relaxing the optimal transport problem to a cluster-to-cluster assignment between locally connected vertices. The other regularization term regularizes a Gromov-Wasserstein distance between graphs across different ground spaces based on a degree-entropy KL divergence, which improves the robustness of an optimal alignment and preserves the global connectivity structure of graphs. We have evaluated our optimal transport-based graph kernel on different benchmark tasks. The experimental results show that our models considerably outperform all state-of-the-art methods on all benchmark tasks.
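A single round of the message-passing paradigm discussed in the first part can be sketched generically: every node combines its own feature with a permutation-invariant aggregate of its neighbours' features. GraphSNN's structure-aware weighting is not reproduced here, and the mixing coefficients are arbitrary.

```python
# One generic message-passing step on a tiny graph. Each node's new
# feature mixes its own feature with the sum of its neighbours'.
# Sketch of the paradigm only; coefficients 0.5/0.5 are arbitrary.

def message_passing_step(adj, feats):
    """adj: {node: [neighbours]}; feats: {node: float}."""
    new = {}
    for v, nbrs in adj.items():
        agg = sum(feats[u] for u in nbrs)      # permutation-invariant aggregate
        new[v] = 0.5 * feats[v] + 0.5 * agg    # simple self/neighbour mix
    return new

# Triangle (0-1-2) plus a pendant node 3 attached to node 2.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
feats = {0: 1.0, 1: 2.0, 2: 3.0, 3: 4.0}
out = message_passing_step(adj, feats)
```

Because the aggregate is a plain sum over the multiset of neighbour features, stacks of such steps can distinguish at most what the 1-WL test distinguishes, which is precisely the limitation the thesis's structure-aware schemes are designed to overcome.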

    Advanced Condition Monitoring of Complex Mechatronics Systems Based on Model-of-Signals and Machine Learning Techniques

    Prognostics and Health Management (PHM) of machinery has become one of the pillars of Industry 4.0. The introduction of emerging technologies into the industrial world enables new models, forms, and methodologies to transform traditional manufacturing into intelligent manufacturing. In this context, diagnostics and prognostics of faults and their precursors have gained remarkable attention, mainly when performed autonomously by systems. The field is flourishing in academia, and researchers have published numerous PHM methodologies for machinery components. The typical course of action adopted to execute servicing strategies on machinery components requires significant sensor measurements, suitable data processing algorithms, and appropriate servicing choices. Even though the industrial world is integrating more and more Information Technology solutions to keep up with Industry 4.0 trends, most of the proposed solutions do not consider standard industrial hardware and software. Modern controllers are built on PC and workstation hardware architectures, introducing into production lines more computational power and resources that we can take advantage of. This thesis focuses on bridging the gap in PHM between industry and research, starting from Condition Monitoring and its application on modern industrial hardware. The cornerstones of this "bridge" are Model-of-Signals (MoS) and Machine Learning techniques. MoS relies on sensor measurements to estimate models of the machine's working condition. These models are the result of black-box system identification theory, which provides the essential rules and guidelines to compute them properly. Thanks to the availability of recursive estimation algorithms, MoS allows the integration of PHM modules into machine controllers, exploiting their edge-computing capabilities. Besides, Machine Learning offers the tools to further refine the extracted information for diagnostics, prognostics, and maintenance decision-making, and we show how its integration is possible within the modern automation pyramid.
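The recursive estimation at the core of Model-of-Signals can be illustrated with scalar recursive least squares fitting an AR(1) coefficient from a signal, one sample at a time, as an edge controller would. The forgetting factor, model order, and data are illustrative assumptions, not the thesis's configuration.

```python
# Scalar recursive least squares (RLS) estimating the coefficient a in
# y[n] = a * y[n-1] + e[n], sample by sample -- the kind of recursive
# model-of-signals estimator that fits inside an edge controller.
# Forgetting factor lam < 1 lets the estimate track slow condition drift.
import random

def rls_ar1(y, lam=0.99):
    a, p = 0.0, 1000.0                        # estimate and covariance
    for n in range(1, len(y)):
        phi = y[n - 1]                        # regressor
        k = p * phi / (lam + phi * p * phi)   # gain
        a += k * (y[n] - phi * a)             # correct with prediction error
        p = (p - k * phi * p) / lam           # covariance update
    return a

# Hypothetical vibration-like AR(1) signal with true coefficient 0.8.
random.seed(1)
y = [0.0]
for _ in range(3000):
    y.append(0.8 * y[-1] + random.gauss(0, 0.1))
a_hat = rls_ar1(y)
```

Drift of the estimated coefficient away from its healthy-machine value is exactly the kind of condition indicator that downstream Machine Learning stages can classify.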

    Engineering Education and Research Using MATLAB

    MATLAB is a software package used primarily in the field of engineering for signal processing, numerical data analysis, modeling, programming, simulation, and computer graphic visualization. In the last few years, it has become widely accepted as an efficient tool, and, therefore, its use has significantly increased in scientific communities and academic institutions. This book consists of 20 chapters presenting research works using MATLAB tools. Chapters include techniques for programming and developing Graphical User Interfaces (GUIs), dynamic systems, electric machines, signal and image processing, power electronics, mixed signal circuits, genetic programming, digital watermarking, control systems, time-series regression modeling, and artificial neural networks.