50 research outputs found

    Feasibility of using Group Method of Data Handling (GMDH) approach for horizontal coordinate transformation

    Machine learning algorithms have brought a new paradigm shift to geoscience computations and applications. The present study assesses the suitability of the Group Method of Data Handling (GMDH) for coordinate transformation. The data used for the coordinate transformation constitute the Ghana national triangulation network, which is based on the two horizontal geodetic datums (Accra 1929 and Leigon 1977) used for geospatial applications in Ghana. The GMDH results were compared with those of standard methods such as the Backpropagation Neural Network (BPNN), the Radial Basis Function Neural Network (RBFNN), and the 2D conformal and 2D affine transformations. The proposed GMDH approach proved very efficient in transforming coordinates from the Leigon 1977 datum to the official mapping datum of Ghana, i.e. the Accra 1929 datum. GMDH also produced results comparable to those of the widely used BPNN and RBFNN. However, the classical transformation methods (2D affine and 2D conformal) performed poorly compared with the machine learning models (GMDH, BPNN and RBFNN). The computational strength of the machine learning models is attributed to their self-adaptive capability to detect patterns in a data set without assuming functional relationships between the input and output variables. To this end, the proposed GMDH model could serve as a supplementary computational tool to the existing transformation procedures used in the Ghana geodetic reference network.
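GMDH builds a network from simple two-input polynomial units ("partial descriptions") fitted by least squares and selected layer by layer. Below is a minimal sketch of that core building block in Python; the quadratic form is the standard Ivakhnenko polynomial, but the toy data are invented and the layer-selection step of a full GMDH network is omitted:

```python
import numpy as np

def fit_partial(xi, xj, y):
    # Quadratic partial description:
    # y ≈ a0 + a1*xi + a2*xj + a3*xi*xj + a4*xi^2 + a5*xj^2
    A = np.column_stack([np.ones_like(xi), xi, xj, xi * xj, xi**2, xj**2])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs

def predict_partial(coeffs, xi, xj):
    A = np.column_stack([np.ones_like(xi), xi, xj, xi * xj, xi**2, xj**2])
    return A @ coeffs

# Toy illustration: recover a known quadratic relation between two inputs
rng = np.random.default_rng(0)
xi = rng.uniform(-1, 1, 200)
xj = rng.uniform(-1, 1, 200)
y = 1.0 + 2.0 * xi - 0.5 * xi * xj + 0.25 * xj**2
coeffs = fit_partial(xi, xj, y)
pred = predict_partial(coeffs, xi, xj)
```

In a full GMDH model, many such units (one per input pair) would be fitted, ranked on a held-out validation set, and the best outputs fed to the next layer.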

    Improving Maternal and Fetal Cardiac Monitoring Using Artificial Intelligence

    Early diagnosis of possible risks in the physiological status of the fetus and mother during pregnancy and delivery is critical and can reduce mortality and morbidity. For example, early detection of life-threatening congenital heart disease may increase the survival rate and reduce morbidity while allowing parents to make informed decisions. To study cardiac function, a variety of signals need to be collected. In practice, several heart monitoring methods, such as electrocardiography (ECG) and photoplethysmography (PPG), are commonly performed. Although there are several methods for monitoring fetal and maternal health, research is currently underway to enhance the mobility, accuracy, automation, and noise resistance of these methods so they can be used extensively, even at home. Artificial Intelligence (AI) can help to design a precise and convenient monitoring system. To achieve these goals, the following objectives are defined in this research. The first step for a signal acquisition system is to obtain high-quality signals. As the first objective, a signal processing scheme is explored to improve the signal-to-noise ratio (SNR) of signals and extract the desired signal from a noisy one with negative SNR (i.e., the power of the noise is greater than that of the signal). It is worth mentioning that ECG and PPG signals are sensitive to noise from a variety of sources, increasing the risk of misinterpretation and interfering with the diagnostic process. The noise typically arises from power line interference, white noise, electrode contact noise, muscle contraction, baseline wandering, instrument noise, motion artifacts, and electrosurgical noise. Even a slight variation in the obtained ECG waveform can impair the understanding of the patient's heart condition and affect the treatment procedure.
Recent solutions, such as adaptive and blind source separation (BSS) algorithms, still have drawbacks, such as the need for a noise or desired-signal model, tuning and calibration, and inefficiency when dealing with excessively noisy signals. Therefore, the goal of this step is to develop a robust algorithm that can estimate noise, even when the SNR is negative, using the BSS method, and remove it with an adaptive filter. The second objective concerns monitoring maternal and fetal ECG. Previous non-invasive methods used the maternal abdominal ECG (MECG) to extract the fetal ECG (FECG). These methods need to be calibrated to generalize well: for each new subject, calibration against a trusted device is required, which is difficult, time-consuming, and susceptible to errors. We explore deep learning (DL) models for domain mapping, such as Cycle-Consistent Adversarial Networks (CycleGAN), to map MECG to FECG and vice versa. The advantage of the proposed DL method over state-of-the-art approaches, such as adaptive filters or blind source separation, is that it generalizes well to unseen subjects. Moreover, it does not need calibration, is not sensitive to the heart rate variability of the mother and fetus, and can handle low signal-to-noise ratio (SNR) conditions. Thirdly, an AI-based system that can measure continuous systolic blood pressure (SBP) and diastolic blood pressure (DBP) with minimal electrode requirements is explored. The most common method of measuring blood pressure uses cuff-based equipment, which cannot monitor blood pressure continuously, requires calibration, and is difficult to use. Other solutions use a synchronized ECG and PPG combination, which is still inconvenient and challenging to synchronize. The proposed method overcomes those issues and, in contrast to other solutions, uses only the PPG signal.
Using only PPG for blood pressure is more convenient since it requires only a single sensor on the finger, where acquisition is more resilient to errors due to movement. The fourth objective is to detect anomalies in FECG data. The requirement of thousands of manually annotated samples is a concern for state-of-the-art detection systems, especially for FECG, where few publicly available datasets are annotated for each FECG beat. Therefore, we utilize active learning and transfer learning to train an FECG anomaly detection system with the fewest training samples and high accuracy. In this part, a model is first trained to detect ECG anomalies in adults and is later fine-tuned to detect anomalies in FECG. We select only the most influential samples from the training set, which leads to training with the least effort. Because of physician shortages and rural geography, pregnant women's access to prenatal care might be improved through remote monitoring, especially where access to prenatal care is limited. Increased compliance with prenatal treatment and linked care among various providers are two possible benefits of remote monitoring. If recorded signals are transmitted correctly, maternal and fetal remote monitoring can be effective. Therefore, the last objective is to design a compression algorithm that can compress signals (like ECG) at a higher ratio than the state of the art and perform decompression quickly and without distortion. The proposed compression is fast thanks to the time-domain B-spline approach, and, owing to B-spline properties, the compressed data can be used for visualization and monitoring without decompression. Moreover, the stochastic optimization is designed to retain signal quality for diagnostic purposes while achieving a high compression ratio.
In summary, the components of an end-to-end system for day-to-day maternal and fetal cardiac monitoring can be envisioned as a combination of all the tasks listed above. PPG and ECG recorded from the mother can be denoised using the deconvolution strategy. Compression can then be employed to transmit the signal. The trained CycleGAN model can be used to extract the FECG from the MECG, and the model trained with active transfer learning can detect anomalies in both the MECG and FECG. Simultaneously, maternal blood pressure is retrieved from the PPG signal. This information can be used to monitor the cardiac status of mother and fetus, and also to fill in reports such as the partogram.
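As an illustration of the adaptive-filter side of the denoising objective, here is a minimal least-mean-squares (LMS) noise canceller in Python. It is a generic textbook sketch, not the thesis's BSS-based estimator, and the two-channel toy data are invented:

```python
import numpy as np

def lms_noise_cancel(primary, reference, taps=8, mu=0.01):
    # LMS adaptive noise canceller: the filter learns to predict the
    # noise in `primary` from the correlated `reference`; the prediction
    # error is the cleaned signal.
    w = np.zeros(taps)
    cleaned = np.zeros_like(primary)
    for n in range(taps - 1, len(primary)):
        x = reference[n - taps + 1:n + 1][::-1]  # newest sample first
        e = primary[n] - w @ x                   # error = cleaned sample
        w += 2 * mu * e * x                      # LMS weight update
        cleaned[n] = e
    return cleaned

# Toy two-channel recording: a sinusoid buried in coloured noise, plus a
# reference channel that observes the raw noise source
rng = np.random.default_rng(1)
t = np.arange(4000)
clean = np.sin(2 * np.pi * t / 200)
ref = rng.standard_normal(4000)                    # noise source
noise = np.convolve(ref, [0.8, -0.4, 0.2])[:4000]  # causal colouring
denoised = lms_noise_cancel(clean + noise, ref)
```

The thesis's point is precisely that such filters need a reference/model and tuning (`taps`, `mu`), which motivates its BSS-based noise estimation instead.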

    Machine learning applied to wind and wave modelling

    Final Degree Project (Trabajo de Fin de Grado) in Software Engineering, Facultad de Informática UCM, Departamento de Arquitectura de Computadores y Automática, academic year 2019/2020. In the fight against climate change, offshore wind energy is at the forefront, currently in the development phase. The problem with turbines anchored to the seabed lies in the enormous cost of installation and maintenance, leading to the theoretical proposal of floating offshore wind turbines. However, floating turbines are exposed to new wave loads and stronger wind loads. To enable their implementation while maximizing electricity production and ensuring the protection of the structure, predictive models for the metocean (meteorological and oceanographic) variables involved are needed that are more accurate than the physical and statistical ones found in the literature. This project aims to model the wind speed in the time domain, the significant wave height in the frequency domain, and the misalignment between wind and wave directions in the time domain, applying Machine Learning techniques. Offshore data collection as well as exploratory data analysis and data cleaning phases have been carried out. Subsequently, the following algorithms were applied to train the models: Linear Regression, Support Vector Machines for Regression, Gaussian Process Regression, and Neural Networks. Nonlinear Autoregressive with Exogenous Input (NARX) neural networks proved to be the best algorithm for both wind speed and misalignment forecasting, and the most accurate predictive model for significant wave height prediction was Gaussian Process Regression (GPR). In this project we demonstrated the ability of Machine Learning algorithms to model wind variables of a stochastic nature and waves. We emphasize the importance of evaluating the models through techniques such as learning curves to make better decisions when optimizing them.
This work not only makes predictive models available for later use, but it is also a pioneer in misalignment modelling, leaving a door open for future research.
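The NARX input structure (lagged outputs plus lagged exogenous inputs) can be illustrated with a linear ARX model fitted by least squares; a real NARX network replaces the linear map with a neural one. This numpy sketch uses an invented toy system, not the project's metocean data:

```python
import numpy as np

def arx_design(y, u, na=2, nb=2):
    # Each row: [y[t-1], ..., y[t-na], u[t-1], ..., u[t-nb]] predicting y[t]
    start = max(na, nb)
    rows = [np.concatenate([y[t - na:t][::-1], u[t - nb:t][::-1]])
            for t in range(start, len(y))]
    return np.array(rows), y[start:]

# Invented toy system standing in for a wind-speed series driven by an
# exogenous input
rng = np.random.default_rng(2)
u = rng.standard_normal(500)
y = np.zeros(500)
for t in range(2, 500):
    y[t] = 0.6 * y[t - 1] - 0.2 * y[t - 2] + 0.5 * u[t - 1]

X, target = arx_design(y, u)
theta, *_ = np.linalg.lstsq(X, target, rcond=None)
```

On this noise-free toy system, least squares recovers the true lag coefficients exactly; a NARX network would feed the same lagged regressors through hidden layers to capture nonlinear dynamics.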

    A machine learning framework for petrochemical industry applications

    Machine learning has many potentially useful applications in the process industry, for example in process monitoring and control. Continuously accumulating process data, and recent developments in software and hardware that enable more advanced machine learning, are fulfilling the prerequisites for developing and deploying machine learning applications integrated with process automation, which improve existing functionalities or even implement artificial intelligence. In this master's thesis, a framework is designed and implemented at a proof-of-concept level to enable easy acquisition of process data for use with modern machine learning libraries, and to enable scalable online deployment of the trained models. The literature part of the thesis concentrates on the current state of, and approaches to, digital advisory systems for process operators, as a potential application to be developed on the machine learning framework. The literature study shows that the approaches to process operators' decision support tools have shifted from rule-based and knowledge-based methods to machine learning. However, no standard methods can be identified, and most of the use cases are quite application-specific. The developed machine learning framework uses both commercial software and open-source components with permissive licenses. Data is acquired over OPC UA and then processed in Python, which is currently almost the de facto standard language in data analytics. Microservice architecture with containerization is used in the online deployment and, in a qualitative evaluation, proved to be a versatile and functional solution.
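The acquisition-to-features path described above can be sketched with a stubbed client; the interface and tag names below are invented stand-ins for whatever OPC UA library the framework actually uses:

```python
import numpy as np

class StubOpcUaClient:
    # In-memory stand-in for an OPC UA client; the real framework would
    # use an actual OPC UA library, and both this interface and the tag
    # names below are hypothetical.
    def __init__(self, tags):
        self._tags = tags

    def read(self, tag):
        return self._tags[tag]

def collect_features(client, tags):
    # Turn raw process tag readings into a model-ready feature vector.
    return np.array([client.read(t) for t in tags])

client = StubOpcUaClient({"TI-101": 87.5, "PI-102": 4.2, "FI-103": 12.0})
features = collect_features(client, ["TI-101", "PI-102", "FI-103"])
```

In the deployed framework, the feature vector would be passed to a trained model served from a container behind a microservice endpoint.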

    Weigh-in-Motion Data-Driven Pavement Performance Prediction Models

    The effective functioning of pavements as a critical component of the transportation system necessitates the implementation of ongoing maintenance programs to safeguard this significant and valuable infrastructure and guarantee its optimal performance. The maintenance, rehabilitation, and reconstruction (MRR) program of the pavement structure is dependent on a multidimensional decision-making process, which considers the existing pavement structural condition and the anticipated future performance. Pavement Performance Prediction Models (PPPMs) have become indispensable tools for the efficient implementation of the MRR program and the minimization of associated costs by providing precise predictions of distress and roughness based on inventory and monitoring data concerning the pavement structure's state, traffic load, and climatic conditions. The integration of PPPMs has become a vital component of Pavement Management Systems (PMSs), facilitating the optimization, prioritization, scheduling, and selection of maintenance strategies. Researchers have developed several PPPMs with differing objectives, and each PPPM has demonstrated distinct strengths and weaknesses regarding its applicability, implementation process, and data requirements for development. Traditional statistical models, such as linear regression, are inadequate in handling complex nonlinear relationships between variables and often generate less precise results. Machine Learning (ML)-based models have become increasingly popular due to their ability to manage vast amounts of data and identify meaningful relationships between them to generate informative insights for better predictions. To create ML models for pavement performance prediction, it is necessary to gather a significant amount of historical data on pavement and traffic loading conditions.
The Long-Term Pavement Performance (LTPP) program initiated by the Federal Highway Administration (FHWA) offers a comprehensive repository of data on the environment, traffic, inventory, monitoring, maintenance, and rehabilitation works that can be utilized to develop PPPMs. The LTPP also includes Weigh-In-Motion (WIM) data that provide information on traffic, such as truck traffic, total traffic, directional distribution, and the number of different axle types of vehicles. High-quality traffic loading data can play an essential role in improving the performance of PPPMs, as the Mechanistic-Empirical Pavement Design Guide (MEPDG) considers vehicle types and axle load characteristics to be critical inputs for pavement design. Collecting high-quality traffic loading data has, however, been a challenge in developing PPPMs. The WIM system, which comprises WIM scales, has emerged as an innovative solution to this issue. By leveraging computer vision and machine learning techniques, WIM systems can collect accurate data on vehicle type and axle load characteristics, which are critical factors affecting the performance of flexible pavements. Excessive dynamic loading caused by heavy vehicles can result in the early disintegration of the pavement structure. The LTPP provides an extensive repository of WIM data that can be utilized to develop accurate PPPMs for predicting future pavement behavior and tolerance. The incorporation of comprehensive WIM data collected from the LTPP has the potential to significantly improve the accuracy and effectiveness of PPPMs.
To develop artificial neural network (ANN) based PPPMs for seven distinct performance indicators, including IRI, longitudinal cracking, transverse cracking, fatigue cracking, potholes, polished aggregate, and patch failure, a total of 300 pavement sections with WIM data were selected from the United States of America. Data collection spanned 20 years, from 2001 to 2020, and included information on pavement age, material properties, climatic properties, structural properties, and traffic-related characteristics. The primary dataset was then divided into two distinct subsets: one that included WIM-generated traffic data and another that excluded it. Data cleaning and normalization were meticulously performed using the Z-score normalization method. Each subset was further divided into two separate groups: the first containing 15 years of data for model training and the latter containing 5 years of data for testing purposes. Principal Component Analysis (PCA) was then employed to reduce the number of input variables for the model. Based on a cumulative Proportion of Variation (PoV) of 96%, 12 input variables were selected. Subsequently, a single-hidden-layer ANN model with 12 neurons was generated for each performance indicator. The study's results indicate that incorporating WIM-generated traffic loading data can significantly enhance the accuracy and efficacy of PPPMs. This improvement further supports optimized pavement maintenance scheduling with minimal costs, while also ensuring timely repairs to promote acceptable serviceability and structural stability of the pavement.
The contributions of this research are twofold: first, it provides an enhanced understanding of the positive impact that high-quality traffic loading data have on pavement performance modelling; and second, it explores potential applications of WIM data within the Pavement Management System (PMS).
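The preprocessing pipeline described above (Z-score normalization followed by PCA with a 96% cumulative PoV cut-off) can be sketched in a few lines of numpy; the toy data with three underlying factors are invented for illustration:

```python
import numpy as np

def zscore(X):
    # Z-score normalization: zero mean, unit variance per column
    return (X - X.mean(axis=0)) / X.std(axis=0)

def n_components_for_pov(X, pov=0.96):
    # Number of principal components needed to reach the given cumulative
    # Proportion of Variation (PoV), via SVD of the z-scored data.
    s = np.linalg.svd(zscore(X), compute_uv=False)
    explained = s**2 / np.sum(s**2)
    return int(np.searchsorted(np.cumsum(explained), pov) + 1)

# Toy data: three underlying factors spread over ten columns, plus a
# little measurement noise
rng = np.random.default_rng(3)
f1, f2, f3 = rng.standard_normal((3, 300))
X = np.column_stack([f1, f1, f1, f1, f2, f2, f2, f3, f3, f3])
X = X + 0.01 * rng.standard_normal(X.shape)
k = n_components_for_pov(X, pov=0.96)
```

The study applied the same idea to its pavement/traffic feature set, arriving at 12 retained input variables.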

    Performance assessment and optimisation of a novel guideless irregular dew point cooler using artificial intelligence

    Air Conditioners (ACs) are a vital need in modern buildings to provide comfortable indoor air for the occupants. Several alternatives to traditional coolers have been introduced to improve cooling efficiency, but among them Evaporative Coolers (ECs) have attracted the most attention owing to their simple structure and high efficiency. ECs are categorized into two types, i.e., Direct Evaporative Coolers (DECs) and Indirect Evaporative Coolers (IECs). Continuous endeavours to improve ECs resulted in the development of Dew Point Coolers (DPCs), which enable the supply air to reach the dew point temperature. The main innovation of DPCs relies on the invention of the M-cycle Heat and Mass Exchanger (HMX), which contributes to improving the ECs' efficiency by up to 30%. A state-of-the-art counter-flow DPC in which the flat plates of traditional HMXs are replaced by corrugated plates is called the Guideless Irregular DPC (GIDPC). This technology has 30-60% higher cooling efficiency compared to the flat-plate HMX in traditional DPCs. Owing to the empirical success of Artificial Intelligence (AI) in different fields and the growing importance of Machine Learning (ML) models, this study pioneers the development of two ML models, using Multiple Polynomial Regression (MPR) and Deep Neural Network (DNN) methods, and three Multi-Objective Evolutionary Optimisation (MOEO) models, using Genetic Algorithms (GA), Particle Swarm Optimisation (PSO), and a novel bio-inspired algorithm, the Slime Mould Algorithm (SMA), for the performance prediction and optimisation of the GIDPC in all possible operating climates. Furthermore, this study pioneers the development of an explainable and interpretable DNN model for the GIDPC.
To this end, a game-theory-based SHapley Additive exPlanations (SHAP) method is used to interpret the contribution of the operating conditions to the performance parameters. The ML models take the intake air characteristics as well as the main operating and design parameters of the HMX as inputs to predict the GIDPC's performance parameters, e.g., cooling capacity, coefficient of performance (COP), and thermal efficiencies. The results revealed that both models have high prediction accuracy: the MPR model performs with a maximum average error of 1.22%, and the Mean Square Error (MSE) of the selected DNN model is only 0.04. The objectives of the MOEO models are to simultaneously maximise the cooling efficiency and minimise the construction cost of the GIDPC by determining the optimum values of the selected decision variables. The performance of the optimised GIDPCs is compared in a deterministic way, with comparisons carried out in diverse climates in 2020 and 2050, where the hourly future weather data are projected using a high-emission scenario defined by the Intergovernmental Panel on Climate Change (IPCC). The results revealed that the hourly COP of the optimised systems outperforms the base design. Moreover, although the power consumption of all systems increases from 2020 to 2050, owing to more operating hours as a result of global warming, power savings of up to 72%, 69.49%, 63.24%, and 69.21% can be achieved, compared to the base system, in hot-summer continental, arid, tropical rainforest, and Mediterranean hot-summer climates respectively, when the systems run optimally.
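The MPR model named above amounts to least-squares regression on a polynomial feature expansion. Below is a minimal sketch in numpy, with an invented quadratic "cooling capacity" surrogate standing in for the GIDPC data:

```python
import numpy as np
from itertools import combinations_with_replacement

def poly_features(X, degree=2):
    # All monomials of the input columns up to `degree`, plus an
    # intercept: the feature expansion behind a multiple polynomial
    # regression (MPR) model.
    n_samples, n_inputs = X.shape
    cols = [np.ones(n_samples)]
    for d in range(1, degree + 1):
        for idx in combinations_with_replacement(range(n_inputs), d):
            cols.append(np.prod(X[:, idx], axis=1))
    return np.column_stack(cols)

# Invented quadratic surrogate for a performance parameter such as
# cooling capacity, as a function of two operating inputs
rng = np.random.default_rng(4)
X = rng.uniform(0, 1, (200, 2))
y = 3.0 + 1.5 * X[:, 0] - 2.0 * X[:, 0] * X[:, 1] + 0.5 * X[:, 1]**2
beta, *_ = np.linalg.lstsq(poly_features(X), y, rcond=None)
pred = poly_features(X) @ beta
```

The study's actual MPR model takes the HMX operating and design parameters as inputs; here the inputs and target are placeholders.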

    A Comprehensive Survey on Rare Event Prediction

    Rare event prediction involves identifying and forecasting events with a low probability using machine learning and data analysis. Due to imbalanced data distributions, where the frequency of common events vastly outweighs that of rare events, it requires specialized methods within each step of the machine learning pipeline, i.e., from data processing to algorithms to evaluation protocols. Predicting the occurrence of rare events is important for real-world applications, such as Industry 4.0, and is an active research area in statistics and machine learning. This paper comprehensively reviews the current approaches to rare event prediction along four dimensions: rare event data, data processing, algorithmic approaches, and evaluation approaches. Specifically, we consider 73 datasets from different modalities (i.e., numerical, image, text, and audio), four major categories of data processing, five major algorithmic groupings, and two broader evaluation approaches. This paper aims to identify gaps in the current literature, highlight the challenges of predicting rare events, and suggest potential research directions that can help guide practitioners and researchers.
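As a concrete instance of the data-processing dimension the survey covers, here is naive random oversampling of the minority class, one of the simplest rebalancing strategies for imbalanced data (the data are synthetic):

```python
import numpy as np

def random_oversample(X, y, rng):
    # Naive random oversampling: resample every class with replacement
    # up to the size of the largest class so the training set is balanced.
    classes, counts = np.unique(y, return_counts=True)
    n_max = counts.max()
    X_out, y_out = [], []
    for c in classes:
        idx = np.flatnonzero(y == c)
        pick = rng.choice(idx, size=n_max, replace=True)
        X_out.append(X[pick])
        y_out.append(y[pick])
    return np.vstack(X_out), np.concatenate(y_out)

# Synthetic dataset with roughly 2% rare events
rng = np.random.default_rng(5)
X = rng.standard_normal((1000, 4))
y = (rng.uniform(size=1000) < 0.02).astype(int)
X_bal, y_bal = random_oversample(X, y, rng)
```

Oversampling is only one of the categories the survey reviews; duplicated minority samples risk overfitting, which is why synthetic-sample and algorithm-level alternatives are also covered.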

    The 8th International Conference on Time Series and Forecasting

    The aim of ITISE 2022 is to create a friendly environment that can lead to the establishment or strengthening of scientific collaborations and exchanges among attendees. Therefore, ITISE 2022 is soliciting high-quality original research papers (including significant works-in-progress) on any aspect of time series analysis and forecasting, in order to motivate the generation and use of new knowledge, computational techniques, and methods for forecasting in a wide range of fields.