
    An ensemble of GRNN networks for solving regression problems with increased accuracy

    The effective solution of regression problems is an important task for e-commerce, medicine, business analytics, and many other industries. In recent years, demand for artificial intelligence methods for solving regression problems has increased rapidly. This can be explained by the need to work with large datasets or with complex interrelationships between multiple independent variables. The general regression neural network (GRNN) is one option for solving this problem. However, this computational intelligence method alone does not provide high accuracy, which imposes a number of limitations. A new method based on an ensemble of general regression neural networks is developed to increase prediction accuracy. The main advantages and disadvantages of neural networks of this type are described in detail, and a brief description of the operation of the general regression neural network is given. An algorithmic implementation of the developed ensemble is provided, and increased prediction accuracy is achieved with it. A software solution implementing the described method is developed using libraries of the Python programming language. Experimental modeling of the method is conducted on real data for a regression problem. The developed method solves the problem with high efficiency, as established using both the mean absolute percentage error and the standard error. The method is compared with existing ones: Wiener polynomial approximation based on stochastic gradient descent, the general regression neural network, and a modified AdaBoost algorithm. The developed method is shown experimentally to achieve the highest accuracy, on both accuracy indicators, among all the methods considered in the work. In particular, it provides more than 3.4%, 4.3%, and 8.3% (MAPE) higher accuracy compared to the existing methods, respectively. The developed method can be used to obtain high-precision solutions in applications of e-commerce, medicine, materials science, business analytics, and other fields. Further research will aim to develop a hybrid high-speed computational intelligence system combining the developed method with the successive geometric transformations model (SGTM).
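
    The abstract describes the ensemble only at a high level, so below is a minimal illustrative sketch (not the authors' code) of a GRNN ensemble in Python, the language the paper uses. It assumes the members differ in their smoothing factor and in a bootstrap resample of the training data, and that predictions are averaged; the class names and sigma values are invented for illustration.

```python
# A GRNN predicts y(x) as a Gaussian-kernel-weighted average of stored
# training targets (Specht, 1991); the ensemble averages members that
# differ in smoothing factor sigma and in the bootstrap sample they memorize.
import numpy as np

class GRNN:
    def __init__(self, sigma=0.5):
        self.sigma = sigma

    def fit(self, X, y):
        self.X, self.y = np.asarray(X, float), np.asarray(y, float)
        return self

    def predict(self, X):
        X = np.asarray(X, float)
        # squared Euclidean distances between queries and stored patterns
        d2 = ((X[:, None, :] - self.X[None, :, :]) ** 2).sum(axis=2)
        w = np.exp(-d2 / (2.0 * self.sigma ** 2))
        return (w @ self.y) / (w.sum(axis=1) + 1e-12)

class GRNNEnsemble:
    def __init__(self, sigmas=(0.1, 0.3, 0.5, 1.0), seed=0):
        self.sigmas = sigmas
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        X, y = np.asarray(X, float), np.asarray(y, float)
        n = len(X)
        # one member per smoothing factor, each on a bootstrap resample
        self.members = []
        for s in self.sigmas:
            idx = self.rng.integers(0, n, n)
            self.members.append(GRNN(s).fit(X[idx], y[idx]))
        return self

    def predict(self, X):
        # plain average of member predictions
        return np.mean([m.predict(X) for m in self.members], axis=0)
```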

    Missing health data pattern matching technique for continuous remote patient monitoring

    Remote patient monitoring (RPM) has been gaining popularity recently. However, health data acquisition remains a significant challenge in patient monitoring: in continuous RPM, health data may be lost during transmission. Missing data compromise the quality and reliability of patient risk assessment. Several studies have suggested techniques for analyzing missing data; however, many are unsuitable for RPM, as they neglect the variability of missing data and produce biased results with imputation. A holistic approach must therefore consider the correlation and variability of the various vitals and avoid biased imputation. This paper proposes a coherent computation pattern-matching technique to identify and predict missing data patterns. The performance of the proposed approach is evaluated using data collected from a field trial. Results show that the technique can effectively identify and predict missing patterns.
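
    The paper's coherent computation pattern-matching technique is not specified in the abstract, so the sketch below only illustrates the general idea of encoding and predicting missingness patterns across vitals. The vital names, the binary pattern encoding, and the frequency-based next-pattern prediction are all assumptions for illustration.

```python
# Encode each transmission window's missingness as a binary pattern over the
# vitals, learn how often one pattern follows another, and predict the most
# likely next pattern. All names here are hypothetical.
from collections import Counter, defaultdict

def missing_pattern(record, vitals=("hr", "spo2", "bp", "temp")):
    # 1 = value missing for that vital in this transmission window
    return tuple(int(record.get(v) is None) for v in vitals)

def fit_transitions(windows):
    # count how often each missingness pattern follows another
    trans = defaultdict(Counter)
    patterns = [missing_pattern(w) for w in windows]
    for prev, nxt in zip(patterns, patterns[1:]):
        trans[prev][nxt] += 1
    return trans

def predict_next(trans, current):
    # most frequent successor pattern observed so far, if any
    if trans[current]:
        return trans[current].most_common(1)[0][0]
    return current

# usage on three mock windows
windows = [{"hr": 72, "spo2": 98, "bp": None, "temp": 36.6},
           {"hr": 74, "spo2": None, "bp": None, "temp": 36.7},
           {"hr": 75, "spo2": 97, "bp": 120, "temp": None}]
model = fit_transitions(windows)
print(predict_next(model, missing_pattern(windows[-1])))
```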

    Smart models to improve agrometeorological estimations and predictions

    The world population, which is constantly growing, is estimated to reach 9.7 billion people in 2050. This increase, combined with the rise in living standards and the climate emergency (increasing temperatures, intensification of the water cycle, etc.), presents us with the enormous challenge of managing increasingly scarce resources in a sustainable way. The agricultural sector must face important challenges such as improving natural resource management, reducing environmental degradation, and ensuring food and nutritional security. All of this is conditioned by water scarcity and aridity, which are limiting factors in crop production. To guarantee sustainable agricultural production under these conditions, all decisions must be based on knowledge, innovation, and the digitization of agriculture, so as to ensure the resilience of agroecosystems, especially in arid, semi-arid, and dry sub-humid environments where the water deficit is structural. Therefore, this work focuses on improving the precision of current agrometeorological models by applying artificial intelligence techniques. These models can provide accurate estimates and predictions of key variables such as precipitation, solar radiation, and reference evapotranspiration. In this way, it is possible to promote more sustainable agricultural strategies, for example by reducing water and energy consumption. In addition, the number of measurements required as input parameters for these models has been reduced, making them more accessible and applicable in rural areas and developing countries that cannot afford the high cost of installing, calibrating, and maintaining complete automatic weather stations. This approach can help provide valuable information to technicians, farmers, managers, and policy makers in key water and agricultural planning areas. This doctoral thesis has developed and validated new methodologies based on artificial intelligence that improve the precision of crucial variables in the agrometeorological field: precipitation, solar radiation, and reference evapotranspiration. Specifically, prediction systems and gap-filling models for precipitation at different scales have been built using neural networks. Models for estimating solar radiation using only thermal parameters have also been developed and validated in areas with climatic characteristics similar to the training location, without needing to be geographically in the same region or country. Similarly, models for estimating and predicting reference evapotranspiration at the local and regional level have been developed using only temperature data for the entire process: regionalization, training, and validation. Finally, an international open-source Python library (AgroML) has been created to facilitate the development and application of artificial intelligence models, not only for the agrometeorological sector but also for any supervised model that improves decision-making in other areas of interest.
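
    As a concrete reference point for the temperature-only models the thesis develops, the sketch below implements the classical Hargreaves-Samani equations, which estimate solar radiation and reference evapotranspiration from temperature extremes and extraterrestrial radiation alone. This is the standard textbook baseline, not the AgroML API; the numeric inputs are illustrative.

```python
# Hargreaves-Samani temperature-only estimates of solar radiation (Rs) and
# reference evapotranspiration (ET0), given extraterrestrial radiation Ra.
import math

def hs_solar_radiation(tmax, tmin, ra, krs=0.16):
    """Rs [MJ m-2 day-1]; krs ~0.16 inland, ~0.19 coastal."""
    return krs * math.sqrt(tmax - tmin) * ra

def hargreaves_et0(tmax, tmin, ra):
    """ET0 [mm day-1]; the 0.408 factor converts Ra from MJ m-2 day-1 to mm day-1."""
    tmean = (tmax + tmin) / 2.0
    return 0.0023 * (tmean + 17.8) * math.sqrt(tmax - tmin) * 0.408 * ra

# example: a warm inland summer day with Ra = 40 MJ m-2 day-1
print(hs_solar_radiation(34.0, 18.0, 40.0))  # ~25.6 MJ m-2 day-1
print(hargreaves_et0(34.0, 18.0, 40.0))      # ~6.6 mm day-1
```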

    Embedded Data Imputation for Environmental Intelligent Sensing: A Case Study

    Recent developments in cloud computing and the Internet of Things have enabled smart environments, in terms of both monitoring and actuation. Unfortunately, this often results in unsustainable cloud-based solutions, whereby, in the interest of simplicity, a wealth of raw (unprocessed) data are pushed from sensor nodes to the cloud. Herein, we advocate the use of machine learning at sensor nodes to perform essential data-cleaning operations, to avoid the transmission of corrupted (often unusable) data to the cloud. Starting from a public pollution dataset, we investigate how two machine learning techniques (kNN and missForest) may be embedded on a Raspberry Pi to perform data imputation, without impacting the data collection process. Our experimental results demonstrate the accuracy and computational efficiency of edge-learning methods for filling in missing data values in corrupted data series. We find that kNN and missForest correctly impute up to 40% of randomly distributed missing values, with a density distribution of values that is indistinguishable from the benchmark. We also show a trade-off analysis for the case of bursty missing values, with recoverable blocks of up to 100 samples. Computation times are shorter than sampling periods, allowing for data imputation at the edge in a timely manner. Our work is supported by the Open Access Publishing Fund of the Free University of Bozen-Bolzano.
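
    As a minimal sketch of the kNN imputation step described above, the following uses scikit-learn's KNNImputer on mock sensor data with roughly 40% of the values missing in three of four columns (one column is kept complete so every row retains a distance anchor). The data layout and missingness rate are illustrative, not the paper's pollution dataset.

```python
# Impute randomly missing values with the mean of the 5 nearest rows,
# then check reconstruction error against the known ground truth.
import numpy as np
from sklearn.impute import KNNImputer

rng = np.random.default_rng(42)
data = rng.normal(50.0, 10.0, size=(500, 4))       # mock pollution readings

mask = np.zeros(data.shape, dtype=bool)
mask[:, 1:] = rng.random((500, 3)) < 0.4           # ~40% missing, cols 1-3
corrupted = data.copy()
corrupted[mask] = np.nan

imputer = KNNImputer(n_neighbors=5)
filled = imputer.fit_transform(corrupted)

print("MAE of imputed cells:", np.abs(filled[mask] - data[mask]).mean())
```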

    A Comparison of Feature Selection and Forecasting Machine Learning Algorithms for Predicting Glycaemia in Type 1 Diabetes Mellitus

    Type 1 diabetes mellitus (DM1) is a metabolic disease caused by a fall in pancreatic insulin production, resulting in chronic hyperglycemia. DM1 subjects usually have to assess their blood glucose levels several times a day, employing capillary glucometers to monitor blood glucose dynamics. In recent years, advances in technology have allowed the creation of revolutionary biosensors and continuous glucose monitoring (CGM) techniques, enabling a subject's blood glucose level to be monitored in real time. On the other hand, few attempts have been made to apply machine learning techniques to predicting glycaemia levels, and dealing with a database containing such a large number of variables is problematic. To the best of the authors' knowledge, proper feature selection (FS), the stage preceding the application of predictive algorithms, has not been discussed and compared in depth in past research on forecasting glycaemia. Therefore, in order to assess how a proper FS stage can improve the accuracy of the forecast glycaemia, this work develops six FS techniques alongside four predictive algorithms and applies them to a full dataset of biomedical features related to glycaemia, harvested through a wide-ranging passive monitoring process involving 25 patients with DM1 in practical real-life scenarios. From the results obtained, we find that Random Forest (RF), used as both the predictive algorithm and the FS strategy, offers the best average performance (root median square error, RMSE = 18.54 mg/dL) across the 12 prediction horizons considered (up to 60 min in steps of 5 min), while Support Vector Machines (SVM) show the best accuracy as a forecasting algorithm when averaging over the six FS techniques applied (RMSE = 20.58 mg/dL).
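
    Below is a hedged sketch of the two-stage pipeline the abstract describes, with Random Forest used first for feature selection (via impurity-based importances) and then as the forecaster. The synthetic features, the top-5 cutoff, and the single horizon are illustrative assumptions, not details of the 25-patient study.

```python
# Stage 1: rank candidate biomedical features with RF importances.
# Stage 2: forecast glycaemia on the reduced feature set.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))                    # 20 candidate features
y = 5 * X[:, 0] + 2 * X[:, 3] + rng.normal(scale=1.0, size=1000)  # glycaemia proxy

selector = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
keep = np.argsort(selector.feature_importances_)[-5:]   # top-5 features

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[:, keep], y)
pred = model.predict(X[:, keep])

# root *median* square error, matching the metric named in the abstract
rmse = np.sqrt(np.median((pred - y) ** 2))
print("selected features:", keep, "train RMSE:", round(rmse, 2))
```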

    Machine Learning for Microcontroller-Class Hardware -- A Review

    Advancements in machine learning have opened a new opportunity to bring intelligence to low-end Internet-of-Things nodes such as microcontrollers. Conventional machine learning deployment has a high memory and compute footprint, hindering direct deployment on ultra resource-constrained microcontrollers. This paper highlights the unique requirements of enabling onboard machine learning for microcontroller-class devices. Researchers use a specialized model development workflow for resource-limited applications to ensure the compute and latency budget is within the device limits while still maintaining the desired performance. We characterize a closed-loop, widely applicable workflow of machine learning model development for microcontroller-class devices and show that several classes of applications adopt a specific instance of it. We present both qualitative and numerical insights into different stages of model development by showcasing several use cases. Finally, we identify the open research challenges and unsolved questions demanding careful consideration moving forward. Comment: Accepted for publication in the IEEE Sensors Journal.
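
    One concrete instance of the model-compression stage such workflows include is post-training int8 quantization with TensorFlow Lite, sketched below for a small Keras model. The architecture and calibration data are placeholders; the conversion API calls themselves are standard TensorFlow Lite usage.

```python
# Convert a tiny Keras model to a fully int8-quantized TFLite flatbuffer,
# the format typically deployed on microcontroller-class devices.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(10,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])

def representative_data():
    # calibration samples drive the int8 scale/zero-point estimation
    for _ in range(100):
        yield [np.random.rand(1, 10).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
tflite_model = converter.convert()

print("quantized model size:", len(tflite_model), "bytes")
```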

    Algorithm development on the use of feedback signals in the context of gasoline HCCI combustion

    Homogeneous Charge Compression Ignition (HCCI) combustion is a promising research subject due to its high efficiency and low emissions. These characteristics are highly desirable given the global picture of increased energy requirements coupled with serious environmental implications. However, one of the main considerations in implementing HCCI is its control strategies, which are not as straightforward as in conventional Spark Ignition (SI) or Compression Ignition (CI) engines. For closed-loop control strategies to be successful, appropriate signals must be selected. In this research, experimental in-cylinder signals have been collected for pressure and ion current. These have been processed and evaluated with regard to their suitability for HCCI control. During this process, physics-based models have been developed both for treating the experimental data and for simulating theoretical cases. Using these tools, the behaviour of unstable HCCI operation has also been explored.
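
    As an example of how an in-cylinder pressure signal can be turned into a feedback quantity for HCCI control, the sketch below computes the net heat-release rate from the first law and locates CA50, the crank angle of 50% cumulative heat release, which is a commonly used combustion-phasing feedback signal. This is standard analysis rather than the thesis's exact models; the engine geometry and pressure trace are placeholders.

```python
# Net heat-release rate: dQ = gamma/(gamma-1) * p * dV + 1/(gamma-1) * V * dp
import numpy as np

def cylinder_volume(theta_deg, vc=5e-5, vd=5e-4, rod_ratio=3.5):
    # slider-crank relation; theta = 0 at top dead centre
    th = np.radians(theta_deg)
    return vc + (vd / 2) * (rod_ratio + 1 - np.cos(th)
                            - np.sqrt(rod_ratio**2 - np.sin(th)**2))

def ca50(theta_deg, p, gamma=1.33):
    """Crank angle [deg] at 50% cumulative net heat release."""
    v = cylinder_volume(theta_deg)
    dv = np.gradient(v, theta_deg)
    dp = np.gradient(p, theta_deg)
    dq = gamma / (gamma - 1) * p * dv + 1 / (gamma - 1) * v * dp
    # keep only positive (heat-release-dominated) portions for a simple cumsum
    q = np.cumsum(np.clip(dq, 0, None))
    return theta_deg[np.searchsorted(q, 0.5 * q[-1])]

# usage with a crude synthetic pressure trace peaking just after TDC
theta = np.linspace(-60.0, 60.0, 241)
p = 2e6 + 3e6 * np.exp(-((theta - 5.0) / 10.0) ** 2)   # Pa
print("CA50 at", ca50(theta, p), "deg aTDC")
```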

    AI-big data analytics for building automation and management systems: a survey, actual challenges and future perspectives

    In theory, building automation and management systems (BAMSs) can provide all the components and functionalities required for analyzing and operating buildings. In reality, however, these systems can only ensure the control of heating, ventilation and air conditioning (HVAC) systems. Many other tasks are therefore left to the operator, e.g. evaluating buildings’ performance, detecting abnormal energy consumption, identifying the changes needed to improve efficiency, ensuring the security and privacy of end-users, etc. To that end, there has been a movement toward developing artificial intelligence (AI) big data analytics tools, as they offer new and tailor-made solutions that are highly appropriate for practical building management. Typically, they can help the operator in (i) analyzing the vast amounts of data produced by connected equipment; and (ii) making intelligent, efficient, and on-time decisions to improve building performance. This paper presents a comprehensive systematic survey of the use of AI-big data analytics in BAMSs. It covers various AI-based tasks, e.g. load forecasting, water management, indoor environmental quality monitoring, occupancy detection, etc. The first part of this paper adopts a well-designed taxonomy to overview existing frameworks, and a comprehensive review is conducted of different aspects, including the learning process, building environment, computing platforms, and application scenarios. A critical discussion then identifies current challenges. The second part aims at providing the reader with insights into the real-world application of AI-big data analytics. Three case studies demonstrating the use of AI-big data analytics in BAMSs are presented, focusing on energy anomaly detection in residential and office buildings and on energy and performance optimization in sports facilities. Lastly, future directions and valuable recommendations are identified to improve the performance and reliability of BAMSs in intelligent buildings.
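
    As a hedged illustration of the energy anomaly detection case studies mentioned above, the sketch below flags abnormal consumption readings with an Isolation Forest on synthetic hourly load data. The features, the injected anomalies, and the 2% contamination rate are assumptions for illustration, not values taken from the surveyed systems.

```python
# Flag abnormal energy readings in an hour-of-day vs. load feature space.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
hours = np.tile(np.arange(24), 60)                       # 60 days, hourly
load = 20 + 10 * np.sin(hours / 24 * 2 * np.pi) + rng.normal(0, 1, hours.size)
load[rng.choice(load.size, 20, replace=False)] += 25     # inject anomalies

X = np.column_stack([hours, load])                       # hour-of-day + kWh
detector = IsolationForest(contamination=0.02, random_state=0).fit(X)
flags = detector.predict(X)                              # -1 marks anomalies

print("flagged readings:", int(np.sum(flags == -1)))
```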