6 research outputs found

    Extreme learning machine collocation for the numerical solution of elliptic PDEs with sharp gradients

    We present a new numerical method, based on machine learning and in particular on the concept of Extreme Learning Machines, to approximate the solution of linear elliptic partial differential equations by collocation. We show that a feedforward neural network with a single hidden layer, sigmoidal transfer functions, and fixed, random internal weights and biases can compute a sufficiently accurate collocated solution for such problems. We discuss how to set the range of values for the weights between the input and hidden layer and for the biases of the hidden layer in order to obtain a good underlying approximating subspace, and we explore the required number of collocation points. We demonstrate the efficiency of the proposed method on several one-dimensional diffusion-advection-reaction benchmark problems that exhibit steep behavior, such as boundary layers. We point out that there is no need for iterative training of the network, as the proposed numerical approach results in a linear problem that can be easily solved using least squares and regularization. Numerical results show that the proposed machine learning method achieves good numerical accuracy, outperforming central finite differences, thus bypassing the time-consuming training phase of other machine learning approaches.
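
    As a rough illustration of the mechanics described in the abstract (not the authors' code), the sketch below applies ELM-style collocation to a 1D advection-diffusion problem with a boundary layer; the weight/bias range, neuron count, collocation grid, and regularization value are illustrative assumptions.

    # Minimal sketch: ELM collocation for  -eps*u''(x) + u'(x) = 1  on (0, 1),
    # with u(0) = u(1) = 0 (boundary layer near x = 1 for small eps).
    # Hidden weights/biases are fixed and random; only the output weights are
    # fitted by regularized least squares, so no iterative training is needed.
    import numpy as np

    rng = np.random.default_rng(0)
    eps = 1e-2                        # diffusion coefficient (sharp layer)
    N, M = 200, 400                   # hidden neurons, collocation points
    w = rng.uniform(-40.0, 40.0, N)   # fixed internal weights (range is a tunable choice)
    b = rng.uniform(-40.0, 40.0, N)   # fixed biases

    def sig(z):   return 1.0 / (1.0 + np.exp(-z))
    def dsig(z):  s = sig(z); return s * (1.0 - s)
    def d2sig(z): s = sig(z); return s * (1.0 - s) * (1.0 - 2.0 * s)

    x = np.linspace(0.0, 1.0, M)[:, None]   # collocation points
    Z = x * w + b                           # (M, N) pre-activations

    # Each row applies the differential operator to one basis function sigma(w*x + b).
    A_pde = -eps * (w**2) * d2sig(Z) + w * dsig(Z)
    f_pde = np.ones(M)

    # Boundary conditions u(0) = u(1) = 0 appended as extra least-squares rows.
    xb = np.array([[0.0], [1.0]])
    A_bc = sig(xb * w + b)
    f_bc = np.zeros(2)

    A = np.vstack([A_pde, A_bc])
    f = np.concatenate([f_pde, f_bc])

    # Regularized least squares for the output weights c (rcond acts as the regularizer).
    c = np.linalg.lstsq(A, f, rcond=1e-12)[0]

    u = lambda xx: sig(np.asarray(xx)[:, None] * w + b) @ c
    print(u(np.array([0.0, 0.5, 0.9, 0.99, 1.0])))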

    Methodology for the development of data clustering techniques using machine learning

    Context: Today, the use of large amounts of data acquired from electronic, optical, or other measurement devices and equipment raises the problem of data analysis when extracting the information of interest from the acquired samples. In such cases, grouping the data correctly is necessary to obtain relevant and accurate information that evidences the physical phenomenon under study. Methodology: This work presents the development and evolution of a five-stage methodology for building a data clustering technique using machine learning and artificial intelligence. It consists of five phases, named analysis, design, development, evaluation, and distribution, uses open-source standards, and is based on unified languages for the interpretation of software in engineering. Results: The methodology was validated through the creation of two data analysis methods, with an average development time of 20 weeks, obtaining precision values 40% and 29% higher than those of the classic k-means and fuzzy c-means clustering algorithms. Additionally, a large-scale experimentation methodology based on automated unit tests managed to cluster, label, and validate 3.6 million samples, accumulated over a total of 100 runs of groups of 900 samples, in approximately 2 hours. Conclusions: The results of the research show that the methodology is intended to guide the systematic development of data clustering techniques for specific problems involving databases of samples with quantitative attributes, such as channel parameters in a communication system or image segmentation using the RGB values of the pixels. Moreover, when both software and hardware are developed, execution is more versatile than in purely theoretical applications.
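
    For context on the classical baselines mentioned above, the following sketch (purely illustrative, not the authors' methods or data) runs a plain k-means clustering over the RGB values of the pixels of a synthetic image, the kind of quantitative-attribute problem the methodology targets.

    # Illustrative baseline: k-means (Lloyd's algorithm) on RGB pixel values.
    import numpy as np

    def kmeans(points, k, iters=50, seed=0):
        """Cluster `points` (n_samples, n_features) into k groups."""
        rng = np.random.default_rng(seed)
        centers = points[rng.choice(len(points), k, replace=False)]
        for _ in range(iters):
            # Assign each point to its nearest center.
            d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
            labels = d.argmin(axis=1)
            # Recompute centers; keep the old center if a cluster goes empty.
            new = np.array([points[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
            if np.allclose(new, centers):
                break
            centers = new
        return labels, centers

    # Example: segment a synthetic 64x64 RGB image dominated by three colours.
    rng = np.random.default_rng(1)
    base = np.array([[255, 0, 0], [0, 255, 0], [0, 0, 255]], dtype=float)
    idx = rng.integers(0, 3, size=(64, 64))
    image = base[idx] + rng.normal(0.0, 20.0, size=(64, 64, 3))

    pixels = image.reshape(-1, 3)                     # one sample per pixel, RGB attributes
    labels, centers = kmeans(pixels, k=3)
    segmented = centers[labels].reshape(image.shape)  # each pixel replaced by its cluster colour
    print(centers.round(1))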

    Field programmable gate array based sigmoid function implementation using differential lookup table and second order nonlinear function

    Artificial neural networks (ANNs) are an established artificial intelligence technique widely used to solve numerous problems, such as classification and clustering, in various fields. However, the major problem with ANNs is execution time: an ANN takes a long time to execute when it has a huge number of neurons. To overcome this, the ANN is implemented in hardware, namely a field-programmable gate array (FPGA). However, implementing the ANN in an FPGA leads to a new problem related to the sigmoid function. Often used as the activation function of an ANN, the sigmoid function cannot be implemented directly in an FPGA. Owing to its accuracy, the lookup table (LUT) has commonly been used to implement the sigmoid function in an FPGA; however, obtaining high LUT accuracy is expensive, particularly in terms of FPGA memory requirements. The second-order nonlinear function (SONF) is an appealing replacement for the LUT due to its small memory requirement, although there is a trade-off between accuracy and memory size. Taking advantage of both approaches, this thesis proposes a combination of the SONF and a modified LUT, namely the differential lookup table (dLUT). The deviation values between the SONF and the sigmoid function are used to create the dLUT. The SONF is used as a first step to approximate the sigmoid function; the value stored in the dLUT is then added or subtracted as a second step, as demonstrated via simulation. This combination successfully reduces the deviation, and the reduction is significant compared with previous implementations such as the SONF and the LUT alone. Further simulation was carried out to evaluate the accuracy of an ANN that uses the proposed method as its sigmoid function when detecting an object in an indoor environment. The results show that the proposed method produces output almost as accurate as a software implementation when detecting the target in indoor positioning problems. Therefore, the proposed method can be applied in any field that demands fast processing and high accuracy in the sigmoid function output.
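
    The following Python model (an illustrative sketch, not the thesis' FPGA/HDL implementation) mimics the described two-step scheme: a common piecewise second-order approximation stands in for the SONF, and a small table of sigmoid-minus-SONF deviations plays the role of the dLUT; the table size and input grid are assumptions that set the memory/accuracy trade-off.

    # Step 1: approximate the sigmoid with a cheap second-order nonlinear function (SONF).
    # Step 2: correct it with a stored deviation from a small differential LUT (dLUT).
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def sonf(x):
        """A common piecewise second-order sigmoid approximation on [-4, 4]."""
        x = np.clip(x, -4.0, 4.0)
        return np.where(x >= 0.0,
                        1.0 - 0.5 * (1.0 - x / 4.0) ** 2,
                        0.5 * (1.0 + x / 4.0) ** 2)

    # Build the dLUT: deviations sigmoid - SONF sampled on a coarse grid
    # (64 entries here; entry count is the memory/accuracy trade-off).
    GRID = np.linspace(-4.0, 4.0, 64)
    DLUT = sigmoid(GRID) - sonf(GRID)

    def sigmoid_sonf_dlut(x):
        """SONF first step, then add the deviation stored at the nearest grid point."""
        idx = np.clip(np.round((x + 4.0) / 8.0 * (len(GRID) - 1)).astype(int),
                      0, len(GRID) - 1)
        return sonf(x) + DLUT[idx]

    # Compare accuracy over the SONF domain [-4, 4].
    x = np.linspace(-4.0, 4.0, 1001)
    print("max |error|, SONF only  :", np.abs(sigmoid(x) - sonf(x)).max())
    print("max |error|, SONF + dLUT:", np.abs(sigmoid(x) - sigmoid_sonf_dlut(x)).max())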

    Deep Learning Paradigm for Cardiovascular Disease/Stroke Risk Stratification in Parkinson’s Disease Affected by COVID‐19: A Narrative Review

    Background and Motivation: Parkinson's disease (PD) is one of the most serious, non-curable, and expensive-to-treat diseases. Recently, machine learning (ML) has been shown to be able to predict cardiovascular/stroke risk in PD patients. The presence of COVID-19 causes ML systems to become severely non-linear and poses challenges in cardiovascular/stroke risk stratification. Further, due to comorbidity, sample size constraints, and poor scientific and clinical validation techniques, there have been no well-explained ML paradigms. Deep neural networks are powerful learning machines that generalize non-linear conditions. This study presents a novel investigation of deep learning (DL) solutions for CVD/stroke risk prediction in PD patients affected by COVID-19. Method: The PRISMA search strategy was used to select 292 studies closely associated with the effect of PD on CVD risk in the COVID-19 framework. We study the hypothesis that PD in the presence of COVID-19 can cause more harm to the heart and brain than in non-COVID-19 conditions. COVID-19 lung damage severity can be used as a covariate during DL training model designs. We therefore propose a DL model for (i) the estimation of COVID-19 lesions in computed tomography (CT) scans and (ii) the combination of the covariates of PD, COVID-19 lesions, office and laboratory arterial atherosclerotic image-based biomarkers, and medicine usage of PD patients for the design of DL point-based models for CVD/stroke risk stratification. Results: We verified the feasibility of CVD/stroke risk stratification in PD patients in the presence of a COVID-19 environment. DL architectures such as long short-term memory (LSTM) and recurrent neural networks (RNN) were studied for CVD/stroke risk stratification, showing powerful designs. Lastly, we examined artificial intelligence bias and provided recommendations for early detection of CVD/stroke in PD patients in the presence of COVID-19. Conclusion: DL is a very powerful tool for predicting CVD/stroke risk in PD patients affected by COVID-19. © 2022 by the authors. Licensee MDPI, Basel, Switzerland.
