315 research outputs found

    Cellular Simultaneous Recurrent Networks for Image Processing

    Artificial neural networks are inspired by the abilities of humans and animals to learn and adapt. Feed-forward networks are both fast and powerful, and are particularly useful for statistical pattern recognition. These networks are inspired by portions of the brain such as the visual cortex. However, feed-forward networks have been shown inadequate for complex applications such as long-term optimization, reinforcement learning and image processing. Cellular Neural Networks (CNNs) are a type of recurrent network which have been used extensively for image processing. CNNs have shown limited success solving problems which involve topological relationships. Such problems include geometric transformations such as affine transformation and image registration. The Cellular Simultaneous Recurrent Network (CSRN) has been exploited to solve the 2D maze traversal problem, which is a long-term optimization problem with similar topological relations. From its inception, it has been speculated that the CSRN may have important implications in image processing. However, to date, very little work has been done to study CSRNs for image processing tasks. In this work, we investigate CSRNs for image processing. We propose a novel, generalized architecture for the CSRN suitable for generic image processing tasks. This architecture includes the use of sub-image processing, which greatly improves the efficacy of CSRNs for image processing. We demonstrate the application of the CSRN with this generalized architecture across a variety of image processing problems including pixel-level transformations, filtering, and geometric transformations. Results are evaluated and compared with standard MATLAB® functions.
To better understand the inner workings of the CSRN, we investigate the use of various CSRN cores, including: 1) the original Generalized Multi-Layered Perceptron (GMLP) core used by Pang and Werbos to solve the 2D maze traversal problem, 2) the Elman Simultaneous Recurrent Network (ESRN), and 3) a novel ESRN core with multi-layered feedback. We compare the functionality of these cores in image processing applications. Further, we introduce the application of the unscented Kalman filter (UKF) for training of the CSRN. Results are compared with the standard extended Kalman filter (EKF) training method of the CSRN. Finally, implications of the current findings and proposed research directions are presented.
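The distinguishing feature of a simultaneous recurrent network, as opposed to a time-lagged RNN, is that the recurrence is iterated to convergence for a single, fixed input before the output is read. A minimal sketch of that relaxation loop for one cell, assuming illustrative sizes, random weights, and a fixed iteration count (not the CSRN implementation described in the abstract):

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hid = 4, 8
W_in = rng.normal(scale=0.3, size=(n_hid, n_in))    # external-input weights
W_rec = rng.normal(scale=0.3, size=(n_hid, n_hid))  # recurrent (lateral) weights

def srn_cell(x, n_iters=20):
    """Relax the hidden state toward a fixed point for one fixed input x."""
    h = np.zeros(n_hid)
    for _ in range(n_iters):
        # Same input x at every iteration: recurrence over an internal
        # relaxation index, not over time steps of a sequence.
        h = np.tanh(W_in @ x + W_rec @ h)
    return h

x = rng.normal(size=n_in)
h = srn_cell(x)
print(h.shape)
```

In a cellular arrangement, one such cell would be replicated per pixel (or per sub-image), with the lateral weights shared among neighbors; the sketch above shows only the per-cell recurrence.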

    Application of neural networks to model double tube heat exchangers

    Final-year degree project (Treballs Finals de Grau) in Chemical Engineering, Facultat de Química, Universitat de Barcelona, 2022-2023. Tutor: David Curcó Cantarell. Artificial Intelligence is experiencing dramatic growth. AI models such as ChatGPT have become controversial topics as they continuously transform our world. Nevertheless, the true nature of AI is still not widely understood by society; it is often seen as an obscure and foreign concept, even mysterious and threatening. However, this couldn't be further from the truth: at their essence, AI models are just mathematical tools that rely on centuries-old knowledge, algebra and calculus. In this project, a neural network model was created to solve a chemical engineering problem: the predictive modeling of a double tube heat exchanger. The model is a neural network that predicts future system outputs (the inner stream's output temperature) from past values of the system's input variables (the inner and outer streams' input temperatures and the outer stream's flow rate). The data used to train the model was obtained from a simulation written in the Python programming language. The optimal design parameters of the neural network were then found experimentally by training different models and testing their performance, in three stages: a proof of concept, a general design stage, and a detailed design stage. The model successfully predicts the future state of the system with high accuracy while being roughly 3,000 times faster than a conventional simulation.
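The structure of such a predictive model is to regress the next output on a window of lagged input values. A minimal sketch with synthetic data, assuming three input channels, a lag depth of three, and a plain least-squares fit in place of the neural network (all names and numbers are illustrative, not the thesis's actual model):

```python
import numpy as np

rng = np.random.default_rng(1)

T = 500                      # time steps in the simulated record
u = rng.normal(size=(T, 3))  # columns: inner inlet temp, outer inlet temp, outer flow

# Synthetic target that depends only on past inputs (pure delays):
# y[t] = 0.5*u[t-1,0] + 0.3*u[t-2,1] - 0.2*u[t-3,2]
y = (0.5 * np.roll(u[:, 0], 1)
     + 0.3 * np.roll(u[:, 1], 2)
     - 0.2 * np.roll(u[:, 2], 3))

lags = 3
# Design matrix of lagged inputs: row i holds [u[t-1], u[t-2], u[t-3]] for t = lags + i
X = np.hstack([u[lags - k - 1 : T - k - 1] for k in range(lags)])
target = y[lags:]            # wraparound rows from np.roll are excluded

w, *_ = np.linalg.lstsq(X, target, rcond=None)  # fit the linear predictor
pred = X @ w
corr = np.corrcoef(pred, target)[0, 1]
print(round(corr, 3))  # → 1.0: the delayed relation is recovered exactly
```

A neural network replaces the linear map with a nonlinear one, but the data windowing (past inputs in, next output out) is the same.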

    Demystifying RNN with Mathematical and Graphical Insights with Application to Time Series Analysis and Natural Language Processing (NLP)

    Recurrent Neural Networks (RNNs) are a type of neural network that maintains a hidden state, preserving information from previous inputs, which enables them to comprehend and generate sequences. RNNs excel at tasks involving sequential data over time, particularly in Natural Language Processing (NLP), including applications like voice recognition, music generation, and image captioning. Training RNNs on long sequences presents several challenges. To tackle these challenges, advanced techniques such as Long Short-Term Memory (LSTM), Gated Recurrent Units (GRUs), and Bidirectional RNNs (BRNNs) are available in the literature. This article comprehensively explains the fundamental processes of RNNs, including the LSTM, GRU, and BRNN variants, with mathematical and graphical insights. The process is explained using detailed mathematical expressions and algorithmic constructs. The article includes a hands-on worked example demonstrating word prediction. Additionally, it includes an application involving sentiment analysis and compares the performance of simple RNNs, LSTMs, GRUs, and BRNNs using transfer learning. The article also includes an example of a time series analysis problem.
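The hidden-state recurrence these articles build on can be written in a few lines. A minimal sketch of a vanilla RNN forward pass, h_t = tanh(W_xh x_t + W_hh h_{t-1} + b), with illustrative sizes and random weights (no training, and none of the LSTM/GRU gating):

```python
import numpy as np

rng = np.random.default_rng(42)

n_in, n_hid = 3, 5
W_xh = rng.normal(scale=0.4, size=(n_hid, n_in))   # input-to-hidden weights
W_hh = rng.normal(scale=0.4, size=(n_hid, n_hid))  # hidden-to-hidden weights
b = np.zeros(n_hid)

def rnn_forward(xs):
    """Run the recurrence over a sequence; return the hidden state at each step."""
    h = np.zeros(n_hid)
    hs = []
    for x in xs:  # one step per sequence element
        # The previous h feeds back in, carrying information forward in time.
        h = np.tanh(W_xh @ x + W_hh @ h + b)
        hs.append(h)
    return np.stack(hs)

seq = rng.normal(size=(7, n_in))  # a length-7 input sequence
hs = rnn_forward(seq)
print(hs.shape)  # one hidden state per time step
```

LSTM and GRU cells replace the single tanh update with gated updates to ease gradient flow over long sequences, and a BRNN runs two such recurrences, one in each direction, concatenating their states.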

    A systematic review on sequence-to-sequence learning with neural network and its models

    We present a systematic survey of sequence-to-sequence learning with neural networks and its models. The primary aim of this report is to consolidate knowledge of sequence-to-sequence neural networks and to identify the best approaches to implementing them. Three models are most commonly used in sequence-to-sequence applications: recurrent neural networks (RNNs), connectionist temporal classification (CTC), and attention models. To conduct the survey, we derived keywords from our research questions and used them to search for peer-reviewed papers, articles, and books in academic directories. Initial searches found 790 papers and scholarly works; applying selection criteria and the PRISMA methodology reduced the number of papers reviewed to 16. Each of the 16 articles was categorized by its contribution to the research questions and analyzed. Finally, the papers underwent a quality appraisal, with scores ranging from 83.3% to 100%. The proposed systematic review enabled us to collect, evaluate, analyze, and explore different approaches to implementing sequence-to-sequence neural network models, and it identifies their most common uses in machine learning. We followed a methodology that shows the potential of applying these models to real-world applications.