3 research outputs found

    Development of an online programming environment for natural computing (Desarrollo de entorno online de programación para computación natural)

    Full text link
    Master's in Research and Innovation in Information and Communications Technologies (i2-TIC). This work proposes a web-based programming environment for natural computing (CA and NEPs) built with Blockly. The platform provides simulators for two well-known natural computing systems: Cellular Automata (CA) and Networks of Evolutionary Processors (NEPs). The CA programming blocks presented in this work make it possible to design and implement several types of CA, including elementary cellular automata, 2D cellular automata, and nD cellular automata. The tool also provides a graphical representation of the CA grid, via projection, for any CA with three or more dimensions. A NEPs Blockly programming environment is also presented; it makes it possible to design and simulate NEPs. Blocks serve as a flexible user interface for entering NEP specifications and automatically generate standard XML configuration code, which is sent to the server side of the simulator for execution. The tool also provides a graphical representation of the static topology of the system. Both the CA and NEPs Blockly programming environments have been tested on several academic examples.

    The work presents an online simulation platform for natural computing algorithms using a visual programming tool, namely Blockly. The proposed platform provides software engineering tools for setting up algorithms as well as ease of use, especially for teaching these algorithms. The software engineering tools have been implemented for NEPs, since many software tools already exist for cellular automata. The software designed for NEPs is a set of blocks implementing several types of connections between nodes; these blocks reduce the time and complexity of setting up NEPs with, for instance, fully connected nodes. Cellular automata, on the other hand, were chosen to test how easily natural computing algorithms can be taught and learned, since they are a much better-known model. The test was conducted with students, teachers, and researchers. The results show that the CA Blockly simulator outperforms traditional manual methods of implementing CA, and that the proposed environment has desirable features such as ease of use and reduced learning time. The NEPs part of the system was tested on several applications; it provides a flexible design tool for NEPs and outperforms traditional XML coding in terms of ease of use and design time. In addition, we have designed specific high-level constructs that automate, to some extent, the specification of complex NEP topologies that would otherwise be built by hand; they can be considered embryonic software engineering tools for programming NEPs. Our tool is a generic platform for web-based implementation, with desirable features and a wide range of properties that could attract the scientific community to adopt and extend it in the future.
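
    To make the CA side concrete, the sketch below is a minimal elementary cellular automaton in Python. It only illustrates what such a simulator computes and is not the authors' Blockly-generated code; the rule number (110), the grid width, and the wrap-around boundary are arbitrary choices.

        # Minimal elementary CA sketch (illustrative; not the platform's code).
        def step(cells, rule):
            """Apply a Wolfram rule number to one generation of a 1D binary CA."""
            n = len(cells)
            out = []
            for i in range(n):
                # Neighborhood of cell i, with wrap-around boundaries.
                left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
                pattern = (left << 2) | (center << 1) | right  # index in 0..7
                out.append((rule >> pattern) & 1)              # rule bit for this pattern
            return out

        # Rule 110 on a 31-cell row starting from a single live cell.
        cells = [0] * 31
        cells[15] = 1
        for _ in range(15):
            print("".join("#" if c else "." for c in cells))
            cells = step(cells, 110)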

    Review on matrix pseudo-inverse using singular value decomposition-SVD and application to regression

    No full text
    Singular Value Decomposition (SVD) is one of the most important factorizations of real or complex matrices. In this paper, one of the most significant applications of the SVD, the matrix pseudo-inverse, is selected to be described and explained as a regression model. The experimental results, obtained by running the simple regression model and the SVD regression model based on the matrix pseudo-inverse on the same dataset (data points), show that the SVD pseudo-inverse regression results are realistic and close to those of the simple regression model. Two main cases are discussed: the pseudo-inverse of an invertible matrix and the pseudo-inverse of a non-invertible matrix. Both cases are discussed with illustrative examples showing the main approach used to compute the pseudo-inverse via the Singular Value Decomposition.
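
    As a rough sketch of the technique the paper describes, the pseudo-inverse can be formed from the SVD and applied to linear regression. This NumPy version is illustrative only (the paper's own implementation is not shown here), and the data points are invented:

        import numpy as np

        # Toy data points (invented for illustration): y is roughly 2x + 1 plus noise.
        x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
        y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

        # Design matrix for simple linear regression: columns [x, 1].
        A = np.column_stack([x, np.ones_like(x)])

        # Pseudo-inverse via SVD: A = U S V^T, so A+ = V S+ U^T, where S+
        # inverts only the nonzero singular values (this is what makes the
        # pseudo-inverse work for non-invertible matrices as well).
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        s_inv = np.where(s > 1e-12, 1.0 / s, 0.0)
        A_pinv = Vt.T @ np.diag(s_inv) @ U.T

        slope, intercept = A_pinv @ y   # least-squares fit; matches np.linalg.lstsq(A, y)
        print(slope, intercept)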

    A survey on deep learning tools dealing with data scarcity: definitions, challenges, solutions, tips, and applications

    No full text
    Data scarcity is a major challenge when training deep learning (DL) models. DL demands a large amount of data to achieve exceptional performance. Unfortunately, many applications have small or inadequate data to train DL frameworks. Usually, manual labeling is needed to provide labeled data, which typically involves human annotators with a vast background of knowledge. This annotation process is costly, time-consuming, and error-prone. Usually, every DL framework is fed a significant amount of labeled data to automatically learn representations. Ultimately, more data generally yields a better DL model, though performance is also application dependent. This issue is the main barrier preventing many applications from using DL. Having sufficient data is the first step toward any successful and trustworthy DL application. This paper presents a holistic survey of state-of-the-art techniques for training DL models under three challenges: small datasets, imbalanced datasets, and lack of generalization. The survey starts by listing the learning techniques. Next, the types of DL architectures are introduced. After that, state-of-the-art solutions to the lack of training data are listed, such as Transfer Learning (TL), Self-Supervised Learning (SSL), Generative Adversarial Networks (GANs), Model Architecture (MA), Physics-Informed Neural Networks (PINNs), and the Deep Synthetic Minority Oversampling Technique (DeepSMOTE). These solutions are followed by tips on the data acquisition needed prior to training, as well as recommendations for ensuring the trustworthiness of the training dataset. The survey ends with a list of applications that suffer from data scarcity; for each, several alternatives are proposed to generate more data, including Electromagnetic Imaging (EMI), Civil Structural Health Monitoring, Medical Imaging, Meteorology, Wireless Communications, Fluid Mechanics, Microelectromechanical Systems, and Cybersecurity. To the best of the authors' knowledge, this is the first review that offers a comprehensive overview of strategies to tackle data scarcity in DL.
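
    As one concrete example of the surveyed ideas, the sketch below implements the classic SMOTE-style interpolation step that DeepSMOTE applies in an autoencoder's learned latent space. This NumPy version works directly on feature vectors, is not the surveyed papers' code, and uses invented toy data:

        import numpy as np

        def smote_like_oversample(minority, n_new, k=3, seed=None):
            """Synthesize minority samples by interpolating between a random
            sample and one of its k nearest minority-class neighbors (the core
            SMOTE idea; DeepSMOTE does this in a learned latent space)."""
            rng = np.random.default_rng(seed)
            synthetic = []
            for _ in range(n_new):
                i = rng.integers(len(minority))
                x = minority[i]
                # Indices of the k nearest neighbors of x (excluding x itself).
                dists = np.linalg.norm(minority - x, axis=1)
                neighbors = np.argsort(dists)[1:k + 1]
                neighbor = minority[rng.choice(neighbors)]
                lam = rng.random()  # interpolation factor in [0, 1)
                synthetic.append(x + lam * (neighbor - x))
            return np.array(synthetic)

        # Tiny invented 2D minority class; generate 5 synthetic points.
        minority = np.array([[0.0, 0.0], [1.0, 0.2], [0.8, 1.0], [0.2, 0.9]])
        print(smote_like_oversample(minority, n_new=5, k=2, seed=0))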