2 research outputs found

    Analysis of Multi-Activation Layers in Neural Network Architectures

    A common practice in neural network architecture design is to build models in which each layer uses a single activation function applied uniformly to all of its nodes. This paper explores the use of multiple different activation functions within a single layer and analyzes the effect on the architecture's ability to generalize across datasets. This approach could allow neural networks to generalize complex data better and achieve higher performance than networks with uniform activation functions. The approach was tested on a fully connected neural network and compared to traditional models, an identical network with uniform activations, and an identical network with a different activation function for each layer. The models were evaluated on several problem types and datasets and analyzed to compare performance and training time.
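
    A minimal sketch of the multi-activation idea described above, written in PyTorch purely for illustration; the class name, the choice of activations, and the even split of units across them are assumptions, not details taken from the paper.

    import torch
    import torch.nn as nn

    class MultiActivationLinear(nn.Module):
        """Fully connected layer whose output units are partitioned among several activations."""

        def __init__(self, in_features, out_features,
                     activations=(torch.relu, torch.tanh, torch.sigmoid)):
            super().__init__()
            self.linear = nn.Linear(in_features, out_features)
            self.activations = activations

        def forward(self, x):
            z = self.linear(x)
            # Split the units into roughly equal chunks, apply a different
            # activation to each chunk, then reassemble the layer output.
            chunks = torch.chunk(z, len(self.activations), dim=-1)
            return torch.cat([act(c) for act, c in zip(self.activations, chunks)], dim=-1)

    # Example: a small fully connected network using mixed activations per layer.
    model = nn.Sequential(
        MultiActivationLinear(16, 32),
        MultiActivationLinear(32, 32),
        nn.Linear(32, 1),
    )
    print(model(torch.randn(4, 16)).shape)  # torch.Size([4, 1])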

    Implementation of Natural Language Processing Language Models to Generate Executable Code

    No full text
    Our scientific literature is abundant with mathematical concepts firmly grounded in proofs. The Curry-Howard correspondence establishes a connection between mathematical proofs and computer programs, suggesting that we can extract computer programs from mathematical literature and vice versa. Mathematical rigor varies among fields and authors, and those seeking to tackle the problem may be daunted by the layers of jargon and the innumerable notations they encounter. This work will analyze mathematics- and physics-based academic literature with various natural language processing techniques to generate the executable code required to test the provided equations and to precisely describe that code.
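
    The abstract does not describe its pipeline in detail, so the following is only a toy sketch of the final step: turning an equation already extracted from a paper into executable, testable code. SymPy stands in for the natural language processing stage, and the equation string, function name, and variable ordering are assumptions made for illustration.

    import sympy as sp

    def equation_to_function(equation_text, variables):
        """Parse an equation string like 'E = m * c**2' into a Python callable."""
        lhs, rhs = equation_text.split("=", 1)
        expr = sp.sympify(rhs)                     # symbolic right-hand side
        symbols = sp.symbols(variables)            # ordered input symbols
        func = sp.lambdify(symbols, expr, "math")  # compile to a plain Python function
        return lhs.strip(), func

    # Example: test the extracted mass-energy relation numerically.
    name, energy = equation_to_function("E = m * c**2", "m c")
    print(name, energy(2.0, 299792458.0))  # prints roughly E 1.8e+17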