SPN: A novel neural network architecture to improve the performance of MLPs

Abstract

The automated design of neural network architectures has emerged as a key frontier in modern machine learning, driven by the growing complexity of tasks and the scale of available data. Neural Architecture Search (NAS) [7] has enabled researchers to systematically explore vast design spaces, moving beyond manual trial-and-error to discover architectures that strike a balance between performance and efficiency. Despite this progress, many neural network architectures, such as multilayer perceptrons (MLPs) [23], remain limited by conventional connectivity patterns that restrict information flow to simple, hierarchical pathways. This thesis challenges and expands that architectural paradigm. It introduces Sarosh’s Perceptron Networks (SPNs), a novel approach that breaks free from the rigid layer-by-layer connectivity of traditional MLPs and gives neurons greater freedom to form cross-layer connections, yielding more complex architectures. By permitting more flexible and expressive patterns of neuron connectivity, SPNs seek to unlock new levels of model capability and generalization. This work investigates whether such architectural freedom yields meaningful improvements in performance and efficiency, and examines the implications for the future of neural network architecture design and for the field of NAS.
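
The abstract does not specify SPN's exact wiring, but one plausible reading of "cross-layer connections" is a network in which each layer draws on the activations of all earlier layers rather than only its immediate predecessor. The PyTorch sketch below illustrates that contrast with a conventional MLP; the class name, dimensions, and DenseNet-style concatenation scheme are illustrative assumptions, not the thesis's actual SPN construction.

import torch
import torch.nn as nn

class CrossLayerNet(nn.Module):
    # Hypothetical sketch: each hidden layer receives the concatenation of
    # the raw input and all previous hidden activations, instead of only
    # the preceding layer's output as in a standard MLP.
    def __init__(self, in_dim: int, hidden_dims: list[int], out_dim: int):
        super().__init__()
        self.layers = nn.ModuleList()
        fan_in = in_dim
        for h in hidden_dims:
            # Each layer sees every earlier representation.
            self.layers.append(nn.Linear(fan_in, h))
            fan_in += h
        self.head = nn.Linear(fan_in, out_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = [x]
        for layer in self.layers:
            # Concatenate all earlier activations before each layer,
            # breaking the strict layer-by-layer chain of an MLP.
            features.append(torch.relu(layer(torch.cat(features, dim=-1))))
        return self.head(torch.cat(features, dim=-1))

# Usage: a toy forward pass.
net = CrossLayerNet(in_dim=16, hidden_dims=[32, 32], out_dim=10)
print(net(torch.randn(4, 16)).shape)  # torch.Size([4, 10])

A conventional MLP would instead feed each nn.Linear only the previous layer's output, so fan_in would stay fixed at the preceding hidden width; the growing fan_in here is what realizes the cross-layer connectivity.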
