    Ant-based Neural Topology Search (ANTS) for Optimizing Recurrent Networks

    Hand-crafting effective and efficient structures for recurrent neural networks (RNNs) is a difficult, expensive, and time-consuming process. To address this challenge, we propose a novel neuro-evolution algorithm based on ant colony optimization (ACO), called Ant-based Neural Topology Search (ANTS), for directly optimizing RNN topologies. The procedure selects from multiple modern recurrent cell types, such as ∆-RNN, GRU, LSTM, MGU, and UGRNN cells, as well as recurrent connections that may span multiple layers and/or steps of time. In order to introduce an inductive bias that encourages the formation of sparser synaptic connectivity patterns, we investigate several variations of the core algorithm. We do so primarily by formulating different functions that drive the underlying pheromone simulation process (which mimic L1 and L2 regularization in standard machine learning), as well as by introducing ant agents with specialized roles (inspired by how real ant colonies operate): explorer ants that construct the initial feedforward structure, and social ants that select nodes from the feedforward connections to subsequently craft recurrent memory structures. We also incorporate communal intelligence, where the best weights found so far are shared across the ant colony for weight initialization, reducing the number of backpropagation epochs required to locally train candidate RNNs and speeding up the neuro-evolution process. Our results demonstrate that the sparser RNNs evolved by ANTS significantly outperform traditional one- and two-layer architectures consisting of modern memory cells, as well as the well-known NEAT algorithm. Furthermore, we improve upon prior state-of-the-art results on the time series dataset utilized in our experiments.
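    As a concrete illustration of the pheromone dynamics the abstract describes, the following is a minimal sketch of pheromone-guided edge selection with two regularization-like decay rules. All names (Edge, pick_edge, evaporate) and the exact update forms are illustrative assumptions, not the paper's implementation.

        import random
        from dataclasses import dataclass

        @dataclass
        class Edge:
            """One candidate connection in the dense search graph."""
            src: int
            dst: int
            time_skip: int        # 0 = feedforward, >0 = recurrent across time steps
            pheromone: float = 1.0

        def pick_edge(candidates):
            """Roulette-wheel selection: an ant picks an edge with probability
            proportional to the pheromone deposited on it (standard ACO)."""
            total = sum(e.pheromone for e in candidates)
            r, acc = random.uniform(0.0, total), 0.0
            for e in candidates:
                acc += e.pheromone
                if acc >= r:
                    return e
            return candidates[-1]

        def evaporate(candidates, mode="l2", rate=0.1):
            """Two decay rules mimicking L1/L2 regularization: constant
            subtraction drives weak edges to exactly zero (L1-like sparsity),
            while proportional decay shrinks every edge multiplicatively
            (L2-like)."""
            for e in candidates:
                if mode == "l1":
                    e.pheromone = max(0.0, e.pheromone - rate)
                else:
                    e.pheromone *= (1.0 - rate)

    Under the L1-like rule, edges that go unreinforced eventually reach zero pheromone and drop out of the search entirely, which is one way such a decay can bias the colony toward sparser topologies.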

    Nature-Inspired Topology Optimization of Recurrent Neural Networks

    Hand-crafting effective and efficient structures for recurrent neural networks (RNNs) is a difficult, expensive, and time-consuming process. To address this challenge, this work presents three nature-inspired (NI) algorithms for neural architecture search (NAS), introducing the subfield of nature-inspired neural architecture search (NI-NAS). These algorithms, based on ant colony optimization (ACO), progress from memory cell structure optimization, to bounded discrete-space architecture optimization, and finally to unbounded continuous-space architecture optimization. These methods were applied to real-world data sets representing challenging engineering problems, such as data from a coal-fired power plant, wind-turbine power generators, and aircraft flight data recorder (FDR) data.

    Initial work utilized ACO to select optimal connections inside recurrent long short-term memory (LSTM) cell structures. Viewing each LSTM cell as a graph, ants would choose potential input and output connections based on the pheromones previously laid down over those connections, as in a standard ACO search. However, this approach did not optimize the overall network of the RNN, particularly its synaptic parameters. I addressed this issue by introducing the Ant-based Neural Topology Search (ANTS) algorithm to directly optimize the entire RNN topology. ANTS utilizes a discrete-space superstructure representing a completely connected RNN, where each node is connected to every other node, forming an extremely dense mesh of edges and recurrent edges. ANTS can select from a library of modern RNN memory cells. ACO agents (ants), in this thesis, build RNNs from the superstructure as determined by the pheromones laid out on the superstructure's connections. Backpropagation is then used to train the generated RNNs in an asynchronous parallel computing design to accelerate the optimization process. The pheromone update depends on the evaluation of the tested RNN against a population of best-performing RNNs, as sketched below.

    Several variations of the core algorithm were investigated to test several designed heuristics for ANTS and evaluate their efficacy in the formation of sparser synaptic connectivity patterns. This was done primarily by formulating different functions that drive the underlying pheromone simulation process, as well as by introducing ant agents with three specialized roles (inspired by real-world ants) to construct the RNN structure. This characterization of the agents enables ants to focus on specific structure-building roles. ``Communal intelligence'' was also incorporated, where the best set of weights was shared across locally-trained RNN candidates for weight initialization, reducing the number of backpropagation epochs required to train each candidate RNN and speeding up the overall search process. However, the superstructure grows by an order of magnitude as more inputs and deeper structures are utilized, which proved to be one limitation of the proposed procedure.
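    The elite-population pheromone reward and the ``communal intelligence'' weight sharing can be summarized with a short, hypothetical sketch; the elite size, deposit amount, and all function names here are illustrative assumptions rather than the thesis implementation.

        import random

        def update_colony(candidate, elite, pheromone, deposit=0.5, max_elite=10):
            """candidate = (fitness, used_edges, weights); elite is kept sorted
            best-first (lower fitness is better, e.g. validation MSE). Pheromone
            is a dict mapping (src, dst) superstructure edges to floats."""
            fitness, used_edges, _ = candidate
            if len(elite) < max_elite or fitness < elite[-1][0]:
                for edge in used_edges:   # reinforce only the paths of good RNNs
                    pheromone[edge] = pheromone.get(edge, 1.0) + deposit
                elite.append(candidate)
                elite.sort(key=lambda c: c[0])
                del elite[max_elite:]     # keep only the best-performing RNNs

        def seed_weights(new_edges, elite, scale=0.1):
            """'Communal intelligence': reuse the colony's best weights for edges
            that already exist in the best network; initialize the rest randomly,
            so local training needs fewer backpropagation epochs."""
            best = elite[0][2] if elite else {}
            return {e: (best[e] if e in best else random.gauss(0.0, scale))
                    for e in new_edges}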
    This limitation of ANTS motivated the development of the continuous ANTS algorithm (CANTS), which works with a continuous search space rather than a predefined, fixed network topology. In this process, ants move within a (temporally-arranged) set of continuous, real-valued planes based on the proximity and density of pheromone placements. The motion of the ants over these continuous planes, in a sense, more closely mimics how actual ants move in the real world: ants traverse a 3-dimensional space from the inputs to the outputs and across time lags. This continuous search space frees the ant agents from the limitations imposed by ANTS' discrete, massively connected superstructure, making the structural options unbounded when mapping the movements of ants through the 3D continuous space to a neural architecture graph. In addition, CANTS has fewer hyperparameters to tune than ANTS, which had five potential heuristic components, each with its own unique set of hyperparameters, and required the user to define the maximum recurrent depth, the number of layers, and the number of nodes within each layer. CANTS only requires specifying the number of ants and their pheromone sensing radius.

    The three applied strategies yielded three important successes. Applying ACO to optimizing LSTMs yielded a 1.34% performance enhancement and more than 55% sparser structures (which is useful for speeding up inference). ANTS outperformed the NAS benchmark, NEAT, as well as the state-of-the-art NAS algorithm, EXAMM. CANTS showed results competitive with EXAMM and with ANTS while producing sparser structures, offering a promising path forward for optimizing (temporal) neural models with nature-inspired metaheuristics based on the metaphor of ants.
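    A minimal sketch of one such continuous movement step follows, assuming pheromone deposits are stored as (x, y, time-lag, strength) tuples; the attraction rule (drifting toward a pheromone-weighted centroid within the sensing radius) is an illustrative assumption, not the thesis' exact dynamics.

        import math
        import random

        def sense(position, deposits, radius):
            """Return the deposits (x, y, lag, strength) within sensing range."""
            return [(x, y, lag, s) for (x, y, lag, s) in deposits
                    if math.dist(position, (x, y, lag)) <= radius]

        def step(position, deposits, radius, step_size=0.05):
            """Move one ant through the continuous 3D space (plane coordinates
            x, y plus the time-lag axis)."""
            px, py, plag = position
            nearby = sense(position, deposits, radius)
            if nearby:
                # Drift toward the strength-weighted centroid of nearby pheromone.
                total = sum(s for *_, s in nearby)
                cx = sum(x * s for x, _, _, s in nearby) / total
                cy = sum(y * s for _, y, _, s in nearby) / total
                clag = sum(l * s for _, _, l, s in nearby) / total
                return (px + step_size * (cx - px),
                        py + step_size * (cy - py),
                        plag + step_size * (clag - plag))
            # No pheromone sensed: take a small random exploratory move.
            return (px + random.uniform(-step_size, step_size),
                    py + random.uniform(-step_size, step_size),
                    plag)

    The paths traced by such moves, from the input plane to the outputs and across time lags, are what get mapped into the nodes and edges of a candidate RNN graph.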