    Limited Evaluation Cooperative Co-evolutionary Differential Evolution for Large-scale Neuroevolution

    Many real-world control and classification tasks involve a large number of features. When artificial neural networks (ANNs) are used to model these tasks, the network architectures tend to be large. Neuroevolution is an effective approach for optimizing ANNs; however, two bottlenecks make its application challenging in the case of high-dimensional networks using direct encoding. First, classic evolutionary algorithms tend not to scale well when searching large parameter spaces; second, network evaluation over a large number of training instances is generally time-consuming. In this work, we propose an approach called the Limited Evaluation Cooperative Co-evolutionary Differential Evolution algorithm (LECCDE) to optimize high-dimensional ANNs. The proposed method optimizes the pre-synaptic weights of each post-synaptic neuron in separate subpopulations using a Cooperative Co-evolutionary Differential Evolution algorithm, and employs a limited evaluation scheme in which fitness evaluation is performed on a relatively small number of training instances based on fitness inheritance. We test LECCDE on three datasets of various sizes, and our results show that cooperative co-evolution significantly improves the test error compared to standard Differential Evolution, while the limited evaluation scheme facilitates a significant reduction in computing time.
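    The abstract includes no code, so the following is a minimal sketch of the two ideas it names: cooperative co-evolution with one subpopulation per post-synaptic neuron's incoming weights, and fitness evaluated on a small random batch of training instances. The toy network, hyperparameters, and all names are illustrative assumptions, and the paper's fitness-inheritance mechanism is simplified here to direct mini-batch evaluation.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy task: 200 instances, 8 features, binary target (assumed, not from the paper).
    X = rng.normal(size=(200, 8))
    y = (X[:, 0] * X[:, 1] > 0).astype(float)

    HIDDEN = 4                                  # hidden neurons in a one-layer MLP
    DIMS = [8] * HIDDEN + [HIDDEN]              # incoming weights per post-synaptic neuron
    POP, F, CR, BATCH, GENS = 10, 0.5, 0.9, 32, 50

    def forward(weights, x):
        # `weights` holds one incoming-weight vector per post-synaptic neuron.
        h = np.tanh(np.stack([x @ w for w in weights[:HIDDEN]], axis=-1))
        return 1.0 / (1.0 + np.exp(-(h @ weights[HIDDEN])))

    def batch_error(weights, idx):
        # Limited evaluation: fitness on a small random batch, not the full set.
        p = forward(weights, X[idx])
        return float(np.mean((p - y[idx]) ** 2))

    subpops = [rng.normal(size=(POP, d)) for d in DIMS]
    context = [sp[0].copy() for sp in subpops]  # best-so-far collaborator per neuron

    for gen in range(GENS):
        idx = rng.choice(len(X), BATCH, replace=False)
        for s, sp in enumerate(subpops):
            def fit(vec):
                trial_ctx = list(context)
                trial_ctx[s] = vec              # evaluate vec inside the full network
                return batch_error(trial_ctx, idx)
            for i in range(POP):                # DE/rand/1/bin on this subpopulation
                a, b, c = sp[rng.choice(POP, 3, replace=False)]
                mask = rng.random(DIMS[s]) < CR
                mask[rng.integers(DIMS[s])] = True
                trial = np.where(mask, a + F * (b - c), sp[i])
                if fit(trial) <= fit(sp[i]):    # minimise batch error
                    sp[i] = trial
            context[s] = min(sp, key=fit).copy()  # update the shared context vector
    ```

    Each subpopulation evolves only one neuron's weight vector; a candidate is scored by plugging it into the shared context network, which is how cooperative co-evolution decomposes the large search space.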

    High-Capacity Directional Graph Networks

    Deep Neural Networks (DNNs) have proven themselves to be a useful tool in many computer vision problems. One of the most popular forms of the DNN is the Convolutional Neural Network (CNN). The CNN effectively learns features on images by learning a weighted sum over local neighborhoods of pixels, creating filtered versions of the image. Point cloud analysis seems like it would benefit from this model. However, point clouds are much less structured than images. Many analogues to CNNs for point clouds have been proposed in the literature, but they are often much more constrained networks than the typical CNN. This is a matter of necessity: common point cloud benchmark datasets are fairly small and thus require strong regularization to mitigate overfitting. In this dissertation we propose two point cloud network models based on graph structures that achieve the high-capacity modeling capability of CNNs. In addition to showing their effectiveness on point cloud classification and segmentation in typical benchmark scenarios, we also propose two novel point cloud problems: ATLAS Detector segmentation and Computational Fluid Dynamics (CFD) surrogate modeling. We show that our networks are much more effective than others on these new problems because they benefit from deeper networks and extra capacity that other researchers have not pursued. These novel networks and datasets pave the way for future development of deeper, more sophisticated point cloud networks.
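    As a rough illustration of the CNN analogy the abstract draws, here is a minimal graph-convolution layer over a point cloud's k-nearest-neighbour graph: each point aggregates a learned weighted sum of its local neighborhood, the point-cloud counterpart of a CNN filter over pixels. This is a generic sketch, not the dissertation's directional graph network; all function names, shapes, and the aggregation rule are assumptions.

    ```python
    import numpy as np

    def knn_indices(points, k):
        """Indices of the k nearest neighbours of each point (brute force)."""
        d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
        return np.argsort(d2, axis=1)[:, 1:k + 1]   # column 0 is the point itself

    def graph_conv(features, points, weight, k=8):
        """One graph-convolution layer: concatenate each point's feature with
        the mean of its k-NN neighbourhood, then apply a learned linear map
        and ReLU, analogous to a CNN's weighted sum over a pixel neighborhood."""
        idx = knn_indices(points, k)
        neigh = features[idx]                       # (N, k, C_in)
        agg = np.concatenate([features, neigh.mean(axis=1)], axis=-1)
        return np.maximum(agg @ weight, 0.0)        # (N, C_out)

    # Toy usage: 128 random 3-D points, initial features = coordinates.
    pts = np.random.default_rng(0).normal(size=(128, 3))
    w1 = np.random.default_rng(1).normal(size=(6, 16)) * 0.1
    h = graph_conv(pts, pts, w1)                    # (128, 16) per-point features
    ```

    Stacking several such layers is what gives a graph network depth and capacity; the sparsity of the k-NN graph is what keeps that feasible on unstructured point sets.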

    Using Regular Languages to Explore the Representational Capacity of Recurrent Neural Architectures

    The presence of Long Distance Dependencies (LDDs) in sequential data poses significant challenges for computational models. Various recurrent neural architectures have been designed to mitigate this issue. In order to test these state-of-the-art architectures, there is a growing need for rich benchmarking datasets. However, one of the drawbacks of existing datasets is the lack of experimental control with regard to the presence and/or degree of LDDs. This lack of control limits the analysis of model performance in relation to the specific challenge posed by LDDs. One way to address this is to use synthetic data having the properties of subregular languages. The degree of LDDs within the generated data can be controlled through the k parameter, the length of the generated strings, and the choice of forbidden strings. In this paper, we explore the capacity of different RNN extensions to model LDDs by evaluating these models on a sequence of SPk synthesized datasets, where each subsequent dataset exhibits a greater degree of LDDs. Even though SPk languages are simple, the presence of LDDs has a significant impact on the performance of recurrent neural architectures, making them prime candidates for benchmarking tasks.
    Comment: International Conference on Artificial Neural Networks (ICANN) 201
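    A small sketch of how SPk-style data can be synthesized: a strictly piecewise constraint forbids certain length-k subsequences, so two symbols can constrain each other across the whole string, which is exactly the long-distance dependency the abstract describes. The alphabet, forbidden set, and rejection-sampling scheme below are illustrative assumptions, not the paper's exact generation procedure.

    ```python
    import random

    def contains_subsequence(s, pattern):
        """True if `pattern` occurs in `s` as a (possibly scattered) subsequence."""
        it = iter(s)
        return all(ch in it for ch in pattern)   # each `in` consumes `it` up to the match

    def sample_spk_string(alphabet, forbidden, length, rng):
        """Rejection-sample a string avoiding every forbidden subsequence.
        Larger k (longer forbidden patterns) and longer strings stretch the
        distance over which symbols constrain one another."""
        while True:
            s = ''.join(rng.choice(alphabet) for _ in range(length))
            if not any(contains_subsequence(s, f) for f in forbidden):
                return s

    rng = random.Random(0)
    alphabet = 'abcd'
    forbidden = ['ab']   # SP2 example: no 'a' anywhere before a later 'b'
    positives = [sample_spk_string(alphabet, forbidden, 12, rng) for _ in range(5)]
    print(positives)
    ```

    Labeling strings by whether they satisfy the constraint yields a classification benchmark where the dependency length is under direct experimental control.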