
    Distributed Fault Tolerance in Optimal Interpolative Nets

    The recursive training algorithm for the optimal interpolative (OI) classification network is extended to include distributed fault tolerance. The conventional OI Net learning algorithm leads to network weights that are nonoptimally distributed (in the sense of fault tolerance). Fault tolerance is becoming an increasingly important factor in hardware implementations of neural networks, yet it is often taken for granted rather than being explicitly accounted for in the architecture or learning algorithm. In addition, when fault tolerance is considered, it is often accounted for using an unrealistic fault model (e.g., neurons that are stuck on or off rather than small weight perturbations). Realistic fault tolerance can be achieved through a smooth distribution of weights, resulting in low weight salience and distributed computation. Results of trained OI Nets on the Iris classification problem show that fault tolerance can be increased with the algorithm presented in this paper.
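    The "weight salience" idea in this abstract can be illustrated with a minimal sketch (my own construction, not the paper's OI Net code): two linear units with the same overall weight magnitude, one concentrated in a single weight and one smoothly distributed, compared under single-weight faults.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=(200, 8))  # random input patterns

    # Equal L2 norm (4.0), but very different distributions of weight magnitude.
    w_concentrated = np.array([4.0, 0, 0, 0, 0, 0, 0, 0])
    w_distributed = np.full(8, 4.0 / np.sqrt(8))

    def worst_single_fault(w):
        """Largest mean output change when any single weight is zeroed out."""
        base = x @ w
        deltas = []
        for i in range(len(w)):
            faulty = w.copy()
            faulty[i] = 0.0  # fault: this weight's contribution is lost
            deltas.append(np.abs(x @ faulty - base).mean())
        return max(deltas)
    ```

    With a smooth weight distribution every individual weight has low salience, so the worst-case single fault perturbs the output far less than when one weight carries all the computation.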

    Use of Noise to Augment Training Data: A Neural Network Method of Mineral-Potential Mapping in Regions of Limited Known Deposit Examples.

    One of the main factors that affects the performance of MLP neural networks trained using the backpropagation algorithm in mineral-potential mapping is the paucity of deposit relative to barren training patterns. To overcome this problem, random noise is added to the original training patterns in order to create additional synthetic deposit training data. Experiments on the effect of the number of deposits available for training in the Kalgoorlie Terrane orogenic gold province show that both the classification performance of a trained network and the quality of the resultant prospectivity map increase significantly with increased numbers of deposit patterns. Experiments are conducted to determine the optimum amount of noise using both uniform and normally distributed random noise. Through the addition of noise to the original deposit training data, the number of deposit training patterns is increased from approximately 50 to 1000. The percentage of correct classifications improves significantly for the independent test set as well as for deposit patterns in the test set. For example, using ≥40% uniform random noise, the test-set classification performance increases from 67.9% and 68.0% to 72.8% and 77.1% (for test-set overall and test-set deposit patterns, respectively). Indices for the quality of the resultant prospectivity map (i.e., D/A and D×(D/A), where D is the percentage of deposits and A is the percentage of the total area for the highest prospectivity map-class, and the area under an ROC curve) also increase from 8.2, 105, and 0.79 to 17.9, 226, and 0.87, respectively. Increasing the size of the training-stop data set results in a further increase in classification performance to 73.5%, 77.4%, 14.7, 296, and 0.87 for test-set overall, test-set deposit patterns, D/A, D×(D/A), and area under the ROC curve, respectively.
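    The noise-augmentation step described above can be sketched as follows (the function name and noise scaling are mine, chosen to loosely mirror the paper's percentage-noise experiments, not its actual code): scarce "deposit" patterns are replicated with added random noise until the class is large enough to train on.

    ```python
    import numpy as np

    def augment_with_noise(patterns, target_count, noise_frac=0.4,
                           kind="uniform", rng=None):
        """Grow `patterns` (n, d) to `target_count` rows by adding random
        noise scaled by `noise_frac` of each feature's observed range."""
        rng = rng or np.random.default_rng(0)
        spans = patterns.max(axis=0) - patterns.min(axis=0)
        out = [patterns]  # keep the original deposit patterns
        while sum(len(a) for a in out) < target_count:
            if kind == "uniform":
                noise = rng.uniform(-1, 1, patterns.shape)
            else:  # normally distributed noise
                noise = rng.normal(0, 0.5, patterns.shape)
            out.append(patterns + noise_frac * spans * noise)
        return np.vstack(out)[:target_count]

    deposits = np.random.default_rng(1).normal(size=(50, 6))  # ~50 known deposits
    augmented = augment_with_noise(deposits, 1000)            # grown to 1000
    ```

    The original 50 patterns are kept unchanged at the front of the augmented set; only the synthetic copies carry noise.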

    JMASM 55: MATLAB Algorithms and Source Codes of 'cbnet' Function for Univariate Time Series Modeling with Neural Networks (MATLAB)

    Artificial Neural Networks (ANN) can be designed as a nonparametric tool for time series modeling, and MATLAB serves as a powerful environment for ANN modeling. Although the Neural Network Time Series Tool (ntstool) is useful for modeling time series, more detailed functions are needed to obtain more comprehensive analysis results. For these purposes, the cbnet function has been developed, with properties such as an input lag generator, a step-ahead forecaster, a trial-and-error-based network selection strategy, alternative network selection with various performance measures, and a global repetition feature to obtain more alternative networks; its MATLAB algorithms and source code are introduced. A detailed comparison with ntstool shows that the cbnet function covers the shortcomings of ntstool.
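    The "input lag generator" idea can be illustrated with a short sketch (the actual cbnet code is MATLAB; this Python version only shows the lag-matrix construction that such a generator performs):

    ```python
    import numpy as np

    def lag_matrix(series, lags):
        """Build (X, y) pairs where each row of X holds `lags` consecutive
        past values and y is the next value: the standard setup for
        one-step-ahead ANN forecasting of a univariate series."""
        series = np.asarray(series, dtype=float)
        X = np.column_stack(
            [series[i:len(series) - lags + i] for i in range(lags)]
        )
        y = series[lags:]
        return X, y

    X, y = lag_matrix([1, 2, 3, 4, 5, 6], lags=3)
    # X rows: [1,2,3], [2,3,4], [3,4,5]; targets y: [4, 5, 6]
    ```

    A step-ahead forecaster then feeds each predicted value back in as the newest lag to roll the forecast forward.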

    Self-Supervised Learning for Semantic Segmentation of Images

    Artificial Neural Networks (ANN) are powerful Machine Learning (ML) models that can help solve problems that are hard or even impossible to design solutions for by hand. These models learn to exploit information present in their target datasets to solve various problems. However, labelling data can be expensive and time-consuming, and often requires a domain expert. Therefore, it would be beneficial if one could train a model in a way that exploits unlabeled data. Fortunately, Self-Supervised Learning (SSL) methods are a family of learning algorithms that attempt to do just that. Many SSL methods exist, but in this thesis we explore Barlow Twins (BT), a siamese network based on redundancy reduction, and Image Reconstruction (IR), a method proposed in Karnam's thesis. In addition, we extend the Image Reconstruction method with both Coarse Cutout and Hide-and-Seek augmentations, as they have been applied in similar supervised and weakly-supervised segmentation task scenarios. We apply these methods and investigate the results on the PASCAL VOC dataset.
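    The Coarse Cutout augmentation mentioned above can be sketched as follows (a minimal illustration of the general cutout technique, not the thesis code; the function name and default sizes are mine): random rectangular patches of the input are zeroed so a reconstruction objective must infer the hidden regions.

    ```python
    import numpy as np

    def coarse_cutout(image, n_holes=3, hole_size=8, rng=None):
        """Return a copy of `image` (H, W, C) with `n_holes` random
        square patches of side `hole_size` zeroed out."""
        rng = rng or np.random.default_rng(0)
        out = image.copy()
        h, w = image.shape[:2]
        for _ in range(n_holes):
            y = rng.integers(0, max(1, h - hole_size))
            x = rng.integers(0, max(1, w - hole_size))
            out[y:y + hole_size, x:x + hole_size] = 0  # mask this patch
        return out

    img = np.ones((32, 32, 3))
    masked = coarse_cutout(img)
    ```

    Hide-and-Seek works similarly but divides the image into a fixed grid and hides each cell independently with some probability.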