6 research outputs found

    Computing the Performance of FFNN for Classifying Purposes

     Classification is one of the most frequently encountered problems in the real world, and neural networks have emerged as one of the tools that can handle it. Feed-Forward Neural Networks (FFNNs) have been widely applied as classification tools in many different fields. Designing an efficient FFNN structure with the optimum number of hidden layers and the minimum number of neurons per layer for a given application or dataset is an open research problem, and its difficulty depends on the input data. A random choice of hidden layers and neurons may cause either underfitting or overfitting; overfitting arises when the network matches the training data so closely that it loses its ability to generalize to the test data. In this research, the classification performance of an FFNN trained with the back-propagation algorithm, measured by the Mean Square Error (MSE), is computed and analyzed for different numbers of hidden layers and hidden neurons in MATLAB R2013a, in order to find the optimum number of hidden layers and the minimum number of neurons per layer and thereby support existing classification practice. First, random data are generated with a suitable MATLAB function to prepare the training data as input and target vectors for the FFNN classification task. The generated input data are passed to the output layer through the hidden layers, which process them. The MSE comparison graphs and regression plots show that the best performance is obtained with more hidden layers and more neurons per hidden layer when designing the classifier, but too many neurons and hidden layers make the network complex and slow to execute.
As a result, three hidden layers with 26 neurons in each hidden layer are suggested as a good classifier design for this network and this type of input data.
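The sweep described in the abstract (training the same network at several hidden-layer counts and comparing MSE before and after training) can be sketched as follows. This is a minimal NumPy illustration, not the paper's MATLAB code: the toy target `y = x**2`, the hidden width of 8, the learning rate, and the epoch count are all assumptions chosen to keep the example small.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def init_net(layer_sizes, rng):
    # One (weights, biases) pair per layer, small random weights.
    return [(rng.normal(0.0, 0.5, (m, n)), np.zeros(n))
            for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(net, X):
    # Return activations of every layer, input included.
    acts = [X]
    for W, b in net:
        acts.append(sigmoid(acts[-1] @ W + b))
    return acts

def mse(net, X, Y):
    return float(np.mean((forward(net, X)[-1] - Y) ** 2))

def train(net, X, Y, lr=1.0, epochs=2000):
    # Full-batch gradient descent with back-propagation.
    for _ in range(epochs):
        acts = forward(net, X)
        # Output-layer error for MSE loss through a sigmoid unit.
        delta = (acts[-1] - Y) * acts[-1] * (1 - acts[-1])
        for i in reversed(range(len(net))):
            W, b = net[i]
            gW = acts[i].T @ delta / len(X)
            gb = delta.mean(axis=0)
            if i > 0:  # propagate the error to the previous layer
                delta = (delta @ W.T) * acts[i] * (1 - acts[i])
            net[i] = (W - lr * gW, b - lr * gb)
    return net

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, (64, 1))
Y = X ** 2  # toy target standing in for the paper's random training data

results = {}
for hidden in [(8,), (8, 8), (8, 8, 8)]:  # sweep the hidden-layer count
    net = init_net((1, *hidden, 1), rng)
    before = mse(net, X, Y)
    train(net, X, Y)
    results[len(hidden)] = (before, mse(net, X, Y))
    print(f"{len(hidden)} hidden layer(s): MSE {before:.4f} -> "
          f"{results[len(hidden)][1]:.4f}")
```

The same loop structure extends to sweeping neurons per layer; the paper's point about complexity shows up here as longer training time for the deeper configurations.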

    Automatic optimum order assignment in IIR adaptive filters

    Kanazawa University, Institute of Science and Engineering, Electronics and Informatics

