Improvement of Neural Network Learning Performance by Resting and Working State

Abstract

Currently, many researchers in the field of biomechanical science study the relationship between rest and work, because taking rests improves the efficiency of our work. In fact, our concentration decreases after hard work: when we continue the same task for a long time, we become tired and must take a rest. However, if we take too much rest, work efficiency also decreases, so the balance between rest and work should be considered. In this study, we propose a Multi-Layer Perceptron with Resting State (RSMLP). The RSMLP has two different states: a resting state and a working state. Through computer simulations of learning a step function, we confirm that the RSMLP performs better than both the conventional MLP and an MLP with random noise.
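The abstract does not specify how the resting state is realized in the RSMLP. As a purely speculative illustration, the sketch below assumes the simplest interpretation: training a small MLP on a step function while periodically entering a resting phase in which weight updates are suspended. The network size, learning rate, and work/rest schedule are all assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical sketch of an MLP with alternating working/resting states.
# Assumption: "resting state" = forward passes only, no weight updates.
rng = np.random.default_rng(0)

# Training data: a step function on [-1, 1]
X = np.linspace(-1.0, 1.0, 64).reshape(-1, 1)
T = (X > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 8 sigmoid units (assumed architecture)
W1 = rng.normal(0, 1, (1, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)

def forward(X):
    H = sigmoid(X @ W1 + b1)
    Y = sigmoid(H @ W2 + b2)
    return H, Y

def mse(Y, T):
    return float(np.mean((Y - T) ** 2))

lr = 1.0
WORK, REST = 40, 10  # epochs per working / resting phase (assumed schedule)

_, Y0 = forward(X)
loss_before = mse(Y0, T)

for epoch in range(400):
    working = (epoch % (WORK + REST)) < WORK
    H, Y = forward(X)
    if not working:
        continue  # resting state: no weight updates this epoch
    # Working state: one step of gradient descent on mean-squared error
    dY = (Y - T) * Y * (1 - Y) / len(X)
    dH = (dY @ W2.T) * H * (1 - H)
    W2 -= lr * H.T @ dY; b2 -= lr * dY.sum(axis=0)
    W1 -= lr * X.T @ dH; b1 -= lr * dH.sum(axis=0)

_, Y1 = forward(X)
loss_after = mse(Y1, T)
print(loss_before, loss_after)
```

This only shows the two-state training loop; the paper's actual RSMLP mechanism and its comparison against an MLP with random noise would follow the full text.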


This paper was published in CiteSeerX.
