Neural networks can be implemented optically by storing the weights in holograms, which support only a limited number of gray values. Motivated by this fact, we focus in this thesis on how the generalization and training errors of a simple perceptron with discrete weights depend on the size of the training set and on the number of allowed discrete values.
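As a minimal numerical sketch of this setting (the details below are illustrative assumptions, not taken from the thesis), one can consider a teacher-student scenario: a teacher perceptron with weights drawn from a discrete set labels random inputs, and a student constrained to the same set of allowed values is fit by exhaustive search; the training and generalization errors are then measured as the training set grows.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

N = 8            # input dimension (kept small so exhaustive search is feasible)
L = 3            # number of allowed discrete weight values
levels = np.linspace(-1.0, 1.0, L)   # e.g. {-1, 0, +1} for L = 3

# Teacher perceptron with weights from the same discrete set,
# so the rule is realizable by the student.
teacher = rng.choice(levels, size=N)

def error(w, X, y):
    """Fraction of examples misclassified by sign(w . x)."""
    return float(np.mean(np.sign(X @ w) != y))

for P in (4, 16, 64):                     # training set sizes
    X = rng.standard_normal((P, N))
    y = np.sign(X @ teacher)              # teacher-generated labels

    # Exhaustive search over all L**N discrete weight vectors:
    # pick a student minimizing the training error.
    best = min(itertools.product(levels, repeat=N),
               key=lambda w: error(np.array(w), X, y))
    best = np.array(best)

    # Estimate the generalization error on fresh test examples.
    Xt = rng.standard_normal((5000, N))
    yt = np.sign(Xt @ teacher)
    print(f"P={P:3d}  training error={error(best, X, y):.3f}  "
          f"generalization error={error(best, Xt, yt):.3f}")
```

Because the teacher itself lies in the student's hypothesis class, the minimal training error is always zero here; what changes with the training set size P and the number of levels L is the generalization error, which is the dependence the thesis investigates.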