This paper gives a theoretical analysis of high-dimensional linear
discrimination of Gaussian data. We study the excess risk of linear
discriminant rules, emphasizing the poor performance of standard procedures
when the dimension p is larger than the sample size n. The corresponding
theoretical results are non-asymptotic lower bounds. In contrast, we
propose two discrimination procedures based on dimensionality reduction and
provide the associated rates of convergence, which can be O(log(p)/n) under
sparsity assumptions. Finally, all our results rely on a theorem that provides
simple, sharp relations between the excess risk and an estimation error
associated with the geometric parameters defining the discrimination rule used.