The authors consider the problem of estimating the density g of independent
and identically distributed random variables X_i from a sample Z_1,...,Z_n,
where Z_i = X_i + σϵ_i, i = 1,...,n, the ϵ_i are noise variables independent
of the X_i, and σϵ has a known distribution.
They present a model selection procedure that yields an adaptive estimator
of g together with non-asymptotic bounds on its L_2(R)-risk. The estimator
achieves the minimax rate of convergence in most cases where lower bounds
are available. A simulation study illustrates the good practical performance
of the method.
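To make the convolution model concrete, here is a minimal Python sketch of a
naive spectral-cutoff deconvolution estimator for this setting. It is not the
authors' adaptive procedure: they select the cutoff by a penalized model
selection criterion, whereas here the cutoff is a free parameter, and the
function name, grid sizes, and Gaussian example are illustrative assumptions.

```python
import numpy as np

def deconvolution_estimate(z, sigma, noise_cf, cutoff, grid):
    """Naive spectral-cutoff deconvolution estimator (illustrative only).

    z        : observed sample Z_i = X_i + sigma * eps_i
    noise_cf : known characteristic function t -> E[exp(i t eps)] of eps
    cutoff   : spectral cutoff; chosen adaptively in the paper, fixed here
    grid     : points x at which to evaluate the estimate of g
    """
    t = np.linspace(-cutoff, cutoff, 512)                     # retained frequencies
    ecf = np.exp(1j * t[:, None] * z[None, :]).mean(axis=1)   # empirical CF of Z
    g_cf = ecf / noise_cf(sigma * t)                          # divide out noise CF
    dt = t[1] - t[0]
    # Fourier inversion restricted to |t| <= cutoff:
    # g(x) ~ (1/2pi) * integral exp(-i t x) g_cf(t) dt
    return (np.exp(-1j * np.outer(grid, t)) @ g_cf).real * dt / (2 * np.pi)

# Hypothetical example: X ~ N(0,1), eps ~ N(0,1) (known), sigma = 0.5
rng = np.random.default_rng(0)
n = 2000
x = rng.normal(0.0, 1.0, n)
z = x + 0.5 * rng.normal(0.0, 1.0, n)
grid = np.linspace(-4.0, 4.0, 81)
g_hat = deconvolution_estimate(z, 0.5, lambda t: np.exp(-t**2 / 2), 3.0, grid)
```

In this sketch the cutoff plays the role of the model dimension: enlarging it
reduces bias but inflates the variance coming from division by the noise
characteristic function, which is the bias-variance trade-off the paper's
model selection procedure resolves adaptively.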