A frequently faced task in experimental physics is to measure the probability
distribution of some quantity. Often the quantity to be measured is smeared by
a non-ideal detector response or by some physical process. The procedure of
removing this smearing effect from the measured distribution is called
unfolding, and it is a delicate problem in signal processing due to the
well-known numerical ill-posedness of this task. Various methods have been invented
which, given some assumptions on the initial probability distribution, try to
regularize the unfolding problem. Most of these methods inevitably introduce
bias into the estimate of the initial probability distribution. We propose a
linear iterative method, which has the advantage that no assumptions on the
initial probability distribution are needed, and the only regularization
parameter is the stopping order of the iteration, which can be used to choose
the best compromise between the introduced bias and the propagated statistical
and systematic errors. The method is consistent: "binwise" convergence to the
initial probability distribution is proved in the absence of measurement errors
under a quite general condition on the response function. This condition holds
for practical applications such as convolutions, calorimeter response
functions, momentum reconstruction response functions based on tracking in
a magnetic field, etc. In the presence of measurement errors, explicit formulae for
the propagation of the three important error terms are provided: bias error,
statistical error, and systematic error. A trade-off between these three error
terms can be used to define an optimal iteration stopping criterion, and the
errors can be estimated at that stopping order. We provide a numerical C library for the
implementation of the method, which incorporates automatic statistical error
propagation as well.
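
To make the role of the stopping order as a regularization parameter concrete, the following is a minimal sketch in C of a generic linear unfolding iteration of the Richardson / Neumann-series type for a binned problem y = R u. It is an illustration under assumptions, not the algorithm or interface of the library described above: the function name unfold_iterate, the relaxation parameter relax and the toy response matrix are hypothetical.

#include <stdio.h>
#include <stdlib.h>

/*
 * Sketch of a generic linear unfolding iteration (Richardson /
 * Neumann-series type) for a binned problem y = R u, where R is an
 * nbins x nbins response matrix stored row-major, y is the measured
 * histogram and u the unfolded estimate. The stopping order niter
 * plays the role of the regularization parameter.
 */
static void unfold_iterate(const double *R, const double *y, double *u,
                           int nbins, int niter, double relax)
{
    double *residual = malloc((size_t)nbins * sizeof *residual);
    if (residual == NULL)
        return;

    /* zeroth-order estimate: the measured distribution itself */
    for (int i = 0; i < nbins; ++i)
        u[i] = y[i];

    for (int n = 0; n < niter; ++n) {
        /* residual = y - R u */
        for (int i = 0; i < nbins; ++i) {
            double Ru = 0.0;
            for (int j = 0; j < nbins; ++j)
                Ru += R[i * nbins + j] * u[j];
            residual[i] = y[i] - Ru;
        }
        /* additive correction; the iteration converges when the
           spectral radius of (I - relax * R) is below 1 */
        for (int i = 0; i < nbins; ++i)
            u[i] += relax * residual[i];
    }
    free(residual);
}

int main(void)
{
    /* toy 2-bin response with mild bin-to-bin migration (assumed numbers) */
    const double R[4] = { 0.8, 0.2,
                          0.2, 0.8 };
    const double y[2] = { 60.0, 40.0 };  /* measured histogram */
    double u[2];

    unfold_iterate(R, y, u, 2, 50, 1.0);
    /* approaches the exact solution R^{-1} y = (200/3, 100/3) */
    printf("unfolded: %.3f %.3f\n", u[0], u[1]);
    return 0;
}

In such a scheme, raising niter reduces the bias of the estimate while amplifying the propagated statistical fluctuations, which is the trade-off the abstract refers to when defining an optimal stopping criterion.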