The advent of large data sets in cosmology has meant that in the past 10 or 20
years our knowledge and understanding of the Universe have changed not only
quantitatively but also, and most importantly, qualitatively. Cosmologists rely
on data in which a wealth of useful information is contained, but encoded in a
non-trivial way. The challenges in extracting this information must be overcome
to make the most of a large experimental effort. Even after having converged to
a standard cosmological model (the LCDM model), we should keep in mind that this
model is described by 10 or more physical parameters, and if we want to study
deviations from it, the number of parameters grows even larger. Dealing with such
a high-dimensional parameter space and finding parameter constraints is a
challenge in itself. Cosmologists want to be able to compare and combine
different data sets both for testing for possible disagreements (which could
indicate new physics) and for improving parameter determinations. Finally,
cosmologists in many cases want to find out, before actually doing an
experiment, how much they would be able to learn from it. For all these reasons,
sophisticated statistical techniques are being employed in cosmology, and it
has become crucial to have some statistical background to understand the recent
literature in the field. I will introduce some statistical tools that any
cosmologist should know in order to understand recently
published results from the analysis of cosmological data sets. I will not
present a complete and rigorous introduction to statistics, as there are several
good books, listed in the references, that do so; the reader should refer to
those.

Comment: 31 pages, 6 figures; notes from the 2nd Trans-Regio Winter School in
Passo del Tonale. To appear in Lecture Notes in Physics, "Lectures on
cosmology: Accelerated expansion of the universe", Feb 2010.