Abstract

Over the years, several methods have been proposed to compute galaxy luminosity functions, from the simplest ones, which count sample objects inside a given volume, to very sophisticated ones, like the C⁻ method, the STY method or the Choloniewski method, among others. However, only the V/Vmax method is usually employed in computing the white dwarf luminosity function, and other methods have not been applied so far to the observational sample of spectroscopically identified white dwarfs. Moreover, the statistical significance of the white dwarf luminosity function has also received little attention, and a thorough study still remains to be done. In this paper we study, using a controlled synthetic sample of white dwarfs generated with a Monte Carlo simulator, the statistical significance of the white dwarf luminosity function and the biases that can be expected. We also present a comparison between different estimators for computing the white dwarf luminosity function. We find that for sufficiently large sample sizes the V/Vmax method provides a reliable characterization of the white dwarf luminosity function, provided that the input sample is selected carefully. In particular, the V/Vmax method recovers well the position of the cut-off of the white dwarf luminosity function. However, this method turns out to be less robust than the Choloniewski method when the possible incompleteness of the sample is taken into account. We also find that the Choloniewski method performs better than the V/Vmax method in estimating the overall density of white dwarfs, but misses the exact location of the cut-off of the white dwarf luminosity function.

Comment: 14 pages, 12 figures, accepted for publication in MNRAS
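For reference, the V/Vmax estimator discussed above is the standard Schmidt (1968) method: each star is weighted by the inverse of the maximum volume within which it would still pass the survey selection, and ⟨V/Vmax⟩ ≈ 0.5 serves as a completeness diagnostic. The following is a minimal Python sketch of that generic estimator, not the authors' implementation; the function and variable names (vvmax_luminosity_function, mbol, d_max_pc) are illustrative assumptions.

```python
import numpy as np

def vvmax_luminosity_function(mbol, d_pc, d_max_pc, bins):
    """Schmidt (1968) 1/Vmax estimator (illustrative sketch).

    mbol     : bolometric magnitudes of the sample stars
    d_pc     : distance to each star (pc)
    d_max_pc : maximum distance at which each star would still
               pass the survey selection (pc)
    bins     : bin edges in bolometric magnitude
    """
    # Volume enclosed at the star's distance, and the maximum
    # volume within which the star remains detectable.
    v = (4.0 / 3.0) * np.pi * d_pc**3
    vmax = (4.0 / 3.0) * np.pi * d_max_pc**3

    # Completeness diagnostic: <V/Vmax> should be close to 0.5
    # for a complete, uniformly distributed sample.
    completeness = np.mean(v / vmax)

    # Space density per magnitude bin: sum of 1/Vmax weights,
    # normalized by the bin width.
    weights = 1.0 / vmax
    phi, _ = np.histogram(mbol, bins=bins, weights=weights)
    phi /= np.diff(bins)

    # Poisson-like error bars: sqrt of the sum of squared weights.
    err, _ = np.histogram(mbol, bins=bins, weights=weights**2)
    err = np.sqrt(err) / np.diff(bins)

    return phi, err, completeness
```

In the simplest magnitude-limited case, d_max follows from the survey limit m_lim via d_max = d · 10^{0.2 (m_lim − m)}; more realistic selection functions (proper-motion cuts, direction-dependent limits) shrink Vmax further, which is precisely the kind of incompleteness to which the abstract refers.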
