For the standard kernel density estimate, it is known that one can tune the bandwidth such that the expected L1 error is within a constant factor of the optimal L1 error (obtained when one is allowed to choose the bandwidth with knowledge of the density). In this paper, we pose the same problem for variable bandwidth kernel estimates, where the bandwidths are allowed to depend upon the location. We show in particular that for positive kernels on the real line, for any data-based bandwidth, there exists a density for which the ratio of expected L1 error over optimal L1 error tends to infinity. Thus, the problem of tuning the variable bandwidth in an optimal manner is ``too hard''. Moreover, from the class of counterexamples exhibited in the paper, it appears that placing conditions on the densities (monotonicity, convexity, smoothness) does not help.

Keywords: density estimation, variable kernel estimate, convergence, smoothing factor, minimax lower bounds, asymptotic optimality
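The fixed-bandwidth setting referenced in the first sentence can be sketched as follows. This is a minimal illustration, not the paper's construction: it assumes a Gaussian kernel, Silverman's rule of thumb as the data-based bandwidth, and a standard normal target density, and it approximates the L1 error by a Riemann sum.

```python
import math
import random

random.seed(0)

def gaussian_kernel(u):
    return math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)

def kde(x, data, h):
    # standard (fixed-bandwidth) kernel density estimate f_n(x)
    return sum(gaussian_kernel((x - xi) / h) for xi in data) / (len(data) * h)

def true_density(x):
    # standard normal density (illustrative target, not from the paper)
    return math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)

n = 500
data = [random.gauss(0.0, 1.0) for _ in range(n)]

# Silverman's rule of thumb: one example of a data-based bandwidth
mean = sum(data) / n
sd = math.sqrt(sum((xi - mean) ** 2 for xi in data) / (n - 1))
h = 1.06 * sd * n ** (-1 / 5)

# Approximate the L1 error, integral of |f_n - f|, on [-5, 5]
m = 1000
grid = [-5 + 10 * k / m for k in range(m + 1)]
l1 = sum(abs(kde(x, data, h) - true_density(x)) for x in grid) * (10 / m)
print(f"bandwidth h = {h:.3f}, approximate L1 error = {l1:.3f}")
```

A variable-bandwidth estimate would replace the single `h` with a function h(x) of the location; the paper's negative result says no data-based choice of such a function can stay within a constant factor of the optimal L1 error for every density.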