We start with the algorithm of Ferson et al. (\emph{Reliable Computing} {\bf
11}(3), pp.~207--233, 2005), designed for solving a certain NP-hard problem
motivated by robust statistics.
First, we propose an efficient implementation of the algorithm and improve
its complexity bound to $O(n \log n + n \cdot 2^{\omega})$, where $\omega$ is the
clique number of a certain intersection graph. Then we treat the input data as
random variables (as is usual in statistics) and introduce a natural
probabilistic data generating model. On average, we get $2^{\omega} = O(n^{1/\log\log n})$ and $\omega = O(\log n / \log\log n)$. This results in
average computing time $O(n^{1+\epsilon})$ for arbitrarily small $\epsilon > 0$,
which may be regarded as ``surprisingly good'' average-case time complexity
for solving an NP-hard problem. Moreover, we prove the following tail bound on
the distribution of computation time: ``hard'' instances, forcing the algorithm
to compute in time $2^{\Omega(n)}$, occur rarely, with probability tending to
zero faster than exponentially as $n \to \infty$.
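
The step from the clique-number bound to the average running time can be checked in one line. As a sketch (assuming $\omega \le c\,\log n/\log\log n$ for some constant $c$, with $\log$ denoting $\log_2$):
\[
  2^{\omega}
  \;\le\; 2^{c \log n / \log\log n}
  \;=\; \bigl(2^{\log n}\bigr)^{c/\log\log n}
  \;=\; n^{c/\log\log n},
\]
so the exponent $c/\log\log n$ falls below any fixed $\epsilon > 0$ for $n$ large enough, and the total bound $O(n \log n + n \cdot 2^{\omega})$ becomes $O(n^{1+\epsilon})$.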