Cone regression is a particular case of quadratic programming that minimizes
a weighted sum of squared residuals under a set of linear inequality
constraints. Several important statistical problems, such as isotonic
regression, concave regression, and ANOVA under partial orderings, can be cast
as particular instances of the cone regression problem. Given its
relevance in Statistics, this paper aims to address the fundamentals of cone
regression from a theoretical and practical point of view. Several formulations
of the cone regression problem are considered and, focusing on concave
regression as a case study, several algorithms are analyzed and compared both
qualitatively and quantitatively through numerical simulations.
Improvements that enhance numerical stability and bound the computational cost
are proposed. For each algorithm analyzed, pseudo-code and a corresponding
Scilab implementation are provided. The results of this study
demonstrate that the choice of the optimization approach strongly impacts the
numerical performance. It is also shown that no currently available method can
efficiently solve cone regression problems of large dimension (more than
several thousand points). We suggest filling this gap in future research by
exploiting and adapting classical multi-scale strategies to compute an
approximate solution.
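
For concreteness, the problem referred to above can be stated schematically as
follows; the notation is illustrative rather than the paper's own ($y$ the data
vector, $w$ the weights, $A$ a constraint matrix):
\[
\hat{x} \;=\; \operatorname*{argmin}_{x \in K} \; \sum_{i=1}^{n} w_i \,(y_i - x_i)^2,
\qquad
K = \{\, x \in \mathbb{R}^n : A x \ge 0 \,\},
\]
where the polyhedral convex cone $K$ encodes the linear inequality constraints.
For instance, concave regression on an equally spaced grid corresponds to rows
of $A$ imposing nonpositive second differences, $x_{i+1} - 2x_i + x_{i-1} \le 0$.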