For solving a broad class of nonconvex programming problems on an unbounded
constraint set, we provide a self-adaptive step-size strategy that does not
rely on line-search techniques, and we establish the convergence of a generic
method under mild assumptions. Specifically, the objective function need not
be convex. Unlike descent line-search algorithms, the method does not require
a known Lipschitz constant to determine its initial step size. The crucial
feature of the scheme is a steady reduction of the step size until a certain
condition is fulfilled. In particular, the strategy yields a new gradient
projection method for optimization problems over an unbounded constraint set.
Preliminary results on several computational examples verify the correctness
of the proposed method. To demonstrate the effectiveness of the proposed
technique on large-scale problems, we apply it to machine learning
experiments, including supervised feature selection, multivariable logistic
regression, and neural networks for classification.
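
As a rough illustration of such a scheme (not the paper's exact algorithm),
the sketch below runs a projected-gradient iteration whose step size shrinks
by a fixed factor whenever a local smoothness test fails, so that no line
search and no global Lipschitz constant is needed. The function name, the
projection operator, and the parameters lam0, rho, and mu are all illustrative
assumptions, not quantities taken from the paper.

```python
import numpy as np

def adaptive_projected_gradient(grad, project, x0, lam0=0.1, rho=0.5,
                                mu=0.9, max_iter=10_000, tol=1e-8):
    """Projected-gradient sketch with a self-adaptive step size.

    The step size lam is cut by the factor rho whenever a local
    smoothness test fails, so it decreases steadily until the test
    holds; no line search and no global Lipschitz constant is used.
    """
    x = np.asarray(x0, dtype=float)
    lam, g = lam0, grad(x)
    for _ in range(max_iter):
        x_new = project(x - lam * g)         # gradient step, then projection
        if np.linalg.norm(x_new - x) < tol:  # (approximate) fixed point
            break
        g_new = grad(x_new)
        # Local test: lam must be small relative to the gradient's
        # locally observed Lipschitz behaviour along the last step.
        if lam * np.linalg.norm(g_new - g) > mu * np.linalg.norm(x_new - x):
            lam *= rho                       # steady step-size reduction
        x, g = x_new, g_new
    return x

# Toy example: minimize the nonconvex function x**4 - 2*x**2 over the
# unbounded set C = {x : x >= 0}, whose projection is a clamp at zero.
grad_f = lambda x: 4 * x**3 - 4 * x
proj_C = lambda x: np.maximum(x, 0.0)
print(adaptive_projected_gradient(grad_f, proj_C, x0=np.array([1.5])))
# prints approximately [1.], the constrained minimizer
```

Because the step size is only ever reduced, the iteration eventually settles
on a value small enough for the local test to hold, after which it behaves
like a fixed-step projected-gradient method near a stationary point.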