We study the problem of zero-order optimization of a strongly convex
function. The goal is to find the minimizer of the function by a sequential
exploration of its values, under measurement noise. We study the impact of
higher order smoothness properties of the function on the optimization error
and on the cumulative regret. To solve this problem, we consider a randomized
approximation of the projected gradient descent algorithm. The gradient is
estimated by a randomized procedure involving two function evaluations and a
smoothing kernel. We derive upper bounds for this algorithm in both the
constrained and unconstrained settings, and we prove minimax lower bounds for any
sequential search method. Our results imply that the zero-order algorithm is
nearly optimal in terms of sample complexity and the problem parameters. Based
on this algorithm, we also propose an estimator of the minimum value of the
function that achieves almost sharp oracle behavior. We compare our results with
the state of the art, highlighting a number of key improvements.
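
To fix ideas, here is a minimal Python sketch of one iteration of the kind of method described above: a two-point, kernel-smoothed randomized gradient estimate followed by a projected gradient step, run on a toy strongly convex objective. The kernel K(r) = 3r, the step-size and smoothing-radius schedules, and the helpers `noisy_f` and `project_ball` are illustrative assumptions, not the paper's exact construction or tuned constants.

```python
import numpy as np

rng = np.random.default_rng(0)

def kernel(r):
    # Illustrative kernel: K(r) = 3r satisfies E[K(r)] = 0 and
    # E[r K(r)] = 1 for r uniform on [-1, 1]; higher-order kernels
    # can be used to exploit higher order smoothness.
    return 3.0 * r

def gradient_estimate(query, x, h):
    """Two-point randomized gradient estimate with kernel smoothing."""
    d = x.size
    zeta = rng.normal(size=d)
    zeta /= np.linalg.norm(zeta)       # direction uniform on the unit sphere
    r = rng.uniform(-1.0, 1.0)
    y_plus = query(x + h * r * zeta)   # first noisy function evaluation
    y_minus = query(x - h * r * zeta)  # second noisy function evaluation
    return (d / (2.0 * h)) * (y_plus - y_minus) * kernel(r) * zeta

def project_ball(x, radius):
    # Euclidean projection onto the ball of the given radius.
    n = np.linalg.norm(x)
    return x if n <= radius else x * (radius / n)

# Toy strongly convex objective with additive Gaussian evaluation noise
# (hypothetical example, not from the paper).
x_star = np.array([0.5, -0.3, 0.2])
def noisy_f(x):
    return np.sum((x - x_star) ** 2) + 0.01 * rng.normal()

x = np.zeros(3)
for t in range(1, 5001):
    eta = 1.0 / t       # illustrative step size, order 1/t for strong convexity
    h = t ** (-0.25)    # illustrative smoothing radius for beta = 2 smoothness
    g = gradient_estimate(noisy_f, x, h)
    x = project_ball(x - eta * g, radius=2.0)

print("final iterate:", x)  # should approach x_star
```

With K(r) = 3r and a quadratic objective, the estimate is unbiased for the gradient since E[r^2] = 1/3 and E[zeta zeta^T] = I/d; smoother kernels and schedules would be needed to match the rates claimed for general smoothness levels.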