This work proposes a universal and adaptive second-order method for
minimizing second-order smooth, convex functions. Our algorithm achieves
O(σ/√T) convergence when the oracle feedback is stochastic with
variance σ², and improves its convergence to O(1/T³) with
deterministic oracles, where T is the number of iterations. Our method also
interpolates between these rates without knowing the nature of the oracle a
priori, which is enabled by a parameter-free adaptive step-size that is
oblivious to the smoothness modulus, the variance bound, and the diameter of
the constraint set. To our knowledge, this is the first universal algorithm with
such global guarantees within the second-order optimization literature.

Comment: 32 pages, 4 figures, accepted at NeurIPS 202
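
The abstract does not spell out the update rule, so the following Python sketch is only a rough, hypothetical illustration of what a parameter-free, noise-adaptive step-size can look like. It uses an AdaGrad-norm rule on plain gradient steps as a stand-in; the function names, constants, and the first-order simplification are our assumptions, not the paper's actual second-order method:

import numpy as np

def adaptive_step(sum_sq):
    # AdaGrad-norm step-size: 1 / sqrt(cumulative squared gradient norms).
    # Uses no smoothness modulus, variance bound, or diameter estimate.
    return 1.0 / np.sqrt(sum_sq + 1e-12)

def minimize(grad, x0, T=2000):
    # Gradient descent with the adaptive step-size above. `grad` may be
    # exact or stochastic; the step-size shrinks when the feedback is
    # noisy and stays larger when it is near-deterministic, mirroring
    # the oracle-agnostic adaptivity the abstract describes (here for
    # first-order steps only, not the paper's second-order update).
    x = np.asarray(x0, dtype=float)
    sum_sq = 0.0
    for _ in range(T):
        g = grad(x)
        sum_sq += float(g @ g)
        x = x - adaptive_step(sum_sq) * g
    return x

# Example: noisy quadratic, grad f(x) = x plus Gaussian noise (sigma = 0.1).
rng = np.random.default_rng(0)
noisy_grad = lambda x: x + 0.1 * rng.standard_normal(x.shape)
print(minimize(noisy_grad, x0=[5.0, -3.0]))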