The norm of the gradient ∇f(x) measures the maximum descent of a
real-valued smooth function f at x. For (nonsmooth) convex functions, this is
expressed by the distance dist(0, ∂f(x)) of the subdifferential to
the origin, while for general real-valued functions defined on metric spaces
it is expressed by the notion of metric slope |∇f|(x).
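For reference, the metric slope admits the following standard formula (the classical De Giorgi–Marino–Tosques convention, recalled here for context rather than taken from this abstract):

% Metric slope on a metric space (X, d); only downhill differences contribute.
\[
  |\nabla f|(x) \;=\; \limsup_{y \to x} \frac{\bigl(f(x) - f(y)\bigr)^{+}}{d(x,y)},
  \qquad t^{+} := \max\{t, 0\},
\]
with the convention |∇f|(x) = 0 when x is an isolated point of X.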
In this work we propose an
axiomatic definition of the descent modulus T[f](x) of a real-valued function f
at every point x, defined on a general (not necessarily metric) space. The
definition encompasses all of the above instances, as well as average descents for
functions defined on probability spaces. We show that a large class of
functions is completely determined by the descent modulus and the corresponding
critical values. This result is already surprising in the smooth case: a
one-dimensional piece of information (the norm of the gradient) turns out to be almost as
powerful as knowledge of the full gradient mapping.
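To see why "almost" cannot be dropped, here is a one-dimensional illustration (our own example, added for intuition, not taken from this abstract):

% Two smooth functions with identical gradient-norm data and critical value.
\[
  f(x) = x^{2}, \qquad g(x) = -x^{2}, \qquad |f'(x)| = |g'(x)| = 2|x|, \qquad f(0) = g(0) = 0.
\]
The two functions share the same norm of gradient everywhere and the same critical value at the origin, yet they differ; some extra hypothesis (such as boundedness from below) must break this sign symmetry.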
In the nonsmooth case,
the key element of this determination result is the breaking of symmetry induced
by the downhill orientation, in the spirit of the definition of the metric slope.
The particular case of functions defined on finite spaces is studied in the
last section. In this case, we obtain an explicit classification of descent
operators that are, in some sense, typical.