Optimization models with non-convex constraints arise in many tasks in
machine learning, e.g., learning with fairness constraints or Neyman-Pearson
classification with non-convex loss. Although many efficient methods have been
developed with theoretical convergence guarantees for non-convex unconstrained
problems, it remains a challenge to design provably efficient algorithms for
problems with non-convex functional constraints. This paper proposes a class of
subgradient methods for constrained optimization where the objective function
and the constraint functions are weakly convex. Our methods solve a
sequence of strongly convex subproblems, where a proximal term is added to both
the objective function and each constraint function. Each subproblem can be
solved by various algorithms for strongly convex optimization. Under a uniform
Slater's condition, we establish the computational complexity of our methods
for finding a nearly stationary point.
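
To make the construction concrete, the following is a minimal sketch of one
subproblem, under assumed notation not fixed in the text above: $f$ is the
objective, $g$ the constraint function, $x_t$ the current iterate, and
$\hat\rho$ a regularization parameter chosen larger than the weak-convexity
modulus $\rho$:
\[
x_{t+1} \approx \operatorname*{arg\,min}_{x}\;
f(x) + \frac{\hat\rho}{2}\,\lVert x - x_t\rVert^2
\quad \text{s.t.} \quad
g(x) + \frac{\hat\rho}{2}\,\lVert x - x_t\rVert^2 \le 0.
\]
Since adding $(\hat\rho/2)\lVert x - x_t\rVert^2$ to a $\rho$-weakly convex
function yields a $(\hat\rho - \rho)$-strongly convex one, both the
subproblem's objective and its constraint are strongly convex, which is what
allows each subproblem to be handed to an off-the-shelf solver for strongly
convex optimization.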