Bilevel Integrative Optimization for Ill-posed Inverse Problems
Classical optimization techniques often formulate feasibility as set, equality, or inequality constraints. However, explicitly designing these constraints is challenging for complex real-world applications, and overly strict constraints can even render the optimization problem intractable. Moreover, it remains hard to incorporate data-dependent information into conventional numerical iterations. To partially address these limitations, and inspired by the leader-follower gaming perspective, this work first introduces a bilevel-type formulation to jointly investigate the feasibility and optimality of nonconvex and nonsmooth optimization problems. Then we develop an algorithmic framework that couples
forward-backward proximal computations to optimize our established bilevel
leader-follower model. We prove its convergence and estimate the convergence
rate. Furthermore, a learning-based extension is developed, in which we
establish an unrolling strategy to incorporate data-dependent network
architectures into our iterations. We further prove that, under some mild checking conditions, all our original convergence results are preserved for this learnable extension. As a nontrivial byproduct,
we demonstrate how to apply this ensemble-like methodology to address different
low-level vision tasks. Extensive experiments verify the theoretical results
and show the advantages of our method against existing state-of-the-art
approaches.
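The abstract's specific bilevel model and convergence analysis are not reproduced here; as a generic, illustrative sketch of what "forward-backward proximal computations" means, the following solves a toy l1-regularized least-squares problem by alternating a forward gradient step on the smooth term with a backward proximal step on the nonsmooth term (all data and parameter values are illustrative, not from the paper):

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal (backward) operator of tau * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def forward_backward(A, b, lam, n_iter=500):
    """Forward-backward splitting (ISTA) for
    min_x 0.5*||A x - b||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                          # forward (gradient) step
        x = soft_threshold(x - step * grad, step * lam)   # backward (proximal) step
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10)
x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true
x_hat = forward_backward(A, b, lam=0.1)
```

In the paper's setting, iterations of this kind handle the lower-level (follower) problem while an upper-level (leader) objective is optimized on top; the sketch above only shows the single-level building block.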
Investigating Bi-Level Optimization for Learning and Vision from a Unified Perspective: A Survey and Beyond
Bi-Level Optimization (BLO) originated in economic game theory and was later introduced into the optimization community. BLO can handle problems with a hierarchical structure, involving two levels of optimization tasks, where one task is nested inside the other. In machine learning and computer vision, despite their different motivations and mechanisms, many complex problems, such as hyper-parameter optimization, multi-task and meta-learning, neural architecture search, adversarial learning, and deep reinforcement learning, all contain a series of closely related subproblems. In this paper, we first uniformly express these complex
learning and vision problems from the perspective of BLO. Then we construct a
best-response-based single-level reformulation and establish a unified
algorithmic framework to understand and formulate mainstream gradient-based BLO
methodologies, covering aspects ranging from fundamental automatic
differentiation schemes to various accelerations, simplifications, extensions
and their convergence and complexity properties. Finally, we discuss the potential of our unified BLO framework for designing new algorithms and point out some promising directions for future research.
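To make the gradient-based BLO idea concrete, here is a minimal, self-contained sketch (not from the survey; the toy quadratic problems and all parameter values are assumptions for illustration) of unrolled differentiation: the lower-level problem is approximated by T gradient steps, the derivative of the final iterate with respect to the upper-level variable is propagated through the unrolled recursion, and the resulting hypergradient drives upper-level descent:

```python
def lower_steps(x, T=50, alpha=0.5):
    """Unroll T gradient steps on the toy lower-level problem
    g(x, y) = 0.5 * (y - x)**2, starting from y0 = 0.
    Returns the approximate best response y_T and dy_T/dx,
    computed by differentiating through the unrolled recursion
    y_{t+1} = y_t - alpha * (y_t - x)."""
    y, dy_dx = 0.0, 0.0
    for _ in range(T):
        y = y - alpha * (y - x)
        dy_dx = dy_dx - alpha * (dy_dx - 1.0)  # chain rule through one step
    return y, dy_dx

def hypergradient(x, T=50):
    """Gradient of the upper-level loss F(x, y_T(x)) = 0.5*(y_T - 1)**2
    w.r.t. x; here F has no direct x-dependence, so only the
    indirect (unrolled) term dF/dy * dy/dx appears."""
    y, dy_dx = lower_steps(x, T)
    return (y - 1.0) * dy_dx

x = 5.0
for _ in range(100):
    x -= 0.5 * hypergradient(x)
# The exact best response is y*(x) = x, so the upper-level optimum is x = 1.
```

Practical gradient-based BLO methods replace these hand-derived derivatives with automatic differentiation through the unrolled lower-level solver, which is exactly the family of schemes the survey's unified framework covers.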