Provable Non-Convex Optimization and Algorithm Validation via Submodularity
Submodularity is one of the most well-studied properties of problem classes in
combinatorial optimization and in many applications of machine learning and
data mining, with strong implications for guaranteed optimization. In this
thesis, we investigate the role of submodularity in provable non-convex
optimization and in the validation of algorithms. A deep understanding of which
classes of functions can be tractably optimized remains a central challenge for
non-convex optimization. By advancing the notion of submodularity to continuous
domains (termed "continuous submodularity"), we characterize a class of
generally non-convex and non-concave functions -- continuous submodular
functions -- and derive algorithms for approximately maximizing them with strong
approximation guarantees. Meanwhile, continuous submodularity captures a wide
spectrum of applications, ranging from revenue maximization with general
marketing strategies and MAP inference for determinantal point processes (DPPs)
to mean-field inference for probabilistic log-submodular models, which makes it
valuable domain knowledge when optimizing this class of objectives. Validation
of algorithms is an information-theoretic framework for investigating the
robustness of algorithms to fluctuations in the input observations, as well as
their generalization ability. We
investigate various algorithms for one of the paradigmatic unconstrained
submodular maximization problems: MaxCut. Due to the submodularity of the
MaxCut objective, we present efficient approaches for calculating the
algorithmic information content of MaxCut algorithms. The results provide
insights into the robustness of different algorithmic techniques for MaxCut.

Comment: PhD thesis of Yatao (An) Bian; it is about continuous submodular
optimization and algorithm validation.
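As a toy illustration of the submodularity of the MaxCut objective mentioned in the abstract, the sketch below brute-forces the diminishing-returns condition f(S ∪ {v}) − f(S) ≥ f(T ∪ {v}) − f(T) for all S ⊆ T and v ∉ T on a small graph. This is not code from the thesis; the example graph and the helper names `cut_value` and `is_submodular` are chosen here purely for demonstration.

```python
from itertools import combinations

# Hypothetical 4-node example graph, given as an edge list.
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
nodes = {0, 1, 2, 3}

def cut_value(S):
    """Cut function f(S): number of edges crossing between S and its complement."""
    S = set(S)
    return sum(1 for u, v in edges if (u in S) != (v in S))

def is_submodular():
    """Brute-force check of diminishing returns:
    f(S | {v}) - f(S) >= f(T | {v}) - f(T) for all S <= T and v not in T."""
    subsets = [set(c) for r in range(len(nodes) + 1)
               for c in combinations(sorted(nodes), r)]
    for S in subsets:
        for T in subsets:
            if not S <= T:
                continue
            for v in nodes - T:
                gain_S = cut_value(S | {v}) - cut_value(S)
                gain_T = cut_value(T | {v}) - cut_value(T)
                if gain_S < gain_T:
                    return False
    return True

print(is_submodular())  # cut functions are submodular, so this prints True
```

The same exhaustive check runs on any small graph; for the algorithmic analyses the thesis summarizes, this structural property is what enables the efficient computation of algorithmic information content for MaxCut algorithms.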