We derive a general Convex Linearly Constrained Program (CLCP), parameterized by a matrix G constructed from the information given by the input-output pairs. The CLCP then chooses a set of regularization and loss functions in order to impose constraints for the learning task. We show that several algorithms, including the SVM, LPBoost, and Ridge Regression, can be solved within the same optimization framework when the appropriate choices of G, regularization, and loss function are made. Owing to this unification, we show that if G is constructed from more complex input-output paired information, then we can solve more difficult problems, such as structured output learning, with the same complexity as a regression/classification problem. We discuss various forms of G and then show, on real-world enzyme prediction tasks requiring structured outputs, that our method performs as well as the state of the art.
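To make the idea of a framework parameterized by a matrix G concrete, the following is a minimal sketch (not the paper's exact CLCP formulation) of one of the listed special cases: kernel ridge regression, where G is the Gram matrix built from the inputs and the squared loss with an L2 regularizer yields a closed-form dual solution alpha = (G + lam*I)^{-1} y. The data and the regularization strength `lam` here are illustrative choices, not values from the paper.

```python
import numpy as np

# Illustrative sketch: ridge regression solved in the dual, driven entirely
# by a Gram matrix G constructed from the input-output pairs.
# With a linear kernel, G = X X^T and the dual coefficients are
#   alpha = (G + lam * I)^{-1} y,
# which implies primal weights w = X^T alpha.

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))          # 50 inputs in R^3 (synthetic example)
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true                        # noise-free targets for clarity

G = X @ X.T                           # Gram matrix from the inputs
lam = 1e-6                            # ridge regularization strength (assumed)
alpha = np.linalg.solve(G + lam * np.eye(len(y)), y)

w_recovered = X.T @ alpha             # primal weights implied by the dual
print(np.allclose(w_recovered, w_true, atol=1e-3))  # close to w_true
```

Swapping the squared loss for a hinge loss and keeping the same G would instead recover an SVM-style problem, which is the unification the abstract describes.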