    Optimal Dynamic Portfolio with Mean-CVaR Criterion

    Value-at-Risk (VaR) and Conditional Value-at-Risk (CVaR) are popular risk measures from academic, industrial, and regulatory perspectives. The problem of minimizing CVaR is theoretically known to admit a binary solution of Neyman-Pearson type. We add a constraint on the expected return to investigate the Mean-CVaR portfolio selection problem in a dynamic setting: the investor faces a Markowitz-type risk-reward problem at the final horizon, where variance as the measure of risk is replaced by CVaR. Under the complete-market assumption, we give an analytical solution in general. The novelty of our solution is that it is no longer of Neyman-Pearson type, in which the final optimal portfolio takes only two values. Instead, when the portfolio value is required to be bounded from above, the optimal solution takes three values; when there is no upper bound, an optimal investment portfolio does not exist, though a three-level portfolio still provides a sub-optimal solution.
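    For orientation, one standard static formulation of the Mean-CVaR problem described above is the Rockafellar-Uryasev form sketched below; the notation (terminal wealth X, confidence level alpha, target return mu, pricing kernel rho, initial capital x0) is ours and not taken from the paper, which treats the dynamic, complete-market version.

```latex
\begin{aligned}
\min_{X,\; z\in\mathbb{R}}\quad & z + \frac{1}{1-\alpha}\,\mathbb{E}\!\left[\left(-X - z\right)^{+}\right]
  && \text{(Rockafellar--Uryasev form of } \mathrm{CVaR}_{\alpha}(-X)\text{)}\\
\text{s.t.}\quad & \mathbb{E}[X] \;\ge\; \mu
  && \text{(expected-return constraint)}\\
 & \mathbb{E}[\rho\, X] \;=\; x_{0}
  && \text{(budget constraint with pricing kernel } \rho \text{ in a complete market)}
\end{aligned}
```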

    Constrained Convex Neyman-Pearson Classification Using an Outer Approximation Splitting Method

    This paper presents an algorithm for Neyman-Pearson classification. While empirical risk minimization approaches focus on minimizing a global risk, the Neyman-Pearson framework minimizes the type II risk under an upper-bound constraint on the type I risk. Since the 0/1 loss function is not convex, optimization methods employ convex surrogates that lead to tractable minimization problems. As shown in recent work, statistical bounds can be derived to quantify the cost of using such surrogates instead of the exact 0/1 loss. However, no specific algorithm has yet been proposed to actually solve the resulting minimization problem numerically. The contribution of this paper is to propose an efficient splitting algorithm to address this issue. Our method alternates a gradient step on the objective surrogate risk and an approximate projection step onto the constraint set, which is implemented by means of an outer approximation subgradient projection algorithm. Experiments on both synthetic data and biological data show the efficiency of the proposed method.
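    A minimal sketch of the alternating scheme described above, for a linear classifier with hinge surrogates: a gradient step on the type II surrogate risk, followed by an approximate projection back onto the type I constraint set. All names are illustrative, and the crude inner loop of subgradient steps stands in for (and is not) the paper's outer-approximation subgradient projection.

```python
import numpy as np

def np_classification_sketch(X0, X1, alpha, n_iters=500, lr=0.1):
    """Illustrative Neyman-Pearson-style training of a linear classifier.

    X0: class-0 samples (type I errors are measured on these)
    X1: class-1 samples (type II errors are measured on these)
    alpha: upper bound on the surrogate type I risk
    """
    d = X0.shape[1]
    w = np.zeros(d)

    def type1_risk_and_grad(w):
        # hinge surrogate for (wrongly) predicting positive on class-0 points
        margins = 1.0 + X0 @ w
        active = margins > 0
        risk = margins[active].sum() / len(X0)
        grad = X0[active].sum(axis=0) / len(X0)
        return risk, grad

    def type2_grad(w):
        # hinge surrogate for (wrongly) predicting negative on class-1 points
        margins = 1.0 - X1 @ w
        active = margins > 0
        return -X1[active].sum(axis=0) / len(X1)

    for _ in range(n_iters):
        # gradient step on the surrogate type II risk (the objective)
        w = w - lr * type2_grad(w)
        # approximate projection: push the type I surrogate risk under alpha
        for _ in range(20):
            risk, grad = type1_risk_and_grad(w)
            if risk <= alpha:
                break
            w = w - lr * grad
    return w
```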

    GBM-based Bregman Proximal Algorithms for Constrained Learning

    As the complexity of learning tasks surges, modern machine learning encounters a new constrained learning paradigm characterized by more intricate and data-driven function constraints. Prominent applications include Neyman-Pearson classification (NPC) and fairness classification, which entail specific risk constraints that render standard projection-based training algorithms unsuitable. Gradient boosting machines (GBMs) are among the most popular algorithms for supervised learning; however, they are generally limited to unconstrained settings. In this paper, we adapt the GBM for constrained learning tasks within the framework of Bregman proximal algorithms. We introduce a new Bregman primal-dual method with a global optimality guarantee when the learning objective and constraint functions are convex. In cases of nonconvex functions, we demonstrate how our algorithm remains effective under a Bregman proximal point framework. Distinct from existing constrained learning algorithms, ours possesses the unique advantage of seamlessly integrating with publicly available GBM implementations such as XGBoost (Chen and Guestrin, 2016) and LightGBM (Ke et al., 2017), relying exclusively on their public interfaces. We provide substantial experimental evidence to showcase the effectiveness of the Bregman algorithm framework. While our primary focus is on NPC and fairness classification, our framework holds significant potential for a broader range of constrained learning applications. The source code is freely available at https://github.com/zhenweilin/ConstrainedGBM.
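    To illustrate the kind of integration the abstract refers to (using only XGBoost's public custom-objective interface), the sketch below folds a fixed constraint multiplier into the gradients and Hessians passed to xgb.train. This is not the paper's Bregman primal-dual method: the multiplier lam is held constant rather than updated as a dual variable, and the data, group mask, and parameter values are hypothetical.

```python
import numpy as np
import xgboost as xgb  # public interface only

def make_penalized_objective(y, group, lam):
    """Logistic loss with the constrained subset up-weighted by lam,
    expressed as the (grad, hess) pair that XGBoost's custom-objective
    interface expects. Purely illustrative; not the paper's algorithm."""
    def objective(predt, dtrain):
        p = 1.0 / (1.0 + np.exp(-predt))  # sigmoid of raw scores
        grad = p - y                      # logistic-loss gradient
        hess = p * (1.0 - p)              # logistic-loss Hessian
        grad[group] *= (1.0 + lam)        # emphasize the constrained rows
        hess[group] *= (1.0 + lam)
        return grad, hess
    return objective

# hypothetical data: `group` marks the rows entering the risk constraint
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + rng.normal(size=200) > 0).astype(float)
group = X[:, 1] > 0.5

dtrain = xgb.DMatrix(X, label=y)
booster = xgb.train({"max_depth": 3, "eta": 0.1}, dtrain,
                    num_boost_round=50,
                    obj=make_penalized_objective(y, group, lam=0.5))
```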

    Proximally Constrained Methods for Weakly Convex Optimization with Weakly Convex Constraints

    Optimization models with non-convex constraints arise in many machine learning tasks, e.g., learning with fairness constraints or Neyman-Pearson classification with a non-convex loss. Although many efficient methods with theoretical convergence guarantees have been developed for non-convex unconstrained problems, it remains a challenge to design provably efficient algorithms for problems with non-convex functional constraints. This paper proposes a class of subgradient methods for constrained optimization where the objective function and the constraint functions are weakly convex. Our methods solve a sequence of strongly convex subproblems, in which a proximal term is added to both the objective function and each constraint function. Each subproblem can be solved by various algorithms for strongly convex optimization. Under a uniform Slater's condition, we establish the computational complexity of our methods for finding a nearly stationary point.
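    A minimal sketch of the proximally regularized scheme described above, assuming a single constraint g(x) <= 0: each outer iteration adds rho/2 * ||x - x_k||^2 to both the objective and the constraint and approximately solves the resulting strongly convex subproblem, here with a simple switching subgradient rule. The switching rule, step sizes, and names are our own illustrative choices, not the paper's specific methods.

```python
import numpy as np

def proximal_constrained_subgradient(f, g, subgrad_f, subgrad_g, x0,
                                     rho=1.0, outer_iters=20,
                                     inner_iters=200, tol=1e-3):
    """Outer loop: proximal-point steps; inner loop: switching subgradient
    method on the proximally regularized (hence strongly convex) subproblem."""
    x_k = np.asarray(x0, dtype=float)
    for _ in range(outer_iters):
        x = x_k.copy()
        for t in range(1, inner_iters + 1):
            step = 2.0 / (rho * (t + 1))  # classic step size under strong convexity
            if g(x) + 0.5 * rho * np.sum((x - x_k) ** 2) > tol:
                # regularized constraint violated: take a step on it
                d = subgrad_g(x) + rho * (x - x_k)
            else:
                # otherwise step on the regularized objective
                d = subgrad_f(x) + rho * (x - x_k)
            x = x - step * d
        x_k = x  # the proximal center moves to the approximate subproblem solution
    return x_k
```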