
    Non-smooth M-estimator for maximum consensus estimation

    This paper revisits the application of M-estimators to a spectrum of robust estimation problems in computer vision, with particular attention to the maximum consensus criterion. Current practice relies on smooth robust loss functions, e.g. the Huber loss, which allows M-estimators to be tackled by well-known optimization techniques such as Iteratively Re-weighted Least Squares (IRLS). When consensus maximization is used as the loss function for an M-estimator, however, the optimization problem becomes non-smooth. This paper proposes an approach to resolve this issue. Based on the Alternating Direction Method of Multipliers (ADMM), we develop a deterministic algorithm that is provably convergent, which enables the maximum consensus problem to be solved within the M-estimation framework. We further show that our algorithm outperforms approaches based on the differentiable robust loss functions currently favored by many practitioners. Notably, the proposed method allows the sub-problems to be solved efficiently in parallel, making it well suited to distributed settings.
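    To make the splitting concrete, the following is a minimal illustrative sketch in Python/NumPy, assuming a linear residual model r = Ax - b and the truncated 0/1 (consensus) loss rho(r) = 0 if |r| <= eps, else 1. The function name admm_max_consensus, the penalty parameter mu, and the least-squares warm start are assumptions for illustration only; the paper's exact splitting and convergence safeguards are not reproduced here. The z-update is element-wise, which is the sense in which the sub-problems can run in parallel.

    import numpy as np

    def admm_max_consensus(A, b, eps, mu=1.0, iters=300):
        # Illustrative ADMM splitting (an assumed formulation, not the
        # paper's exact algorithm):
        #   minimize  sum_i rho(z_i)   subject to  z = A x - b,
        # where rho(r) = 0 if |r| <= eps, else 1 (non-smooth consensus loss).
        m, n = A.shape
        x = np.linalg.lstsq(A, b, rcond=None)[0]   # warm start from ordinary LS
        z = A @ x - b
        u = np.zeros(m)                            # scaled dual variable
        AtA = A.T @ A
        for _ in range(iters):
            # x-update: a plain least-squares solve (mu cancels out)
            x = np.linalg.solve(AtA, A.T @ (b + z - u))
            # z-update: element-wise prox of the truncated 0/1 loss; each
            # coordinate is independent, so this step parallelizes trivially
            v = A @ x - b + u
            proj = np.clip(v, -eps, eps)                   # snap into the inlier band
            proj_cost = 0.5 * mu * (np.abs(v) - eps) ** 2  # quadratic cost of snapping
            # snapping beats paying the outlier penalty of 1 when proj_cost <= 1
            z = np.where((np.abs(v) <= eps) | (proj_cost <= 1.0), proj, v)
            # u-update: gradient ascent on the scaled dual
            u += A @ x - b - z
        return x

    # Toy usage: fit a line y = a*t + c with 40% gross outliers
    rng = np.random.default_rng(0)
    t = rng.uniform(-1, 1, 100)
    y = 2.0 * t + 0.5 + 0.01 * rng.standard_normal(100)
    y[:40] += rng.uniform(-5, 5, 40)               # corrupt 40 of the 100 points
    A = np.column_stack([t, np.ones_like(t)])
    x_hat = admm_max_consensus(A, y, eps=0.05)
    print(x_hat)  # close to [2.0, 0.5] despite the outliers

    Note that because the consensus loss is non-convex, a sketch like this inherits none of the paper's convergence guarantees; it only illustrates how the non-smooth loss is isolated in a closed-form, element-wise proximal step.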
