22,317 research outputs found

    Provably Convergent Plug-and-Play Quasi-Newton Methods

    Plug-and-Play (PnP) methods are a class of efficient iterative methods that aim to combine data fidelity terms and deep denoisers using classical optimization algorithms, such as ISTA or ADMM, with applications in inverse problems and imaging. Provable PnP methods are a subclass of PnP methods with convergence guarantees, such as fixed-point convergence or convergence to critical points of some energy function. Many existing provable PnP methods impose heavy restrictions on the denoiser or fidelity function, such as non-expansiveness or strict convexity, respectively. In this work, we propose a novel algorithmic approach incorporating quasi-Newton steps into a provable PnP framework based on proximal denoisers, resulting in greatly accelerated convergence while retaining light assumptions on the denoiser. By characterizing the denoiser as the proximal operator of a weakly convex function, we show that the fixed points of the proposed quasi-Newton PnP algorithm are critical points of a weakly convex function. Numerical experiments on image deblurring and super-resolution demonstrate 2-8x faster convergence than other provable PnP methods, with similar reconstruction quality.
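
    The iteration described in this abstract alternates a gradient step on the data-fidelity term with a learned denoiser applied as a proximal operator. The NumPy sketch below illustrates that generic PnP proximal-gradient (ISTA-style) pattern with a crude diagonal quasi-Newton scaling; the soft-threshold "denoiser", the quadratic fidelity, and the diagonal scaling are illustrative stand-ins, not the paper's actual algorithm.

```python
import numpy as np

def soft_threshold(z, tau):
    """Proximal operator of tau * ||.||_1, standing in for a learned denoiser."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def pnp_quasi_newton(y, A, denoise, step=0.5, iters=100):
    """PnP proximal-gradient sketch with a diagonal quasi-Newton scaling.

    The data-fidelity term is f(x) = 0.5 * ||A x - y||^2; `denoise` plays
    the role of the proximal step of an implicit prior. The diagonal
    Hessian scaling below is a crude quasi-Newton stand-in, not the
    paper's provably convergent construction.
    """
    x = A.T @ y                                   # simple initialisation
    d = np.maximum(np.sum(A * A, axis=0), 1e-8)   # diag(A^T A): Hessian diagonal
    for _ in range(iters):
        grad = A.T @ (A @ x - y)                  # gradient of the fidelity term
        x = denoise(x - step * grad / d)          # scaled step, then "proximal" denoiser
    return x

# Toy usage: sparse recovery from noisy linear measurements.
rng = np.random.default_rng(0)
A = rng.normal(size=(40, 20))
x_true = np.zeros(20)
x_true[:5] = rng.normal(size=5)                   # sparse ground truth
y = A @ x_true + 0.01 * rng.normal(size=40)
x_hat = pnp_quasi_newton(y, A, denoise=lambda z: soft_threshold(z, 0.01))
print(np.linalg.norm(x_hat - x_true))
```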

    Learning Robust Deep Equilibrium Models

    Deep equilibrium (DEQ) models have emerged as a promising class of implicit-layer models in deep learning, which abandon traditional depth by solving for the fixed points of a single nonlinear layer. Despite their success, the stability of the fixed points of these models remains poorly understood. Recently, Lyapunov theory has been applied to Neural ODEs, another type of implicit-layer model, to confer adversarial robustness. By considering DEQ models as nonlinear dynamic systems, we propose a robust DEQ model named LyaDEQ with provable stability guarantees via Lyapunov theory. The crux of our method is ensuring that the fixed points of the DEQ models are Lyapunov stable, which enables the LyaDEQ models to resist minor initial perturbations. To avoid poor adversarial defense due to Lyapunov-stable fixed points being located near each other, we add an orthogonal fully connected layer after the Lyapunov stability module to separate different fixed points. We evaluate LyaDEQ models on several widely used datasets under well-known adversarial attacks, and the experimental results demonstrate a significant improvement in robustness. Furthermore, we show that the LyaDEQ model can be combined with other defense methods, such as adversarial training, to achieve even better adversarial robustness.
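
    A DEQ layer outputs a fixed point z* = f(z*, x) of a single transformation rather than stacking depth. The NumPy sketch below shows the forward pass by naive fixed-point iteration; the tanh parameterization, spectral normalisation, and shapes are illustrative assumptions (practical DEQs use accelerated root-finders and implicit differentiation), and it does not implement the paper's Lyapunov module.

```python
import numpy as np

def deq_forward(x, W, U, b, tol=1e-6, max_iter=500):
    """Naive DEQ forward pass: iterate z <- tanh(W z + U x + b) to a fixed point.

    tanh is 1-Lipschitz, so scaling W to spectral norm < 1 makes the map
    a contraction and guarantees a unique fixed point.
    """
    z = np.zeros(W.shape[0])
    for _ in range(max_iter):
        z_next = np.tanh(W @ z + U @ x + b)
        if np.linalg.norm(z_next - z) < tol:
            return z_next
        z = z_next
    return z

# Toy usage with randomly initialised, spectrally normalised weights.
rng = np.random.default_rng(0)
d_in, d_hid = 8, 16
W = rng.normal(size=(d_hid, d_hid))
W *= 0.9 / np.linalg.norm(W, 2)      # spectral normalisation -> contraction
U = rng.normal(size=(d_hid, d_in))
b = np.zeros(d_hid)
z_star = deq_forward(rng.normal(size=d_in), W, U, b)
print(z_star[:4])
```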

    On the Bourbaki-Witt Principle in Toposes

    The Bourbaki-Witt principle states that any progressive map on a chain-complete poset has a fixed point above every point. It is provable classically, but not intuitionistically. We study this and related principles in an intuitionistic setting. Among other things, we show that Bourbaki-Witt fails exactly when the trichotomous ordinals form a set, but does not imply that fixed points can always be found by transfinite iteration. Meanwhile, on the side of models, we see that the principle fails in realisability toposes, and does not hold in the free topos, but does hold in all cocomplete toposes.
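
    For reference, one standard way to state the principle formally (a sketch; the theorem environment assumes amsthm):

```latex
% Bourbaki-Witt, as paraphrased in the abstract. "Progressive" means
% x <= f(x) for every x; "chain-complete" means every chain has a supremum.
\begin{theorem}[Bourbaki--Witt]
Let $(P, \le)$ be a chain-complete poset and let $f \colon P \to P$
be progressive, i.e.\ $x \le f(x)$ for all $x \in P$. Then for every
$p \in P$ there exists $x \in P$ with $p \le x$ and $f(x) = x$.
\end{theorem}
```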

    Iterated reflection principles over full disquotational truth

    Iterated reflection principles have been employed extensively to unfold the epistemic commitments that are incurred by accepting a mathematical theory. Recently, this has been applied to theories of truth. The idea is to start with a collection of Tarski-biconditionals and arrive, by finitely iterated reflection, at strong compositional truth theories. In the context of classical logic it is incoherent to adopt an initial truth theory in which A and 'A is true' are inter-derivable. In this article we show how, in the context of a weaker logic, which we call Basic De Morgan Logic, we can coherently start with such a fully disquotational truth theory and arrive at a strong compositional truth theory by applying a natural uniform reflection principle a finite number of times.
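
    Written out, the two schemata the article iterates are the disquotational Tarski-biconditionals and uniform reflection over a theory S; the notation below is a common convention, not necessarily the article's own:

```latex
% Disquotational truth: for every sentence A of the language,
%   T("A") if and only if A.
T(\ulcorner A \urcorner) \leftrightarrow A

% Uniform reflection over a theory S: for every formula A(x),
% whatever S proves about each numeral is the case.
\forall x \,\bigl(\mathrm{Prov}_S(\ulcorner A(\dot{x}) \urcorner) \rightarrow A(x)\bigr)
```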

    Intuitionistic fixed point theories over Heyting arithmetic

    In this paper we show that an intuitionistic theory for fixed points is conservative over Heyting arithmetic with respect to a certain class of formulas. This partially extends an earlier result of the author. The proof is inspired by the quick cut-elimination due to G. Mints.
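
    Such fixed-point theories are typically axiomatized by adding, for each positive operator form A(X, x), a fresh predicate with a fixed-point axiom; a common formulation (a sketch, not necessarily the paper's exact schema):

```latex
% Fixed-point axiom: the new predicate P_A is a fixed point of the
% (positive, hence monotone) operator given by A(X, x).
\forall x \,\bigl( P_A(x) \leftrightarrow A(P_A, x) \bigr)
```
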
    • …