
    A second derivative SQP method: theoretical issues

    Sequential quadratic programming (SQP) methods form a class of highly efficient algorithms for solving nonlinearly constrained optimization problems. Although second-derivative information may often be calculated, there is little practical theory that justifies exact-Hessian SQP methods. In particular, the resulting quadratic programming (QP) subproblems are often nonconvex, and thus finding their global solutions may be computationally nonviable. This paper presents a second-derivative SQP method based on quadratic subproblems that are either convex, and thus may be solved efficiently, or need not be solved globally. Additionally, an explicit descent constraint is imposed on certain QP subproblems, which "guides" the iterates through areas in which nonconvexity is a concern. Global convergence of the resulting algorithm is established.
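    The core idea of keeping QP subproblems tractable can be illustrated in miniature. The sketch below (not the paper's actual algorithm) convexifies an indefinite Hessian by shifting its eigenvalues and then solves a single equality-constrained QP subproblem via its KKT system; the matrices `H`, `g`, `A`, `c` are toy data chosen only for illustration.

    ```python
    import numpy as np

    def convexify(H, delta=1e-6):
        # Shift eigenvalues below delta up to delta, so the resulting
        # QP subproblem is convex and can be solved efficiently.
        w, V = np.linalg.eigh(H)
        return V @ np.diag(np.maximum(w, delta)) @ V.T

    def solve_eq_qp(H, g, A, c):
        # Solve  min 0.5 p'Hp + g'p  s.t.  Ap + c = 0  via the KKT system
        # [[H, A'], [A, 0]] [p; lam] = [-g; -c].
        n, m = H.shape[0], A.shape[0]
        K = np.block([[H, A.T], [A, np.zeros((m, m))]])
        rhs = np.concatenate([-g, -c])
        sol = np.linalg.solve(K, rhs)
        return sol[:n], sol[n:]

    # Toy (hypothetical) data: an indefinite exact Hessian made convex
    # before the subproblem solve, as a stand-in for one SQP iteration.
    H = np.array([[2.0, 0.0], [0.0, -1.0]])   # indefinite
    g = np.array([1.0, 1.0])
    A = np.array([[1.0, 1.0]])                # one linearized constraint
    c = np.array([-1.0])

    Hc = convexify(H)
    p, lam = solve_eq_qp(Hc, g, A, c)
    ```

    A full SQP method would repeat this step with updated linearizations and a globalization strategy; the point here is only that the convexified subproblem is always solvable.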

    The Pragmatic Turn in Explainable Artificial Intelligence (XAI)

    In this paper I argue that the search for explainable models and interpretable decisions in AI must be reformulated in terms of the broader project of offering a pragmatic and naturalistic account of understanding in AI. Intuitively, the purpose of providing an explanation of a model or a decision is to make it understandable to its stakeholders. But without a previous grasp of what it means to say that an agent understands a model or a decision, the explanatory strategies will lack a well-defined goal. Aside from providing a clearer objective for XAI, focusing on understanding also allows us to relax the factivity condition on explanation, which is impossible to fulfill in many machine learning models, and to focus instead on the pragmatic conditions that determine the best fit between a model and the methods and devices deployed to understand it. After an examination of the different types of understanding discussed in the philosophical and psychological literature, I conclude that interpretative or approximation models not only provide the best way to achieve the objectual understanding of a machine learning model, but are also a necessary condition to achieve post hoc interpretability. This conclusion is partly based on the shortcomings of the purely functionalist approach to post hoc interpretability that seems to be predominant in most recent literature.

    Asymptotic Analysis of Inpainting via Universal Shearlet Systems

    Recently introduced inpainting algorithms using a combination of applied harmonic analysis and compressed sensing have turned out to be very successful. One key ingredient is a carefully chosen representation system which provides (optimally) sparse approximations of the original image. Due to the common assumption that images are typically governed by anisotropic features, directional representation systems have often been utilized. One prominent example of this class are shearlets, which have the additional benefit of allowing faithful implementations. Numerical results show that shearlets significantly outperform wavelets in inpainting tasks. One of those software packages, www.shearlab.org, even offers the flexibility of using a different parameter for each scale, which is not yet covered by shearlet theory. In this paper, we first introduce universal shearlet systems which are associated with an arbitrary scaling sequence, thereby modeling the previously mentioned flexibility. In addition, this novel construction allows for a smooth transition between wavelets and shearlets and therefore enables us to analyze them in a uniform fashion. For a large class of such scaling sequences, we first prove that the associated universal shearlet systems form band-limited Parseval frames for L^2(\mathbb{R}^2) consisting of Schwartz functions. Secondly, we analyze the performance for inpainting of this class of universal shearlet systems within a distributional model situation using an \ell^1-analysis minimization algorithm for reconstruction. Our main result in this part states that, provided the scaling sequence is comparable to the size of the (scale-dependent) gap, nearly-perfect inpainting is achieved at sufficiently fine scales.
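    The flavor of \ell^1-analysis inpainting can be sketched with a simple iterative soft-thresholding scheme. The code below is a toy illustration, not ShearLab: an orthonormal DCT stands in for the Parseval shearlet system, and `inpaint_l1`, `lam`, and the decreasing threshold schedule are assumptions made for this sketch.

    ```python
    import numpy as np
    from scipy.fft import dctn, idctn

    def inpaint_l1(y, mask, n_iter=200, lam=0.1):
        # Iterative soft-thresholding for mask-constrained inpainting.
        # Analysis/synthesis uses an orthonormal 2-D DCT as a stand-in
        # Parseval system; the paper itself uses universal shearlets.
        x = y * mask
        for k in range(n_iter):
            x = x + mask * (y - x)              # re-impose known pixels
            coeffs = dctn(x, norm="ortho")      # analysis step
            thr = lam * (1 - k / n_iter)        # decreasing threshold
            coeffs = np.sign(coeffs) * np.maximum(np.abs(coeffs) - thr, 0.0)
            x = idctn(coeffs, norm="ortho")     # synthesis step
        return mask * y + (1 - mask) * x        # exact data consistency

    # Toy usage: a smooth image with a small square gap masked out.
    y = np.outer(np.linspace(0, 1, 16), np.linspace(0, 1, 16))
    mask = np.ones_like(y)
    mask[6:9, 6:9] = 0.0
    x = inpaint_l1(y, mask)
    ```

    The paper's asymptotic result concerns how the gap size relates to the scaling sequence; this sketch only shows the generic analysis-thresholding loop such results are about.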