
    Parallel extragradient-proximal methods for split equilibrium problems

    In this paper, we introduce two parallel extragradient-proximal methods for solving split equilibrium problems. The algorithms combine the extragradient method, the proximal method and the hybrid (outer approximation) method. Weak and strong convergence theorems for the iterative sequences generated by the algorithms are established under widely used assumptions on the equilibrium bifunctions. Comment: 13 pages, submitted
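
    A minimal sketch of the extragradient building block used here, shown for the special case of a bifunction f(x, y) = <F(x), y - x>, in which each proximal subproblem reduces to a projection. The operator F, the feasible set (a Euclidean ball) and the step size below are illustrative assumptions; the parallel, split and hybrid machinery of the actual algorithms is not reproduced.

        import numpy as np

        def project_ball(x, radius=1.0):
            # Euclidean projection onto the feasible set C = {z : ||z|| <= radius}.
            nrm = np.linalg.norm(x)
            return x if nrm <= radius else x * (radius / nrm)

        def extragradient_step(x, F, lam, proj):
            # Prediction:  y  = P_C(x - lam * F(x))
            # Correction:  x+ = P_C(x - lam * F(y))
            y = proj(x - lam * F(x))
            return proj(x - lam * F(y))

        # Bifunction f(x, y) = <A x, y - x> with A positive definite, i.e. the
        # variational-inequality special case F(x) = A x; the unique equilibrium
        # point over the unit ball is the origin.
        A = np.array([[2.0, 1.0], [1.0, 2.0]])
        F = lambda z: A @ z
        x = np.array([0.9, -0.4])
        for _ in range(200):
            x = extragradient_step(x, F, lam=0.1, proj=project_ball)
        print("approximate equilibrium point:", x)   # close to [0, 0]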

    High-order space-time finite element schemes for acoustic and viscodynamic wave equations with temporal decoupling

    Copyright © 2014 The Authors. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited. We revisit a method originally introduced by Werder et al. (in Comput. Methods Appl. Mech. Engrg., 190:6685–6708, 2001) for temporally discontinuous Galerkin FEMs applied to a parabolic partial differential equation. In that approach, block systems arise because of the coupling of the spatial systems through inner products of the temporal basis functions. If the spatial finite element space is of dimension D and polynomials of degree r are used in time, the block system has dimension (r + 1)D and is usually regarded as being too large when r > 1. Werder et al. found that the space-time coupling matrices are diagonalizable over ℂ for r ⩽ 100, and this means that the time-coupled computations within a time step can actually be decoupled. By using either continuous Galerkin or spectral element methods in space, we apply this DG-in-time methodology, for the first time, to second-order wave equations including elastodynamics with and without Kelvin–Voigt and Maxwell–Zener viscoelasticity. An example set of numerical results is given to demonstrate the favourable effect on error and computational work of the moderately high-order (up to degree 7) temporal and spatio-temporal approximations, and we also touch on an application of this method to an ambitious problem related to the diagnosis of coronary artery disease.
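
    A small numerical sketch of the decoupling idea: for a block space-time system (T ⊗ M + S ⊗ K) u = b, with (r + 1) × (r + 1) temporal coupling matrices T, S and symmetric spatial matrices M, K, diagonalizing T⁻¹S over ℂ turns one system of dimension (r + 1)D into r + 1 independent D × D spatial solves. The matrices below are random stand-ins, not the DG-in-time matrices of the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        r, D = 3, 40                      # temporal degree and spatial dimension (illustrative)

        # Random stand-ins for the temporal coupling matrices T, S and the spatial
        # matrices M, K (the paper's DG-in-time matrices are different).
        T = np.eye(r + 1) + 0.1 * rng.standard_normal((r + 1, r + 1))
        S = rng.standard_normal((r + 1, r + 1))
        M = np.eye(D)
        K = rng.standard_normal((D, D))
        K = K @ K.T                       # symmetric positive definite "stiffness"
        b = rng.standard_normal((r + 1) * D)

        # Reference: direct solve of the full (r + 1)D x (r + 1)D block system.
        A = np.kron(T, M) + np.kron(S, K)
        u_direct = np.linalg.solve(A, b)

        # Decoupled solve: diagonalize T^{-1} S = V diag(lam) V^{-1} over C; each
        # eigenvalue then gives an independent D x D spatial system (M + lam_i K).
        lam, V = np.linalg.eig(np.linalg.solve(T, S))
        B = b.reshape(r + 1, D)
        G = np.linalg.solve(V, np.linalg.solve(T, B))   # transformed right-hand sides
        W = np.stack([np.linalg.solve(M + lam[i] * K, G[i]) for i in range(r + 1)])
        u_decoupled = (V @ W).reshape(-1).real          # transform back to nodal values

        print("relative difference:",
              np.linalg.norm(u_direct - u_decoupled) / np.linalg.norm(u_direct))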

    A fast solver for linear systems with displacement structure

    We describe a fast solver for linear systems with reconstructable Cauchy-like structure, which requires O(rn^2) floating point operations and O(rn) memory locations, where n is the size of the matrix and r its displacement rank. The solver is based on the application of the generalized Schur algorithm to a suitable augmented matrix, under some assumptions on the knots of the Cauchy-like matrix. It includes various pivoting strategies, already discussed in the literature, and a new algorithm, which only requires reconstructability. We have developed a software package, written in Matlab and C-MEX, which provides a robust implementation of the above method. Our package also includes solvers for Toeplitz(+Hankel)-like and Vandermonde-like linear systems, as these structures can be reduced to Cauchy-like by fast and stable transforms. Numerical experiments demonstrate the effectiveness of the software. Comment: 27 pages, 6 figures
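
    A short sketch of the displacement structure being exploited: a Cauchy-like matrix of displacement rank r satisfies D_x C − C D_y = G Hᵀ with diagonal D_x = diag(x), D_y = diag(y), and is therefore reconstructable from its generators G, H and knots x, y via C_ij = (g_i · h_j) / (x_i − y_j). The knots and generators below are random stand-ins; the generalized Schur solver and the pivoting strategies of the package are not reproduced.

        import numpy as np

        rng = np.random.default_rng(1)
        n, r = 8, 2                              # matrix size and displacement rank (illustrative)

        # Knots (the two sets must be disjoint) and rank-r generators.
        x = np.arange(1, n + 1, dtype=float)     # D_x = diag(x)
        y = x + 0.5                              # D_y = diag(y), so x_i != y_j
        G = rng.standard_normal((n, r))
        H = rng.standard_normal((n, r))

        # Reconstruct the Cauchy-like matrix from generators and knots:
        #   C_ij = (g_i . h_j) / (x_i - y_j)
        C = (G @ H.T) / (x[:, None] - y[None, :])

        # Check the displacement equation D_x C - C D_y = G H^T (rank-r right-hand side).
        disp = np.diag(x) @ C - C @ np.diag(y)
        print("displacement rank:", np.linalg.matrix_rank(disp))       # r
        print("generator residual:", np.linalg.norm(disp - G @ H.T))   # ~ machine precision

        # A structured solver works on the O(rn) generator data rather than the dense
        # matrix; a dense solve is used here only as a correctness reference.
        b = rng.standard_normal(n)
        z = np.linalg.solve(C, b)
        print("reference residual:", np.linalg.norm(C @ z - b))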

    Convex Learning of Multiple Tasks and their Structure

    Reducing the amount of human supervision is a key problem in machine learning, and a natural approach is to exploit the relations (structure) among different tasks. This is the idea at the core of multi-task learning. In this context, a fundamental question is how to incorporate the task structure into the learning problem. We tackle this question by studying a general computational framework that allows one to encode a priori knowledge of the task structure in the form of a convex penalty; in this setting, a variety of previously proposed methods can be recovered as special cases, including linear and non-linear approaches. Within this framework, we show that the tasks and their structure can be learned efficiently by considering a convex optimization problem that can be approached by means of block coordinate methods, such as alternating minimization, for which we prove convergence to the global minimum. Comment: 26 pages, 1 figure, 2 tables
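
    A minimal sketch of the alternating-minimization idea for one well-known special case that such frameworks recover: multi-task regression with the objective sum_t ||X_t w_t − y_t||² + λ tr(Wᵀ D⁻¹ W), where D ⪰ 0 with tr(D) = 1 encodes a shared feature structure. The exact penalty studied in the paper may differ; the data, dimensions and smoothing parameter below are illustrative assumptions. Both block updates have closed form.

        import numpy as np

        rng = np.random.default_rng(2)
        d, tasks, n, lam, eps = 10, 4, 30, 0.1, 1e-6   # dims, penalty weight, smoothing (illustrative)

        # Synthetic tasks whose weight vectors share a 2-dimensional subspace.
        basis = rng.standard_normal((d, 2))
        W_true = basis @ rng.standard_normal((2, tasks))
        Xs = [rng.standard_normal((n, d)) for _ in range(tasks)]
        ys = [X @ W_true[:, t] + 0.01 * rng.standard_normal(n) for t, X in enumerate(Xs)]

        W = np.zeros((d, tasks))
        D = np.eye(d) / d                              # feasible start: D > 0, tr(D) = 1
        for _ in range(50):
            # (1) Tasks step: with D fixed, each column of W solves a ridge-type system.
            D_inv = np.linalg.inv(D + eps * np.eye(d))
            for t in range(tasks):
                W[:, t] = np.linalg.solve(Xs[t].T @ Xs[t] + lam * D_inv, Xs[t].T @ ys[t])
            # (2) Structure step: with W fixed, the minimizer over {D >= 0, tr(D) = 1} is
            #     the normalized matrix square root of W W^T (smoothed by eps).
            evals, evecs = np.linalg.eigh(W @ W.T + eps * np.eye(d))
            sqrt_WWt = (evecs * np.sqrt(np.maximum(evals, 0.0))) @ evecs.T
            D = sqrt_WWt / np.trace(sqrt_WWt)

        print("relative weight error:",
              np.linalg.norm(W - W_true) / np.linalg.norm(W_true))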