    Parallel asynchronous lock-free algorithms for nonconvex big-data optimization

    We propose a novel parallel asynchronous lock-free algorithmic framework for the minimization of the sum of a smooth nonconvex function and a convex nonsmooth regularizer. This class of problems arises in many big-data applications, including deep learning, matrix completion, and tensor factorization. Key features of the proposed algorithm are: i) it handles nonconvex objective functions; ii) it is parallel and asynchronous; and iii) it is lock-free, meaning that components of the vector variable may be written by some cores while being simultaneously read by others. Almost sure convergence to stationary solutions is proved. The method enjoys properties that improve substantially on those of current schemes, and numerical results show that it outperforms existing asynchronous algorithms on both convex and nonconvex problems.
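
    Since the abstract only sketches the framework, the following minimal Python illustration shows the lock-free asynchronous pattern it describes: several threads repeatedly take a proximal gradient step on one randomly chosen coordinate of a shared iterate, reading and writing it without any locking. The problem instance (least squares plus a smooth nonconvex penalty, with an l1 regularizer), the step size, and all names here are assumptions for illustration, not the paper's actual method; note also that CPython's GIL serializes the threads, so this demonstrates the unsynchronized read/write pattern rather than real parallel speedup.

    import threading
    import numpy as np

    # Hypothetical instance: min_x f(x) + lam*||x||_1, where
    # f(x) = 0.5*||A x - b||^2 + mu * sum_i x_i^2 / (1 + x_i^2)
    # is smooth but nonconvex, and lam*||x||_1 is the convex nonsmooth regularizer.
    rng = np.random.default_rng(0)
    m, n = 200, 50
    A = rng.standard_normal((m, n))
    b = rng.standard_normal(m)
    lam, mu = 0.1, 0.5
    L = np.linalg.norm(A, 2) ** 2 + 2.0 * mu  # rough Lipschitz bound for grad f

    x = np.zeros(n)  # shared iterate: read and written by all threads, no locks

    def partial_grad(z, i):
        # i-th component of grad f, evaluated at a (possibly inconsistent) snapshot z
        return A[:, i] @ (A @ z - b) + 2.0 * mu * z[i] / (1.0 + z[i] ** 2) ** 2

    def soft_threshold(v, t):
        # proximal operator of t*|.| (the l1 prox, applied componentwise)
        return np.sign(v) * max(abs(v) - t, 0.0)

    def worker(num_updates, seed):
        local = np.random.default_rng(seed)
        for _ in range(num_updates):
            i = local.integers(n)  # pick one coordinate (block) at random
            z = x.copy()           # lock-free read: may mix stale and fresh entries
            g = partial_grad(z, i)
            x[i] = soft_threshold(z[i] - g / L, lam / L)  # lock-free one-component write

    threads = [threading.Thread(target=worker, args=(2000, s)) for s in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    obj = (0.5 * np.sum((A @ x - b) ** 2)
           + mu * np.sum(x ** 2 / (1 + x ** 2))
           + lam * np.sum(np.abs(x)))
    print("final objective:", obj)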