
    Globally Convergent Accelerated Algorithms for Multilinear Sparse Logistic Regression with $\ell_0$-constraints

    Full text link
    Tensor data represents a multidimensional array. Regression methods based on low-rank tensor decomposition leverage structural information to reduce the parameter count. Multilinear logistic regression serves as a powerful tool for the analysis of multidimensional data. To improve its efficacy and interpretability, we present a Multilinear Sparse Logistic Regression model with $\ell_0$-constraints ($\ell_0$-MLSR). In contrast to the $\ell_1$-norm and $\ell_2$-norm, the $\ell_0$-norm constraint is better suited for feature selection. However, due to its nonconvex and nonsmooth properties, solving it is challenging and convergence guarantees are lacking. Additionally, the multilinear operation in $\ell_0$-MLSR also introduces nonconvexity. To tackle these challenges, we propose an Accelerated Proximal Alternating Linearized Minimization with Adaptive Momentum (APALM$^+$) method to solve the $\ell_0$-MLSR model. We prove that APALM$^+$ ensures convergence of the objective function of $\ell_0$-MLSR. We also show that APALM$^+$ is globally convergent to a first-order critical point, and we establish its convergence rate using the Kurdyka-Łojasiewicz property. Empirical results on synthetic and real-world datasets validate the superior accuracy and speed of our algorithm compared to other state-of-the-art methods.
    Comment: arXiv admin note: text overlap with arXiv:2308.1212
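The core building block this abstract describes — a proximal step under an $\ell_0$-constraint — can be sketched as hard thresholding (keep the $k$ largest-magnitude entries) inside a proximal gradient loop for logistic regression. This is a minimal vector-case sketch, not the paper's multilinear APALM$^+$ algorithm; the function names, step size, and iteration count are illustrative assumptions.

```python
import numpy as np

def hard_threshold(w, k):
    # Proximal map of the l0-constraint ||w||_0 <= k:
    # zero out all but the k largest-magnitude entries.
    out = np.zeros_like(w)
    keep = np.argsort(np.abs(w))[-k:]
    out[keep] = w[keep]
    return out

def prox_grad_l0_logistic(X, y, k, step=0.1, iters=200):
    # Plain proximal gradient (iterative hard thresholding) for
    # l0-constrained logistic regression -- a simplified sketch of
    # the kind of subproblem step the abstract refers to.
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))   # sigmoid predictions
        grad = X.T @ (p - y) / n           # logistic-loss gradient
        w = hard_threshold(w - step * grad, k)
    return w
```

The returned weight vector has at most `k` nonzero entries by construction, which is what makes the $\ell_0$-constraint attractive for feature selection.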

    An Accelerated Block Proximal Framework with Adaptive Momentum for Nonconvex and Nonsmooth Optimization

    Full text link
    We propose an accelerated block proximal linear framework with adaptive momentum (ABPL$^+$) for nonconvex and nonsmooth optimization. We analyze the potential causes of the extrapolation step failing in some algorithms, and resolve this issue by enhancing the comparison process that evaluates the trade-off between the proximal gradient step and the linear extrapolation step in our algorithm. Furthermore, we extend our algorithm to scenarios with any positive number of variable blocks, allowing each cycle to randomly shuffle the update order of the blocks. Additionally, under mild assumptions, we prove that ABPL$^+$ monotonically decreases the function value without strict restrictions on the extrapolation parameters and step size, demonstrate the viability and effectiveness of updating the blocks in random order, and show directly that the set of accumulation points of the sequence generated by our algorithm consists of critical points. Moreover, we establish the global convergence as well as the linear and sublinear convergence rates of our algorithm by utilizing the Kurdyka-Łojasiewicz (KŁ) condition. To enhance the effectiveness and flexibility of our algorithm, we also extend the study to an inexact version of our algorithm and construct an adaptive extrapolation parameter strategy, which improves its overall performance. We apply our algorithm to nonnegative matrix factorization with the $\ell_0$ norm and nonnegative tensor decomposition with the $\ell_0$ norm, and perform extensive numerical experiments to validate its effectiveness and efficiency.
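The safeguarded extrapolation idea described above — comparing the momentum (extrapolated) step against the plain proximal gradient step and keeping the better one — can be sketched on a simple $\ell_0$-constrained least-squares problem. This is a hedged single-block illustration of the comparison mechanism, not the paper's full multi-block ABPL$^+$ method; names and parameters are assumptions.

```python
import numpy as np

def hard_threshold(x, k):
    # Keep the k largest-magnitude entries, zero the rest.
    out = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-k:]
    out[keep] = x[keep]
    return out

def safeguarded_apg(A, b, k, step, iters=300, beta=0.9):
    # Accelerated proximal gradient with a safeguard: at each iteration,
    # take a prox-grad step from both the extrapolated point and the
    # current point, and keep whichever candidate has the lower objective.
    # This mirrors the "comparison process" the abstract describes.
    f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
    x_prev = x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x + beta * (x - x_prev)  # extrapolated (momentum) point
        cand_z = hard_threshold(z - step * A.T @ (A @ z - b), k)
        cand_x = hard_threshold(x - step * A.T @ (A @ x - b), k)
        x_prev, x = x, cand_z if f(cand_z) <= f(cand_x) else cand_x
    return x
```

The safeguard means momentum can never do worse than the plain step at any iteration, which is how such schemes avoid the divergence that unguarded extrapolation can cause on nonconvex problems.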

    Spaces to Bloch-Type Spaces

    Get PDF
    We study the boundedness and compactness of the products of composition and differentiation operators from $Q_K(p,q)$ spaces to Bloch-type spaces and little Bloch-type spaces.

    Spaces to Bloch-type Spaces

    Get PDF
    Abstract. Let $H(B)$ denote the space of all holomorphic functions on the unit ball $B \subset \mathbb{C}^n$. Let $\varphi$ be a holomorphic self-map of $B$ and $g \in H(B)$. In this paper, we investigate the boundedness and compactness of Volterra composition operators mapping from the general function space $F(p,q,s)$ to the Bloch-type space $\mathcal{B}^\alpha$ in the unit ball.

    Learning to Sit: Synthesizing Human-Chair Interactions via Hierarchical Control

    Full text link
    Recent progress on physics-based character animation has shown impressive breakthroughs on human motion synthesis, through imitating motion capture data via deep reinforcement learning. However, results have mostly been demonstrated on imitating a single distinct motion pattern, and do not generalize to interactive tasks that require flexible motion patterns due to varying human-object spatial configurations. To bridge this gap, we focus on one class of interactive tasks -- sitting onto a chair. We propose a hierarchical reinforcement learning framework which relies on a collection of subtask controllers trained to imitate simple, reusable mocap motions, and a meta controller trained to execute the subtasks properly to complete the main task. We experimentally demonstrate the strength of our approach over different non-hierarchical and hierarchical baselines. We also show that our approach can be applied to motion prediction given an image input. A supplementary video can be found at https://youtu.be/3CeN0OGz2cA.
    Comment: Accepted to AAAI 202
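The two-level control structure the abstract describes — a meta controller that selects among pretrained subtask controllers — can be sketched abstractly. This is a hypothetical skeleton for illustration only; the class and policy names are assumptions, not the paper's implementation.

```python
class HierarchicalController:
    # Sketch of a hierarchical policy: the meta controller picks which
    # subtask controller to run at each decision step, and the chosen
    # subtask controller maps the state to a low-level action.
    def __init__(self, meta_policy, subtask_policies):
        self.meta = meta_policy          # state -> subtask index
        self.subtasks = subtask_policies  # list of state -> action policies

    def act(self, state):
        idx = self.meta(state)           # meta controller selects a subtask
        return self.subtasks[idx](state)  # subtask controller outputs action
```

In the paper's setting the subtask policies would be controllers trained to imitate short mocap clips (e.g. walking, turning, sitting), while the meta policy is trained to sequence them so the character completes the full sit-down task.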