Globally Convergent Accelerated Algorithms for Multilinear Sparse Logistic Regression with $\ell_0$-constraints
Tensor data represents a multidimensional array. Regression methods based on
low-rank tensor decomposition leverage structural information to reduce the
parameter count. Multilinear logistic regression serves as a powerful tool for
the analysis of multidimensional data. To improve its efficacy and
interpretability, we present a Multilinear Sparse Logistic Regression model
with $\ell_0$-constraints ($\ell_0$-MLSR). In contrast to the $\ell_1$-norm and
$\ell_2$-norm, the $\ell_0$-norm constraint is better suited for feature
selection. However, due to its nonconvex and nonsmooth properties, solving it
is challenging and convergence guarantees are lacking. Additionally, the
multilinear operation in $\ell_0$-MLSR also brings non-convexity. To tackle
these challenges, we propose an Accelerated Proximal Alternating Linearized
Minimization with Adaptive Momentum (APALM) method to solve the
$\ell_0$-MLSR model. We prove that APALM ensures convergence of the
objective function of $\ell_0$-MLSR, show that APALM converges globally
to a first-order critical point, and establish its convergence rate
using the Kurdyka-Łojasiewicz property.
Empirical results obtained from synthetic and real-world datasets validate the
superior performance of our algorithm in terms of both accuracy and speed
compared to other state-of-the-art methods.

Comment: arXiv admin note: text overlap with arXiv:2308.1212
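The $\ell_0$-constrained proximal step at the heart of such methods reduces to hard thresholding: projecting onto the set of vectors with at most $s$ nonzeros keeps the $s$ largest-magnitude entries. A minimal sketch of an accelerated projected-gradient loop in this spirit (the function names, the momentum schedule, and the least-squares toy problem are illustrative assumptions, not the paper's APALM method):

```python
import numpy as np

def project_l0(x, s):
    """Euclidean projection onto {x : ||x||_0 <= s}: keep the s
    largest-magnitude entries of x and zero out the rest."""
    z = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-s:]
    z[idx] = x[idx]
    return z

def accel_hard_threshold(A, b, s, steps=500):
    """Toy accelerated projected-gradient loop for
    min 0.5*||Ax - b||^2  s.t.  ||x||_0 <= s  (illustrative only)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = x_prev = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(steps):
        t_next = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x + ((t - 1) / t_next) * (x - x_prev)   # momentum extrapolation
        grad = A.T @ (A @ y - b)
        x_prev, x = x, project_l0(y - grad / L, s)  # gradient + l0 projection
        t = t_next
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))
x_true = np.zeros(20)
x_true[[2, 7, 11]] = [1.5, -2.0, 0.8]
b = A @ x_true                              # noiseless sparse observations
x_hat = accel_hard_threshold(A, b, s=3)
```

The projection step is what makes the subproblem nonconvex and nonsmooth: unlike the $\ell_1$ soft-thresholding operator, hard thresholding is discontinuous, which is why convergence guarantees require the kind of careful analysis the paper provides.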
An Accelerated Block Proximal Framework with Adaptive Momentum for Nonconvex and Nonsmooth Optimization
We propose an accelerated block proximal linear framework with adaptive
momentum (ABPL) for nonconvex and nonsmooth optimization. We analyze the
potential causes of the extrapolation step failing in some algorithms, and
resolve this issue by enhancing the comparison process that evaluates the
trade-off between the proximal gradient step and the linear extrapolation step
in our algorithm. Furthermore, we extend our algorithm to scenarios with
any positive number of variable blocks, allowing each cycle to randomly
shuffle the update order of the blocks. Additionally,
under mild assumptions, we prove that ABPL monotonically decreases the
function value without strict restrictions on the extrapolation parameters and
step size, demonstrate the viability and effectiveness of updating the
blocks in a random order, and show more directly and intuitively that every
accumulation point of the sequence generated by our algorithm is a critical
point. Moreover, we establish the global convergence as
well as the linear and sublinear convergence rates of our algorithm by
utilizing the Kurdyka-Lojasiewicz (K{\L}) condition. To enhance the
effectiveness and flexibility of our algorithm, we also extend the study to an
inexact version of the algorithm and construct an adaptive extrapolation
parameter strategy, which improves its overall performance. We apply our
algorithm to nonnegative matrix factorization with the $\ell_0$ norm and
nonnegative tensor decomposition with the $\ell_0$ norm, and perform extensive
numerical experiments to validate its effectiveness and efficiency.
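The key mechanism the abstract describes — comparing the extrapolated step against the plain proximal step for each block and keeping whichever decreases the objective — can be sketched on a two-block nonnegative matrix factorization problem. This is a simplified illustration of that safeguard idea under stated assumptions (plain nonnegativity projection only, a FISTA-style momentum weight, hand-picked problem sizes), not the paper's exact ABPL method:

```python
import numpy as np

def prox_nonneg(X, grad, L):
    """Projected-gradient step: descend with step 1/L, then project
    onto the nonnegative orthant."""
    return np.maximum(X - grad / L, 0.0)

def abpl_sketch(V, r, iters=300):
    """Two-block alternating proximal loop with a monotone safeguard:
    each block tries an extrapolated step and falls back to the plain
    proximal step when extrapolation does not give a lower objective."""
    m, n = V.shape
    rng = np.random.default_rng(1)
    W, H = rng.random((m, r)), rng.random((r, n))
    W_old, H_old = W.copy(), H.copy()
    f = lambda W, H: 0.5 * np.linalg.norm(V - W @ H) ** 2
    for k in range(1, iters + 1):
        beta = (k - 1) / (k + 2)                    # momentum weight
        # --- W block ---
        L_W = np.linalg.norm(H @ H.T, 2) + 1e-12    # block Lipschitz constant
        Wd = W + beta * (W - W_old)                 # extrapolated point
        cand = prox_nonneg(Wd, (Wd @ H - V) @ H.T, L_W)
        plain = prox_nonneg(W, (W @ H - V) @ H.T, L_W)
        W_old, W = W, cand if f(cand, H) <= f(plain, H) else plain
        # --- H block ---
        L_H = np.linalg.norm(W.T @ W, 2) + 1e-12
        Hd = H + beta * (H - H_old)
        cand = prox_nonneg(Hd, W.T @ (W @ Hd - V), L_H)
        plain = prox_nonneg(H, W.T @ (W @ H - V), L_H)
        H_old, H = H, cand if f(W, cand) <= f(W, plain) else plain
    return W, H, f(W, H)

V = np.abs(np.random.default_rng(0).standard_normal((30, 20)))
W, H, err = abpl_sketch(V, r=5)
```

Because the plain proximal step always decreases its convex block subproblem, taking the better of the two candidates preserves monotone descent while still letting extrapolation accelerate progress when it helps — which is the trade-off the abstract refers to.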
Semiconductor nanowires for future nanoscale application: Synthesis, characterization and nanoelectronic devices
Ph.D. (Doctor of Philosophy)
Products of Composition and Differentiation Operators from $Q_K(p,q)$ Spaces to Bloch-Type Spaces
We study the boundedness and compactness of the products of composition and differentiation operators from $Q_K(p,q)$ spaces to Bloch-type spaces and little Bloch-type spaces.
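For orientation, the operators in question are built from the composition operator $C_\varphi$ and the differentiation operator $D$, where $\varphi$ is a holomorphic self-map of the underlying domain. Their two products act as follows (standard definitions, not specific to this paper):

```latex
(C_\varphi f)(z) = f(\varphi(z)), \qquad (Df)(z) = f'(z),
\\[4pt]
(D C_\varphi f)(z) = (f \circ \varphi)'(z) = f'(\varphi(z))\,\varphi'(z),
\qquad
(C_\varphi D f)(z) = f'(\varphi(z)).
```

The two orderings differ by the chain-rule factor $\varphi'(z)$, which is why their boundedness and compactness are studied separately.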
Volterra Composition Operators from $F(p,q,s)$ Spaces to Bloch-Type Spaces
Abstract. Let $H(B)$ denote the space of all holomorphic functions on the unit ball $B \subset \mathbb{C}^n$. Let $\varphi$ be a holomorphic self-map of $B$ and $g \in H(B)$. In this paper, we investigate the boundedness and compactness of the Volterra composition operator mapping from the general function space $F(p,q,s)$ to the Bloch-type space $\mathcal{B}^\alpha$ in the unit ball.
Learning to Sit: Synthesizing Human-Chair Interactions via Hierarchical Control
Recent progress on physics-based character animation has shown impressive
breakthroughs on human motion synthesis, through imitating motion capture data
via deep reinforcement learning. However, results have mostly been demonstrated
on imitating a single distinct motion pattern, and do not generalize to
interactive tasks that require flexible motion patterns due to varying
human-object spatial configurations. To bridge this gap, we focus on one class
of interactive tasks -- sitting onto a chair. We propose a hierarchical
reinforcement learning framework which relies on a collection of subtask
controllers trained to imitate simple, reusable mocap motions, and a meta
controller trained to execute the subtasks properly to complete the main task.
We experimentally demonstrate the strength of our approach over different
non-hierarchical and hierarchical baselines. We also show that our approach can
be applied to motion prediction given an image input. A supplementary video can
be found at https://youtu.be/3CeN0OGz2cA.

Comment: Accepted to AAAI 202
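The division of labor the abstract describes — reusable subtask controllers plus a meta controller that sequences them — can be sketched with a toy state machine. Here the subtask controllers are hand-coded kinematic updates and the meta controller is a hand-coded rule, standing in for the mocap-imitation policies and the learned RL meta policy of the paper (all names and thresholds are illustrative assumptions):

```python
import math

# Illustrative subtask controllers: each maps the current state to a
# small state update, standing in for a learned mocap-imitation policy.
def walk(state):
    """Step toward the chair; only invoked while the chair is far away."""
    dx = state["chair"][0] - state["pos"][0]
    dy = state["chair"][1] - state["pos"][1]
    d = math.hypot(dx, dy)
    step = min(0.3, d)
    state["pos"] = (state["pos"][0] + step * dx / d,
                    state["pos"][1] + step * dy / d)

def turn(state):
    """Face the chair."""
    state["heading"] = math.atan2(state["chair"][1] - state["pos"][1],
                                  state["chair"][0] - state["pos"][0])

def sit(state):
    state["sitting"] = True

def meta_controller(state):
    """Pick the next subtask from the spatial configuration; in the
    paper this policy is learned with RL rather than hand-coded."""
    dx = state["chair"][0] - state["pos"][0]
    dy = state["chair"][1] - state["pos"][1]
    if math.hypot(dx, dy) > 0.5:
        return walk
    if abs(state["heading"] - math.atan2(dy, dx)) > 1e-6:
        return turn
    return sit

state = {"pos": (0.0, 0.0), "chair": (3.0, 1.0),
         "heading": 0.0, "sitting": False}
while not state["sitting"]:
    meta_controller(state)(state)
```

The point of the hierarchy is that the subtask controllers stay fixed and reusable across chair placements; only the sequencing policy has to adapt to the varying human-object spatial configuration.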