
    Linear Convergence of ISTA and FISTA

    In this paper, we revisit the class of iterative shrinkage-thresholding algorithms (ISTA) for solving the linear inverse problem with sparse representation, which arises in signal and image processing. A numerical image-deblurring experiment shows that the convergence behavior on a logarithmic ordinate tends to be linear rather than logarithmic, i.e., it flattens out. On closer inspection, we find that the previous assumption that the smooth part is merely convex understates the least-squares model. Specifically, assuming the smooth part to be strongly convex is more appropriate for the least-squares model, even though the image matrix is probably ill-conditioned. Furthermore, we tighten the pivotal inequality for composite optimization, first found in [Li et al., 2022], by assuming the smooth part to be strongly convex instead of generally convex. Based on this pivotal inequality, we generalize the linear convergence to composite optimization in both the objective value and the squared proximal subgradient norm. Meanwhile, we replace the original blur matrix with a simple ill-conditioned matrix whose singular values are easy to compute. The new numerical experiment shows that the proximal generalization of Nesterov's accelerated gradient descent (NAG) for strongly convex functions has a faster linear convergence rate than ISTA. Based on the tighter pivotal inequality, we also generalize the faster linear convergence rate to composite optimization, in both the objective value and the squared proximal subgradient norm, by taking advantage of a slightly modified well-constructed Lyapunov function and the phase-space representation based on the high-resolution differential equation framework from the implicit-velocity scheme.
    Comment: 16 pages, 4 figures
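    As a rough illustration of the algorithm class this abstract discusses, the following is a minimal NumPy sketch of ISTA for the l1-regularized least-squares model; the names (A, b, lam), the fixed step size 1/L, and the iteration count are illustrative assumptions, not the paper's experimental setup.

import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1 (componentwise soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(A, b, lam, num_iters=500):
    # ISTA sketch for min_x 0.5*||A x - b||^2 + lam*||x||_1 (proximal gradient method).
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient of the smooth part
    s = 1.0 / L                        # step size s = 1/L
    x = np.zeros(A.shape[1])
    for _ in range(num_iters):
        grad = A.T @ (A @ x - b)       # gradient of the least-squares term
        x = soft_threshold(x - s * grad, s * lam)
    return x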

    Gradient Norm Minimization of Nesterov Acceleration: $o(1/k^3)$

    In the history of first-order algorithms, Nesterov's accelerated gradient descent (NAG) is one of the milestones. However, the cause of the acceleration was a mystery for a long time; it was not attributed to the gradient correction term until the high-resolution differential equation framework was proposed in [Shi et al., 2021]. In this paper, we continue to investigate the acceleration phenomenon. First, we provide a significantly simplified proof based on a precise observation and a tighter inequality for $L$-smooth functions. Then, a new implicit-velocity high-resolution differential equation framework, together with the corresponding implicit-velocity phase-space representation and Lyapunov function, is proposed to investigate the convergence behavior of the iterative sequence $\{x_k\}_{k=0}^{\infty}$ of NAG. Furthermore, from the two kinds of phase-space representations, we find that the role played by the gradient correction is equivalent to that played by the velocity included implicitly in the gradient, where the only difference is that the iterative sequence $\{y_k\}_{k=0}^{\infty}$ is replaced by $\{x_k\}_{k=0}^{\infty}$. Finally, for the open question of whether the gradient norm minimization of NAG attains a faster rate $o(1/k^3)$, we give a positive answer with a proof. Meanwhile, a faster rate of objective-value minimization, $o(1/k^2)$, is shown for the case $r > 2$.
    Comment: 16 pages
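    For concreteness, here is a minimal NumPy sketch of the standard NAG iteration for a smooth convex objective, using the common (k-1)/(k+2) momentum coefficient; grad_f, the step size s, and the quadratic usage example are illustrative assumptions, not taken from the paper.

import numpy as np

def nag(grad_f, x0, s, num_iters=500):
    # Nesterov's accelerated gradient descent for an L-smooth convex f with step size s <= 1/L.
    x = np.asarray(x0, dtype=float)
    y = x.copy()
    for k in range(1, num_iters + 1):
        x_next = y - s * grad_f(y)                      # gradient step at the extrapolated point y_k
        y = x_next + (k - 1) / (k + 2) * (x_next - x)   # momentum (extrapolation) step
        x = x_next
    return x

# Usage on a simple quadratic f(x) = 0.5 * ||A x - b||^2 (hypothetical data):
A = np.array([[3.0, 0.0], [0.0, 1.0]])
b = np.array([1.0, 2.0])
x_min = nag(lambda x: A.T @ (A @ x - b), np.zeros(2), s=1.0 / np.linalg.norm(A, 2) ** 2)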

    Proximal Subgradient Norm Minimization of ISTA and FISTA

    For first-order smooth optimization, research on the acceleration phenomenon has a long history. Only recently was the mechanism behind acceleration uncovered, via the gradient correction term and its equivalent implicit-velocity form. Furthermore, based on the high-resolution differential equation framework and the accompanying techniques of phase-space representation and Lyapunov functions, the squared gradient norm of Nesterov's accelerated gradient descent (NAG) method was shown to converge at an inverse cubic rate. However, this result does not directly generalize to composite optimization, which is widely used in practice, e.g., the linear inverse problem with sparse representation. In this paper, we closely examine a pivotal inequality used in composite optimization involving the step size $s$ and the Lipschitz constant $L$ and find that it can be made tighter. We apply the tighter inequality to the well-constructed Lyapunov function and then obtain proximal subgradient norm minimization via the phase-space representation, in both the gradient-correction and implicit-velocity forms. Furthermore, we demonstrate that the squared proximal subgradient norm for the class of iterative shrinkage-thresholding algorithms (ISTA) converges at an inverse square rate, and the squared proximal subgradient norm for the class of fast iterative shrinkage-thresholding algorithms (FISTA) is accelerated to converge at an inverse cubic rate.
    Comment: 17 pages, 4 figures
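    To make the objects in this abstract concrete, the following is a minimal NumPy sketch of FISTA for the l1-regularized least-squares model that also records the norm of the composite (proximal) gradient mapping (y - x_{k+1})/s after each proximal step, one common proxy for the proximal subgradient; the names and parameters are illustrative assumptions, not the paper's setup.

import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def fista(A, b, lam, num_iters=500):
    # FISTA sketch: ISTA with Nesterov-style momentum for min_x 0.5*||A x - b||^2 + lam*||x||_1.
    L = np.linalg.norm(A, 2) ** 2                      # Lipschitz constant of the smooth part's gradient
    s = 1.0 / L
    x = np.zeros(A.shape[1])
    y = x.copy()
    t = 1.0
    prox_grad_norms = []                               # ||(y_k - x_{k+1}) / s||, recorded per iteration
    for _ in range(num_iters):
        x_next = soft_threshold(y - s * (A.T @ (A @ y - b)), s * lam)   # proximal gradient step
        prox_grad_norms.append(np.linalg.norm((y - x_next) / s))
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_next + (t - 1.0) / t_next * (x_next - x)                  # momentum (extrapolation) step
        x, t = x_next, t_next
    return x, prox_grad_norms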

    The thermal evolution of nuclear matter at zero temperature and definite baryon number density in chiral perturbation theory

    The thermal properties of cold dense nuclear matter are investigated with chiral perturbation theory. The evolution curves for the baryon number density, baryon number susceptibility, pressure and the equation of state are obtained. The chiral condensate is calculated and our result shows that when the baryon chemical potential goes beyond $1150~\mathrm{MeV}$, the absolute value of the quark condensate decreases rapidly, which indicates a tendency of chiral restoration.
    Comment: 17 pages, 9 figures, RevTeX

    2,2,2-Trifluoroethyl 4-methylbenzenesulfonate

    In the crystal structure of the title compound, C9H9F3O3S, intermolecular C—H⋯O hydrogen bonds link the molecules along the c-axis direction. Also present are slipped π–π stacking interactions between phenylene rings, with perpendicular interplanar distances of 3.55 (2) Å and centroid–centroid distances of 3.851 (2) Å.

    Methyl 2-amino-5-chlorobenzoate

    The title compound, C8H8ClNO2, is almost planar, with an r.m.s. deviation of 0.0410 Å from the plane through the non-hydrogen atoms. In the crystal structure, intermolecular N—H⋯O hydrogen bonds link the molecules into chains along the b axis. An intramolecular N—H⋯O hydrogen bond results in the formation of a six-membered ring.

    Ethyl 2-(2-hydroxy-5-nitrophenyl)acetate

    In the crystal structure of the title compound, C10H11NO5, intermolecular O—H⋯O hydrogen bonds link the molecules into chains along the b-axis direction. Weak C—H⋯O hydrogen bonds also occur.

    Methyl 5-chloro-2-[N-(3-ethoxycarbonylpropyl)-4-methylbenzenesulfonamido]benzoate

    In the title compound, C21H24ClNO6S, the benzene rings are oriented at a dihedral angle of 41.6 (2)°. In the crystal structure, weak intermolecular C—H⋯O interactions link the molecules.

    Methyl 5-chloro-2-(4-methylbenzenesulfonamido)benzoate

    In the title compound, C15H14ClNO4S, the benzene rings are oriented at a dihedral angle of 85.42 (1)°. An intramolecular N—H⋯O hydrogen bond results in the formation of a five-membered ring, and an intramolecular C—H⋯O interaction also occurs.

    2-Methyl-4-(2-methylbenzamido)benzoic acid

    In the crystal structure of the title compound, C16H15NO3, intermolecular N—H⋯O hydrogen bonds link the molecules into chains parallel to the b axis, and pairs of intermolecular O—H⋯O hydrogen bonds between inversion-related carboxylic acid groups link the molecules into dimers. The dihedral angle between the two benzene rings is 82.4 (2)°.