
    Indefinite linearized augmented Lagrangian method for convex programming with linear inequality constraints

    The augmented Lagrangian method (ALM) is a benchmark for tackling convex optimization problems with linear constraints; ALM and its variants for linearly equality-constrained convex minimization models have been well studied in the literature. However, much less attention has been paid to using ALM to efficiently solve the linearly inequality-constrained convex minimization model. In this paper, we exploit an enlightening reformulation of the most recent indefinite linearized (equality-constrained) ALM and present a novel indefinite linearized ALM scheme for efficiently solving the convex optimization problem with linear inequality constraints. The proposed method offers two main advantages, especially for large-scale optimization: first, it significantly simplifies the challenging key subproblem of the classical ALM by employing a linearized reformulation while keeping the computational complexity low; second, we prove that a smaller proximal regularization term suffices for the convergence guarantee, which allows a larger step size and can substantially reduce the number of iterations required for convergence. Moreover, we establish a global convergence theory for the proposed scheme via its equivalent compact prediction-correction expression, along with a worst-case $\mathcal{O}(1/N)$ convergence rate. Numerical results demonstrate that the proposed method converges faster, and hence is numerically more efficient, as the regularization term becomes smaller, which confirms the theoretical results presented in this study.
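
    For concreteness, the sketch below shows the generic linearized-ALM template that this line of work builds on, applied to min f(x) subject to Ax <= b. It is a hedged illustration, not the paper's specific indefinite-proximal scheme: the function name, prox_f, rho, and tau are assumptions supplied for the example.

    import numpy as np

    def linearized_alm(prox_f, A, b, x0, rho=1.0, tau=None, iters=500):
        """prox_f(v, t) must return argmin_x f(x) + ||x - v||^2 / (2t)."""
        if tau is None:
            # Classical analyses take tau >= rho * ||A||_2^2; the paper's
            # point is that a smaller proximal weight can still converge.
            tau = rho * np.linalg.norm(A, 2) ** 2
        x = x0.copy()
        lam = np.zeros(A.shape[0])  # multipliers for the constraints Ax <= b
        for _ in range(iters):
            # Linearize the augmented term at the current x: its gradient is
            # A^T max(0, lam + rho*(Ax - b)); then take a proximal step on f.
            g = A.T @ np.maximum(0.0, lam + rho * (A @ x - b))
            x = prox_f(x - g / tau, 1.0 / tau)
            # Multiplier update, projected onto the nonnegative orthant.
            lam = np.maximum(0.0, lam + rho * (A @ x - b))
        return x, lam

    # Toy usage with f(x) = ||x - c||^2 / 2, whose prox is closed form.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 10))
    b = rng.standard_normal(5)
    c = rng.standard_normal(10)
    x, lam = linearized_alm(lambda v, t: (v + t * c) / (1.0 + t), A, b, np.zeros(10))
    print("max constraint violation:", max(0.0, np.max(A @ x - b)))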

    The extremal unicyclic graphs of the revised edge Szeged index with given diameter

    Let $G$ be a connected graph. The revised edge Szeged index of $G$ is defined as $Sz^{\ast}_{e}(G)=\sum_{e=uv\in E(G)}\left(m_{u}(e|G)+\frac{m_{0}(e|G)}{2}\right)\left(m_{v}(e|G)+\frac{m_{0}(e|G)}{2}\right)$, where $m_{u}(e|G)$ (resp., $m_{v}(e|G)$) is the number of edges whose distance to vertex $u$ (resp., $v$) is smaller than their distance to vertex $v$ (resp., $u$), and $m_{0}(e|G)$ is the number of edges equidistant from both ends of $e$. In this paper, the graphs with minimum revised edge Szeged index among all unicyclic graphs with given diameter are characterized. (Comment: arXiv admin note: text overlap with arXiv:1805.0657)
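
    Since the index is fully specified by the formula above, a direct implementation is straightforward; the sketch below uses networkx and takes the distance from an edge $f = xy$ to a vertex as the smaller of the distances from its endpoints, the usual convention (under which $e$ itself is counted in $m_{0}(e|G)$). The test graph is an illustrative choice.

    import networkx as nx

    def revised_edge_szeged(G: nx.Graph) -> float:
        dist = dict(nx.all_pairs_shortest_path_length(G))
        total = 0.0
        for u, v in G.edges():
            mu = mv = m0 = 0
            for x, y in G.edges():
                du = min(dist[x][u], dist[y][u])  # distance of edge xy to u
                dv = min(dist[x][v], dist[y][v])  # distance of edge xy to v
                if du < dv:
                    mu += 1
                elif dv < du:
                    mv += 1
                else:
                    m0 += 1  # equidistant edges (including e = uv itself)
            total += (mu + m0 / 2.0) * (mv + m0 / 2.0)
        return total

    # A small unicyclic graph: a triangle with one pendant edge.
    G = nx.Graph([(1, 2), (2, 3), (3, 1), (3, 4)])
    print(revised_edge_szeged(G))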

    Rethinking the Expressive Power of GNNs via Graph Biconnectivity

    Designing expressive Graph Neural Networks (GNNs) is a central topic in learning on graph-structured data. While numerous approaches have been proposed to improve GNNs with respect to the Weisfeiler-Lehman (WL) test, there is generally still a lack of deep understanding of what additional power they can systematically and provably gain. In this paper, we take a fundamentally different perspective to study the expressive power of GNNs beyond the WL test. Specifically, we introduce a novel class of expressivity metrics via graph biconnectivity and highlight their importance in both theory and practice. Since biconnectivity can be computed easily by simple algorithms with linear computational cost, it is natural to expect that popular GNNs can learn it easily as well. However, after a thorough review of prior GNN architectures, we surprisingly find that most of them are not expressive for any of these metrics. The only exception is the ESAN framework (Bevilacqua et al., 2022), for which we give a theoretical justification of its power. We proceed to introduce a principled and more efficient approach, called Generalized Distance Weisfeiler-Lehman (GD-WL), which is provably expressive for all biconnectivity metrics. Practically, we show that GD-WL can be implemented by a Transformer-like architecture that preserves expressiveness and enjoys full parallelizability. Experiments on both synthetic and real datasets demonstrate that our approach consistently outperforms prior GNN architectures. (Comment: ICLR 2023 notable top-5%; 58 pages, 11 figures)
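
    As a quick illustration of the claim that biconnectivity is cheap to compute, the snippet below runs the linear-time Tarjan-style routines shipped with networkx on a small example; the expressivity metrics themselves are defined in the paper and not reproduced here.

    import networkx as nx

    # Two triangles sharing vertex 2: the graph is connected, but vertex 2
    # is a cut vertex, so the graph is not (vertex-)biconnected.
    G = nx.Graph([(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 2)])

    print(sorted(nx.articulation_points(G)))   # [2]
    print(list(nx.biconnected_components(G)))  # two blocks sharing vertex 2
    print(nx.is_biconnected(G))                # False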