
    Beyond graph energy: norms of graphs and matrices

    In 1978 Gutman introduced the energy of a graph as the sum of the absolute values of its eigenvalues, and graph energy has been intensively studied ever since. Since graph energy is the trace norm of the adjacency matrix, matrix norms provide a natural background for its study. This paper therefore surveys research on matrix norms that aims to expand and advance the study of graph energy. The focus is exclusively on the Ky Fan and Schatten norms, both of which generalize and enrich the trace norm. As it turns out, the study of extremal properties of these norms leads to numerous analytic problems with deep roots in combinatorics. The survey brings to the fore the exceptional role of Hadamard matrices, conference matrices, and conference graphs in matrix norms. In addition, a vast new matrix class is studied: a relaxation of symmetric Hadamard matrices. The survey presents solutions to just a fraction of a larger body of similar problems bonding analysis to combinatorics; thus, open problems and questions are raised to outline topics for further investigation.
    Comment: 54 pages. V2 fixes many typos and gives some new material.
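    The definition quoted in the abstract is easy to state in code: the energy is the Schatten 1-norm (trace norm) of the adjacency matrix, i.e. the sum of the absolute values of its eigenvalues. A minimal NumPy sketch of that definition (the function name `graph_energy` is ours, not the paper's):

    ```python
    import numpy as np

    def graph_energy(adj):
        """Energy of a graph: sum of |eigenvalues| of its symmetric
        adjacency matrix (the trace norm / Schatten 1-norm)."""
        eigenvalues = np.linalg.eigvalsh(np.asarray(adj, dtype=float))
        return float(np.sum(np.abs(eigenvalues)))

    # Example: the complete graph K_3 has eigenvalues 2, -1, -1,
    # so its energy is 2 + 1 + 1 = 4.
    k3 = [[0, 1, 1],
          [1, 0, 1],
          [1, 1, 0]]
    print(round(graph_energy(k3), 6))  # 4.0
    ```

    Using `eigvalsh` (rather than the general `eig`) exploits the symmetry of the adjacency matrix and guarantees real eigenvalues.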

    The algorithm by Ferson et al. is surprisingly fast: An NP-hard optimization problem solvable in almost linear time with high probability

    We start with the algorithm of Ferson et al. (\emph{Reliable Computing} {\bf 11}(3), pp.~207--233, 2005), designed for solving a certain NP-hard problem motivated by robust statistics. First, we propose an efficient implementation of the algorithm and improve its complexity bound to $O(n \log n + n \cdot 2^\omega)$, where $\omega$ is the clique number of a certain intersection graph. Then we treat the input data as random variables (as is usual in statistics) and introduce a natural probabilistic data-generating model. On average, we get $2^\omega = O(n^{1/\log\log n})$ and $\omega = O(\log n / \log\log n)$. This results in an average computing time of $O(n^{1+\epsilon})$ for arbitrarily small $\epsilon > 0$, which may be considered a ``surprisingly good'' average-case complexity for solving an NP-hard problem. Moreover, we prove the following tail bound on the distribution of computation time: ``hard'' instances, which force the algorithm to compute in time $2^{\Omega(n)}$, occur rarely, with probability tending to zero faster than exponentially as $n \rightarrow \infty$.
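    The bound above is driven by the clique number $\omega$ of an intersection graph. For the common case of an interval intersection graph, $\omega$ equals the maximum number of intervals sharing a common point, which a sweep over sorted endpoints finds in $O(n \log n)$. A minimal sketch under that assumption (the intervals, the closed-interval convention, and the function name are illustrative, not taken from the paper):

    ```python
    def interval_clique_number(intervals):
        """Clique number of the intersection graph of closed intervals:
        the maximum number of intervals covering a single point."""
        events = []
        for lo, hi in intervals:
            events.append((lo, 0))  # opening endpoint; 0 sorts before 1,
            events.append((hi, 1))  # so ties at a point count as overlapping
        events.sort()
        depth = best = 0
        for _, kind in events:
            if kind == 0:
                depth += 1
                best = max(best, depth)
            else:
                depth -= 1
        return best

    # All three intervals contain the point 2, so omega = 3.
    print(interval_clique_number([(0, 2), (1, 3), (2, 4)]))  # 3
    ```

    When the typical overlap depth stays small, as under the probabilistic model the abstract describes, the $2^\omega$ factor stays polynomially bounded, which is what yields the $O(n^{1+\epsilon})$ average running time.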