Graph learning from signals is a core task in Graph Signal Processing (GSP).
One of the most commonly used models to learn graphs from stationary signals is
SpecT. However, its practical formulation, rSpecT, is known to be sensitive to hyperparameter selection and, more critically, to suffer from infeasibility. In this paper, we provide the first condition that guarantees the infeasibility of rSpecT
and design a novel model (LogSpecT) and its practical formulation (rLogSpecT)
to overcome this issue. In contrast to rSpecT, the new practical model rLogSpecT
is always feasible. Furthermore, we provide recovery guarantees for rLogSpecT, derived from modern optimization tools related to epi-convergence. These tools could be of independent interest and relevant to a variety of learning problems. To demonstrate the practical advantages of rLogSpecT, a
highly efficient algorithm based on the linearized alternating direction method
of multipliers (L-ADMM) is proposed. The subproblems of L-ADMM admit closed-form solutions, and its convergence is guaranteed. Extensive numerical
results on both synthetic and real networks corroborate the stability and
superiority of our proposed methods, underscoring their potential for various
graph learning applications.