The solution of a sparse system of linear equations is ubiquitous in
scientific applications. Iterative methods, such as the Preconditioned
Conjugate Gradient method (PCG), are normally chosen over direct methods due to
memory and computational complexity constraints; a minimal PCG sketch is given
below.
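For context, the following is a minimal, textbook sketch of PCG in
Python/NumPy. The matrix A, right-hand side b, and the preconditioner callback
apply_Minv are illustrative placeholders, not artifacts of this work.

```python
import numpy as np

def pcg(A, b, apply_Minv, tol=1e-8, max_iter=1000):
    """Textbook Preconditioned Conjugate Gradient sketch.

    A          : symmetric positive definite matrix (dense here for brevity;
                 a scipy.sparse matrix works with the same @ operator)
    b          : right-hand side vector
    apply_Minv : callable r -> M^{-1} r applying the preconditioner
    """
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x                      # initial residual
    z = apply_Minv(r)                  # preconditioned residual
    p = z.copy()
    rz = r @ z
    for k in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)          # step length
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            return x, k + 1            # converged: solution, iteration count
        z = apply_Minv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p      # update search direction
        rz = rz_new
    return x, max_iter
```

Note that the solver touches the preconditioner only through the apply_Minv
call, which is why any method that yields a usable factorization can be
dropped in.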
However, the efficiency of these methods depends heavily on the preconditioner
used. Developing a preconditioner normally requires some insight into the
sparse linear system and into the trade-off between the cost of constructing
the preconditioner and the resulting reduction in iteration count. Incomplete
factorization methods tend to act as black-box generators of such
preconditioners, but they may fail for a number of reasons.
These reasons include numerical issues that force a search for adequate
scaling, shifting, and fill-in levels, all while relying on an algorithm that
is difficult to parallelize; a minimal sketch of such a shift-and-retry loop
appears below.
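To make these failure modes concrete, the sketch below pairs a zero-fill
incomplete Cholesky stand-in with the common shift-and-retry heuristic. The
helper names (ic0, ic0_with_shift), the shift schedule, and the dense loops are
illustrative assumptions, not the algorithm studied in this work.

```python
import numpy as np

def ic0(A):
    """Zero-fill incomplete Cholesky: factor only where A is nonzero.
    Raises if a pivot becomes nonpositive (the classic breakdown)."""
    n = A.shape[0]
    L = np.tril(A).astype(float)        # start from the lower triangle of A
    for j in range(n):
        if L[j, j] <= 0.0:
            raise ValueError(f"nonpositive pivot at column {j}")
        L[j, j] = np.sqrt(L[j, j])
        for i in range(j + 1, n):
            if L[i, j] != 0.0:          # keep the sparsity pattern of A only
                L[i, j] /= L[j, j]
        for i in range(j + 1, n):
            for k in range(j + 1, i + 1):
                if L[i, k] != 0.0:      # update restricted to existing nonzeros
                    L[i, k] -= L[i, j] * L[k, j]
    return L

def ic0_with_shift(A, shift=1e-3, max_tries=10):
    """Retry IC(0) on A + alpha*I with a growing shift until it succeeds."""
    alpha = 0.0
    I = np.eye(A.shape[0])
    for _ in range(max_tries):
        try:
            return ic0(A + alpha * I), alpha
        except ValueError:
            alpha = shift if alpha == 0.0 else 2.0 * alpha  # double the shift
    raise RuntimeError("incomplete factorization failed even with shifting")
```

A nonpositive pivot aborts the factorization and each retry perturbs the
diagonal; the inherently sequential column-by-column updates are also what
make the algorithm hard to parallelize.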
With the move towards heterogeneous computing, many sparse applications also
leave GPUs, which are optimized for dense tensor workloads such as training
neural networks, underutilized.
In this work, we demonstrate that a simple artificial neural network, trained
either at compile time or in parallel with the running application on a GPU,
can provide an incomplete sparse Cholesky factorization that can be used as a
preconditioner; the resulting factor is applied inside PCG exactly as a
classical factor would be, as sketched below.
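However the lower-triangular factor L is obtained, by a classical incomplete
factorization or by a network, applying the preconditioner M = L L^T inside
PCG amounts to two triangular solves per iteration. A minimal sketch, assuming
SciPy and the illustrative pcg and ic0_with_shift helpers above:

```python
from scipy.linalg import solve_triangular

def make_apply_Minv(L):
    """Given a lower-triangular factor L with M = L @ L.T, return a callable
    applying M^{-1} via one forward and one backward substitution."""
    def apply_Minv(r):
        y = solve_triangular(L, r, lower=True)        # forward solve: L y = r
        return solve_triangular(L.T, y, lower=False)  # backward solve: L^T z = y
    return apply_Minv

# Illustrative end-to-end use with the sketches above:
#   L, alpha = ic0_with_shift(A)
#   x, iters = pcg(A, b, make_apply_Minv(L))
```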
In terms of iteration-count reduction, this generated preconditioner is as
good as or better than the one found by tuning multiple preconditioning
techniques such as scaling and shifting. Moreover, the generated method never
fails to produce a preconditioner that reduces the iteration count.