Scaling laws have brought Pre-trained Language Models (PLMs) into the field
of causal reasoning. Causal reasoning with PLMs relies solely on text-based
descriptions, in contrast to causal discovery, which aims to determine the
causal relationships between variables from data. Recent research has
proposed a method that mimics causal discovery by aggregating the outcomes
of repeated causal reasoning performed with specifically designed prompts.
This line of work highlights the usefulness of PLMs in discovering cause and
effect, which is often limited by a lack of data, especially when many
variables are involved. However, PLMs do not analyze data and are highly
dependent on prompt design, which poses a crucial limitation to their direct
use in causal discovery. Consequently, PLM-based causal reasoning carries
the risk of overconfidence and false predictions in determining causal
relationships. In this paper, we empirically demonstrate the aforementioned
limitations of PLM-based causal reasoning through experiments on
physics-inspired synthetic data. We then propose a new framework that
integrates prior knowledge obtained from a PLM into a causal discovery
algorithm, by using the prior knowledge both to initialize the adjacency
matrix and as a regularization term. Our proposed framework
not only demonstrates improved performance through the integration of PLMs
and causal discovery but also shows how PLM-extracted prior knowledge can be
leveraged with existing causal discovery algorithms.
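The core mechanism can be illustrated with a minimal sketch. This is not the paper's actual algorithm, but a simplified score-based causal discovery over a linear SEM (least-squares score, no acyclicity constraint), where a hypothetical PLM-derived 0/1 edge matrix `prior` serves both as the initialization of the adjacency matrix W and as the target of an L2 regularization term; the names `discover_with_prior`, `lam`, and `thresh` are illustrative choices, not from the source.

```python
import numpy as np

def discover_with_prior(X, prior, lam=0.5, lr=0.01, n_iter=2000, thresh=0.5):
    """Score-based discovery over a linear SEM X ~ X W + noise.

    `prior` is a (d, d) 0/1 matrix of PLM-suggested edges (hypothetical:
    prior[i, j] = 1 where the PLM asserts i -> j). It both initializes W
    and defines a penalty (lam/2)*||W - prior||^2 pulling W toward it.
    """
    n, d = X.shape
    W = prior.astype(float).copy()        # initialize from the PLM prior
    np.fill_diagonal(W, 0.0)
    for _ in range(n_iter):
        resid = X - X @ W                 # SEM residuals
        grad = -X.T @ resid / n           # gradient of (1/2n)*||X - XW||^2
        grad += lam * (W - prior)         # pull toward the PLM prior
        W -= lr * grad
        np.fill_diagonal(W, 0.0)          # project out self-loops
    A = (np.abs(W) > thresh).astype(int)  # threshold weights to edges
    return W, A

# Demo on a synthetic chain x0 -> x1 -> x2 with a matching PLM prior.
rng = np.random.default_rng(0)
n = 1000
x0 = rng.normal(size=n)
x1 = 2.0 * x0 + 0.1 * rng.normal(size=n)
x2 = 1.5 * x1 + 0.1 * rng.normal(size=n)
X = np.column_stack([x0, x1, x2])
prior = np.array([[0, 1, 0],
                  [0, 0, 1],
                  [0, 0, 0]])
W, A = discover_with_prior(X, prior)
```

Without the prior term the least-squares score alone favors spurious reverse edges (e.g. regressing x0 on x1); the regularizer shrinks edges the PLM did not suggest, which is the intuition behind combining the two sources of evidence.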