The goal of Causal Discovery is to find automated search methods for learning
causal structures from observational data. In some cases, all variables of the
causal mechanism of interest are measured, and the task is to predict the
effects that one measured variable has on another. In contrast, sometimes the
variables of primary interest are not directly observable but instead inferred
from their manifestations in the data. These are referred to as latent
variables. A commonly known example is the psychological construct of
intelligence, which cannot be measured directly, so researchers try to assess
it through various indicators such as IQ tests. In this case, causal discovery
algorithms can uncover underlying patterns and structures to reveal the causal
connections between the latent variables and between the latent and observed
variables. This thesis focuses on two questions in causal discovery. The first
is to provide an alternative definition of k-Triangle Faithfulness that (i) is
weaker than Strong Faithfulness when applied to the Gaussian family of
distributions, (ii) can be applied to non-Gaussian families of distributions,
and (iii) can be used, under the assumption that this modified version holds,
to show the uniform consistency of a modified causal discovery algorithm. The
second is to relax the causal sufficiency assumption in order to learn causal
structures with latent
variables. Given the importance of inferring cause-and-effect relationships for
understanding and forecasting complex systems, the work in this thesis on
relaxing various simplifying assumptions is expected to extend causal
discovery methods to a wider range of causal mechanisms and statistical
phenomena.