In numerous real-world applications, from sensor networks to patient monitoring to intelligent buildings, probabilistic inference is needed to draw conclusions about the system in question in the face of uncertainty. The key problem in all of these settings is to compute the probability distribution over some random variables of interest (the query) given the known values of other random variables (the evidence). Probabilistic graphical models (PGMs) have become the approach of choice for representing and reasoning with probability distributions. This thesis proposes algorithms for learning PGMs and for approximate inference in PGMs that aim to improve the quality of query answers by exploiting information about the query variables and the evidence assignment more fully than existing approaches.

The contributions of this thesis fall into three categories. First, we propose a polynomial-time algorithm for learning the structure of graphical models that guarantees both the approximation quality of the resulting model and that the resulting model admits efficient exact inference. Ours is the first efficient algorithm to provide this type of guarantee.
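To make the query/evidence task concrete, the following is a minimal sketch (not from this thesis) of answering a conditional query P(Query | Evidence = e) by brute-force enumeration over a tiny two-variable Bayesian network. The network (Rain → WetGrass) and all probabilities are illustrative assumptions; realistic PGMs have far too many variables for this direct approach, which is what motivates the efficient inference algorithms discussed here.

```python
# Toy Bayesian network: Rain -> WetGrass (all numbers are illustrative).
# P(Rain) and the conditional table P(WetGrass | Rain) stored as dicts.
p_rain = {True: 0.2, False: 0.8}
p_wet_given_rain = {
    True:  {True: 0.9, False: 0.1},   # grass is usually wet if it rained
    False: {True: 0.1, False: 0.9},   # and usually dry otherwise
}

def joint(rain, wet):
    """Joint probability P(Rain=rain, WetGrass=wet) via the chain rule."""
    return p_rain[rain] * p_wet_given_rain[rain][wet]

def query_rain_given_wet(wet_obs):
    """Posterior P(Rain | WetGrass=wet_obs): sum the joint over the
    query variable's values, then normalize by the evidence probability."""
    unnorm = {r: joint(r, wet_obs) for r in (True, False)}
    z = sum(unnorm.values())          # P(WetGrass = wet_obs)
    return {r: p / z for r, p in unnorm.items()}

posterior = query_rain_given_wet(True)
```

Enumeration like this is exponential in the number of unobserved variables, so exact inference in general PGMs is tractable only for model classes with restricted structure, such as the bounded-treewidth models targeted by the learning algorithm above.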