Bayesian inference and decision making require elicitation of prior probabilities and sampling distributions. In many applications, however, such as exploratory data analysis, it may not be possible to construct the prior probabilities or the sampling distributions precisely. The objective of this thesis is to address the issues and provide some solutions to the problem of inference and decision making with imprecise or partially known priors and sampling distributions. More specifically, we address the following three interrelated problems: (1) how to describe imprecise priors and sampling distributions, (2) how to proceed from approximate priors and sampling distributions to approximate posteriors and posterior-related quantities, and (3) how to make decisions with imprecise posterior probabilities. When the priors and/or sampling distributions are not known precisely, a natural approach is to consider a class or neighborhood of priors, and classes or collections of sampling distributions. This approach leads naturally to consideration of upper and lower probabilities, or interval-valued probabilities. We examine the various approaches to representation of imprecision in priors and sampling distributions. We observe that many useful classes, whether for the priors or for the sampling distributions, are conveniently described in terms of 2-Choquet Capacities. We prove Bayes' Theorem (or Conditioning) for the 2-Choquet Capacity classes. Since the classes of imprecise probabilities described by the Dempster-Shafer Theory are ∞-Choquet Capacities (and therefore 2-Choquet Capacities), our result provides another proof of the inconsistency of Dempster's rule. We address the problem of combining various sources of information and the requirements for a reasonable combination rule. We also examine independence of the sources of information, a crucial issue in any such combination.
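The passage above can be illustrated with a minimal numerical sketch: when the prior is only known to lie in a class, each prior in the class yields its own posterior, and the class as a whole maps to an interval of (upper and lower) posterior probabilities. All numbers and the two-point parameter space below are purely illustrative and are not taken from the thesis.

```python
# Illustrative sketch: interval-valued posterior probabilities arising
# from a class of priors on a two-point parameter space.
# All values here are made up for illustration.

def posterior(prior, likelihood):
    """Bayes' rule on a finite parameter space."""
    joint = [p * l for p, l in zip(prior, likelihood)]
    total = sum(joint)
    return [j / total for j in joint]

# Likelihood of the observed data under each of two hypotheses.
likelihood = [0.8, 0.3]

# A small class of priors, e.g. a few extreme points of a
# neighborhood around a base prior (illustrative values).
priors = [
    [0.5, 0.5],   # base prior
    [0.6, 0.4],   # perturbed extremes
    [0.4, 0.6],
]

# Posterior probability of hypothesis 0 under each prior in the class.
post0 = [posterior(p, likelihood)[0] for p in priors]

# The class of priors induces lower and upper posterior probabilities.
lower, upper = min(post0), max(post0)
print(f"P(H0 | data) lies in [{lower:.3f}, {upper:.3f}]")
```

The width of the resulting interval reflects how much the imprecision in the prior propagates to the posterior; as the class of priors shrinks to a single distribution, the interval collapses to the ordinary Bayesian posterior.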
We consider three methods to combine imprecise information. In the first method, we utilize the extreme-point representations of the imprecise priors and/or the sampling distributions to obtain the extreme points of the class of posteriors. This method is usually computationally very demanding. Therefore, as a second method, we propose a simple iterative procedure that allows direct computation not only of the posterior probabilities, but also of many useful posterior-related quantities, such as the posterior mean, the predictive probability that the next observation lies in a given set, the posterior expected loss of a decision or action, etc. In the third method, by considering the joint space of observations and parameters, we show that if the class of joint probabilities is a 2-Choquet Capacity class, we can apply our Bayes' Theorem found earlier to obtain the posterior probabilities. This last approach is computationally the most efficient. Finally, we address the problem of decision making with imprecise posteriors obtained from imprecise priors and sampling distributions. Even though allowing imprecision is a natural way to represent lack of information, it sometimes leads to complications in decision making and even to indeterminacies. We suggest a few ad hoc rules to resolve the remaining indeterminacies. The ultimate solution in such cases is simply to gather more data.
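One way an iterative scheme for posterior-related quantities can work is via bisection on a fractional-programming reformulation: the lower posterior expectation of a function g is the root of a function that is easy to evaluate by minimizing over the class of priors. The sketch below follows this general idea for a finite class of priors on a finite parameter space; it is a hedged illustration of the technique, not necessarily the specific procedure developed in the thesis, and all numerical values are made up.

```python
# Illustrative sketch: lower posterior expectation of g over a finite
# class of priors, computed by bisection.  The key fact used here is
# that rho(lam) = min over priors of E_prior[ likelihood * (g - lam) ]
# is decreasing in lam, and its root equals the lower posterior mean.
# This is a generic scheme, not necessarily the thesis's algorithm.

def lower_posterior_mean(priors, likelihood, g, tol=1e-9):
    def rho(lam):
        return min(
            sum(p * l * (gi - lam) for p, l, gi in zip(pi, likelihood, g))
            for pi in priors
        )

    # The lower posterior mean is bracketed by min(g) and max(g).
    lo, hi = min(g), max(g)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if rho(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Illustrative two-hypothesis setup: g is the indicator of hypothesis 1,
# so the result is the lower posterior probability of hypothesis 1.
likelihood = [0.8, 0.3]
g = [0.0, 1.0]
priors = [[0.5, 0.5], [0.6, 0.4], [0.4, 0.6]]
print(lower_posterior_mean(priors, likelihood, g))
```

Each evaluation of rho requires only one pass over the class of priors, so the overall cost is a modest number of such passes, which is typically far cheaper than enumerating the full class of posteriors.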