AI planning research typically assumes that complete action models are given. In contrast, popular reinforcement-learning approaches such as Q-learning eschew models and planning entirely. Neither approach is satisfactory for achieving robust human-level AI that combines planning and learning in rich, structured domains. In this paper, we introduce the idea of planning with partial models. While complete action models may be exponentially large, some domains admit polynomial-size partial models that are adequate for hierarchical planning. We describe algorithms for planning with partial models in serializable domains, and for learning such models from observation. Empirically, we demonstrate the effectiveness of partial models for learning and hierarchical planning in versions of the taxi domain.