In this work, we investigate the problem of planning stable grasps for object manipulation using an 18-DOF robotic hand with four fingers. The main challenge is the high-dimensional search space, which we address with a novel two-stage learning process. In the first stage, we train an autoregressive network, called the hand-pose generator, which learns to generate a distribution of valid 6D palm poses for a given volumetric object representation. In the second stage, we employ a network that regresses 12D finger positions and a scalar grasp quality from a given object representation and palm pose. To train our networks, we use synthetic training data generated by a novel grasp planning algorithm, which also proceeds stage-wise: first the palm pose, then the finger positions. Here, we devise a Bayesian Optimization scheme for the palm pose and a physics-based grasp pose metric to rate stable grasps. In experiments on the YCB benchmark dataset, we achieve a grasp success rate of over 83% and present qualitative results on real-world scenarios of grasping unknown objects.
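
To make the two-stage inference pipeline concrete, the sketch below shows how candidate palm poses sampled from a hand-pose generator could be paired with a finger/quality regressor to select the best-scoring grasp. All class names, tensor shapes, and architectures (HandPoseGenerator, FingerRegressor, a 32x32x32 voxel grid, Gaussian pose sampling in place of the autoregressive model) are illustrative assumptions, not the paper's actual models; the Bayesian Optimization data-generation step and the physics-based grasp metric are not shown.

```python
# Minimal sketch of the two-stage grasp inference pipeline (assumed shapes and
# architectures; placeholders, not the paper's actual networks).
import torch
import torch.nn as nn


class HandPoseGenerator(nn.Module):
    """Stage 1 (placeholder): maps a volumetric object encoding to a
    distribution over 6D palm poses, from which candidates are sampled."""

    def __init__(self, feat_dim=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(32 ** 3, feat_dim), nn.ReLU())
        # Assumed parameterization: mean and log-variance of a 6D palm pose.
        self.head = nn.Linear(feat_dim, 12)

    def sample(self, voxels, n_samples=16):
        feat = self.encoder(voxels)
        mean, log_var = self.head(feat).chunk(2, dim=-1)
        std = (0.5 * log_var).exp()
        # Draw n_samples candidate 6D palm poses (translation + rotation parameters).
        return mean + std * torch.randn(n_samples, 6)


class FingerRegressor(nn.Module):
    """Stage 2 (placeholder): regresses 12D finger positions and a scalar
    grasp quality from the object encoding and a candidate palm pose."""

    def __init__(self, feat_dim=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(32 ** 3, feat_dim), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(feat_dim + 6, 128), nn.ReLU(), nn.Linear(128, 13))

    def forward(self, voxels, palm_poses):
        feat = self.encoder(voxels).expand(palm_poses.shape[0], -1)
        out = self.head(torch.cat([feat, palm_poses], dim=-1))
        fingers, quality = out[:, :12], out[:, 12]
        return fingers, quality


def plan_grasp(voxels, hpg, regressor):
    """Sample palm poses, score each with the finger regressor, keep the best."""
    with torch.no_grad():
        palm_candidates = hpg.sample(voxels)
        fingers, quality = regressor(voxels, palm_candidates)
        best = quality.argmax()
    return palm_candidates[best], fingers[best], quality[best]


if __name__ == "__main__":
    voxels = torch.rand(1, 32, 32, 32)          # dummy volumetric object representation
    palm, fingers, q = plan_grasp(voxels, HandPoseGenerator(), FingerRegressor())
    print(palm.shape, fingers.shape, float(q))  # torch.Size([6]) torch.Size([12]) ...
```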