Deep-learning models show promise for precision medicine by automatically solving phenotype prediction tasks on high-throughput omic data. However, their lack of interpretability limits their adoption in healthcare. Some studies leverage high-level, human-comprehensible biological concepts to increase the interpretability of these models, but the resulting explanations remain indirect, and support for combining different types of knowledge is limited. We propose BioHAN, a heterogeneous, self-explaining graph neural network based on a self-attention mechanism. Its heterogeneous input graph consists of a central gene graph and auxiliary graphs that compensate for the sparsity of the central graph. Experiments on a real dataset show that BioHAN matches the accuracy of the non-interpretable state of the art while providing automatic explanations: it lists the most relevant genes and identifies the most important concept-based neighbors of these genes. These features should make BioHAN a practical tool for clinicians.
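The abstract gives no implementation details, so the following is only a minimal, hypothetical sketch of the kind of attention-based, per-edge-type neighbor aggregation it describes. The class name `HeteroAttentionLayer`, the plain-PyTorch formulation, and the edge-type names are illustrative assumptions, not the authors' code; the point is that keeping the per-edge attention weights alongside the node embeddings is what lets such a model surface a gene's most important concept-based neighbors as an explanation.

```python
# Hypothetical sketch: attention over a gene's neighbors, grouped by edge type
# (e.g. gene-gene edges from the central graph vs. gene-concept auxiliary edges).
# Names and structure are assumptions for illustration, not BioHAN's actual code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HeteroAttentionLayer(nn.Module):
    def __init__(self, in_dim: int, out_dim: int, edge_types: list[str]):
        super().__init__()
        self.out_dim = out_dim
        # One projection and one attention scorer per edge type.
        self.proj = nn.ModuleDict(
            {t: nn.Linear(in_dim, out_dim, bias=False) for t in edge_types})
        self.attn = nn.ModuleDict(
            {t: nn.Linear(2 * out_dim, 1, bias=False) for t in edge_types})

    def forward(self, x, edges):
        # x: (num_nodes, in_dim)
        # edges: {edge_type: (src_idx, dst_idx)} with LongTensor index pairs
        out = torch.zeros(x.size(0), self.out_dim, device=x.device)
        attn_weights = {}
        for t, (src, dst) in edges.items():
            h = self.proj[t](x)
            # Unnormalized attention logit for each edge (src -> dst).
            e = F.leaky_relu(
                self.attn[t](torch.cat([h[src], h[dst]], dim=-1))).squeeze(-1)
            # Softmax over each destination node's incoming edges
            # (a global constant shift keeps exp() numerically stable).
            w = torch.exp(e - e.detach().max())
            denom = torch.zeros(x.size(0), device=x.device).index_add_(0, dst, w)
            alpha = w / denom[dst].clamp_min(1e-12)
            # Weighted sum of projected neighbor features per destination node.
            out.index_add_(0, dst, alpha.unsqueeze(-1) * h[src])
            attn_weights[t] = alpha  # kept for post-hoc explanations
        return out, attn_weights

# Toy usage: 5 nodes, two edge types mixing gene and concept neighbors.
x = torch.randn(5, 8)
edges = {"gene-gene":    (torch.tensor([0, 1, 2]), torch.tensor([1, 2, 0])),
         "gene-concept": (torch.tensor([3, 4]),    torch.tensor([0, 1]))}
layer = HeteroAttentionLayer(8, 16, edge_types=list(edges))
h, alphas = layer(x, edges)  # alphas[t][i]: importance of edge i of type t
```

Ranking `alphas["gene-concept"]` per destination gene would, under these assumptions, yield the "most important concept-based neighbors" the abstract refers to.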