Machine learning has become an essential part of medical imaging research. For example, convolutional neural networks (CNNs) are used to perform brain tumor segmentation, the task of distinguishing tumoral from healthy tissue. This task is typically carried out using four different magnetic resonance imaging (MRI) scans of the patient. Because of the cost and effort required to produce the scans, one of the four is often missing, making the segmentation process considerably harder. To address this problem, we propose two MRI-to-MRI translation approaches that synthesize an approximation of the missing image from an existing one. In particular, we focus on creating the missing T2-weighted sequence from a given T1-weighted sequence. We investigate clustering as a solution to this problem and propose BrainClustering, a learning method that builds approximation tables which can be queried to retrieve the missing image. The images are clustered with hierarchical clustering methods to identify the main tissues of the brain as well as to capture the varying signal intensities in local areas. We compare this method to the general-purpose image-to-image translation tool Pix2Pix, which we extend to fit our purposes. Finally, we assess the quality of the approximated solutions by evaluating the tumor segmentations that can be computed from the synthesized outputs. Pix2Pix achieves the most realistic approximations, but the tumor areas are too generalized to obtain optimal tumor segmentations. BrainClustering produces transformations that deviate more from the original image but still yield better segmentations in terms of Hausdorff distance and Dice score. Surprisingly, using the complement of the T1-weighted image (i.e., inverting the intensity of each pixel) also achieves good results. Our new methods make segmentation software more practical by allowing it to utilize all four MRI scans, even if one of the scans is missing.
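Two of the quantities mentioned above can be illustrated concretely: the T1-complement baseline (inverting each pixel's intensity) and the Dice score used to evaluate segmentations. The sketch below is a minimal NumPy illustration under assumed conventions (the paper's exact intensity normalization for the complement is not specified here, so `max_val` is a hypothetical choice), not the authors' implementation.

```python
import numpy as np

def complement(t1, max_val=None):
    """Invert each voxel intensity. Illustrative stand-in for the
    T1-complement baseline; normalizing by the image maximum is an
    assumption, not necessarily the paper's convention."""
    if max_val is None:
        max_val = t1.max()
    return max_val - t1

def dice_score(seg_a, seg_b):
    """Dice overlap between two binary segmentation masks:
    2 * |A ∩ B| / (|A| + |B|)."""
    a, b = seg_a.astype(bool), seg_b.astype(bool)
    inter = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 2.0 * inter / total if total > 0 else 1.0

# Toy 2x2 "T1" slice and two overlapping tumor masks.
t1 = np.array([[0.0, 50.0], [100.0, 200.0]])
t2_approx = complement(t1)  # 200 - t1, elementwise

mask_true = np.array([[1, 1], [0, 0]], dtype=bool)
mask_pred = np.array([[1, 0], [0, 0]], dtype=bool)
print(t2_approx)                          # [[200. 150.] [100.   0.]]
print(dice_score(mask_true, mask_pred))   # 2*1 / (2+1) ≈ 0.667
```

A higher Dice score (closer to 1) and a lower Hausdorff distance both indicate that a predicted tumor mask agrees more closely with the reference segmentation, which is how the synthesized images are judged in the abstract.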