Multi-modal neuroimaging, in which several high-dimensional imaging variables are collected, has enabled the visualization and analysis of brain structure and function in unprecedented detail. Due to methodological and computational challenges, the vast majority of imaging studies evaluate data from each modality separately and do not consider information encoded in the relationships between imaging types. In this work, we propose methods that quantify the complex relationships between multiple imaging modalities and map how these relationships vary spatially across different anatomical regions of the brain. To understand the relationships between several high-dimensional imaging variables, we use novel multi-modal image analysis techniques for feature development and image fusion in conjunction with machine learning techniques to develop automatic approaches for multiple sclerosis lesion detection. Additionally, we use multi-modal image analysis to understand the association between high-dimensional imaging variables and phenotypes of interest, in order to investigate structure-function relationships in development, aging, and pathology of the brain. We find that by leveraging the relationships between imaging modalities, we can more accurately detect neuropathology and delineate brain trajectories that provide complementary characterizations of healthy development. We provide publicly available R packages to allow easy access and implementation of the proposed methods in new data and contexts.
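To make the idea of fusing modalities for lesion detection concrete, the sketch below trains a voxel-wise logistic-regression classifier on synthetic intensities from two hypothetical modalities (labeled "T1" and "FLAIR" here purely for illustration). This is a minimal toy example under assumed data, not the authors' actual method or R implementation: the point is only that stacking per-voxel features from multiple modalities yields a joint classifier rather than separate per-modality analyses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-voxel intensities from two hypothetical modalities.
# Lesion voxels (~30% prevalence here, an arbitrary choice) are shifted
# upward in both modalities; real lesion prevalence is far lower.
n = 2000
lesion = rng.random(n) < 0.3
t1 = rng.normal(0.0, 1.0, n) + 1.5 * lesion
flair = rng.normal(0.0, 1.0, n) + 2.0 * lesion

# Multi-modal feature fusion: stack per-voxel intensities from both
# modalities (plus an intercept column) into one design matrix.
X = np.column_stack([np.ones(n), t1, flair])
y = lesion.astype(float)

# Logistic regression fit by plain gradient descent (a sketch; any
# classifier could sit on top of the fused features).
w = np.zeros(3)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * (X.T @ (p - y)) / n

pred = 1.0 / (1.0 + np.exp(-X @ w)) > 0.5
accuracy = (pred == lesion).mean()
```

Because both modalities carry partly independent signal about lesion status, the joint classifier separates the classes better than either intensity alone would, which is the intuition behind the multi-modal approaches described above.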