In the field of artificial intelligence, supervised machine learning enables the development of automatic recognition systems. In music information retrieval, a variety of music datasets make it possible to train and test such systems. Two key prediction tasks are music genre recognition and music mood recognition. The focus of this study is to evaluate the classification of music into genres and mood categories from the audio content. To this end, we evaluate five novel spectro-temporal variants of sub-band musical features: sub-band entropy, sub-band flux, sub-band kurtosis, sub-band skewness, and sub-band zero-crossing rate. The choice of features is based on previous studies that highlight the potential efficacy of sub-band features. As a baseline, we include the Mel-Frequency Cepstral Coefficients (MFCCs). Classification performances are obtained with various learning algorithms, distinct datasets, and multiple feature-selection subsets. To create and evaluate models for both tasks, we use two music datasets pre-labelled with music genres (GTZAN) and music mood (PandaMood), respectively. In addition, this study is the first to develop an adaptive window decomposition method for these sub-band features and one of only a few that applies artist filtering and fault filtering to the GTZAN dataset. Our results show that the vast majority of sub-band features outperformed the MFCCs in both the music genre and the music mood tasks. Among individual features, sub-band entropy outperformed and outranked every other feature in both tasks and feature-selection approaches. Lastly, we find lower overfitting tendencies for sub-band features in comparison to the MFCCs. In summary, this study supports the use of these sub-band features for music genre and music mood classification tasks and further suggests uses in other content-based predictive tasks.
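To make the sub-band idea concrete, the sketch below computes two of the named features, sub-band entropy and sub-band flux, from a magnitude spectrogram. The equal-width band layout, the Shannon-entropy normalisation, and the Euclidean flux are illustrative assumptions, not the exact definitions used in this study.

```python
import numpy as np

def subband_features(mag_spec, n_bands=8):
    """Illustrative sub-band entropy and sub-band flux.

    mag_spec: magnitude spectrogram, shape (n_bins, n_frames).
    Band layout and normalisation here are assumptions for
    demonstration, not the paper's exact definitions.
    """
    n_bins, n_frames = mag_spec.shape
    # Split frequency bins into equal-width sub-bands (an assumption;
    # octave-spaced bands are also common in the literature).
    edges = np.linspace(0, n_bins, n_bands + 1, dtype=int)
    entropy = np.zeros((n_bands, n_frames))
    flux = np.zeros((n_bands, n_frames - 1))
    for b in range(n_bands):
        band = mag_spec[edges[b]:edges[b + 1], :]
        # Sub-band entropy: Shannon entropy of the normalised
        # within-band spectrum, per frame.
        p = band / (band.sum(axis=0, keepdims=True) + 1e-12)
        entropy[b] = -(p * np.log2(p + 1e-12)).sum(axis=0)
        # Sub-band flux: frame-to-frame Euclidean distance within the band.
        flux[b] = np.linalg.norm(np.diff(band, axis=1), axis=0)
    return entropy, flux
```

A flat band spectrum yields maximal entropy (log2 of the number of bins in the band), while energy concentrated in one bin yields entropy near zero, which is why entropy-style descriptors can separate noisy from tonal content.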