In the age of the Internet and cloud-based applications, image compression has become increasingly important. Moreover, image processing has recently attracted the interest of technology companies, as autonomous machines powered by artificial intelligence that use images as input are growing rapidly. Reducing the amount of information needed to represent an image reduces the storage space, transmission bandwidth, and computation time required to process it, which in turn saves resources, energy, and money. This study investigates machine learning techniques (Fourier, wavelets, and PCA) for image compression. Several Fourier and wavelet methods are included, such as the well-known Cooley-Tukey algorithm, the discrete cosine transform, and the Mallat algorithm, among others. To understand each step of image compression, an object-oriented Matlab code has been developed in-house; doing so required extensive research into these techniques, covering not only their theoretical foundations but also the mathematics that supports them. The developed code is used to compare the performance of the different compression techniques studied. The findings of this study are consistent with the advances in image compression technology in recent years, in which the JPEG compression method (Fourier) and later JPEG2000 (wavelets) have been dominant.
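To give a concrete flavour of the transform-and-threshold idea behind the DCT-based (Fourier) compression mentioned above, a minimal Matlab sketch is shown below. It is not the paper's in-house code; the test image, the 5% retention ratio, and all variable names are illustrative assumptions, and the built-in functions dct2/idct2 assume the Image Processing Toolbox is available.

    % Minimal sketch: DCT-based compression by keeping only the largest coefficients.
    img  = im2double(imread('cameraman.tif'));     % example grayscale image shipped with the toolbox
    C    = dct2(img);                              % 2-D discrete cosine transform of the image
    keep = 0.05;                                   % retain the largest 5% of coefficients (illustrative)
    Cs   = sort(abs(C(:)), 'descend');             % rank coefficient magnitudes
    thr  = Cs(round(keep * numel(Cs)));            % magnitude threshold for the chosen ratio
    Cq   = C .* (abs(C) >= thr);                   % zero out the small coefficients
    rec  = idct2(Cq);                              % reconstruct the compressed image
    fprintf('Retained %.1f%% of coefficients\n', 100 * nnz(Cq) / numel(Cq));

The wavelet (Mallat) and PCA methods studied in the paper follow the same pattern, differing only in the transform applied before coefficient truncation.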