Developments in deep learning with artificial neural networks (ANNs) are paving the way for revolutionizing many application areas, especially non-linear regression and classification problems in predictive modelling and forecasting. Although their explainability remains complicated and challenging, deep neural networks are preferred over conventional machine learning methods for their high accuracy on non-linear and complex problems. However, machine learning and data science practitioners often use ANNs as a black box. This article concisely overviews the mathematics and computations involved in simple feed-forward neural networks (FNNs), also known as multilayer perceptrons (MLPs). The purpose is to shed light on what deep neural networks' learning (or training) is and how it works. The article includes simplified derivations of the expressions for the main workhorse of neural networks, backpropagation, along with a worked example and graphical insights explaining how it operates. An algorithm for a basic ANN application is presented in both component form and matrix form, together with a detailed note on the relevant data structures, to elaborate the scheme comprehensively. A Python implementation of the basic algorithm is presented, and its performance is compared with that obtained using the TensorFlow library functions that implement neural networks. The article discusses various techniques for improving the generalization capability of neural networks and for addressing common training challenges. Finally, some well-established optimization approaches based on the gradient descent method are also discussed. The article may serve as a comprehensive primer for undergraduate and graduate students seeking a sound understanding of deep learning before engaging in relevant industry practices, so that they can make sustainable progress in the field.