This document aims to demonstrate that features generated as a side effect of applying Proportional-Integral filters, discretized with explicit or implicit Euler methods, can be viable inputs for Machine Learning applications in imaging.
This thesis describes how the Proportional-Integral (PI) and Super-Twisting (ST) filters are defined and how they are discretized with the Forward and Backward Euler methods, which requires computing an error, or difference, scaled by a step size (a discrete derivative). The resulting features are then used in an image classification problem on two different datasets with a convex machine learning method, the Support Vector Classifier (SVC). These results are compared against the Sobel operator, a kernel commonly used to extract an image's gradients or edges.
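As a minimal sketch of the idea, the following illustrative code applies a Forward Euler discretization of a PI filter along a 1-D row of pixel intensities; the tracking error scaled by the step size (the discrete derivative the text refers to) spikes at intensity steps, which is what makes it usable as an edge-like feature. The gains `kp`, `ki` and the step size `h` are hypothetical values, not the thesis's tuned parameters.

```python
import numpy as np

def pi_filter_features(signal, kp=1.0, ki=0.5, h=1.0):
    """Forward-Euler PI filter sketch; gains and step size are assumptions."""
    y = 0.0        # filter output
    integ = 0.0    # integrator state
    out = np.empty(len(signal), dtype=float)
    for k, s in enumerate(signal):
        e = s - y                 # tracking error
        integ += h * e            # forward-Euler integration of the error
        y = kp * e + ki * integ   # PI law acting as the filter output
        out[k] = e / h            # error scaled by step size -> feature
    return out

row = np.array([0, 0, 0, 10, 10, 10], dtype=float)
features = pi_filter_features(row)  # spikes at the intensity step (edge)
```

The feature response is flat on constant regions and large where the intensity jumps, mirroring what a Sobel-style difference kernel would highlight.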
Because the PI and ST algorithms are robust and stable, and their formulation as a continuous output signal gives them an implicit low-pass filtering effect, two tests are carried out in which artificial uniform and Gaussian noise is added to the images. The features derived from these automatic control algorithms prove viable overall, and especially valuable when noise is present.
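The two noise tests can be sketched as follows, assuming images normalized to [0, 1]; the noise level `scale` and the clipping range are illustrative assumptions, not the thesis's experimental settings.

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility

def add_noise(img, kind="gaussian", scale=0.1):
    """Add artificial noise for the robustness tests (sketch; scale is assumed)."""
    if kind == "gaussian":
        noisy = img + rng.normal(0.0, scale, img.shape)
    else:  # uniform noise drawn from [-scale, scale]
        noisy = img + rng.uniform(-scale, scale, img.shape)
    return np.clip(noisy, 0.0, 1.0)  # keep pixel values in [0, 1]

img = np.full((8, 8), 0.5)          # toy gray image
noisy_gauss = add_noise(img, "gaussian")
noisy_unif = add_noise(img, "uniform")
```

The same classification pipeline would then be run on the clean and noisy copies to compare how the PI/ST features and the Sobel features degrade.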