Automated surgical margin assessment in breast conserving surgery using SFDI with ensembles of self-confident deep convolutional networks

Abstract

With an adequate tissue dataset, supervised classification of tissue optical properties can be achieved in SFDI images of breast cancer lumpectomies with deep convolutional networks. Nevertheless, the use of a black-box classifier in current ex vivo setups produces output diagnostic images that are bound to show misclassified areas due to inter- and intra-patient variability, and these could be misinterpreted in a real clinical setting. This work proposes a novel architecture, the self-introspective classifier, in which part of the model is dedicated to estimating its own expected classification error. The model can be used to generate metrics of self-confidence for a given classification problem, which can then be employed to show how familiar the network is with new incoming data. A heterogeneous ensemble of four deep convolutional models with self-confidence, each sensitive to a different spatial scale of features, is tested on a cohort of 70 specimens, achieving a global leave-one-out cross-validation accuracy of up to 81%, while being able to explain where in the output classification image the system is most confident.

Funding: Spanish Ministry of Science, Innovation and Universities (FIS2010-19860, TEC2016-76021-C2-2-R); Spanish Ministry of Economy, Industry and Competitiveness and Instituto de Salud Carlos III (DTS17-00055, DTS15-00238); Instituto de Investigación Valdecilla (INNVAL16/02, INNVAL18/23); Spanish Ministry of Education, Culture, and Sports (FPU16/05705).
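To make the idea of a self-introspective classifier concrete, the sketch below shows one plausible way to pair a per-pixel classification head with a head that predicts the model's own expected error, and to combine several such models into a confidence-weighted ensemble. All layer sizes, module names, and the weighting scheme are illustrative assumptions, not the architecture reported in the paper.

```python
import torch
import torch.nn as nn

class SelfIntrospectiveClassifier(nn.Module):
    """Hypothetical sketch: a CNN whose output is split into per-pixel
    tissue-class logits and a per-pixel estimate of its own expected
    classification error. Layer sizes and names are illustrative only."""

    def __init__(self, in_channels: int = 2, n_classes: int = 4):
        super().__init__()
        # Shared feature extractor over SFDI optical-property maps
        self.backbone = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Classification head: per-pixel tissue-type logits
        self.classifier = nn.Conv2d(64, n_classes, kernel_size=1)
        # Introspection head: per-pixel expected-error estimate in [0, 1]
        self.error_head = nn.Sequential(
            nn.Conv2d(64, 1, kernel_size=1), nn.Sigmoid()
        )

    def forward(self, x):
        feats = self.backbone(x)
        return self.classifier(feats), self.error_head(feats)

def ensemble_predict(models, x):
    """Hypothetical ensemble rule: average class probabilities from models
    trained at different spatial scales, weighting each member by its own
    predicted confidence (1 - expected error)."""
    probs, weights = [], []
    for m in models:
        logits, err = m(x)                       # err: (B, 1, H, W)
        probs.append(torch.softmax(logits, dim=1))
        weights.append(1.0 - err)                # higher confidence -> higher weight
    w = torch.stack(weights)                     # (n_models, B, 1, H, W)
    p = torch.stack(probs)                       # (n_models, B, C, H, W)
    return (w * p).sum(dim=0) / w.sum(dim=0).clamp(min=1e-8)
```

In this kind of design, the error head can be trained against the classifier's observed mistakes on held-out data, so low predicted confidence flags regions of the output diagnostic image that the ensemble has rarely seen during training.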
