Biometric template protection schemes (BTPs) protect biometric data by concealing biometric representations via a privacy-preserving mechanism (such as homomorphic encryption) and comparing the protected templates while preserving the recognition scores as in an embedding space. However, revealing these scores after a biometric comparison is often tolerated for efficiency, so that the score comparison can be performed directly on cleartext data. In this work, we demonstrate that this cleartext-score tolerance can lead to privacy breaches and allow recognition systems to be bypassed, threatening such BTPs in the case of inner product-based facial template comparisons. We propose a template recovery attack that requires no training and only a few random fake templates with their corresponding scores, from which we recover the unprotected target template using the Lagrange multiplier optimization method. We evaluate our attack by verifying whether the recovered template is deemed similar to the target template held by recognition systems set to accept 0.1%, 0.01%, and 0.001% FMR. We estimate that between 60 and 165 revealed scores and fake templates can lead to a template recovery with a 100% success rate. We also analyze the impact of recovered templates by measuring the amount of gender information they contain, as well as their resemblance to the reconstructed images of their target templates.
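The core leakage exploited here can be sketched in a few lines of NumPy. This is a simplified illustration, not the paper's implementation: the embedding dimension, the number of queries, and the use of a plain least-squares solve are all assumptions. When the attacker observes exact inner-product scores for at least as many fake templates as the embedding has dimensions, the revealed scores form a linear system that determines the target template; the paper's Lagrange-multiplier formulation additionally handles constraints (such as a norm constraint) that this sketch omits.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 128   # assumed embedding dimension (hypothetical)
n = 165   # number of revealed (fake template, score) pairs

# Hypothetical target template: a unit-norm face embedding held by the system.
target = rng.standard_normal(d)
target /= np.linalg.norm(target)

# Random fake templates submitted by the attacker, and the
# cleartext inner-product scores revealed after each comparison.
fakes = rng.standard_normal((n, d))
scores = fakes @ target

# With n >= d exact scores, the linear system  fakes @ t = scores
# determines t; least squares recovers the target template.
recovered, *_ = np.linalg.lstsq(fakes, scores, rcond=None)

cosine = recovered @ target / (np.linalg.norm(recovered) * np.linalg.norm(target))
print(f"cosine similarity to target: {cosine:.6f}")
```

A recovered template with cosine similarity near 1.0 would be accepted by a verification system at any practical FMR threshold, which is exactly the failure mode the attack demonstrates.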