Anomaly detectors are widely used in industrial production to detect and
localize unknown defects in query images. These detectors are trained on
nominal images and have shown success in distinguishing anomalies from most
normal samples. However, hard-nominal examples lie scattered and far from most
normalities, and are therefore often mistaken for anomalies by existing anomaly
detectors. To address this problem, we propose a simple yet efficient method:
\textbf{H}ard Nominal \textbf{E}xample-aware \textbf{T}emplate \textbf{M}utual
\textbf{M}atching (HETMM). Specifically, \textit{HETMM} aims to construct a
robust prototype-based decision boundary, which can precisely distinguish
between hard-nominal examples and anomalies, yielding lower false-positive and
missed-detection rates. Moreover, \textit{HETMM} mutually searches for
anomalies in two directions between queries and the template set, and can thus
capture logical anomalies. This is a significant advantage over most anomaly
detectors, which frequently fail to detect logical anomalies.
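As a rough illustration of the two matching directions (the notation, the
distance $d$, and the aggregation below are our own assumptions, not the exact
HETMM formulation): given a query feature map $\{q_p\}$ and $K$ aligned nominal
template feature maps $\{t^{(k)}_p\}$, a forward score flags query pixels that
have no close template counterpart, whereas a backward score flags template
pixels that have no close counterpart anywhere in the query,
\begin{equation*}
  a_{\mathrm{fwd}}(p) = \min_{k}\, d\!\left(q_p,\, t^{(k)}_p\right),
  \qquad
  a_{\mathrm{bwd}}^{(k)}(p) = \min_{p'}\, d\!\left(t^{(k)}_p,\, q_{p'}\right),
\end{equation*}
so that structural defects surface in the forward map, while missing or
misplaced components (logical anomalies) surface in the backward map once it is
aggregated over the template set.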
Additionally, to meet speed-accuracy demands, we further propose
\textbf{P}ixel-level \textbf{T}emplate \textbf{S}election (PTS) to streamline
the original template set. \textit{PTS} selects cluster centres and
hard-nominal examples to form a tiny template set while maintaining the
original decision boundaries, as sketched below.
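The following is a minimal sketch of this selection idea (the function name,
shapes, and hyper-parameters are assumptions for illustration, not the official
PTS procedure): for each pixel location, cluster the nominal features, keep the
cluster centres, and additionally keep the features lying farthest from their
assigned centre as hard-nominal examples.

\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans

def pts_select(features, n_centres=8, n_hard=2):
    """Illustrative pixel-level template selection (a sketch, not the
    official PTS).

    features: (N, C) array of nominal features gathered at one pixel
              location from N template images.
    Returns a reduced subset keeping both cluster-centre prototypes
    and the hardest (most outlying) nominal examples.
    """
    km = KMeans(n_clusters=n_centres, n_init=10).fit(features)

    # Keep the real sample closest to each cluster centre (prototypes).
    centre_ids = [
        np.argmin(np.linalg.norm(features - c, axis=1))
        for c in km.cluster_centers_
    ]

    # Keep hard-nominal examples: samples farthest from their centre.
    dists = np.linalg.norm(
        features - km.cluster_centers_[km.labels_], axis=1
    )
    hard_ids = np.argsort(dists)[-n_hard:]

    keep = np.unique(np.concatenate([centre_ids, hard_ids]))
    return features[keep]
\end{verbatim}

Retaining both the centres and the outlying nominal samples is what lets such a
tiny set approximate the decision boundary spanned by the full template set.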
Comprehensive experiments on five real-world datasets demonstrate that our
methods outperform existing methods while running at real-time inference speed.
Furthermore, \textit{HETMM} can be hot-updated by inserting novel samples,
which may promptly address some incremental-learning issues.