A fundamental problem in intrusion detection is what metric(s) can be used to objectively evaluate an
intrusion detection system (IDS) in terms of its ability to correctly classify events as normal or intrusion. In
this paper, we provide an in-depth analysis of existing metrics. We argue that the lack of a single unified
metric makes it difficult to fine-tune and evaluate an IDS. The intrusion detection process can be examined
from an information-theoretic point of view. Intuitively, we should have less uncertainty about the input
(event data) given the IDS output (alarm data). We thus propose a new metric called Intrusion Detection
Capability, C_ID, defined as the ratio of the mutual information between the IDS input and output to the
entropy of the input. C_ID has the desired properties that: (1) it naturally takes into account all the
important aspects of detection capability, i.e., true positive rate, false positive rate, positive predictive
value, negative predictive value, and base rate; (2) it objectively provides an intrinsic measure of intrusion
detection capability; and (3) it is sensitive to IDS operation parameters. We propose that C_ID is the appropriate performance measure
to maximize when fine-tuning an IDS. The operation point thus obtained is the best the IDS can achieve in
terms of its intrinsic ability to classify input data. Using numerical examples as well as experiments with
actual IDSs on various datasets, we show that C_ID lets us choose the best (optimal) operating point for
an IDS and objectively compare different IDSs.
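As a sketch of the definition above, C_ID for a binary model can be computed from the base rate, true positive rate, and false positive rate: model the IDS input X as a Bernoulli variable (intrusion with probability equal to the base rate) and the output Y as the alarm, then take I(X;Y)/H(X). The function names below are illustrative, not from the paper:

```python
import math

def binary_entropy(p):
    """Entropy of a Bernoulli(p) variable in bits; h(0) = h(1) = 0."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def cid(base_rate, tpr, fpr):
    """Intrusion detection capability C_ID = I(X;Y) / H(X), where
    X = 1 with probability base_rate (an intrusion occurs) and
    Y = 1 is an alarm, with P(Y=1|X=1) = tpr and P(Y=1|X=0) = fpr."""
    h_x = binary_entropy(base_rate)
    if h_x == 0.0:
        raise ValueError("base rate must lie strictly between 0 and 1")
    # Marginal alarm probability and output entropy H(Y)
    p_alarm = base_rate * tpr + (1 - base_rate) * fpr
    h_y = binary_entropy(p_alarm)
    # Conditional entropy H(Y|X), averaged over the two input classes
    h_y_given_x = base_rate * binary_entropy(tpr) + \
        (1 - base_rate) * binary_entropy(fpr)
    # Mutual information I(X;Y) = H(Y) - H(Y|X)
    return (h_y - h_y_given_x) / h_x
```

A perfect detector (TPR = 1, FPR = 0) gives C_ID = 1, a detector whose alarms are independent of the input (TPR = FPR) gives C_ID = 0, and at a fixed base rate and TPR, lowering the FPR raises C_ID, matching the intuition that the output should reduce uncertainty about the input.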