A major challenge in the safety assessment of automated vehicles is to ensure
that risk for all traffic participants is as low as possible. A concept that is
becoming increasingly popular for testing in automated driving is
scenario-based testing. It is founded on the assumption that most time on the
road can be regarded as uncritical and that mainly the critical situations
contribute to the safety case. Criticality metrics are necessary to
automatically identify critical situations and scenarios in measurement
data. However, established metrics lack universality or a concept for metric
combination. In this work, we present a multidimensional evaluation model that,
based on conventional metrics, can evaluate scenes independently of the scene
type. Furthermore, we present two new, enhanced evaluation approaches, which
can additionally serve as universal metrics. The metrics we introduce are
then evaluated and discussed using real data from a motion dataset.