Large language models are meticulously aligned to be both helpful and
harmless. However, recent research points to a potential for overkill, in which
models refuse to answer even benign queries. In this paper, we investigate the
factors behind overkill by exploring how models handle and determine the safety of
queries. Our findings reveal the presence of shortcuts within models, which lead
to over-attention to harmful words such as 'kill', and show that prompts emphasizing
safety exacerbate overkill. Based on these insights, we introduce
Self-Contrastive Decoding (Self-CD), a training-free and model-agnostic
strategy for alleviating this phenomenon. We first extract such over-attention by
amplifying the difference in the model's output distributions when responding
to system prompts that either include or omit an emphasis on safety. Then we
determine the final next-token predictions by downplaying the over-attention
from the model via contrastive decoding. Empirical results indicate that our
method achieves an average reduction of 20\% in the refusal rate while
having almost no impact on safety.
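
Below is a minimal sketch of how the contrastive-decoding step described above could be realized on next-token logits. It assumes a standard contrastive combination with an amplification coefficient (here called alpha); the function name, the coefficient, and the exact combination rule are illustrative assumptions rather than the paper's precise formulation.

```python
import numpy as np

def self_contrastive_decode(logits_plain, logits_safety, alpha=1.0):
    """Illustrative sketch of a Self-CD-style step (assumed formulation).

    logits_plain  : next-token logits under an ordinary system prompt
    logits_safety : next-token logits under a safety-emphasizing system prompt
    alpha         : assumed coefficient controlling how strongly the
                    safety-induced over-attention is downplayed
    """
    # The gap between the two distributions serves as the "over-attention"
    # signal; subtracting an amplified copy of it downplays that signal.
    contrast = logits_safety - logits_plain
    adjusted = logits_plain - alpha * contrast

    # Softmax over the adjusted logits yields the final next-token distribution.
    probs = np.exp(adjusted - adjusted.max())
    return probs / probs.sum()

# Toy example over a 5-token vocabulary, where the safety-emphasizing prompt
# boosts a refusal-leaning token (index 2).
plain = np.array([2.0, 1.0, 0.5, 0.2, -1.0])
safety = np.array([0.5, 1.2, 2.5, 0.1, -1.0])
print(self_contrastive_decode(plain, safety, alpha=1.0))
```

With this assumed rule, tokens whose probability rises only when safety is emphasized are suppressed in the final distribution, while tokens ranked similarly under both prompts are left largely unchanged.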