Artificial intelligence (AI) has the potential to greatly improve society,
but as with any powerful technology, it comes with heightened risks and
responsibilities. Current AI research lacks a systematic discussion of how to
manage long-tail risks from AI systems, including speculative long-term risks.
Even with the potential benefits of AI in mind, there is concern that building
ever more intelligent and powerful AI systems could eventually produce systems
more powerful than we are; some liken this to playing with fire and speculate
that it could create existential risks (x-risks). To add
precision and ground these discussions, we provide a guide for how to analyze
AI x-risk, which consists of three parts: First, we review how systems can be
made safer today, drawing on time-tested concepts from hazard analysis and
systems safety that have been designed to steer large processes in safer
directions. Next, we discuss strategies for having long-term impacts on the
safety of future systems. Finally, we discuss a crucial concept for making AI
systems safer: improving the balance between safety and general capabilities.
We hope this document and the concepts and tools it presents serve as a useful
guide for understanding how to analyze AI x-risk.