Here we examine the paperclip apocalypse concern for artificial general
intelligence (or AGI) whereby a superintelligent AI with a simple goal (e.g.,
producing paperclips) accumulates power so that all resources are devoted
towards that simple goal and are unavailable for any other use. We provide
conditions under which a paperclip apocalypse can arise but also show that, under
certain architectures for recursive self-improvement of AIs, a paperclip
AI may refrain from allowing power capabilities to be developed. The reason is
that such developments pose the same control problem for the AI as they do for
humans (over AIs) and hence threaten to deprive it of resources for its
primary goal.