
Can’t We Just Talk? Commentary on Arel’s “Threat”

By William J. Rapaport


Arel argues that:

1. artificial general intelligence (AGI) “is inevitable” (§1.1);

2. techniques including a “fusion between deep learning, ... a scalable situation inference engine, and reinforcement learning [RL] as a decision-making system may hold the key to place us on the path to AGI” (§2); and

3. “a potentially devastating conflict between a reward-driven AGI system and the human race ... is inescapable, given the assumption that an RL-based AGI will be allowed to evolve” (§2).

Why “inescapable”? If I understand Arel correctly, it is a mathematical certainty:

    [F]rom equations (2) and (4) [Arel 2012, §§4.1, 6.1, the details of which are irrelevant to my argument], it follows that the agent continuously attempts to maximize its “positive” surprises [i.e., “its wellbeing”] ... while minimizing “negative” surprises. This process ... is unbounded. ... [O]nce such a bonus is received on a regular basis, it becomes the new norm and no longer yields the same level of satisfaction. This is the core danger in designing systems that are driven by rewards and have large cognitive ...
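The adaptation mechanism the quote describes, that a bonus received regularly "becomes the new norm and no longer yields the same level of satisfaction," can be sketched as a reward loop with an adaptive baseline. This is a toy illustration, not Arel's actual equations (2) and (4); the adaptation rate `ALPHA` and the constant-bonus reward stream are invented for the example.

```python
# Toy sketch of reward adaptation (NOT Arel's model): "surprise" is the gap
# between the received reward and a baseline that tracks recent rewards.
# ALPHA and the reward stream below are illustrative assumptions.

ALPHA = 0.2  # hypothetical baseline adaptation rate


def surprises(rewards, alpha=ALPHA):
    """Return the surprise felt at each step.

    The baseline drifts toward recent rewards, so a bonus delivered
    on a regular basis stops being surprising: it becomes the norm.
    """
    baseline = 0.0
    out = []
    for r in rewards:
        out.append(r - baseline)             # positive if r exceeds the current norm
        baseline += alpha * (r - baseline)   # the bonus becomes "the new norm"
    return out


# A constant bonus of 10 at every step: the first surprise is large,
# and each later one shrinks geometrically toward zero.
s = surprises([10] * 20)
print(s[0], s[-1])
```

Under this sketch an agent that maximizes surprise, rather than raw reward, is pushed toward ever-larger bonuses, which is one way to read the "unbounded" process in the quoted passage.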

Year: 2014