Impossibility Results in AI: A Survey
An impossibility theorem demonstrates that a particular problem, or set of problems, cannot be solved as described in the claim. Such theorems place limits on what artificial intelligence, especially superintelligent AI, can do. These results serve as guidelines, reminders, and warnings for researchers in AI safety, AI policy, and governance. They may also enable progress on some long-standing questions by formalizing theories in the framework of constraint satisfaction without committing to one option. In this paper, we categorize impossibility theorems applicable to the domain of AI into five classes: deduction, indistinguishability, induction, tradeoffs, and intractability. We find that certain theorems are too specific or rest on implicit assumptions that limit their application. We also add a new result (theorem) on the unfairness of explainability, the first explainability-related result in the induction category. We conclude that deductive impossibilities rule out 100% guarantees for security. Finally, we outline some promising directions in explainability, controllability, value alignment, ethics, and group decision-making that merit further investigation.
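The constraint-satisfaction framing mentioned above can be sketched in a toy example (not from the survey; the design choices and the constraint below are hypothetical, chosen only to illustrate enumerating all options that remain consistent with an impossibility-style restriction, without committing to one):

```python
from itertools import product

# Hypothetical binary design choices for an AI system.
choices = ["explainable", "fully_autonomous", "formally_verified"]

def satisfies(assignment):
    a = dict(zip(choices, assignment))
    # Illustrative impossibility-style constraint (assumed, not a real
    # theorem): a fully autonomous system cannot also be formally verified.
    if a["fully_autonomous"] and a["formally_verified"]:
        return False
    return True

# Enumerate every assignment consistent with the constraints, keeping
# all remaining options open rather than committing to a single one.
feasible = [dict(zip(choices, a))
            for a in product([True, False], repeat=3)
            if satisfies(a)]
print(len(feasible))  # 6 of the 8 combinations remain feasible
```

The point of the sketch is only the shape of the method: an impossibility result removes regions of the design space, and constraint satisfaction makes the surviving options explicit.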
Stochastic Mathematical Systems
We introduce a framework that can be used to model both mathematics and human
reasoning about mathematics. This framework involves stochastic mathematical
systems (SMSs), which are stochastic processes that generate pairs of
questions and associated answers (with no explicit referents). We use the SMS
framework to define normative conditions for mathematical reasoning, by
defining a "calibration" relation between a pair of SMSs. The first SMS is
the human reasoner, and the second is an "oracle" SMS that can be interpreted
as deciding whether the question-answer pairs of the reasoner SMS are valid. To
ground this notion, we understand the answers to questions given by this oracle to
be the answers that would be given by an SMS representing the entire
mathematical community in the infinite long run of the process of asking and
answering questions. We then introduce a slight extension of SMSs to allow us
to model both the physical universe and human reasoning about the physical
universe. We then define a slightly different calibration relation appropriate
for the case of scientific reasoning. In this case the first SMS represents a
human scientist predicting the outcome of future experiments, while the second
SMS represents the physical universe in which the scientist is embedded, with
the question-answer pairs of that SMS being specifications of the experiments
that will occur and the outcome of those experiments, respectively. Next we
derive conditions justifying two important patterns of inference in both
mathematical and scientific reasoning: i) the practice of increasing one's
degree of belief in a claim as one observes increasingly many lines of evidence
for that claim, and ii) abduction, the practice of inferring a claim's
probability of being correct from its explanatory power with respect to some
other claim that is already taken to hold for independent reasons.
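The setup above can be made concrete with a toy sketch (all names and mechanisms here are assumptions for illustration, not the paper's formal definitions): an SMS is modeled as a process emitting (question, answer) pairs, a crude stand-in for the calibration relation measures how often a reasoner's answers agree with an oracle's, and pattern (i) is shown as a Bayesian update in which belief rises with the number of independent lines of evidence.

```python
import random

def oracle_answer(q):
    # Toy "oracle" SMS: a fixed ground-truth answer for each question.
    return hash(q) % 2 == 0

def reasoner_sms(questions, error_rate, rng):
    # Toy reasoner SMS: emits (question, answer) pairs whose answers
    # are noisy copies of the oracle's answers.
    for q in questions:
        truth = oracle_answer(q)
        yield q, truth if rng.random() > error_rate else not truth

def calibration(pairs):
    # Crude stand-in for the calibration relation: the fraction of the
    # reasoner's answers that the oracle would also give.
    pairs = list(pairs)
    agree = sum(1 for q, a in pairs if a == oracle_answer(q))
    return agree / len(pairs)

def belief_after_evidence(prior, likelihood_ratio, n_lines):
    # Pattern (i): Bayesian update on n independent lines of evidence,
    # each with the same likelihood ratio P(e | claim) / P(e | not claim).
    odds = prior / (1 - prior) * likelihood_ratio ** n_lines
    return odds / (1 + odds)

rng = random.Random(0)
questions = [f"q{i}" for i in range(1000)]
score = calibration(reasoner_sms(questions, error_rate=0.1, rng=rng))
print(round(score, 2))  # close to 0.9 for a 10% error rate
beliefs = [belief_after_evidence(0.5, 2.0, n) for n in (0, 1, 5)]
print([round(b, 3) for b in beliefs])  # belief rises with more evidence
```

With a likelihood ratio of 2 per line of evidence, belief in the claim climbs from 0.5 with no evidence toward 1 as independent lines accumulate, which is the qualitative behavior the paper's condition (i) is meant to justify.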
On Controllability of Artificial Intelligence
The invention of artificial general intelligence is predicted to cause a shift in the trajectory of human civilization. In order to reap the benefits and avoid the pitfalls of such powerful technology, it is important to be able to control it. However, the possibility of controlling artificial general intelligence, and its more advanced version, superintelligence, has not been formally established. In this paper, we present arguments, as well as supporting evidence from multiple domains, indicating that advanced AI cannot be fully controlled. The consequences of the uncontrollability of AI are discussed with respect to the future of humanity, research on AI, and AI safety and security. This paper can serve as a comprehensive reference on the topic of uncontrollability.