Logic, self-awareness and self-improvement: The metacognitive loop and the problem of brittleness
This essay describes a general approach to building perturbation-tolerant autonomous systems, based on the conviction that artificial agents should be able to notice when something is amiss, assess the anomaly, and guide a solution into place. We call this basic strategy of self-guided learning the metacognitive loop; it involves the system monitoring, reasoning about, and, when necessary, altering its own decision-making components. In this essay, we (a) argue that equipping agents with a metacognitive loop can help to overcome the brittleness problem, (b) detail the metacognitive loop and its relation to our ongoing work on time-sensitive commonsense reasoning, (c) describe specific, implemented systems whose perturbation tolerance was improved by adding a metacognitive loop, and (d) outline both short-term and long-term research agendas.
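The note-assess-guide cycle described above can be sketched in a few lines. The function names and the toy drifting data stream are assumptions made for illustration, not the authors' implementation:

```python
def metacognitive_loop(observe, expectation, tolerance=1.0, steps=20):
    """Monitor observations, note anomalies, and revise expectations."""
    anomalies = 0
    for _ in range(steps):
        value = observe()
        error = abs(value - expectation)
        if error > tolerance:      # note: something is amiss
            anomalies += 1         # assess: record the anomaly
            expectation = value    # guide: alter the faulty component
    return expectation, anomalies

# A stream whose underlying value jumps from ~0 to ~5 partway through.
stream = iter([0.1, 0.2, 5.0, 5.1, 5.2, 4.9] + [5.0] * 14)
final, n = metacognitive_loop(lambda: next(stream), expectation=0.0)
```

The point of the sketch is the structure: the same component that acts (the expectation) is itself an object the loop monitors and repairs.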
Default reasoning and neural networks
In this dissertation a formalisation of nonmonotonic reasoning, namely Default logic, is discussed. A proof theory for Default logic and a variant of Default logic - Prioritised Default logic - is presented. We also pursue an investigation into the relationship between default reasoning and making inferences in a neural network. The inference problem shifts from the logical problem in Default logic to the optimisation problem in neural networks, in which maximum consistency is aimed at. The inference is realised as an adaptation process that identifies and resolves conflicts between existing knowledge about the relevant world and external information. Knowledge and data are transformed into constraint equations, and the nodes in the network represent propositions and constraints. The violation of constraints is formulated in terms of an energy function. The Hopfield network is shown to be suitable for modelling optimisation problems and default reasoning.
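A toy Hopfield-style settling process illustrates the idea sketched in this abstract: nodes stand for propositions, weights encode constraint (in)compatibilities, and violated constraints raise the energy E(s) = -1/2 sᵀWs - bᵀs. The weights, biases, and propositions below are invented for illustration, not taken from the dissertation:

```python
import numpy as np

W = np.array([[0.0, -2.0],
              [-2.0, 0.0]])   # propositions p and q conflict
b = np.array([1.0, 0.5])      # default evidence favours p over q

def energy(s):
    """Energy rises when incompatible propositions are held together."""
    return -0.5 * s @ W @ s - b @ s

def settle(s, sweeps=10):
    """Asynchronous updates: each node takes the value that lowers E."""
    for _ in range(sweeps):
        for i in range(len(s)):
            s[i] = 1.0 if W[i] @ s + b[i] > 0 else 0.0
    return s

s = settle(np.array([0.0, 0.0]))   # settles to p=1, q=0
```

The conflict between p and q is resolved in favour of the better-supported default, which is the "maximum consistency" behaviour the abstract describes.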
Current and Future Challenges in Knowledge Representation and Reasoning
Knowledge Representation and Reasoning is a central, longstanding, and active area of Artificial Intelligence. Over the years it has evolved significantly; more recently it has been challenged and complemented by research in areas such as machine learning and reasoning under uncertainty. In July 2022 a Dagstuhl Perspectives workshop was held on Knowledge Representation and Reasoning. The goal of the workshop was to describe the state of the art in the field, including its relation with other areas, its shortcomings and strengths, together with recommendations for future progress. We developed this manifesto based on the presentations, panels, working groups, and discussions that took place at the Dagstuhl Workshop. It is a declaration of our views on Knowledge Representation: its origins, goals, milestones, and current foci; its relation to other disciplines, especially to Artificial Intelligence; and on its challenges, along with key priorities for the next decade.
Appraisal Using Generalized Additive Models
Many of the results from real estate empirical studies depend upon using a correct functional form for their validity. Unfortunately, common parametric statistical tools cannot easily control for the possibility of misspecification. Recently, semiparametric estimators such as generalized additive models (GAMs) have arisen which can automatically control for additive (in price) or multiplicative (in ln(price)) nonlinear relations among the independent and dependent variables. As the paper shows, GAMs can empirically outperform naive parametric and polynomial models in ex-sample predictive behavior. Moreover, GAMs have well-developed statistical properties and can suggest useful transformations in parametric settings.
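The additive structure this abstract describes can be fitted by classic backfitting. The sketch below uses cubic-polynomial smoothers standing in for splines, and the housing variables and data-generating process are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
sqft = rng.uniform(800, 3000, n)
age = rng.uniform(0, 80, n)
# True relationship is nonlinear in both covariates.
price = 50 * np.sqrt(sqft) - 0.02 * (age - 40) ** 2 + rng.normal(0, 5, n)

def smooth(x, r):
    """One component smoother: cubic polynomial fit of residual r on x."""
    return np.polyval(np.polyfit(x, r, 3), x)

f_sqft = np.zeros(n)
f_age = np.zeros(n)
alpha = price.mean()
for _ in range(20):                      # backfitting iterations
    f_sqft = smooth(sqft, price - alpha - f_age)
    f_sqft -= f_sqft.mean()              # keep components centered
    f_age = smooth(age, price - alpha - f_sqft)
    f_age -= f_age.mean()

resid = price - alpha - f_sqft - f_age
```

Because each component is fitted against the residual of the others, the model automatically recovers the additive nonlinearities that a naive linear specification would misattribute to noise.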
The Eigenlearning Framework: A Conservation Law Perspective on Kernel Regression and Wide Neural Networks
We derive a simple unified framework giving closed-form estimates for the test risk and other generalization metrics of kernel ridge regression (KRR). Relative to prior work, our derivations are greatly simplified and our final expressions are more readily interpreted. These improvements are enabled by our identification of a sharp conservation law which limits the ability of KRR to learn any orthonormal basis of functions. Test risk and other objects of interest are expressed transparently in terms of our conserved quantity evaluated in the kernel eigenbasis. We use our improved framework to: i) provide a theoretical explanation for the "deep bootstrap" of Nakkiran et al. (2020), ii) generalize a previous result regarding the hardness of the classic parity problem, iii) fashion a theoretical tool for the study of adversarial robustness, and iv) draw a tight analogy between KRR and a well-studied system in statistical physics.
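The shrinkage behind this eigenbasis view can be seen numerically: when KRR is trained on an eigenvector of its own kernel matrix, the prediction is that eigenvector scaled by λᵢ/(λᵢ + δ), and each such factor lies below 1, so their sum over all modes is bounded by the number of samples. The kernel, bandwidth, and grid below are toy choices, not the paper's setup:

```python
import numpy as np

x = np.linspace(0, 1, 40)
K = np.exp(-(x[:, None] - x[None, :]) ** 2 / 0.1)   # RBF kernel matrix
lam, V = np.linalg.eigh(K)                           # kernel eigenbasis

ridge = 0.1
y = V[:, -1]                 # top eigenvector as the target function
# KRR prediction on the training points: K (K + delta I)^{-1} y
y_hat = K @ np.linalg.solve(K + ridge * np.eye(len(x)), y)

# Each eigenmode is shrunk by its own factor lam_i / (lam_i + ridge).
shrinkage = lam[-1] / (lam[-1] + ridge)
```

Since every shrinkage factor is strictly less than 1, the sum of the factors over all modes cannot exceed the number of training points, which is the flavour of the budget-like conservation law the abstract refers to.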
Neural-Symbolic Argumentation Mining: An Argument in Favor of Deep Learning and Reasoning
Deep learning is bringing remarkable contributions to the field of argumentation mining, but the existing approaches still fall short of performing advanced reasoning tasks. In this position paper, we posit that neural-symbolic and statistical relational learning could play a crucial role in the integration of symbolic and sub-symbolic methods to achieve this goal.