
    On Cognitive Preferences and the Plausibility of Rule-based Models

    It is conventional wisdom in machine learning and data mining that logical models such as rule sets are more interpretable than other models, and that among such rule-based models, simpler models are more interpretable than more complex ones. In this position paper, we question this latter assumption by focusing on one particular aspect of interpretability, namely the plausibility of models. Roughly speaking, we equate the plausibility of a model with the likelihood that a user accepts it as an explanation for a prediction. In particular, we argue that, all other things being equal, longer explanations may be more convincing than shorter ones, and that the predominant bias for shorter models, which is typically necessary for learning powerful discriminative models, may not be suitable when it comes to user acceptance of the learned models. To that end, we first recapitulate evidence for and against this postulate, and then report the results of an evaluation in a crowd-sourcing study based on about 3,000 judgments. The results do not reveal a strong preference for simple rules, whereas we can observe a weak preference for longer rules in some domains. We then relate these results to well-known cognitive biases such as the conjunction fallacy, the representativeness heuristic, or the recognition heuristic, and investigate their relation to rule length and plausibility.
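
    Hypothetical example: the abstract's appeal to the conjunction fallacy rests on the fact that adding conditions to a rule can only narrow its coverage, even though users may judge the longer, more specific rule as more plausible. The short Python sketch below illustrates this point with made-up data and rules; none of it comes from the paper's study.

        # Hypothetical patient records: (age, smoker, cough)
        data = [(63, True, True), (45, False, True), (70, True, False),
                (52, True, True), (38, False, False), (66, True, True)]

        def coverage(rule, data):
            """Fraction of examples satisfying every condition of the rule."""
            return sum(all(cond(x) for cond in rule) for x in data) / len(data)

        short_rule = [lambda x: x[1]]          # "smoker"
        long_rule = [lambda x: x[1],           # "smoker
                     lambda x: x[0] > 50,      #  AND age > 50
                     lambda x: x[2]]           #  AND cough"

        print(coverage(short_rule, data))  # 0.67 -- the conjunction can only be
        print(coverage(long_rule, data))   # 0.50 -- as frequent or rarer, yet the
                                           # longer rule may be rated more plausible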

    A solution to the hyper complex, cross domain reality of artificial intelligence: The hierarchy of AI

    Artificial Intelligence (AI) is an umbrella term used to describe machine-based forms of learning. This can encapsulate anything from Siri, Apple's smartphone-based assistant, to Tesla's autonomous vehicles (self-driving cars). At present, there are no set criteria for classifying AI, the implications of which include public uncertainty, corporate scepticism, diminished confidence, insufficient funding and limited progress. Substantial current challenges with AI, such as combinatorially large search spaces, prediction errors against ground-truth values, and the use of quantum error correction strategies, are discussed, in addition to fundamental data issues across collection, sampling error and quality. The concept of cross realms and domains used to inform AI is also considered, as is the confusing range of current AI labels. This paper aims to provide a more consistent form of classification, to be used by institutions and organisations alike as they endeavour to make AI part of their practice, and in turn to promote transparency and increase trust. This has been done through primary research, including a panel of data scientists and experts in the field, and through a literature review of existing research. The authors propose a model solution in the form of the Hierarchy of AI.

    A computational analysis of general intelligence tests for evaluating cognitive development

    The progression in several cognitive tests for the same subjects at different ages provides valuable information about their cognitive development. One question that has caught recent interest is whether the same approach can be used to assess the cognitive development of artificial systems. In particular, can we assess whether the fluid or crystallised intelligence of an artificial cognitive system is changing during its cognitive development as a result of acquiring more concepts? In this paper, we address several IQ test problems (odd-one-out problems, Raven's Progressive Matrices and Thurstone's letter series) with a general learning system that is not specifically designed to solve intelligence tests. The goal is to better understand the role of the basic cognitive operational constructs (such as identity, difference, order, counting, logic, etc.) that are needed to solve these intelligence test problems, and to serve as a proof of concept for evaluation in other developmental problems. From here, we gain some insights into the characteristics and usefulness of these tests and how careful we need to be when applying human test problems to assess the abilities and cognitive development of robots and other artificial cognitive systems. This work has been partially supported by the EU (FEDER) and the Spanish MINECO under grants TIN 2015-69175-C4-1-R and TIN 2013-45732-C4-1-P, and by Generalitat Valenciana under grant PROMETEOII/2015/013. Martínez-Plumed, F.; Ferri Ramírez, C.; Hernández-Orallo, J.; Ramírez Quintana, M.J. (2017). A computational analysis of general intelligence tests for evaluating cognitive development. Cognitive Systems Research, 43, 100-118. https://doi.org/10.1016/j.cogsys.2017.01.006
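
    Hypothetical example: as an illustration of the kind of Thurstone letter-series item mentioned above, the sketch below (my own illustration, not the general learning system described in the paper) predicts the next letter of a series that advances by a constant alphabetical step.

        import string

        def next_letter(series: str) -> str:
            """Predict the next letter of a series with a constant alphabetical step."""
            pos = [string.ascii_lowercase.index(c) for c in series.lower()]
            steps = {b - a for a, b in zip(pos, pos[1:])}
            if len(steps) != 1:
                raise ValueError("series does not follow a single constant step")
            return string.ascii_lowercase[(pos[-1] + steps.pop()) % 26]

        print(next_letter("aceg"))  # -> 'i' (constant step of 2)
        print(next_letter("bdfh"))  # -> 'j'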