53 research outputs found

    Comparing humans and AI agents

    Comparing humans and machines is one important source of information about both machine and human strengths and limitations. Most of these comparisons and competitions are performed on rather specific tasks such as calculus, speech recognition, translation, games, etc. The information conveyed by these experiments is limited, since it shows only that machines are much better than humans in some domains and worse in others; in fact, CAPTCHAs exploit this very asymmetry. However, there have only been a few proposals of general intelligence tests in the last two decades and, to our knowledge, just a couple of implementations and evaluations. In this paper, we implement one of the most recent test proposals, devise an interface for humans, and use it to compare the intelligence of humans and Q-learning, a popular reinforcement learning algorithm. The results are highly informative in many ways, raising many questions about the use of a (universal) distribution of environments, the role of measuring knowledge acquisition, and other issues such as speed, duration of the test, and scalability.

    We thank the anonymous reviewers for their helpful comments. We also thank José Antonio Martín H. for helping us with several issues about the RL competition, RL-Glue and reinforcement learning in general. We are also grateful to all the subjects who took the test. We also thank the funding from the Spanish MEC and MICINN for projects TIN2009-06078-E/TIN, Consolider-Ingenio CSD2007-00022 and TIN2010-21062-C02, for MEC FPU grant AP2006-02323, and Generalitat Valenciana for Prometeo/2008/051.

    Insa-Cabrera, J.; Dowe, D.L.; España-Cubillo, S.; Hernández-Lloreda, M.V.; Hernández-Orallo, J. (2011). Comparing humans and AI agents. In: Artificial General Intelligence. Springer Verlag (Germany), 6830:122-132. https://doi.org/10.1007/978-3-642-22887-2_13
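    Since Q-learning is the reference AI agent in this comparison, a minimal sketch of tabular Q-learning is given below. The toy chain environment, the parameter values (ALPHA, GAMMA, EPSILON) and the episode settings are illustrative assumptions for this sketch only; they are not the environment class, interface or configuration used in the paper's test.

```python
# A minimal sketch of tabular Q-learning on a toy chain environment.
# The environment and all parameter values here are illustrative
# assumptions, not the setup used in the paper's test.
import random
from collections import defaultdict

ALPHA = 0.1     # learning rate (assumed)
GAMMA = 0.9     # discount factor (assumed)
EPSILON = 0.1   # exploration rate (assumed)
ACTIONS = [-1, +1]   # move left / move right along the chain
N_STATES = 5         # states 0..4; reward only at the rightmost state


def step(state, action):
    """Toy dynamics: move along the chain, reward 1 only at the right end."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward


def choose_action(q, state):
    """Epsilon-greedy action selection over the Q-table."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[(state, a)])


def train(episodes=500, max_steps=20):
    q = defaultdict(float)   # Q-table: (state, action) -> value estimate
    for _ in range(episodes):
        state = 0
        for _ in range(max_steps):
            action = choose_action(q, state)
            next_state, reward = step(state, action)
            # Standard Q-learning update: nudge Q(s, a) towards
            # r + gamma * max_a' Q(s', a').
            best_next = max(q[(next_state, a)] for a in ACTIONS)
            q[(state, action)] += ALPHA * (reward + GAMMA * best_next
                                           - q[(state, action)])
            state = next_state
            if state == N_STATES - 1:   # treat the rewarded end as terminal
                break
    return q


if __name__ == "__main__":
    q_table = train()
    print({k: round(v, 2) for k, v in sorted(q_table.items())})
```

    With these settings, the learned Q-values should come to favour the +1 action in every state, reflecting the single rewarded end of the chain.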

    Compression and intelligence: social environments and communication

    Compression has been advocated as one of the principles which pervades inductive inference and prediction, and from there it has also been recurrent in definitions and tests of intelligence. However, this connection is less explicit in new approaches to intelligence. In this paper, we advocate that the notion of compression can appear again in definitions and tests of intelligence through the concepts of 'mind-reading' and 'communication' in the context of multi-agent systems and social environments. Our main position is that two-part Minimum Message Length (MML) compression is not only more natural and effective for agents with limited resources, but it is also much more appropriate for agents in (co-operative) social environments than one-part compression schemes, particularly those using a posterior-weighted mixture of all available models following Solomonoff's theory of prediction. We think that the realisation of these differences is important to avoid a naive view of 'intelligence as compression' in favour of a better understanding of how, why and where (one-part or two-part, lossless or lossy) compression is needed.

    We thank the anonymous reviewers for their helpful comments, and we thank Kurt Kleiner for some challenging and ultimately very helpful questions in the broad area of this work. We also acknowledge the funding from the Spanish MEC and MICINN for projects TIN2009-06078-E/TIN, Consolider-Ingenio CSD2007-00022 and TIN2010-21062-C02, and Generalitat Valenciana for Prometeo/2008/051.

    Dowe, D.L.; Hernández-Orallo, J.; Das, P.K. (2011). Compression and intelligence: social environments and communication. In: Artificial General Intelligence. Springer Verlag (Germany), 6830:204-211. https://doi.org/10.1007/978-3-642-22887-2_21
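    To make the one-part versus two-part contrast concrete, the block below gives a standard textbook rendering of the two schemes; the symbols (H for a hypothesis, D for the data, L for code length) are our own sketch of the general idea, not the formalism of the paper.

```latex
% Two-part MML: transmit one chosen hypothesis H, then the data D encoded
% with the help of H; inference selects the H minimising the total length.
\[
  H_{\mathrm{MML}} \;=\; \operatorname*{arg\,min}_{H}\, \bigl[\, L(H) + L(D \mid H) \,\bigr]
\]
% One-part, Solomonoff-style prediction instead mixes over all hypotheses,
% weighted by their complexity, without committing to a single model.
\[
  P(D) \;\propto\; \sum_{H} 2^{-L(H)}\, P(D \mid H)
\]
```

    The structural difference is that the two-part form commits to a single explicit model H that an agent could, in principle, name and communicate, whereas the mixture never isolates one; this is the kind of contrast the abstract draws for co-operative social environments.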

    Unity between God and mind? A study on the relationship between panpsychism and pantheism

    The author thanks the John Templeton Foundation and The Pantheism and Panentheism Project for sponsoring this paper.

    A number of contemporary philosophers have suggested that the recent revival of interest in panpsychism within philosophy of mind could reinvigorate a pantheistic philosophy of religion. This project explores whether the combination and individuation problems, which have dominated recent scholarship within panpsychism, can aid the pantheist's articulation of a God/Universe Unity. Constitutive holistic panpsychism is seen to be the only type of panpsychism suited to aid pantheism in articulating this type of unity. There are currently no well-developed solutions to the individuation problem for this type of panpsychism. Moreover, the gestures towards a solution appear costly to the religious significance of pantheism. This article concludes that any hope that contemporary panpsychism might aid pantheists in articulating Unity is premature and possibly misplaced.

    The Problem of Religious Evil: Does Belief in God Cause Evil?

    Daniel Kodaj has recently developed a pro-atheistic argument that he calls "the problem of religious evil." The first premise of this argument is "belief in God causes evil." Although this idea that belief in God causes evil is widely accepted, certainly in the secular West, it is sufficiently problematic as to be unsuitable as a basis for an argument for atheism, as Kodaj seeks to use it. In this paper I shall highlight the problems inherent in it in three ways: by considering whether it is reasonable to say that "belief in God" causes evil; whether it is reasonable to say that belief in God "causes" evil; and whether it is reasonable to say that belief in God causes "evil." In each case I will argue that it is problematic to make such claims, and accordingly I will conclude that the premise "belief in God causes evil" is unacceptable as it stands, and consequently is unable to ground Kodaj's pro-atheistic argument.

    Individual differences in the time course of inferential processing.

    No full text

    Hume and the argument for biological design

    No full text