247,601 research outputs found

    On potential cognitive abilities in the machine kingdom

    The final publication is available at Springer via http://dx.doi.org/10.1007/s11023-012-9299-6
    Animals, including humans, are usually judged on what they could become, rather than what they are. Many physical and cognitive abilities in the ‘animal kingdom’ are only acquired (to a given degree) when the subject reaches a certain stage of development, a process that can be accelerated or spoilt depending on the environment, training or education. The term ‘potential ability’ usually refers to how quickly and how likely the ability is to be attained. In principle, things should be no different for the ‘machine kingdom’. While machines can be characterised by a set of cognitive abilities, and measuring them is already a big challenge, known as ‘universal psychometrics’, a more informative, and yet more challenging, goal would be to also determine the potential cognitive abilities of a machine. In this paper we investigate the notion of potential cognitive ability for machines, focussing especially on universality and intelligence. We consider several machine characterisations (non-interactive and interactive) and give definitions for each case, considering permanent and temporal potentials. From these definitions, we analyse the relation between some potential abilities, we bring out the dependency on the environment distribution, and we suggest some ideas about how potential abilities can be measured. Finally, we also analyse the potential of environments at different levels and briefly discuss whether machines should be designed to be intelligent or potentially intelligent.
    Hernández-Orallo, J., & Dowe, D. L. (2013). On potential cognitive abilities in the machine kingdom. Minds and Machines, 23(2), 179–210. https://doi.org/10.1007/s11023-012-9299-6
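    The abstract's point that a measured ability depends on the chosen environment distribution can be made concrete with a minimal sketch, not taken from the paper: `performance` is a hypothetical per-episode scoring function, and the environments and weights stand in for whatever distribution the evaluator chooses.

```python
def expected_ability(agent, environments, weights, performance, trials=100):
    """Estimate an agent's ability as a weighted average of its performance
    over a distribution of environments.

    `environments` and `weights` describe the environment distribution;
    `performance(agent, env)` is a hypothetical scoring function returning
    a reward in [0, 1] for a single episode.
    """
    total = 0.0
    for env, weight in zip(environments, weights):
        # Monte Carlo estimate of the agent's average score in this environment.
        avg = sum(performance(agent, env) for _ in range(trials)) / trials
        total += weight * avg
    return total
```

    A 'potential' variant of the same measurement would additionally grant the agent a training phase in each environment before scoring, which is roughly where the distinction between actual and potential abilities enters.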

    Allocation in Practice

    How do we allocate scarce resources? How do we fairly allocate costs? These are two pressing challenges facing society today. I discuss two recent projects at NICTA concerning resource and cost allocation. In the first, we have been working with FoodBank Local, a social startup working in collaboration with food bank charities around the world to optimise the logistics of collecting and distributing donated food. Before we can distribute this food, we must decide how to allocate it to different charities and food kitchens. This gives rise to a fair division problem with several new dimensions, rarely considered in the literature. In the second, we have been looking at cost allocation within the distribution network of a large multinational company. This also has several new dimensions rarely considered in the literature.
    Comment: To appear in Proc. of the 37th edition of the German Conference on Artificial Intelligence (KI 2014), Springer LNCS
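    As a rough illustration of the kind of fair division problem described above (not the authors' actual method), a minimal sketch of a greedy max-min allocation of donated items to charities, assuming each charity values items only by weight:

```python
def greedy_maxmin_allocation(items, charities):
    """Allocate indivisible items (name, weight) to charities one at a time,
    always giving the next-largest item to the currently worst-off charity.
    A simple heuristic for egalitarian (max-min) fairness; real settings add
    constraints such as perishability, transport capacity and dietary needs.
    """
    allocation = {c: [] for c in charities}
    totals = {c: 0.0 for c in charities}
    for name, weight in sorted(items, key=lambda x: -x[1]):
        worst_off = min(charities, key=lambda c: totals[c])
        allocation[worst_off].append(name)
        totals[worst_off] += weight
    return allocation

print(greedy_maxmin_allocation(
    [("rice", 25.0), ("pasta", 10.0), ("canned soup", 8.0), ("bread", 4.0)],
    ["kitchen A", "kitchen B"]))
```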

    Evaluation of second-level inference in fMRI analysis

    We investigate the impact of decisions in the second-level (i.e., over subjects) inferential process in functional magnetic resonance imaging on (1) the balance between false positives and false negatives and (2) the data-analytical stability, both proxies for the reproducibility of results. Second-level analysis based on a mass univariate approach typically consists of three phases. First, one proceeds via a general linear model for a test image that consists of pooled information from different subjects; we evaluate models that take first-level (within-subjects) variability into account and models that do not. Second, one proceeds via inference based on parametric assumptions or via permutation-based inference. Third, we evaluate three commonly used procedures to address the multiple testing problem: familywise error rate correction, False Discovery Rate (FDR) correction, and a two-step procedure with a minimal cluster size. Based on a simulation study and real data, we find that the two-step procedure with a minimal cluster size gives the most stable results, followed by familywise error rate correction. FDR correction gives the most variable results, for both permutation-based and parametric inference. Modeling the subject-specific variability yields a better balance between false positives and false negatives when using parametric inference.
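    As an illustrative sketch (not the paper's actual pipeline), a mass-univariate second-level analysis with parametric inference can be approximated by a voxel-wise one-sample t-test on the subjects' contrast images followed by Benjamini-Hochberg FDR correction; the group size, effect size and alpha level below are assumptions.

```python
import numpy as np
from scipy import stats

def second_level_fdr(contrast_imgs, alpha=0.05):
    """Mass-univariate second-level inference.

    contrast_imgs: array of shape (n_subjects, n_voxels) holding each
    subject's first-level contrast estimates. Returns the t-map and a
    boolean mask of voxels surviving Benjamini-Hochberg FDR correction.
    """
    # Voxel-wise one-sample t-test against zero (parametric inference).
    t_vals, p_vals = stats.ttest_1samp(contrast_imgs, popmean=0.0, axis=0)

    # Benjamini-Hochberg step-up procedure.
    m = p_vals.size
    order = np.argsort(p_vals)
    thresholds = alpha * (np.arange(1, m + 1) / m)
    passed = p_vals[order] <= thresholds
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    mask = np.zeros(m, dtype=bool)
    mask[order[:k]] = True
    return t_vals, mask

# Example with simulated data: 20 subjects, 5000 voxels, a small true effect
# in the first 100 voxels.
rng = np.random.default_rng(0)
data = rng.normal(size=(20, 5000))
data[:, :100] += 0.8
t_vals, sig = second_level_fdr(data)
print(sig.sum(), "voxels significant after FDR correction")
```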

    Parameterized Algorithmics for Computational Social Choice: Nine Research Challenges

    Computational Social Choice is an interdisciplinary research area involving Economics, Political Science, and Social Science on the one side, and Mathematics and Computer Science (including Artificial Intelligence and Multiagent Systems) on the other. Typical computational problems studied in this field include the vulnerability of voting procedures to attacks, or preference aggregation in multi-agent systems. Parameterized Algorithmics is a subfield of Theoretical Computer Science that seeks to exploit meaningful problem-specific parameters in order to identify tractable special cases of problems that are computationally hard in general. In this paper, we propose nine of our favorite research challenges concerning the parameterized complexity of problems appearing in this context.
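    As a generic illustration of parameterized algorithmics applied to a voting problem (not one of the paper's nine challenges), Kemeny rank aggregation is NP-hard in general but solvable by brute force when the number of candidates m is taken as the parameter, since only m! rankings need to be examined regardless of the number of voters:

```python
from itertools import permutations

def kemeny_ranking(votes):
    """Exact Kemeny rank aggregation, parameterized by the number of
    candidates m: runtime is O(m! * m^2 * n) for n votes, i.e. exponential
    only in the parameter m.

    votes: list of rankings, each a tuple of candidates from best to worst.
    Returns a ranking minimising the total number of pairwise disagreements.
    """
    candidates = votes[0]
    best, best_cost = None, float("inf")
    for ranking in permutations(candidates):
        pos = {c: i for i, c in enumerate(ranking)}
        cost = 0
        for vote in votes:
            vpos = {c: i for i, c in enumerate(vote)}
            # Count candidate pairs ordered differently by this voter.
            for i, a in enumerate(candidates):
                for b in candidates[i + 1:]:
                    if (pos[a] < pos[b]) != (vpos[a] < vpos[b]):
                        cost += 1
        if cost < best_cost:
            best, best_cost = ranking, cost
    return best, best_cost

print(kemeny_ranking([("a", "b", "c"), ("b", "c", "a"), ("a", "c", "b")]))
```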

    Gender and Age Related Effects While Watching TV Advertisements: An EEG Study

    The aim of the present paper is to show how the variation of EEG frontal cortical asymmetry is related to the general appreciation perceived during the observation of TV advertisements, in particular considering the influence of gender and age. Specifically, we investigated the influence of gender on the perception of a car advertisement (Experiment 1) and the influence of age on a chewing gum commercial (Experiment 2). The results of Experiment 1 showed statistically significantly higher approach values for the men's group throughout the commercial. The results of Experiment 2 showed significantly lower values for the older adults during the spot, which contained scenes they did not particularly enjoy. In both studies, there was no statistically significant difference between the experimental populations in the scene showing the product offering, suggesting the absence in our study of a bias towards the specific product in the evaluated populations. This evidence points to the importance of creativity in advertising in order to attract the target population.
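    The frontal asymmetry measure discussed above is commonly computed as the difference of log alpha power between right and left frontal electrodes. A hedged sketch of that computation follows; the electrode pair, sampling rate and band limits are assumptions, not values taken from the paper.

```python
import numpy as np
from scipy.signal import welch

def frontal_alpha_asymmetry(left_ch, right_ch, fs=256, band=(8.0, 12.0)):
    """Approach-related frontal asymmetry index: ln(alpha power, right) -
    ln(alpha power, left), e.g. for an F4/F3 electrode pair. Because higher
    alpha power indicates lower cortical activity, positive values are
    usually read as relatively greater left-frontal activation (approach).
    """
    def alpha_power(signal):
        freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
        idx = (freqs >= band[0]) & (freqs <= band[1])
        # Integrate the PSD over the alpha band (uniform frequency spacing).
        return psd[idx].sum() * (freqs[1] - freqs[0])

    return np.log(alpha_power(right_ch)) - np.log(alpha_power(left_ch))

# Example with synthetic one-minute signals.
rng = np.random.default_rng(1)
f3 = rng.normal(size=256 * 60)
f4 = rng.normal(size=256 * 60)
print(frontal_alpha_asymmetry(f3, f4))
```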

    EEG Resting-State Brain Topological Reorganization as a Function of Age

    Resting-state connectivity has been increasingly studied to investigate the effects of aging on the brain. A reduced organization in the communication between brain areas has been demonstrated by combining a variety of different imaging technologies (fMRI, EEG, and MEG) and graph theory. In this paper, we propose a methodology to gain new insights into resting-state connectivity and its variations with age, by combining advanced techniques of effective connectivity estimation, a graph theoretical approach, and classification by SVM. We analyzed high-density EEG signals recorded at rest from 71 healthy subjects (age: 20–63 years). Weighted and directed connectivity was computed by means of Partial Directed Coherence based on a General Linear Kalman filter approach. To keep the information collected by the estimator, weighted and directed graph indices were extracted from the resulting networks. A relation between brain network properties and the age of the subject was found, indicating a tendency of the network to organize more randomly with increasing age. This result is also confirmed by dividing the whole population into two subgroups according to age (young and middle-aged adults): significant differences exist in terms of network organization measures. Classification of the subjects by means of such indices returns an accuracy greater than 80%.
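    As an illustrative sketch of the final classification step only (not the authors' code; the graph indices, synthetic data and SVM settings are assumptions), directed, weighted connectivity matrices can be summarised by simple graph indices and fed to an SVM with cross-validation:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def graph_indices(conn):
    """Simple weighted, directed graph indices for one connectivity matrix
    (shape: n_nodes x n_nodes, entry [i, j] = strength of i -> j)."""
    np.fill_diagonal(conn, 0.0)
    in_strength = conn.sum(axis=0)
    out_strength = conn.sum(axis=1)
    density = conn.mean()
    return np.concatenate([in_strength, out_strength, [density]])

# Synthetic example: 40 subjects, 16-node PDC-like matrices, binary age group.
rng = np.random.default_rng(2)
n_subjects, n_nodes = 40, 16
X = np.array([graph_indices(rng.random((n_nodes, n_nodes)))
              for _ in range(n_subjects)])
y = rng.integers(0, 2, size=n_subjects)   # 0 = young, 1 = middle-aged

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```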