9,300 research outputs found

    Behavioural Economics: Classical and Modern

    In this paper, the origins and development of behavioural economics, beginning with the pioneering works of Herbert Simon (1953) and Ward Edwards (1954), are traced, described and (critically) discussed in some detail. Two kinds of behavioural economics – classical and modern – are attributed, respectively, to the two pioneers. The mathematical foundations of classical behavioural economics are identified as lying, largely, in the theory of computation and computational complexity; the corresponding mathematical basis for modern behavioural economics is, on the other hand, claimed to be a notion of subjective probability (at least at its origins in the works of Ward Edwards). The economic theories of behaviour, challenging various aspects of 'orthodox' theory, were decisively influenced by these two mathematical underpinnings.
    Keywords: Classical Behavioural Economics, Modern Behavioural Economics, Subjective Probability, Model of Computation, Computational Complexity, Subjective Expected Utility

    Logic, self-awareness and self-improvement: The metacognitive loop and the problem of brittleness

    This essay describes a general approach to building perturbation-tolerant autonomous systems, based on the conviction that artificial agents should be able to notice when something is amiss, assess the anomaly, and guide a solution into place. We call this basic strategy of self-guided learning the metacognitive loop; it involves the system monitoring, reasoning about, and, when necessary, altering its own decision-making components. In this essay, we (a) argue that equipping agents with a metacognitive loop can help to overcome the brittleness problem, (b) detail the metacognitive loop and its relation to our ongoing work on time-sensitive commonsense reasoning, (c) describe specific, implemented systems whose perturbation tolerance was improved by adding a metacognitive loop, and (d) outline both short-term and long-term research agendas.
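    As a rough sketch of the notice, assess and guide cycle described above (not the authors' implementation: the class, thresholds and repair actions below are invented for illustration, and a scalar reward signal is assumed as the monitored quantity), a minimal metacognitive loop in Python might look like this:

    # Hypothetical sketch of a metacognitive loop; all names and thresholds are illustrative.
    from dataclasses import dataclass, field

    @dataclass
    class MetacognitiveLoop:
        expected_reward: float = 1.0   # what the agent currently believes "normal" performance is
        tolerance: float = 0.2         # how much deviation counts as an anomaly
        history: list = field(default_factory=list)

        def note(self, observed_reward: float) -> bool:
            """Notice: record the observation and flag it if it violates expectations."""
            self.history.append(observed_reward)
            return abs(observed_reward - self.expected_reward) > self.tolerance

        def assess(self) -> str:
            """Assess: crude diagnosis based on whether recent observations all deviate."""
            recent = self.history[-5:]
            persistent = len(recent) == 5 and all(
                abs(r - self.expected_reward) > self.tolerance for r in recent)
            return "persistent" if persistent else "transient"

        def guide(self, diagnosis: str) -> str:
            """Guide: pick a repair, e.g. relearn the decision component or re-tune expectations."""
            if diagnosis == "persistent":
                return "retrain decision component"
            recent = self.history[-5:]
            self.expected_reward = sum(recent) / len(recent)
            return "update expectations"

    # Example use: an unexpected reward triggers the loop.
    loop = MetacognitiveLoop()
    if loop.note(0.4):
        print(loop.guide(loop.assess()))   # transient anomaly -> "update expectations"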

    Normalized Information Distance

    The normalized information distance is a universal distance measure for objects of all kinds. It is based on Kolmogorov complexity and thus uncomputable, but there are ways to utilize it. First, compression algorithms can be used to approximate the Kolmogorov complexity if the objects have a string representation. Second, for names and abstract concepts, page count statistics from the World Wide Web can be used. These practical realizations of the normalized information distance can then be applied to machine learning tasks, especially clustering, to perform feature-free and parameter-free data mining. This chapter discusses the theoretical foundations of the normalized information distance and both practical realizations. It presents numerous examples of successful real-world applications based on these distance measures, ranging from bioinformatics to music clustering to machine translation.
    Comment: 33 pages, 12 figures, pdf; chapter "Normalized information distance", in: Information Theory and Statistical Learning, Eds. M. Dehmer, F. Emmert-Streib, Springer-Verlag, New York, to appear.
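    The compression-based realization mentioned above is commonly expressed as the normalized compression distance, NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), where C(.) denotes the length of the compressed string. The sketch below, which assumes zlib as the stand-in compressor, is only an illustration of that idea, not code from the chapter:

    # Minimal normalized compression distance, using zlib as an approximation
    # of Kolmogorov complexity (any real-world compressor could be substituted).
    import zlib

    def compressed_size(data: bytes) -> int:
        """Length of the compressed representation of the data."""
        return len(zlib.compress(data, 9))

    def ncd(x: bytes, y: bytes) -> float:
        """Normalized compression distance between two byte strings."""
        cx, cy, cxy = compressed_size(x), compressed_size(y), compressed_size(x + y)
        return (cxy - min(cx, cy)) / max(cx, cy)

    # Related strings should score a smaller distance than unrelated ones.
    a = b"the quick brown fox jumps over the lazy dog"
    b = b"the quick brown fox leaps over the lazy dog"
    c = b"colourless green ideas sleep furiously tonight"
    print(ncd(a, b))   # small distance
    print(ncd(a, c))   # larger distance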

    Modelling decision tables from data.

    On most datasets, induction algorithms can generate very accurate classifiers. Sometimes, however, these classifiers are very hard for humans to understand. Therefore, this paper investigates how the extracted knowledge can be presented to the user by means of decision tables. Decision tables are very easy to understand. Furthermore, decision tables provide interesting facilities to check the extracted knowledge for consistency and completeness. In this paper, it is demonstrated how a consistent and complete decision table can be modelled starting from raw data. The proposed method is empirically validated on several benchmarking datasets. It is shown that the modelled decision tables are sufficiently small, which allows easy consultation of the represented knowledge.
    Keywords: Data;
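    The paper's own modelling procedure is not reproduced here; as a loose illustration of the general idea, a decision table can be induced by grouping records on their condition attributes and then checking the result for consistency (no identical condition part mapping to different outcomes). The function and toy dataset below are hypothetical:

    # Illustrative sketch only: induce a simple decision table from labelled records
    # and report condition combinations with conflicting outcomes.
    from collections import Counter, defaultdict

    def build_decision_table(records, condition_attrs, target_attr):
        """Map each observed combination of condition values to its majority outcome."""
        outcomes = defaultdict(Counter)
        for row in records:
            key = tuple(row[a] for a in condition_attrs)
            outcomes[key][row[target_attr]] += 1
        table = {key: counts.most_common(1)[0][0] for key, counts in outcomes.items()}
        inconsistent = [key for key, counts in outcomes.items() if len(counts) > 1]
        return table, inconsistent

    data = [
        {"outlook": "sunny", "windy": False, "play": "yes"},
        {"outlook": "sunny", "windy": True,  "play": "no"},
        {"outlook": "rainy", "windy": True,  "play": "no"},
    ]
    table, conflicts = build_decision_table(data, ["outlook", "windy"], "play")
    print(table)      # {('sunny', False): 'yes', ('sunny', True): 'no', ('rainy', True): 'no'}
    print(conflicts)  # [] -> the table is consistent on the observed data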