The ontology of causal process theories
There is a widespread belief that the so-called process theories of causation developed by Wesley Salmon and Phil Dowe have given us an original account of what causation really is. In this paper, I show that this is a misconception. The notion of "causal process" does not offer us a new ontological account of causation. I make this argument by explicating the implicit ontological commitments in Salmon and Dowe's theories. From this, it is clear that Salmon's Mark Transmission Theory collapses to a counterfactual theory of causation, while the Conserved Quantity Theory collapses to David Fair's physicalist reduction of causation.
The Fundamental Nature of the Log Loss Function
The standard loss functions used in the literature on probabilistic prediction are the log loss function, the Brier loss function, and the spherical loss function; however, any computable proper loss function can be used for comparison of prediction algorithms. This note shows that the log loss function is the most selective, in that any prediction algorithm that is optimal for a given data sequence (in the sense of the algorithmic theory of randomness) under the log loss function will be optimal under any computable proper mixable loss function; on the other hand, there is a data sequence and a prediction algorithm that is optimal for that sequence under either of the two other standard loss functions but not under the log loss function.
Common Causes and The Direction of Causation
Is the common cause principle merely one of a set of useful heuristics for discovering causal relations, or is it rather a piece of heavy-duty metaphysics, capable of grounding the direction of causation itself? Since the principle was introduced in Reichenbach's groundbreaking work The Direction of Time (1956), there have been a series of attempts to pursue the latter program: to take the probabilistic relationships constitutive of the principle of the common cause and use them to ground the direction of causation. These attempts have not all explicitly appealed to the principle as originally formulated; it has also appeared in the guise of independence conditions, counterfactual overdetermination, and, in the causal modelling literature, as the causal Markov condition. In this paper, I identify a set of difficulties for grounding the asymmetry of causation on the principle and its descendants. The first difficulty, concerning what I call the vertical placement of causation, consists of a tension between considerations that drive towards the macroscopic scale and considerations that drive towards the microscopic scale; the worry is that these considerations cannot both be comfortably accommodated. The second difficulty consists of a novel potential counterexample to the principle based on the familiar Einstein-Podolsky-Rosen (EPR) cases in quantum mechanics.
Compression and intelligence: social environments and communication
Compression has been advocated as one of the principles which pervades inductive inference and prediction, and from there it has also been recurrent in definitions and tests of intelligence. However, this connection is less explicit in new approaches to intelligence. In this paper, we advocate that the notion of compression can appear again in definitions and tests of intelligence through the concepts of 'mind-reading' and 'communication' in the context of multi-agent systems and social environments. Our main position is that two-part Minimum Message Length (MML) compression is not only more natural and effective for agents with limited resources, but it is also much more appropriate for agents in (co-operative) social environments than one-part compression schemes, particularly those using a posterior-weighted mixture of all available models following Solomonoff's theory of prediction. We think that the realisation of these differences is important to avoid a naive view of 'intelligence as compression' in favour of a better understanding of how, why and where (one-part or two-part, lossless or lossy) compression is needed.
Dowe, D.L.; Hernández-Orallo, J.; Das, P.K. (2011). Compression and intelligence: social environments and communication. In Artificial General Intelligence. Springer (Germany). 6830:204-211. https://doi.org/10.1007/978-3-642-22887-2_21
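As a toy illustration of the two-part scheme contrasted with one-part mixtures in the abstract above (our own sketch, not code from the paper; the candidate set and uniform first-part code are illustrative assumptions): the total message length is the cost of stating a single model plus the cost of the data encoded under that model, and the best model minimises the sum.

```python
import math

def two_part_length(data, candidate_probs):
    # Two-part MML-style code for binary data. First part names one
    # model from the candidate set (here via a uniform code over the
    # candidates); second part encodes the data under that model.
    # Returns (best model, total length in bits).
    first_part = math.log2(len(candidate_probs))
    best = None
    for p in candidate_probs:
        second_part = -sum(math.log2(p if x == 1 else 1 - x + p - p + (1 - p))
                           for x in data) if False else \
                      -sum(math.log2(p if x == 1 else 1 - p) for x in data)
        total = first_part + second_part
        if best is None or total < best[1]:
            best = (p, total)
    return best

# Example: three heads and one tail favour the biased-coin model.
model, bits = two_part_length([1, 1, 1, 0], [0.25, 0.5, 0.75])
```

Unlike a one-part Solomonoff-style mixture over all candidates, the two-part code commits to a single explicit model, which is what makes it communicable between agents.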
Can Machine Intelligence be Measured in the Same Way as Human intelligence?
In recent years, the number of research projects on computer programs solving human intelligence problems in artificial intelligence (AI), artificial general intelligence, and cognitive modelling has grown significantly. One reason could be the interest in such problems as benchmarks for AI algorithms. Another, more fundamental, motivation behind this area of research might be the (implicit) assumption that a computer program that can successfully solve human intelligence problems has human-level intelligence, and vice versa. This paper analyses this assumption.
Mechanisms, Then and Now: From Metaphysics to Practice
For many old and new mechanists, Mechanism is both a metaphysical position and a thesis about scientific methodology. In this paper we discuss the relation between the metaphysics of mechanisms and the role of mechanical explanation in the practice of science by presenting and comparing the key tenets of Old and New Mechanism. First, by focusing on the case of gravity, we show how the metaphysics of Old Mechanism constrained scientific explanation, and discuss Newton's critique of Old Mechanism. Second, we examine the current mechanistic metaphysics, arguing that it is not warranted by the use of the concept of mechanism in scientific practice, and motivate a thin conception of mechanism (the truly minimal view), according to which mechanisms are causal pathways for a certain effect or phenomenon. Finally, we draw analogies between Newton's critique of Old Mechanism and our thesis that the metaphysical commitments of New Mechanism are not necessary in order to illuminate scientific practice.