    Making AI meaningful again

    Artificial intelligence (AI) research enjoyed an initial period of enthusiasm in the 1970s and 80s, but this enthusiasm was tempered by a long interlude of frustration when genuinely useful AI applications failed to be forthcoming. Today we are experiencing once again a period of enthusiasm, fired above all by the successes of the technology of deep neural networks, or deep machine learning. In this paper we draw attention to what we take to be serious problems underlying current views of artificial intelligence encouraged by these successes, especially in the domain of language processing. We then outline an alternative approach to language-centric AI, in which we identify a role for philosophy.

    Warranted Diagnosis

    A diagnostic process is an investigative process that takes a clinical picture as input and outputs a diagnosis. We propose a method for distinguishing diagnoses that are warranted from those that are not, based on the cognitive processes of which they are the outputs. Processes designed and vetted to reliably produce correct diagnoses will output what we shall call ‘warranted diagnoses’: diagnoses that should be trusted even if they later turn out to have been wrong. Our work is based on the recently developed Cognitive Process Ontology and further develops the Ontology of General Medical Science. It also has applications in fields such as intelligence, forensics, and predictive maintenance, all of which rely on vetted processes designed to secure the reliability of their outputs.

    How neurons in deep models relate with neurons in the brain

    In dealing with the algorithmic aspects of intelligent systems, the analogy with the biological brain has always been attractive, and it has often served a dual function: on the one hand, as an effective source of inspiration for the design of such systems; on the other, as a justification for their success, especially in the case of Deep Learning (DL) models. In recent years, however, inspiration from the brain has lost its grip in the first role, yet it continues to be invoked in the second, where we believe it is becoming less and less defensible. Outside this chorus are theoretical proposals that instead identify important lines of demarcation between DL and human cognition, to the point of regarding the two as incommensurable. In this article we argue that, paradoxically, the partial indifference of the developers of deep neural models to the functioning of biological neurons is one of the reasons for their success, having promoted a pragmatically opportunistic attitude. We believe it is even possible to glimpse a biological analogy of a different kind: the essentially heuristic way of proceeding in modern DL development bears intriguing similarities to natural evolution.

    Neural Chitchat

    A constant theme in Sherry Turkle’s work is the idea that computers shape our social and psychological lives. This idea is of course in a sense trivial, as can be observed when walking down any city street and noting how many of the passers-by have their heads buried in screens. In The Second Self, however, Turkle makes a stronger claim: where people confront machines that seem to think, this suggests a new way for us to think about human thought, emotion, memory, and understanding, and thereby affects the way we think of and see ourselves as humans. I attempt here to throw a sceptical light on claims of this sort by examining the Chinese chatbot “Xiǎoice”, which is described by its authors as “the most popular social chatbot in the world”.

    Why machines do not understand: A response to Søgaard

    Some defenders of so-called ‘artificial intelligence’ believe that machines can understand language. In particular, Søgaard has argued for a thesis of this sort in his “Understanding models understanding language” (2022). His idea is (1) that where there is semantics there is also understanding, and (2) that machines are not only capable of what he calls ‘inferential semantics’ but can even, with the help of inputs from sensors, ‘learn’ referential semantics. We show that he goes wrong because he pays insufficient attention to the difference between language as used by humans and the sequences of inert symbols that arise when language is stored on hard drives or in books in libraries.