
    Reductionism ad absurdum: Attneave and Dennett cannot reduce Homunculus (and hence the mind)

    Purpose – Neuroscientists act as proxies for implied anthropomorphic signal-processing beings within the brain, Homunculi. The latter examine the arriving neuronal spike-trains to infer internal and external states. But a Homunculus needs a brain of its own, to coordinate its capabilities – a brain that necessarily contains a Homunculus, and so on indefinitely. Such an infinity is impossible – and in well-cited papers, Attneave and later Dennett claim to eliminate it. How do their approaches differ, and do they (in fact) obviate the Homunculi?
    Design/methodology/approach – The Attneave and Dennett approaches are carefully scrutinized. To Attneave, Homunculi are effectively “decision-making” neurons that control behaviors. Attneave presumes that Homunculi, when successively nested, become successively “stupider”, limiting their numbers by diminishing their responsibilities. Dennett likewise postulates neuronal Homunculi that become “stupider” – but brain-wards, where greater sophistication might have been expected.
    Findings – Attneave’s argument is Reductionist, and it simply assumes away the Homuncular infinity. Dennett’s scheme, which evidently derives from Attneave’s, ultimately involves the same mistakes. Attneave and Dennett fail because they attempt to reduce intentionality to non-intentionality.
    Research limitations/implications – Homunculus has been repeatedly recognized over the centuries, by philosophers, psychologists and (some) neuroscientists, as a crucial conundrum of cognitive science. It still is.
    Practical implications – Cognitive-science researchers need to recognize that Reductionist explanations of cognition may actually devolve to Homunculi, rather than eliminating them.
    Originality/value – Two notable Reductionist arguments against the infinity of Homunculi are proven wrong. In their place, a non-Reductionist treatment of the mind, “Emergence”, is discussed as a means of rendering Homunculi irrelevant.

    Interpretation of absolute judgments using information theory: channel capacity or memory capacity?

    Shannon’s information theory has been a popular component of first-order cybernetics. It quantifies information transmitted in terms of the number of times a sent symbol is received as itself, or as another possible symbol. Sent symbols were events and received symbols were outcomes. Garner and Hake reinterpreted Shannon, describing events and outcomes as categories of a stimulus attribute, so as to quantify the information transmitted in the psychologist’s category (or absolute judgment) experiment. There, categories are represented by specific stimuli, and the human subject must assign those stimuli, singly and in random order, to the categories that they represent. Hundreds of computations of information transmitted, and of its alleged asymptote, the sensory channel capacity, ensued. The present paper critically re-examines those estimates. It also reviews estimates of memory capacity from memory experiments. It concludes that absolute judgment is memory-limited and that channel capacities are actually memory capacities. In particular, there are factors affecting absolute judgment that are not explainable within Shannon’s theory: feedback, practice, motivation, and stimulus range, as well as the anchor effect, sequential dependences, the rise in information transmitted with the increase in number of stimulus dimensions, and the phenomena of masking and stimulus-duration dependence. It is recommended that absolute judgments be abandoned, because there are already many direct estimates of memory capacity.
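    The computation under review can be made concrete. Below is a minimal sketch of the Garner-Hake style calculation: Shannon’s “information transmitted” (mutual information) estimated from a stimulus-by-response count matrix. The matrix values and category counts are illustrative assumptions, not data from the papers reviewed.

```python
import numpy as np

def information_transmitted(counts):
    """Shannon 'information transmitted' (mutual information, in bits),
    estimated from a matrix of joint counts: rows = stimulus categories,
    columns = response categories (the Garner-Hake reinterpretation)."""
    p = counts / counts.sum()              # joint probabilities p(s, r)
    ps = p.sum(axis=1, keepdims=True)      # marginal p(s)
    pr = p.sum(axis=0, keepdims=True)      # marginal p(r)
    nz = p > 0                             # skip log(0) terms
    return float((p[nz] * np.log2(p[nz] / (ps @ pr)[nz])).sum())

# Hypothetical 4-category absolute-judgment data, 25 trials per stimulus.
counts = np.array([[20,  5,  0,  0],
                   [ 4, 16,  5,  0],
                   [ 0,  6, 15,  4],
                   [ 0,  0,  5, 20]])
print(information_transmitted(counts))  # roughly 1 bit, of 2 bits possible
```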

    I, NEURON: the neuron as the collective

    Purpose – In the last half-century, individual sensory neurons have been bestowed with characteristics of the whole human being, such as behavior and its oft-presumed precursor, consciousness. This anthropomorphization is pervasive in the literature. It is also absurd, given what we know about neurons, and it needs to be abolished. This study aims first to understand how it happened, and hence why it persists.
    Design/methodology/approach – The peer-reviewed sensory-neurophysiology literature extends to hundreds (perhaps thousands) of papers. Here, more than 90 mainstream papers were scrutinized.
    Findings – Anthropomorphization arose because single neurons were cast as “observers” who “identify”, “categorize”, “recognize”, “distinguish” or “discriminate” the stimuli, using math-based algorithms that reduce (“decode”) the stimulus-evoked spike trains to the particular stimuli inferred to elicit them. Without “decoding”, there is supposedly no perception. However, “decoding” is both unnecessary and unconfirmed. The neuronal “observer” in fact consists of the laboratory staff and the greater society that supports them. In anthropomorphization, the neuron becomes the collective.
    Research limitations/implications – Anthropomorphization underlies the widespread application to neurons of Information Theory and Signal Detection Theory, making both approaches incorrect.
    Practical implications – A great deal of time, money and effort has been wasted on anthropomorphic Reductionist approaches to understanding perception and consciousness. Those resources should be diverted into more-fruitful approaches.
    Originality/value – A long-overdue scrutiny of the sensory-neuroscience literature reveals that anthropomorphization, a form of Reductionism that involves the presumption of single-neuron consciousness, has run amok in neuroscience. Consciousness is more likely to be an emergent property of the brain.
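    One common form of the “decoding” at issue is maximum-likelihood classification of spike counts. The sketch below illustrates the kind of math-based algorithm meant; the Poisson firing model and the mean rates are illustrative assumptions, not taken from the papers reviewed. Note who performs the inference: the analyst, not the neuron.

```python
from scipy.stats import poisson

# Illustrative assumption: a neuron fires with Poisson statistics, with a
# different mean spike count for each candidate stimulus.
mean_counts = {"stimulus A": 4.0, "stimulus B": 9.0, "stimulus C": 16.0}

def decode(spike_count):
    """'Decode' an observed spike count: return the candidate stimulus under
    which that count was most likely. The inference is carried out by the
    laboratory staff running this code, not by the neuron itself."""
    return max(mean_counts, key=lambda s: poisson.pmf(spike_count, mean_counts[s]))

print(decode(11))  # -> 'stimulus B' (the likeliest evoking stimulus)
```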

    Information Theory’s failure in neuroscience: on the limitations of cybernetics

    In Cybernetics (1961 Edition), Professor Norbert Wiener noted that “The role of information and the technique of measuring and transmitting information constitute a whole discipline for the engineer, for the neuroscientist, for the psychologist, and for the sociologist”. Sociology aside, the neuroscientists and the psychologists inferred “information transmitted” using the discrete summations from Shannon Information Theory. The present author has since scrutinized the psychologists’ approach in depth, and found it wrong. The neuroscientists’ approach is highly related, but remains unexamined. Neuroscientists quantified “the ability of [physiological sensory] receptors (or other signal-processing elements) to transmit information about stimulus parameters”. Such parameters could vary along a single continuum (e.g., intensity), or along multiple dimensions that altogether provide a Gestalt – such as a face.
    Here, unprecedented scrutiny is given to how 23 neuroscience papers computed “information transmitted” in terms of stimulus parameters and the evoked neuronal spikes. The computations relied upon Shannon’s “confusion matrix”, which quantifies the fidelity of a “general communication system”. Shannon’s matrix is square, with the same labels for columns and for rows. Nonetheless, neuroscientists labelled the columns by “stimulus category” and the rows by “spike-count category”. The resulting “information transmitted” is spurious, unless the evoked spike-counts are worked backwards to infer the hypothetical evoking stimuli. The latter task is probabilistic and, regardless, requires that the confusion matrix be square. Was it? For these 23 significant papers, the answer is No.
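    The technical requirement just described can be stated as a simple test. A minimal sketch, with hypothetical labels for illustration only:

```python
import numpy as np

def is_shannon_confusion_matrix(matrix, row_labels, col_labels):
    """Shannon's confusion matrix quantifies symbol-for-symbol fidelity:
    it must be square, with the SAME labels on rows and columns.
    A stimulus-by-spike-count table generally fails both tests."""
    square = matrix.shape[0] == matrix.shape[1]
    same_labels = list(row_labels) == list(col_labels)
    return square and same_labels

# The kind of table at issue: stimulus categories on one axis, spike-count
# categories on the other (all labels hypothetical).
m = np.zeros((3, 5))
print(is_shannon_confusion_matrix(
    m, ["s1", "s2", "s3"],
    ["0-4 spikes", "5-9", "10-14", "15-19", "20+"]))  # -> False
```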

    Homunculus strides again: why ‘information transmitted’ in neuroscience tells us nothing

    Purpose – For half a century, neuroscientists have used Shannon Information Theory to calculate “information transmitted”, a hypothetical measure of how well neurons “discriminate” amongst stimuli. Neuroscientists’ computations, however, fail to meet even the technical requirements for credibility. Ultimately, the reasons must be conceptual. That conclusion is confirmed here, with crucial implications for neuroscience.
    Design/methodology/approach – Shannon Information Theory depends upon a physical model, Shannon’s “general communication system”. Neuroscientists’ interpretation of that model is scrutinized here.
    Findings – In Shannon’s system, a recipient receives a message composed of symbols. The symbols received, the symbols sent, and their hypothetical occurrence probabilities altogether allow calculation of “information transmitted”. Significantly, Shannon’s system’s “reception” (decoding) side physically mirrors its “transmission” (encoding) side. However, neurons lack the “reception” side; neuroscientists nonetheless insisted that decoding must happen. They turned to Homunculus, an internal humanoid who infers stimuli from neuronal firing. However, Homunculus must contain a Homunculus, and so on ad infinitum – unless it is super-human. But any need for Homunculi, as in “theories of consciousness”, is obviated if consciousness proves to be “emergent”.
    Research limitations/implications – Neuroscientists’ “information transmitted” indicates, at best, how well neuroscientists themselves can use neuronal firing to discriminate amongst the stimuli given to the research animal.
    Originality/value – A long-overdue examination unmasks a hidden element in neuroscientists’ use of Shannon Information Theory, namely, Homunculus. Almost 50 years’ worth of computations are recognized as irrelevant, mandating fresh approaches to understanding “discriminability”.
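    For reference, the quantity being calculated is standard: the mutual information between symbols sent and symbols received. In conventional notation (textbook information theory, not quoted from the paper):

```latex
% "Information transmitted" between sent symbols x and received symbols y:
T(x;y) = H(x) + H(y) - H(x,y),
\qquad H(x) = -\sum_i p(x_i)\,\log_2 p(x_i)
```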

    Memory model of information transmitted in absolute judgment

    Purpose – The purpose of this paper is to examine the popular “information transmitted” interpretation of absolute judgments, and to provide an alternative interpretation if one is needed.
    Design/methodology/approach – The psychologists Garner and Hake and their successors used Shannon’s Information Theory to quantify information transmitted in absolute judgments of sensory stimuli. Here, information theory is briefly reviewed, followed by a description of the absolute judgment experiment and its information-theory analysis. Empirical channel capacities are scrutinized. A remarkable coincidence, the similarity of maximum information transmitted to human memory capacity, is described. Over 60 representative psychology papers on “information transmitted” are inspected for evidence of memory involvement in absolute judgment. Finally, memory is conceptually integrated into absolute judgment through a novel qualitative model that correctly predicts how judgments change with increase in the number of judged stimuli.
    Findings – Garner and Hake gave conflicting accounts of how absolute judgments represent information transmission. Further, “channel capacity” is an illusion caused by sampling bias and wishful thinking; information transmitted actually peaks and then declines, the peak coinciding with memory capacity. Absolute judgments themselves have numerous idiosyncrasies that are incompatible with a Shannon general communication system but which clearly imply memory dependence.
    Research limitations/implications – Memory capacity limits the correctness of absolute judgments. Memory capacity is already well measured by other means, making the informational analysis of absolute judgments redundant.
    Originality/value – This paper presents a long-overdue comprehensive critical review of the established interpretation of absolute judgments in terms of “information transmitted”. An inevitable conclusion is reached: that published measurements of information transmitted actually measure memory capacity. A new, qualitative model is offered for the role of memory in absolute judgments. The model is well supported by recently revealed empirical properties of absolute judgments.
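    The central claim of the Findings, that information transmitted peaks at memory capacity and then declines, can be illustrated with a toy simulation. This is a deliberate simplification for illustration; the “remember up to seven stimuli, otherwise guess” rule and all parameters are assumptions, not the paper’s model.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulated_bits(n_stimuli, memory_capacity=7, trials=20000):
    """Toy subject: stimuli that fit within memory capacity are judged
    correctly; all others draw a uniformly random response. Returns the
    estimated 'information transmitted' in bits."""
    counts = np.zeros((n_stimuli, n_stimuli))
    for _ in range(trials):
        s = rng.integers(n_stimuli)
        r = s if s < memory_capacity else rng.integers(n_stimuli)
        counts[s, r] += 1
    p = counts / counts.sum()
    ps = p.sum(axis=1, keepdims=True)
    pr = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / (ps @ pr)[nz])).sum())

# Information transmitted rises with the number of stimuli, peaks once
# memory capacity is exceeded, then declines: no fixed "channel capacity".
for n in (2, 4, 8, 16, 32):
    print(n, round(simulated_bits(n), 2))
```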

    Paradigm versus praxis: why psychology ‘absolute identification’ experiments do not reveal sensory processes

    Purpose – A key cybernetics concept, information transmitted in a system, was quantified by Shannon. It quickly gained prominence, inspiring a version by the psychologists Garner and Hake for “absolute identification” experiments. There, human subjects “categorize” sensory stimuli, affording “information transmitted” in perception. The Garner-Hake formulation has been in continuous use for 62 years, exerting enormous influence. But some experienced theorists and reviewers have criticized it as uninformative. They could not explain why, and were ignored. Here, the “why” is answered.
    Design/methodology/approach – A key Shannon data-organizing tool is the confusion matrix. Its columns and rows are, respectively, labeled by “symbol sent” (event) and “symbol received” (outcome), such that matrix entries represent how often outcomes actually corresponded to events. Garner and Hake made their own version of the matrix, which deserves scrutiny, and it is minutely examined here.
    Findings – The Garner-Hake confusion-matrix columns represent “stimulus categories”, ranges of some physical stimulus attribute (usually intensity), and its rows represent “response categories” of the subject’s identification of the attribute. The matrix entries thus show how often an identification empirically corresponds to an intensity, such that “outcomes” and “events” differ in kind (unlike Shannon’s). Obtaining a true “information transmitted” therefore requires stimulus categorizations to be converted to hypothetical evoking stimuli, achievable (in principle) by relating categorization to sensation, and sensation to intensity. But those relations are actually unknown, perhaps unknowable.
    Originality/value – The author achieves an important understanding: why “absolute identification” experiments do not illuminate sensory processes.
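    The missing step in the Findings can be written out explicitly. In this sketch the function names are hypothetical, and the point is precisely that neither mapping can currently be implemented:

```python
def sensation_from_category(response_category):
    """Relation of identification (response category) to sensation:
    unknown, perhaps unknowable."""
    raise NotImplementedError("no established relation")

def intensity_from_sensation(sensation):
    """Relation of sensation to stimulus intensity: likewise unknown."""
    raise NotImplementedError("no established relation")

def hypothetical_evoking_stimulus(response_category):
    # Only with BOTH relations in hand could a response category be
    # converted to a hypothetical evoking stimulus, restoring Shannon's
    # like-for-like symbols. As written, this chain cannot be completed.
    return intensity_from_sensation(sensation_from_category(response_category))
```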

    Sensory Systems as Cybernetic Systems that Require Awareness of Alternatives to Interact with the World: Analysis of the Brain-Receptor Loop in Norwich's Entropy Theory of Perception

    Introduction & Objectives: Norwich’s Entropy Theory of Perception (1975 [1] -present) stands alone. It explains many firing-rate behaviors and psychophysical laws from bare theory. To do so, it demands a unique sort of interaction between receptor and brain, one that Norwich never substantiated. Can it now be confirmed, given the accumulation of empirical sensory neuroscience? Background: Norwich conjoined sensation and a mathematical model of communication, Shannon’s Information Theory, as follows: “In the entropic view of sensation, magnitude of sensation is regarded as a measure of the entropy or uncertainty of the stimulus signal” [2]. “To be uncertain about the outcome of an event, one must first be aware of a set of alternative outcomes” [3]. “The entropy-establishing process begins with the generation of a [internal] sensory signal by the stimulus generator. This is followed by receipt of the [external] stimulus by the sensory receptor, transmission of action potentials by the sensory neurons, and finally recapture of the [response to the internal] signal by the generator” [4]. The latter “recapture” differentiates external from internal stimuli. The hypothetical “stimulus generators” are internal emitters, that generate photons in vision, audible sounds in audition (to Norwich, the spontaneous otoacoustic emissions [SOAEs]), “temperatures in excess of local skin temperature” in skin temperature sensation [4], etc. Method (1): Several decades of empirical sensory physiology literature was scrutinized for internal “stimulus generators”. Results (1): Spontaneous photopigment isomerization (“dark light”) does not involve visible light. SOAEs are electromechanical basilar-membrane artefacts that rarely produce audible tones. The skin’s temperature sensors do not raise skin temperature, etc. Method (2): The putative action of the brain-and-sensory-receptor loop was carefully reexamined. Results (2): The sensory receptor allegedly “perceives”, experiences “awareness”, possesses “memory”, and has a “mind”. But those traits describe the whole human. The receptor, thus anthropomorphized, must therefore contain its own perceptual loop, containing a receptor, containing a perceptual loop, etc. Summary & Conclusions: The Entropy Theory demands sensory awareness of alternatives, through an imagined brain-and-sensory-receptor loop containing internal “stimulus generators”. But (1) no internal “stimulus generators” seem to exist and (2) the loop would be the outermost of an infinite nesting of identical loops

    Norwich’s Entropy Theory: how not to go from abstract to actual

    Purpose – The purpose of this paper is to ask whether a first-order-cybernetics concept, Shannon’s Information Theory, actually allows a far-reaching mathematics of perception allegedly derived from it, Norwich et al.’s “Entropy Theory of Perception”.
    Design/methodology/approach – All of the Entropy Theory, 35 years of publications, was scrutinized for its characterization of what underlies Shannon Information Theory: Shannon’s “general communication system”. There, “events” are passed by a “source” to a “transmitter”, thence through a “noisy channel” to a “receiver”, which passes “outcomes” (received events) to a “destination”.
    Findings – In the Entropy Theory, “events” were sometimes interactions with the stimulus, but could be microscopic stimulus conditions. “Outcomes” often went unnamed; sometimes the stimulus, or the interaction with it, or the resulting sensation, were “outcomes”. A “source” was often implied to be a “transmitter”, which frequently was a primary afferent neuron; elsewhere, the stimulus was the “transmitter” and perhaps also the “source”. “Channel” was rarely named; once, it was the whole eye; once, the incident photons; elsewhere, the primary or secondary afferent. “Receiver” was usually the sensory receptor, but could be an afferent. “Destination” went unmentioned. In sum, the Entropy Theory’s idea of Shannon’s “general communication system” was entirely ambiguous.
    Research limitations/implications – The ambiguities indicate that, contrary to claim, the Entropy Theory cannot be an “information theoretical description of the process of perception”.
    Originality/value – Scrutiny of the Entropy Theory’s use of information theory was overdue, and reveals incompatibilities that force a reconsideration of information theory’s possible role in perception models. A second-order-cybernetics approach is suggested.
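    The five roles whose assignments the Findings report as ambiguous can be laid out schematically. A minimal sketch of Shannon’s chain, with placeholder types for illustration (none of the names below are from the paper):

```python
import random
from dataclasses import dataclass
from typing import Callable

@dataclass
class CommunicationSystem:
    """Shannon's "general communication system" as five distinct roles.
    Any information-theoretic model of perception must say, unambiguously,
    which physical element plays each role."""
    source: Callable[[], str]           # emits events
    transmitter: Callable[[str], str]   # encodes events into signals
    channel: Callable[[str], str]       # carries signals, possibly adding noise
    receiver: Callable[[str], str]      # decodes signals into outcomes
    destination: Callable[[str], None]  # accepts outcomes (received events)

    def run_once(self) -> None:
        event = self.source()
        outcome = self.receiver(self.channel(self.transmitter(event)))
        self.destination(outcome)

# Toy instantiation: a noiseless identity chain over two possible events.
system = CommunicationSystem(
    source=lambda: random.choice(["A", "B"]),
    transmitter=lambda e: e,
    channel=lambda s: s,
    receiver=lambda s: s,
    destination=print,
)
system.run_once()  # prints the event that was sent
```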
