1,448 research outputs found
Reductionism ad absurdum: Attneave and Dennett cannot reduce Homunculus (and hence the mind)
Purpose – Neuroscientists act as proxies for implied anthropomorphic signal-processing beings within the brain, Homunculi. The latter examine the arriving neuronal spike-trains to infer internal and external states. But a Homunculus needs a brain of its own to coordinate its capabilities, a brain that necessarily contains a Homunculus, and so on indefinitely. Such an infinity is impossible, yet in well-cited papers Attneave and later Dennett claim to eliminate it. How do their approaches differ, and do they (in fact) obviate the Homunculi?
Design/methodology/approach – The Attneave and Dennett approaches are carefully scrutinized. To Attneave, Homunculi are effectively "decision-making" neurons that control behaviors. Attneave presumes that Homunculi, when successively nested, become successively "stupider", limiting their numbers by diminishing their responsibilities. Dennett likewise postulates neuronal Homunculi that become "stupider", but brain-wards, where greater sophistication might have been expected.
Findings – Attneave's argument is Reductionist and simply assumes away the Homuncular infinity. Dennett's scheme, which evidently derives from Attneave's, ultimately involves the same mistakes. Attneave and Dennett fail because they attempt to reduce intentionality to non-intentionality.
Research limitations/implications – Homunculus has been repeatedly recognized over the centuries by philosophers, psychologists and (some) neuroscientists as a crucial conundrum of cognitive science. It still is.
Practical implications – Cognitive-science researchers need to recognize that Reductionist explanations of cognition may actually devolve to Homunculi, rather than eliminating them.
Originality/value – Two notable Reductionist arguments against the infinity of Homunculi are proven wrong. In their place, a non-Reductionist treatment of the mind, "Emergence", is discussed as a means of rendering Homunculi irrelevant.
Interpretation of absolute judgments using information theory: channel capacity or memory capacity?
Shannon's information theory has been a popular component of first-order cybernetics. It quantifies information transmitted in terms of the number of times a sent symbol is received as itself, or as another possible symbol. Sent symbols were events and received symbols were outcomes. Garner and Hake reinterpreted Shannon, describing events and outcomes as categories of a stimulus attribute, so as to quantify the information transmitted in the psychologist's category (or absolute judgment) experiment. There, categories are represented by specific stimuli, and the human subject must assign those stimuli, singly and in random order, to the categories that they represent. Hundreds of computations ensued of information transmitted and of its alleged asymptote, the sensory channel capacity. The present paper critically re-examines those estimates. It also reviews estimates of memory capacity from memory experiments. It concludes that absolute judgment is memory-limited and that channel capacities are actually memory capacities. In particular, there are factors that affect absolute judgment that are not explainable within Shannon's theory: feedback, practice, motivation, and stimulus range, as well as the anchor effect, sequential dependencies, the rise in information transmitted with the increase in number of stimulus dimensions, and the phenomena of masking and stimulus-duration dependence. It is recommended that absolute judgments be abandoned, because there are already many direct estimates of memory capacity.
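The Garner-Hake computation the paper re-examines can be made concrete. The sketch below is a minimal illustration with invented counts (not data from any cited study): "information transmitted" is estimated as the mutual information between stimulus and response categories in a confusion matrix of absolute judgments.

```python
import numpy as np

def information_transmitted(counts: np.ndarray) -> float:
    """Mutual information (bits) between stimulus and response categories,
    estimated from a confusion matrix of raw counts
    (rows = stimulus categories, columns = response categories)."""
    p = counts / counts.sum()             # joint probabilities p(s, r)
    ps = p.sum(axis=1, keepdims=True)     # marginal p(s)
    pr = p.sum(axis=0, keepdims=True)     # marginal p(r)
    nz = p > 0                            # skip zero cells: 0 * log(0) -> 0
    return float((p[nz] * np.log2(p[nz] / (ps @ pr)[nz])).sum())

# Hypothetical 4-category judgment data (20 trials per stimulus):
counts = np.array([[18,  2,  0,  0],
                   [ 3, 14,  3,  0],
                   [ 0,  4, 13,  3],
                   [ 0,  0,  2, 18]])
print(f"{information_transmitted(counts):.2f} bits")  # ~1.1 of 2 possible bits
```

Plug-in estimates of this kind are also biased upward when trial counts are small, which matters whenever such numbers are read as capacities.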
I, NEURON: the neuron as the collective
Purpose – In the last half-century, individual sensory neurons have been bestowed with characteristics of the whole human being, such as behavior and its oft-presumed precursor, consciousness. This anthropomorphization is pervasive in the literature. It is also absurd, given what we know about neurons, and it needs to be abolished. This study aims first to understand how it happened, and hence why it persists.
Design/methodology/approach – The peer-reviewed sensory-neurophysiology literature extends to hundreds (perhaps thousands) of papers. Here, more than 90 mainstream papers were scrutinized.
Findings – Anthropomorphization arose because single neurons were cast as "observers" who "identify", "categorize", "recognize", "distinguish" or "discriminate" the stimuli, using math-based algorithms that reduce ("decode") the stimulus-evoked spike trains to the particular stimuli inferred to elicit them. Without "decoding", there is supposedly no perception. However, "decoding" is both unnecessary and unconfirmed. The neuronal "observer" in fact consists of the laboratory staff and the greater society that supports them. In anthropomorphization, the neuron becomes the collective.
Research limitations/implications – Anthropomorphization underlies the widespread application to neurons of Information Theory and Signal Detection Theory, making both approaches incorrect.
Practical implications – A great deal of time, money and effort has been wasted on anthropomorphic Reductionist approaches to understanding perception and consciousness. Those resources should be diverted into more fruitful approaches.
Originality/value – A long-overdue scrutiny of the sensory-neuroscience literature reveals that anthropomorphization, a form of Reductionism that involves the presumption of single-neuron consciousness, has run amok in neuroscience. Consciousness is more likely to be an emergent property of the brain.
Information Theory's failure in neuroscience: on the limitations of cybernetics
In Cybernetics (1961 Edition), Professor Norbert Wiener noted that "The role of information and the technique of measuring and transmitting information constitute a whole discipline for the engineer, for the neuroscientist, for the psychologist, and for the sociologist". Sociology aside, the neuroscientists and the psychologists inferred "information transmitted" using the discrete summations from Shannon Information Theory. The present author has since scrutinized the psychologists' approach in depth, and found it wrong. The neuroscientists' approach is highly related, but remains unexamined. Neuroscientists quantified "the ability of [physiological sensory] receptors (or other signal-processing elements) to transmit information about stimulus parameters". Such parameters could vary along a single continuum (e.g., intensity), or along multiple dimensions that altogether provide a Gestalt, such as a face. Here, unprecedented scrutiny is given to how 23 neuroscience papers computed "information transmitted" in terms of stimulus parameters and the evoked neuronal spikes. The computations relied upon Shannon's "confusion matrix", which quantifies the fidelity of a "general communication system". Shannon's matrix is square, with the same labels for columns and for rows. Nonetheless, neuroscientists labelled the columns by "stimulus category" and the rows by "spike-count category". The resulting "information transmitted" is spurious, unless the evoked spike-counts are worked backwards to infer the hypothetical evoking stimuli. The latter task is probabilistic and, regardless, requires that the confusion matrix be square. Was it? For these 23 significant papers, the answer is No.
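The flaw described here is easy to reproduce. The following sketch uses invented counts, for illustration only: it builds the kind of non-square matrix the 23 papers used, with spike-count categories as rows and stimulus categories as columns. The mutual-information arithmetic still returns a number, which is precisely why the spurious "information transmitted" could go unnoticed.

```python
import numpy as np

# A non-square "confusion matrix" of the criticized kind:
# rows = spike-count categories, columns = stimulus categories.
counts = np.array([[9, 1, 0],   # invented counts: 5 spike-count bins
                   [6, 3, 1],   # by 3 stimulus categories
                   [2, 5, 3],
                   [1, 4, 6],
                   [0, 2, 8]])
p = counts / counts.sum()
p_spk = p.sum(axis=1, keepdims=True)   # marginal over spike-count bins
p_stim = p.sum(axis=0, keepdims=True)  # marginal over stimulus categories
nz = p > 0
mi = float((p[nz] * np.log2(p[nz] / (p_spk @ p_stim)[nz])).sum())
print(f"{mi:.2f} bits")  # a well-defined number, but rows and columns are
                         # different kinds, so it is not Shannon's
                         # event/outcome "information transmitted"
```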
Homunculus strides again: why "information transmitted" in neuroscience tells us nothing
Purpose – For half a century, neuroscientists have used Shannon Information Theory to calculate "information transmitted", a hypothetical measure of how well neurons "discriminate" amongst stimuli. Neuroscientists' computations, however, fail to meet even the technical requirements for credibility. Ultimately, the reasons must be conceptual. That conclusion is confirmed here, with crucial implications for neuroscience. The paper aims to discuss these issues.
Design/methodology/approach – Shannon Information Theory depends upon a physical model, Shannon's "general communication system". Neuroscientists' interpretation of that model is scrutinized here.
Findings – In Shannon's system, a recipient receives a message composed of symbols. The symbols received, the symbols sent, and their hypothetical occurrence probabilities altogether allow calculation of "information transmitted". Significantly, Shannon's system's "reception" (decoding) side physically mirrors its "transmission" (encoding) side. However, neurons lack the "reception" side; neuroscientists nonetheless insisted that decoding must happen. They turned to Homunculus, an internal humanoid who infers stimuli from neuronal firing. However, Homunculus must contain a Homunculus, and so on ad infinitum, unless it is super-human. But any need for Homunculi, as in "theories of consciousness", is obviated if consciousness proves to be "emergent".
Research limitations/implications – Neuroscientists' "information transmitted" indicates, at best, how well neuroscientists themselves can use neuronal firing to discriminate amongst the stimuli given to the research animal.
Originality/value – A long-overdue examination unmasks a hidden element in neuroscientists' use of Shannon Information Theory, namely, Homunculus. Almost 50 years' worth of computations are recognized as irrelevant, mandating fresh approaches to understanding "discriminability".
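For reference, the Shannon quantity invoked throughout these papers is the mutual information between sent symbols S and received symbols R; this is the standard textbook definition, not any one paper's formula:

```latex
I(S;R) \;=\; \sum_{s}\sum_{r} p(s,r)\,\log_{2}\frac{p(s,r)}{p(s)\,p(r)}
       \;=\; H(S) - H(S \mid R)
```

Evaluating H(S|R) presupposes a reception side that yields decoded symbols R. Since the neuron offers none, the decoding is tacitly delegated to Homunculus, in practice to the experimenters themselves, which is the abstract's point.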
Memory model of information transmitted in absolute judgment
Purpose – The purpose of this paper is to examine the popular "information transmitted" interpretation of absolute judgments, and to provide an alternative interpretation if one is needed.
Design/methodology/approach – The psychologists Garner and Hake and their successors used Shannon's Information Theory to quantify the information transmitted in absolute judgments of sensory stimuli. Here, information theory is briefly reviewed, followed by a description of the absolute judgment experiment and its information-theory analysis. Empirical channel capacities are scrutinized. A remarkable coincidence, the similarity of maximum information transmitted to human memory capacity, is described. Over 60 representative psychology papers on "information transmitted" are inspected for evidence of memory involvement in absolute judgment. Finally, memory is conceptually integrated into absolute judgment through a novel qualitative model that correctly predicts how judgments change as the number of judged stimuli increases.
Findings – Garner and Hake gave conflicting accounts of how absolute judgments represent information transmission. Further, "channel capacity" is an illusion caused by sampling bias and wishful thinking; information transmitted actually peaks and then declines, the peak coinciding with memory capacity. Absolute judgments themselves have numerous idiosyncrasies that are incompatible with a Shannon general communication system but that clearly imply memory dependence.
Research limitations/implications – Memory capacity limits the correctness of absolute judgments. Memory capacity is already well measured by other means, making redundant the informational analysis of absolute judgments.
Originality/value – This paper presents a long-overdue comprehensive critical review of the established interpretation of absolute judgments in terms of "information transmitted". An inevitable conclusion is reached: published measurements of information transmitted actually measure memory capacity. A new, qualitative model is offered for the role of memory in absolute judgments. The model is well supported by recently revealed empirical properties of absolute judgments.
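The peak-then-decline claim in the Findings can be illustrated with a toy model (a sketch of the idea, not the paper's own model): assume the subject holds about M = 7 category anchors in memory, answers correctly when the stimulus matches a remembered anchor, and guesses uniformly otherwise. Information transmitted then rises with the number of categories n, peaks near log2(M) bits, and declines.

```python
import numpy as np

def mutual_info(p: np.ndarray) -> float:
    """Mutual information (bits) of a joint probability matrix p(s, r)."""
    ps, pr = p.sum(axis=1, keepdims=True), p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / (ps @ pr)[nz])).sum())

M = 7  # assumed memory capacity in items (hypothetical, for illustration)
for n in (2, 4, 7, 10, 15, 25):
    hit = min(1.0, M / n)                      # P(correct identification)
    cond = np.full((n, n), (1 - hit) / n)      # uniform guessing spread
    np.fill_diagonal(cond, hit + (1 - hit) / n)
    joint = cond / n                           # equiprobable stimuli
    print(n, "categories:", round(mutual_info(joint), 2), "bits")
```

Real data will not follow this toy exactly, but the qualitative shape, a rise to a memory-set peak followed by decline, is the signature the model predicts.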
Paradigm versus praxis: why psychology "absolute identification" experiments do not reveal sensory processes
Purpose – A key cybernetics concept, the information transmitted in a system, was quantified by Shannon. It quickly gained prominence, inspiring a version by the Harvard psychologists Garner and Hake for "absolute identification" experiments. There, human subjects "categorize" sensory stimuli, affording "information transmitted" in perception. The Garner-Hake formulation has been in continuous use for 62 years, exerting enormous influence. But some experienced theorists and reviewers have criticized it as uninformative. They could not explain why, and were ignored. Here, the "why" is answered. The paper aims to discuss these issues.
Design/methodology/approach – A key Shannon data-organizing tool is the confusion matrix. Its columns and rows are, respectively, labeled by "symbol sent" (event) and "symbol received" (outcome), such that matrix entries represent how often outcomes actually corresponded to events. Garner and Hake made their own version of the matrix, which deserves scrutiny, and is minutely examined here.
Findings – The Garner-Hake confusion-matrix columns represent "stimulus categories", ranges of some physical stimulus attribute (usually intensity), and its rows represent "response categories" of the subject's identification of the attribute. The matrix entries thus show how often an identification empirically corresponds to an intensity, such that "outcomes" and "events" differ in kind (unlike Shannon's). Obtaining a true "information transmitted" therefore requires stimulus categorizations to be converted to hypothetical evoking stimuli, achievable (in principle) by relating categorization to sensation to intensity. But those relations are actually unknown, perhaps unknowable.
Originality/value – The author achieves an important understanding: why "absolute identification" experiments do not illuminate sensory processes.
Sensory Systems as Cybernetic Systems that Require Awareness of Alternatives to Interact with the World: Analysis of the Brain-Receptor Loop in Norwich's Entropy Theory of Perception
Introduction & Objectives: Norwich's Entropy Theory of Perception (1975 [1] to present) stands alone. It explains many firing-rate behaviors and psychophysical laws from bare theory. To do so, it demands a unique sort of interaction between receptor and brain, one that Norwich never substantiated. Can it now be confirmed, given the accumulation of empirical sensory neuroscience?
Background: Norwich conjoined sensation and a mathematical model of communication, Shannon's Information Theory, as follows: "In the entropic view of sensation, magnitude of sensation is regarded as a measure of the entropy or uncertainty of the stimulus signal" [2]. "To be uncertain about the outcome of an event, one must first be aware of a set of alternative outcomes" [3]. "The entropy-establishing process begins with the generation of a [internal] sensory signal by the stimulus generator. This is followed by receipt of the [external] stimulus by the sensory receptor, transmission of action potentials by the sensory neurons, and finally recapture of the [response to the internal] signal by the generator" [4]. The latter "recapture" differentiates external from internal stimuli. The hypothetical "stimulus generators" are internal emitters that generate photons in vision, audible sounds in audition (to Norwich, the spontaneous otoacoustic emissions [SOAEs]), "temperatures in excess of local skin temperature" in skin temperature sensation [4], etc.
Method (1): Several decades of the empirical sensory-physiology literature were scrutinized for internal "stimulus generators".
Results (1): Spontaneous photopigment isomerization ("dark light") does not involve visible light. SOAEs are electromechanical basilar-membrane artefacts that rarely produce audible tones. The skin's temperature sensors do not raise skin temperature, etc.
Method (2): The putative action of the brain-and-sensory-receptor loop was carefully reexamined.
Results (2): The sensory receptor allegedly "perceives", experiences "awareness", possesses "memory", and has a "mind". But those traits describe the whole human. The receptor, thus anthropomorphized, must therefore contain its own perceptual loop, containing a receptor, containing a perceptual loop, etc.
Summary & Conclusions: The Entropy Theory demands sensory awareness of alternatives, through an imagined brain-and-sensory-receptor loop containing internal "stimulus generators". But (1) no internal "stimulus generators" seem to exist and (2) the loop would be the outermost of an infinite nesting of identical loops.
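In schematic form, the central claim quoted above can be written as a proportionality between sensation magnitude F and the Shannon uncertainty H over the receptor's m "alternative outcomes". (This is a compressed rendering of the quoted claims; Norwich's publications derive particular forms of H from signal-sampling assumptions not reproduced here.)

```latex
F = kH, \qquad H = -\sum_{i=1}^{m} p_i \log p_i
```

A nonzero F thus requires a nonzero H, and a nonzero H requires the very awareness of alternatives that, per the Results above, the receptor cannot possess.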
Norwich's Entropy Theory: how not to go from abstract to actual
Purpose – The purpose of this paper is to ask whether a first-order-cybernetics concept, Shannon's Information Theory, actually allows a far-reaching mathematics of perception allegedly derived from it, Norwich et al.'s "Entropy Theory of Perception".
Design/methodology/approach – All of the Entropy Theory, 35 years of publications, was scrutinized for its characterization of what underlies Shannon Information Theory: Shannon's "general communication system". There, "events" are passed by a "source" to a "transmitter", thence through a "noisy channel" to a "receiver", that passes "outcomes" (received events) to a "destination".
Findings – In the Entropy Theory, "events" were sometimes interactions with the stimulus, but could be microscopic stimulus conditions. "Outcomes" often went unnamed; sometimes the stimulus, or the interaction with it, or the resulting sensation, were "outcomes". A "source" was often implied to be a "transmitter", which frequently was a primary afferent neuron; elsewhere, the stimulus was the "transmitter" and perhaps also the "source". "Channel" was rarely named; once, it was the whole eye; once, the incident photons; elsewhere, the primary or secondary afferent. "Receiver" was usually the sensory receptor, but could be an afferent. "Destination" went unmentioned. In sum, the Entropy Theory's idea of Shannon's "general communication system" was entirely ambiguous.
Research limitations/implications – The ambiguities indicate that, contrary to claim, the Entropy Theory cannot be an "information theoretical description of the process of perception".
Originality/value – Scrutiny of the Entropy Theory's use of information theory was overdue and reveals incompatibilities that force a reconsideration of information theory's possible role in perception models. A second-order-cybernetics approach is suggested.
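To see why the ambiguity is fatal, consider a minimal sketch of Shannon's five-role system in code. The anatomical casting below (receptor as transmitter, afferent as channel, brain as receiver) is an illustrative assumption of mine, not the Entropy Theory's; the point is only that each role must be assigned once and kept fixed for the bookkeeping of "events" and "outcomes" to mean anything.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class CommunicationSystem:
    """Shannon's 'general communication system' as five fixed roles."""
    source: Callable[[], str]           # emits events (here, stimuli)
    transmitter: Callable[[str], str]   # encodes events into signals
    channel: Callable[[str], str]       # carries signals (may add noise)
    receiver: Callable[[str], str]      # decodes signals into outcomes
    destination: Callable[[str], None]  # consumes the outcomes

    def run(self) -> None:
        event = self.source()
        outcome = self.receiver(self.channel(self.transmitter(event)))
        self.destination(outcome)

# One illustrative (assumed) anatomical casting -- not the Entropy Theory's:
system = CommunicationSystem(
    source=lambda: "bright flash",                 # distal stimulus
    transmitter=lambda e: f"spikes[{e}]",          # receptor encoding
    channel=lambda s: s,                           # afferent, noiseless here
    receiver=lambda s: s.removeprefix("spikes[").removesuffix("]"),
    destination=print,                             # whoever reads outcomes
)
system.run()  # prints "bright flash"
```

Under any such fixed assignment, every probability in the information calculation refers to a definite box, which is exactly the discipline the Findings show the Entropy Theory never imposed.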
- …