Turing Enduring: Information Processing by Brains & Machines
A Symposium Organized by Columbia, New York, and Rockefeller Universities
Increasing power for voxel-wise genome-wide association studies : the random field theory, least square kernel machines and fast permutation procedures
Imaging traits are thought to have more direct links to genetic variation than diagnostic measures based on cognitive or clinical assessments, and so provide a powerful substrate for examining the influence of genetics on the human brain. Although imaging genetics has attracted growing attention and interest, most brain-wide genome-wide association studies rely on voxel-wise single-locus approaches, which neither exploit the spatial information in images nor combine the effects of multiple genetic variants. In this paper we present a fast implementation of voxel- and cluster-wise inference based on random field theory to make full use of the spatial information in images. The approach is combined with a multi-locus model based on least-squares kernel machines to associate the joint effect of several single nucleotide polymorphisms (SNPs) with imaging traits. We also propose a fast permutation procedure that significantly reduces the number of permutations needed relative to the standard empirical method and provides accurate small p-value estimates based on parametric tail approximation. We explored the relation between 448,294 single nucleotide polymorphisms in 18,043 genes and 31,662 voxels of the entire brain across 740 elderly subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI). Structural MRI scans were analyzed using tensor-based morphometry (TBM) to compute 3D maps of regional brain volume differences relative to an average template image based on healthy elderly subjects. We found the method to be more sensitive than voxel-wise single-locus approaches. A number of genes were identified as having significant associations with volumetric changes. The most strongly associated gene was GRIN2B, which encodes the NR2B subunit of the N-methyl-D-aspartate (NMDA) glutamate receptor and affects both the parietal and temporal lobes in human brains. Its role in Alzheimer's disease has been widely acknowledged and studied, supporting the validity of the approach.
These advantages over existing approaches indicate the great potential of this novel framework for detecting genetic influences on the human brain.
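The idea behind a parametric tail approximation for permutation p-values can be sketched as follows. This is an illustrative implementation, not the authors' actual code: it assumes a generalized Pareto distribution (GPD) fit to the upper tail of the permutation distribution, a standard tail-approximation technique, and falls back to the empirical estimate when enough permuted statistics exceed the observed one.

```python
import numpy as np
from scipy import stats

def permutation_pvalue(stat_obs, perm_stats, tail_frac=0.1):
    """Estimate a possibly very small p-value from limited permutations.

    The empirical p-value is (1 + #exceedances) / (1 + n_perm), which
    cannot go below 1 / (1 + n_perm). When few permuted statistics
    exceed the observed one, fit a GPD to the excesses over a high
    threshold and extrapolate into the tail (parametric tail
    approximation). `tail_frac` (the fraction of permutations treated
    as "tail") is an illustrative choice, not a prescribed value.
    """
    perm_stats = np.asarray(perm_stats)
    n = perm_stats.size
    n_exceed = np.sum(perm_stats >= stat_obs)
    if n_exceed >= 10:  # enough exceedances: the empirical estimate is reliable
        return (1 + n_exceed) / (1 + n)
    # Fit a GPD to excesses over the (1 - tail_frac) quantile threshold.
    thresh = np.quantile(perm_stats, 1 - tail_frac)
    excess = perm_stats[perm_stats > thresh] - thresh
    c, _, scale = stats.genpareto.fit(excess, floc=0.0)
    # P(T >= t) ~= tail_frac * P(GPD excess >= t - thresh)
    return tail_frac * stats.genpareto.sf(stat_obs - thresh, c, loc=0.0, scale=scale)
```

With, say, 1,000 permutations the empirical estimate bottoms out at about 10^-3, whereas the GPD extrapolation can return much smaller, still well-calibrated p-values, which is what makes brain-wide multiple-testing correction feasible with far fewer permutations.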
Minds, Brains and Programs
This article can be viewed as an attempt to explore the consequences of two propositions. (1) Intentionality in human beings (and animals) is a product of causal features of the brain. I assume this is an empirical fact about the actual causal relations between mental processes and brains; it says simply that certain brain processes are sufficient for intentionality. (2) Instantiating a computer program is never by itself a sufficient condition of intentionality. The main argument of this paper is directed at establishing this claim. The form of the argument is to show how a human agent could instantiate the program and still not have the relevant intentionality. These two propositions have the following consequences: (3) The explanation of how the brain produces intentionality cannot be that it does it by instantiating a computer program. This is a strict logical consequence of 1 and 2. (4) Any mechanism capable of producing intentionality must have causal powers equal to those of the brain. This is meant to be a trivial consequence of 1. (5) Any attempt literally to create intentionality artificially (strong AI) could not succeed just by designing programs but would have to duplicate the causal powers of the human brain. This follows from 2 and 4.
Alternative Techniques of Neural Signal Processing in Neuroengineering
Neural signal processing is a discipline within neuroengineering. This interdisciplinary approach combines principles from machine learning, signal processing theory, and computational neuroscience applied to problems in basic and clinical neuroscience. The ultimate goal of neuroengineering is a technological revolution, where machines would interact in real time with the brain. Machines and brains could interface, enabling normal function in cases of injury or disease, brain monitoring, and/or medical rehabilitation of brain disorders.
Much current research in neuroengineering is focused on understanding the coding and processing of information in the sensory and motor systems, quantifying how this processing is altered in the pathological state, and how it can be manipulated through interactions with artificial devices, including brain–computer interfaces and neuroprosthetics.
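As a concrete illustration of the signal-processing side of this work, a classic neural-signal feature is the spectral power of a recording within a frequency band; for example, mu-band (roughly 8–12 Hz) power over motor cortex drops during imagined movement, a change exploitable by a brain–computer interface. The following is a minimal sketch of such a feature extractor, not a method from the abstract above:

```python
import numpy as np
from scipy import signal

def band_power(eeg, fs, band):
    """Total spectral power of `eeg` within frequency band (lo, hi) Hz.

    Uses Welch's method to estimate the power spectral density, then
    integrates the PSD over the band (frequency bins are uniformly
    spaced, so a sum times the bin width suffices).
    """
    freqs, psd = signal.welch(eeg, fs=fs, nperseg=min(len(eeg), fs * 2))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].sum() * (freqs[1] - freqs[0])
```

A decoder for a brain–computer interface might threshold or classify such band-power features, computed over short sliding windows, to turn ongoing neural activity into a control signal.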
Pioneers on the air: BBC radio broadcasts on computers and A.I., 1946-56
Between 1946 and 1956, a number of BBC radio broadcasts were made by pioneers in the fields of computing, artificial intelligence and cybernetics. Although no sound recordings of the broadcasts survive, transcripts are held at the BBC's Written Archives Centre at Caversham in the UK. This paper is based on a study of these transcripts, which have received little attention from historians.
The paper surveys the range of computer-related broadcasts during 1946–1956 and discusses some recurring themes from the broadcasts, especially the relationship of 'artificial intelligence' to human intelligence. Additionally, it discusses the context of the broadcasts, both in relation to the BBC and to contemporary awareness of computers.
Building machines that adapt and compute like brains
Building machines that learn and think like humans is essential not only for cognitive science, but also for computational neuroscience, whose ultimate goal is to understand how cognition is implemented in biological brains. A new cognitive computational neuroscience should build cognitive-level and neural-level models, understand their relationships, and test both types of models with both brain and behavioral data.
Comment: Commentary on: Lake BM, Ullman TD, Tenenbaum JB, Gershman SJ. (2017) Building machines that learn and think like people. Behavioral and Brain Sciences, 4
Brain-inspired conscious computing architecture
What type of artificial systems will claim to be conscious and will claim to experience qualia? The ability to comment upon the physical states of a brain-like dynamical system coupled with its environment seems to be sufficient to make such claims. The flow of internal states in such a system, guided and limited by associative memory, is similar to the stream of consciousness. Minimal requirements for an artificial system that will claim to be conscious are given in the form of a specific architecture, named the articon. Nonverbal discrimination of the working-memory states of the articon gives it the ability to experience different qualities of internal states. Analysis of the inner state flows of such a system during a typical behavioral process shows that qualia are inseparable from perception and action. The role of consciousness in the learning of skills, when conscious information processing is replaced by subconscious processing, is elucidated. Arguments confirming that phenomenal experience is a result of cognitive processes are presented. Possible philosophical objections based on the Chinese room and other arguments are discussed, but they are insufficient to refute the articon's claims. Conditions for genuine understanding that go beyond the Turing test are presented. Articons may fulfill such conditions, and in principle the structure of their experiences may be arbitrarily close to that of humans.
Against simplicity and cognitive individualism: Nathaniel T. Wilcox
Neuroeconomics illustrates our deepening descent into the details of individual cognition. This descent is guided by the implicit assumption that the 'individual human' is the important 'agent' of neoclassical economics. I argue here that this assumption is neither obviously correct, nor of primary importance to human economies. In particular I suggest that the main genius of the human species lies with its ability to distribute cognition across individuals, and to incrementally accumulate physical and social cognitive artifacts that largely obviate the innate biological limitations of individuals. If this is largely why our economies grow, then we should be much more interested in distributed cognition in human groups, and correspondingly less interested in individual cognition. We should also be much more interested in the cultural accumulation of cognitive artifacts: computational devices and media, social structures and economic institutions.