
    Simulation and Theory of Large-Scale Cortical Networks

    Cerebral cortex is composed of intricate networks of neurons. These neuronal networks are strongly interconnected: every neuron receives, on average, input from thousands of presynaptic neurons. In fact, to support such a number of connections, the majority of the volume of cortical gray matter is filled by axons and dendrites. Beyond the networks, the neurons themselves are also highly complex: they possess an elaborate spatial structure and support various types of active processes and nonlinearities. In the face of such complexity, it seems necessary to abstract away some of the details and to investigate simplified models. In this thesis, such simplified models of neuronal networks are examined at varying levels of abstraction. Neurons are modeled as point neurons, both rate-based and spike-based, and networks are modeled as block-structured random networks. Crucially, at this level of abstraction, the models remain amenable to analytical treatment using the framework of dynamical mean-field theory. The main focus of this thesis is to leverage the analytical tractability of random networks of point neurons in order to relate the network structure and the neuron parameters to the dynamics of the neurons, or, in physics parlance, to bridge across the scales from neurons to networks. More concretely, four different models are investigated: 1) fully connected feedforward networks and vanilla recurrent networks of rate neurons; 2) block-structured networks of rate neurons in continuous time; 3) block-structured networks of spiking neurons; and 4) a multi-scale, data-based network of spiking neurons. We consider the first class of models in light of Bayesian supervised learning and compute their kernel in the infinite-size limit. In the second class of models, we connect dynamical mean-field theory with large-deviation theory, calculate fluctuations beyond mean field, and perform parameter inference. For the third class of models, we develop a theory for the autocorrelation time of the neurons. Lastly, we consolidate data across multiple modalities into a layer- and population-resolved model of human cortex and compare its activity with cortical recordings. In two detours from the investigation of these four network models, we examine the distribution of neuron densities in cerebral cortex and present a software toolbox for mean-field analyses of spiking networks.
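
The vanilla recurrent rate network named in point 1) is the classic setting for dynamical mean-field theory. Below is a minimal sketch of such a network with random Gaussian coupling; the gain, time constant, and tanh nonlinearity are illustrative assumptions, not the thesis's actual parameters.

```python
import numpy as np

# Minimal sketch of a vanilla recurrent rate network with random Gaussian
# coupling, the classic setting for dynamical mean-field theory.
rng = np.random.default_rng(0)
N, g, tau, dt = 500, 1.5, 1.0, 0.01                # size, gain, time constant, step

J = g * rng.standard_normal((N, N)) / np.sqrt(N)   # couplings with variance g^2/N
x = rng.standard_normal(N)                         # initial state

trace = []
for _ in range(5000):
    # Euler step of  tau * dx/dt = -x + J @ tanh(x)
    x = x + (dt / tau) * (-x + J @ np.tanh(x))
    trace.append(x[0])

# For g > 1, mean-field theory predicts chaotic activity with a finite
# autocorrelation time; the single-unit trace above decorrelates accordingly.
print(np.std(trace))
```

For g < 1 the trivial fixed point is stable; the transition at g = 1 is one of the standard results such mean-field analyses recover.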

    A sensorimotor account of visual attention in natural behaviour

    The real-world sensorimotor paradigm is based on the premise that sufficient ecological complexity is a prerequisite for inducing naturally relevant sensorimotor relations in the experimental context. The aim of this thesis is to embed visual attention research within the real-world sensorimotor paradigm using an innovative mobile gaze-tracking system (EyeSeeCam, Schneider et al., 2009). Common laboratory set-ups in the field of attention research fail to create a natural two-way interaction between observer and situation because they deliver pre-selected stimuli and the human observer remains essentially passive. EyeSeeCam, by contrast, permits an experimental design whereby the observer freely and spontaneously engages in real-world situations. By aligning a video camera in real time to the movements of the eyes, the system directly measures the observer’s perspective in a video recording and thus allows us to study vision in the context of authentic human behaviour, namely as resulting from past actions and as originating future actions. The results of this thesis demonstrate that (1) humans, when freely exploring natural environments, prefer directing their attention to local structural features of the world, (2) eyes, head and body perform distinct functions throughout this process, and (3) coordinated eye and head movements do not fully stabilize but rather continuously adjust the retinal image, even during periods of quasi-stable “fixation”. These findings validate and extend the common laboratory concept of feature salience within whole-body sensorimotor actions outside the laboratory. Head and body movements roughly orient gaze, potentially driven by early stages of processing. The eyes then fine-tune the direction of gaze, potentially during higher-level stages of visual-spatial behaviour (Studies 1 and 2). Additional head-centred recordings reveal distinctive spatial biases both in the visual stimulation and in the spatial allocation of gaze generated in a particular real-world situation. These spatial structures may result both from the environment and from the idiosyncrasies of the natural behaviour afforded by the situation. By contrast, when the head-centred videos are re-played as stimuli in the laboratory, gaze directions reveal a bias towards the centre of the screen. This “central bias” is likely a consequence of the laboratory set-up, with its limitation to eye-in-head movements and its restricted screen (Study 3). Temporal analysis of natural visual behaviour reveals frequent synergistic interactions of eye and head that direct rather than stabilize gaze in the quasi-stable eye-movement periods following saccades, leading to rich temporal dynamics of real-world retinal input (Study 4) that are typically not addressed in laboratory studies. Direct comparison with earlier data from the visual system of cats (CatCam), frequently taken as a proxy for human vision, shows that stabilizing eye movements play an even less dominant role in the natural behaviour of cats, highlighting the importance of realistic temporal dynamics of vision for models and experiments (Study 5). The approach and findings presented in this thesis demonstrate the need for and feasibility of real-world research on visual attention. Real-world paradigms permit the identification of relevant features triggered in the natural interplay between internal-physiological and external-situational sensorimotor factors. Realistic spatial and temporal characteristics of eye, head and body interactions are essential qualitative properties of reliable sensorimotor models of attention but are difficult to obtain under laboratory conditions. Taken together, the data and theory presented in this thesis suggest that visual attention does not represent a pre-processing stage of object recognition but rather is an integral component of embodied action in the real world.

    Towards Comprehensive Foundations of Computational Intelligence

    Although computational intelligence (CI) covers a vast variety of methods, it still lacks an integrative theory. Several proposals for CI foundations are discussed: computing and cognition as compression, meta-learning as search in the space of data models, (dis)similarity-based methods providing a framework for such meta-learning, and a more general approach based on chains of transformations. Many useful transformations that extract information from features are discussed. Heterogeneous adaptive systems are presented as a particular example of transformation-based systems, and the goal of learning is redefined to facilitate the creation of simpler data models. The need to understand data structures leads to techniques for logical and prototype-based rule extraction and to the generation of multiple alternative models, while the need to increase the predictive power of adaptive models leads to committees of competent models. Learning from partial observations is a natural extension towards reasoning based on perceptions, and an approach to intuitively solving such problems is presented. Throughout the paper, neurocognitive inspirations are used frequently and are especially important in modeling higher cognitive functions. Promising directions such as liquid and laminar computing are identified, and many open problems are presented.
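
To make the "chains of transformations" view concrete, here is a hypothetical sketch in which a model is a composition of feature transformations ending in a simple linear readout. The specific transformations and names are illustrative assumptions, not the paper's own formulation.

```python
import numpy as np

# Hypothetical sketch of a "chain of transformations": each stage maps the
# current representation into a new one, ending in a simple linear readout.
def standardize(X):
    return (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)

def random_features(X, n_out=50, seed=0):
    # one of many possible information-extracting transformations
    W = np.random.default_rng(seed).standard_normal((X.shape[1], n_out))
    return np.tanh(X @ W)

def chain(X, transforms):
    # compose the transformations left to right
    for t in transforms:
        X = t(X)
    return X

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)

Phi = chain(X, [standardize, random_features])
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # simple readout on the final representation
print(np.mean((Phi @ w - y) ** 2))
```

A committee of competent models, in this picture, is simply several such chains whose readouts are combined, each trusted only where it performs well.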

    Simulation Intelligence: Towards a New Generation of Scientific Methods

    The original "Seven Motifs" set forth a roadmap of essential methods for the field of scientific computing, where a motif is an algorithmic method that captures a pattern of computation and data movement. We present the "Nine Motifs of Simulation Intelligence", a roadmap for the development and integration of the essential algorithms necessary for a merger of scientific computing, scientific simulation, and artificial intelligence. We call this merger simulation intelligence (SI), for short. We argue the motifs of simulation intelligence are interconnected and interdependent, much like the components within the layers of an operating system. Using this metaphor, we explore the nature of each layer of the simulation intelligence operating system stack (SI-stack) and the motifs therein: (1) Multi-physics and multi-scale modeling; (2) Surrogate modeling and emulation; (3) Simulation-based inference; (4) Causal modeling and inference; (5) Agent-based modeling; (6) Probabilistic programming; (7) Differentiable programming; (8) Open-ended optimization; (9) Machine programming. We believe coordinated efforts between motifs offers immense opportunity to accelerate scientific discovery, from solving inverse problems in synthetic biology and climate science, to directing nuclear energy experiments and predicting emergent behavior in socioeconomic settings. We elaborate on each layer of the SI-stack, detailing the state-of-art methods, presenting examples to highlight challenges and opportunities, and advocating for specific ways to advance the motifs and the synergies from their combinations. Advancing and integrating these technologies can enable a robust and efficient hypothesis-simulation-analysis type of scientific method, which we introduce with several use-cases for human-machine teaming and automated science

    2022 roadmap on neuromorphic computing and engineering

    Modern computation based on the von Neumann architecture is now a mature, cutting-edge science. In the von Neumann architecture, processing and memory units are implemented as separate blocks interchanging data intensively and continuously, and this data transfer is responsible for a large part of the power consumption. The next generation of computer technology is expected to solve problems at the exascale, with 10^18 calculations per second. Even though these future computers will be incredibly powerful, if they are based on von Neumann-type architectures they will consume between 20 and 30 megawatts of power and will not have intrinsic, physically built-in capabilities to learn or to deal with complex data as our brain does. These needs can be addressed by neuromorphic computing systems, which are inspired by the biological concepts of the human brain. This new generation of computers has the potential to be used for the storage and processing of large amounts of digital information with much lower power consumption than conventional processors. Among their potential future applications, an important niche is moving control from data centers to edge devices. The aim of this roadmap is to present a snapshot of the present state of neuromorphic technology and provide an opinion on the challenges and opportunities that the future holds in the major areas of neuromorphic technology, namely materials, devices, neuromorphic circuits, neuromorphic algorithms, applications, and ethics. The roadmap is a collection of perspectives in which leading researchers in the neuromorphic community provide their own view of the current state and the future challenges of each research area. We hope that this roadmap will be a useful resource, providing a concise yet comprehensive introduction for readers outside this field and for those who are just entering it, as well as future perspectives for those who are well established in the neuromorphic computing community.
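
As an illustration of the algorithmic side of the roadmap, the sketch below simulates a leaky integrate-and-fire neuron, a basic building block of many neuromorphic systems, in which memory (the membrane state) and processing (thresholding) are co-located rather than separated as in a von Neumann machine. All parameters are illustrative assumptions.

```python
import numpy as np

# Toy leaky integrate-and-fire (LIF) neuron: the membrane state is both
# the memory and the site of computation, unlike a von Neumann machine.
dt, tau, v_th, v_reset = 1e-3, 20e-3, 1.0, 0.0    # illustrative parameters
v, spikes = 0.0, []
rng = np.random.default_rng(0)
current = 1.2 + 0.3 * rng.standard_normal(1000)   # noisy input drive, 1 s at 1 ms steps

for t, I in enumerate(current):
    v += (dt / tau) * (-v + I)    # leaky integration of the input current
    if v >= v_th:                 # a threshold crossing emits a spike...
        spikes.append(t * dt)
        v = v_reset               # ...and resets the membrane potential

print(f"{len(spikes)} spikes in 1 s")
```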

    An Electroencephalogram (EEG) Based Biometrics Investigation for Authentication: A Human-Computer Interaction (HCI) Approach

    Electroencephalogram (EEG) devices are one of the active research areas in human-computer interaction (HCI). They provide a unique brain-machine interface (BMI) for interacting with a growing number of applications. EEG devices interface with computational systems, including traditional desktop computers and, more recently, mobile devices. These computational systems can be targeted by malicious users. There is clearly an opportunity to leverage EEG capabilities to increase the efficiency of access control mechanisms, which are the first line of defense in any computational system. Access control mechanisms rely on a number of authenticators, including “what you know”, “what you have”, and “what you are”. The “what you are” authenticator, formally known as a biometrics authenticator, is increasingly gaining acceptance. It uses an individual’s unique features, such as fingerprints and facial images, to properly authenticate users. An emerging approach in physiological biometrics is cognitive biometrics, which measures the brain’s response to stimuli. These stimuli can be measured by a number of devices, including EEG systems. This work presents an approach to authenticating users interacting with their computational devices through the use of EEG devices. The results demonstrate the feasibility of using a unique, hard-to-forge trait as an absolute biometrics authenticator by exploiting the signals generated by different areas of the brain when exposed to visual stimuli. The outcome of this research highlights the importance of the prefrontal cortex and temporal lobes in capturing unique responses to images that trigger emotional responses. Additionally, logarithmic band power processing combined with LDA as the machine learning algorithm provides higher accuracy than common spatial patterns or windowed-means processing in combination with GMM and SVM machine learning algorithms. These results further validate the value of logarithmic band power processing and LDA when applied to oscillatory processes.
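
The winning pipeline named in the abstract, logarithmic band power features followed by LDA, can be sketched as below. The band edges, sampling rate, and synthetic data are assumptions for illustration, not the study's actual recordings or preprocessing.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Sketch of the pipeline: logarithmic band power features fed to LDA.
# The random "EEG" data means accuracy here is only at chance level.
fs, n_trials, n_channels, n_samples = 256, 100, 8, 512

rng = np.random.default_rng(0)
X = rng.standard_normal((n_trials, n_channels, n_samples))  # stand-in EEG epochs
y = rng.integers(0, 2, n_trials)                            # stand-in subject labels

def log_band_power(epochs, fs, band=(8, 13)):
    # FFT power spectrum, summed over an alpha-band window per channel
    freqs = np.fft.rfftfreq(epochs.shape[-1], 1 / fs)
    psd = np.abs(np.fft.rfft(epochs, axis=-1)) ** 2
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return np.log(psd[..., mask].sum(axis=-1))   # log band power per channel

features = log_band_power(X, fs)                 # shape: (trials, channels)
clf = LinearDiscriminantAnalysis().fit(features, y)
print(clf.score(features, y))
```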

    Numerical modelling of additive manufacturing process for stainless steel tension testing samples

    Additive manufacturing (AM) technologies, including 3D printing, are growing rapidly and are expected to replace conventional subtractive manufacturing technologies to some extent. During a selective laser melting (SLM) process, one of the most popular AM technologies for metals, a large amount of heat is required to melt the metal powder, and this leads to distortion and/or shrinkage of the additively manufactured parts. It is useful to predict these distortions and shrinkages before 3D printing so that they can be controlled. This study develops a two-phase numerical modelling and simulation process for the AM of 17-4PH stainless steel, considering the importance of post-processing and the need for calibration to achieve high-quality printing. Using the proposed AM modelling and simulation process, optimal process parameters, material properties, and topology can be obtained to ensure a part is 3D printed successfully.
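
As a schematic of the thermal side of such a model, the sketch below solves 1D transient heat conduction with an explicit finite-difference scheme. The material constants, geometry, and instantaneous "laser" heating are placeholder assumptions, far simpler than a genuine SLM process model.

```python
import numpy as np

# Minimal 1D explicit finite-difference heat-conduction sketch, hinting at
# the thermal phase of an SLM process model. All values are placeholders.
alpha = 4e-6                 # thermal diffusivity (m^2/s), stainless-steel order
L, nx = 0.01, 101            # 10 mm domain, grid points
dx = L / (nx - 1)
dt = 0.4 * dx ** 2 / alpha   # respects the explicit stability limit dt <= dx^2/(2*alpha)

T = np.full(nx, 300.0)       # initial temperature (K)
T[nx // 2] = 1700.0          # localized "laser" heating near the melting point

for _ in range(200):
    # second-order central difference for d^2T/dx^2; boundaries held at 300 K
    T[1:-1] += alpha * dt / dx ** 2 * (T[2:] - 2 * T[1:-1] + T[:-2])

print(f"peak temperature after cooling: {T.max():.0f} K")
```

The distortion-prediction phase would feed such a temperature history into a thermo-mechanical solve, which is beyond this toy sketch.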

    Laboratory Directed Research and Development Program FY 2008 Annual Report

