
    Learning nonlinear visual processing from natural images

    The paradigm of computational vision hypothesizes that any visual function -- such as recognizing your grandparent -- can be replicated by computational processing of the visual input. What are the computations that the brain performs? What should or could they be? Working on the latter question, this dissertation takes the statistical approach, in which suitable computations are learned from natural visual data itself. In particular, we empirically study the computational processing that emerges from the statistical properties of the visual world and from the constraints and objectives specified for the learning process. This thesis consists of an introduction and seven peer-reviewed publications; the purpose of the introduction is to present the area of study to a reader who is not familiar with computational vision research. In the introduction, we briefly review the primary challenges of visual processing and recall some current views on visual processing in the early visual systems of animals. Next, we describe the methodology used in our research and discuss the presented results. We have added to this discussion some remarks, speculations and conclusions that were not featured in the original publications. We present the following results in the publications of this thesis. First, we empirically demonstrate that luminance and contrast are strongly dependent in natural images, contradicting previous theories suggesting that luminance and contrast are processed separately in natural systems due to their independence in the visual data. Second, we show that simple-cell-like receptive fields of the primary visual cortex can be learned in the nonlinear contrast domain by maximization of independence.
Further, we provide first reports of the emergence of conjunctive (corner-detecting) and subtractive (opponent-orientation) processing due to nonlinear projection pursuit with simple objective functions related to sparseness and response-energy optimization. Then, we show that attempting to extract independent components of nonlinear histogram statistics of a biologically plausible representation leads to projection directions that appear to differentiate between visual contexts. Such processing might be applicable to priming, i.e., the selection and tuning of later visual processing. We continue by showing that a different kind of thresholded low-frequency priming can be learned and used to make object detection faster with little loss in accuracy. Finally, we show that in a computational object detection setting, nonlinearly gain-controlled visual features of medium complexity can be acquired sequentially as images are encountered and discarded. We present two online algorithms to perform this feature selection, and propose the idea that for artificial systems, some processing mechanisms could be selected from the environment without optimizing the mechanisms themselves. In summary, this thesis explores learning visual processing on several levels. The learning can be understood as an interplay of input data, model structures, learning objectives, and estimation algorithms. The presented work adds to the growing body of evidence that statistical methods can be used to acquire intuitively meaningful visual processing mechanisms. The work also presents some predictions and ideas regarding biological visual processing.
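The sparseness-based learning objectives discussed above can be illustrated with a toy projection-pursuit sketch. This is not the dissertation's method (which operates in a nonlinear contrast domain on real natural images); it is a minimal, hypothetical example using synthetic sparse sources and the kurtosis-maximizing one-unit FastICA update:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for natural-image data: sparse (Laplacian) sources
# linearly mixed into 16-dimensional "patches".
n, d = 5000, 16
A = rng.normal(size=(d, d))
X = rng.laplace(size=(n, d)) @ A.T
X -= X.mean(axis=0)

# Whiten the data (projection pursuit is conventionally run in whitened space).
vals, vecs = np.linalg.eigh(np.cov(X.T))
Z = (X @ vecs) / np.sqrt(vals)

# One-unit fixed-point iteration maximizing kurtosis (a sparseness measure),
# as in FastICA with the cubic nonlinearity g(u) = u**3.
w = rng.normal(size=d)
w /= np.linalg.norm(w)
for _ in range(200):
    u = Z @ w
    w = (Z * (u ** 3)[:, None]).mean(axis=0) - 3.0 * w
    w /= np.linalg.norm(w)

# The learned projection recovers one sparse source: its distribution is
# strongly super-Gaussian, unlike a random (near-Gaussian) projection.
u = Z @ w
excess_kurtosis = (u ** 4).mean() - 3.0   # ~0 for a Gaussian projection
print(round(excess_kurtosis, 2))
```

A random unit vector would mix many sources and give a near-zero excess kurtosis; the fixed-point iteration instead locks onto a single heavy-tailed source direction.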

    Science of Facial Attractiveness


    Varieties of Attractiveness and their Brain Responses


    Visual attention in primates and for machines - neuronal mechanisms

    Visual attention is an important cognitive concept in the daily life of humans, but it is still not fully understood, and for this reason it is also rarely utilized in computer vision systems. Understanding visual attention is challenging because it has many, seemingly different aspects at both the neuronal and the behavioral level, so it is very hard to give a uniform explanation of visual attention that can account for all of them. To tackle this problem, this thesis aims to identify a common set of neuronal mechanisms that underlie both the neuronal and the behavioral aspects. The mechanisms are simulated by neuro-computational models, resulting in a single modeling approach that explains a wide range of phenomena at once. The chosen aspects are multiple neurophysiological effects, real-world object localization, and a visual masking paradigm (object substitution masking, OSM). In each of the considered fields, the work also advances the current state of the art to better understand that aspect of attention itself. The three chosen aspects show that the approach can account for crucial neurophysiological, functional, and behavioral properties, so the mechanisms might constitute the general neuronal substrate of visual attention in the cortex. As an outlook, our work provides computer vision with a deeper understanding and a concrete prototype of attention for incorporating this crucial aspect of human perception into future systems. Contents: 1. General introduction; 2. The state-of-the-art in modeling visual attention; 3. Microcircuit model of attention; 4. Object localization with a model of visual attention; 5. Object substitution masking; 6. General conclusion
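As one concrete illustration of a candidate neuronal mechanism of attention, the following sketch combines multiplicative attentional gain with divisive normalization, in the spirit of the normalization model of attention (Reynolds & Heeger, 2009). It is a hypothetical toy, not the microcircuit model developed in the thesis, and all numbers are invented:

```python
import numpy as np

# Bottom-up stimulus drive for two stimuli inside a neuron's receptive field.
drive = np.array([10.0, 10.0])          # equal sensory drive
attention_gain = np.array([2.0, 1.0])   # top-down gain favors stimulus 0
sigma = 1.0                             # semi-saturation constant

# Attention scales the drive before divisive normalization, so the
# attended stimulus wins the competition for the neuron's response.
excitation = attention_gain * drive
rates = excitation / (sigma + excitation.sum())

print(rates.round(3))   # attended stimulus dominates
```

Even though both stimuli deliver identical sensory drive, the attended one yields roughly twice the normalized response, reproducing the biased-competition flavor of many neurophysiological attention effects.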

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial positions of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it gives further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.

    Biologically Plausible Cortical Hierarchical-Classifier Circuit Extensions in Spiking Neurons

    Hierarchical categorization interleaved with sequence recognition of incoming stimuli in the mammalian brain is theorized to be performed by circuits composed of the thalamus and the six-layer cortex. Using these circuits, the cortex is thought to learn a ‘brain grammar’ composed of recursive sequences of categories. A thalamo-cortical, hierarchical-classification and sequence-learning “Core” circuit was implemented as a linear matrix simulation and published by Rodriguez, Whitson & Granger in 2004. In the brain, these functions are implemented by cortical and thalamic circuits composed of recurrently connected, spiking neurons. The Neural Engineering Framework (NEF) (Eliasmith & Anderson, 2003) allows for the construction of large-scale, biologically plausible neural networks. NEF models of the basal ganglia and the thalamus exist, but to the best of our knowledge there is no integrated, spiking-neuron, cortical-thalamic Core network model. We construct a more biologically plausible version of the hierarchical-classification function of the Core circuit using leaky integrate-and-fire neurons, which performs progressive visual classification of static image sequences by relying on neural activity levels to trigger the progressive classification of the stimulus. We proceed by implementing a recurrent NEF model of the cortical-thalamic Core circuit and then test the resulting model on the hierarchical categorization of images.
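The spiking units mentioned above can be sketched with a minimal leaky integrate-and-fire (LIF) simulation. This is not the NEF or the Core circuit itself, only the single-neuron model on which such networks are built; the parameter values are illustrative:

```python
# Euler simulation of a leaky integrate-and-fire neuron with constant input.
dt, T = 1e-4, 0.5                # 0.1 ms time step, 0.5 s of simulated time
tau, v_rest = 0.02, 0.0          # membrane time constant 20 ms, resting potential
v_th, v_reset = 1.0, 0.0         # spike threshold and reset (arbitrary units)
I = 1.5                          # suprathreshold input current

v, spikes = v_rest, 0
for _ in range(int(T / dt)):
    # Leaky integration: the membrane decays toward rest and charges with I.
    v += dt / tau * (-(v - v_rest) + I)
    if v >= v_th:                # threshold crossing: emit a spike and reset
        v = v_reset
        spikes += 1

print(spikes)                    # regular firing at roughly 45 Hz here
```

With these values the analytic interspike interval is tau * ln(I / (I - v_th)) ≈ 22 ms, so about 22 spikes are expected in the half-second window; activity levels like this spike count are what a Core-style circuit could use to trigger progressive classification.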

    Normalization Among Heterogeneous Population Confers Stimulus Discriminability on the Macaque Face Patch Neurons

    Primates are capable of recognizing faces even in highly cluttered natural scenes. In order to understand how the primate brain achieves face recognition despite this clutter, it is crucial to study the representation of multiple faces in face-selective cortex. However, contrary to the essence of natural scenes, most experiments in the face recognition literature use only a few faces at a time on a homogeneous background to study neural response properties. It thus remains unclear how face-selective neurons respond to multiple stimuli, some of which might be encompassed by their receptive fields (RFs) and others not. How is the neural representation of a face affected by the concurrent presence of other stimuli? Two lines of evidence lead to opposite predictions. First, given the importance of MAX-like operations for achieving selectivity and invariance, as suggested by feedforward circuitry for object recognition, face representations may not be compromised in the presence of clutter. On the other hand, the psychophysical crowding effect - the reduced discriminability (but not detectability) of an object in clutter - suggests that an object representation may be impaired by additional stimuli. To address this question, we conducted electrophysiological recordings in the macaque temporal lobe, where bilateral face-selective areas are tightly interconnected to form a hierarchical face processing stream. Assisted by functional MRI, these face patches could be targeted for single-cell recordings. For each neuron, the most preferred face stimulus was determined and then presented at the center of the neuron's RF. In addition, multiple stimuli (preferred or non-preferred) were presented in different numbers (0, 1, 2, 4 or 8), from different categories (face or non-face object), or at different proximities (adjacent to or separated from the center stimulus).
We found that the majority of neurons reduced their mean firing rates more (1) with increasing numbers of distractors, (2) with face distractors rather than non-face object distractors, and (3) at closer distractor proximity; additionally, (4) the response to multiple preferred faces depended on RF size. Although these findings in single neurons could indicate reduced discriminability, we found that each stimulus condition was well separated and decodable in a high-dimensional space spanned by the neural population. We showed that this was because the neuronal population was quite heterogeneous, yet changed its responses systematically as the stimulus parameters changed. Few neurons showed MAX-like behavior. These findings were explained by a divisive normalization model, highlighting the importance of the modular structure of the primate temporal lobe. Taken together, these data and modeling results indicate that neurons in the face patches acquire stimulus discriminability by virtue of the modularity of cortical organization, heterogeneity within the population, and the systematicity of the neural response.
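The divisive normalization account can be sketched numerically. In a standard normalization model (Carandini & Heeger), a neuron's drive is divided by the summed drive of a normalization pool, so adding distractors suppresses the response to the preferred face. The parameter values below are illustrative, not fitted to the recorded data:

```python
import numpy as np

def normalized_response(drives, w=1.0, sigma=0.5):
    """Divisive normalization: each stimulus drive is divided by a
    weighted sum of all drives in the pool plus a constant."""
    drives = np.asarray(drives, dtype=float)
    return drives / (sigma + w * drives.sum())

# A face-patch neuron's response to its preferred face (drive 10), alone
# vs. with 1, 3 or 7 distractor stimuli that each contribute drive 5.
responses = []
for n_distractors in (0, 1, 3, 7):
    pool = [10.0] + [5.0] * n_distractors
    responses.append(normalized_response(pool)[0])

print([round(r, 3) for r in responses])  # monotonically suppressed
```

The response to the preferred face falls monotonically as distractors are added, mirroring finding (1) above; making the distractors stronger (e.g., faces rather than non-face objects) would deepen the suppression in the same way.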

    Brain Computations and Connectivity [2nd edition]

    This is an open access title available under the terms of a CC BY-NC-ND 4.0 International licence. It is free to read on the Oxford Academic platform and offered as a free PDF download from OUP and selected open access locations. Brain Computations and Connectivity is about how the brain works. In order to understand this, it is essential to know what is computed by different brain systems and how the computations are performed. The aim of this book is to elucidate what is computed in different brain systems, and to describe current biologically plausible computational approaches and models of how each of these brain systems computes. Understanding the brain in this way has enormous potential for understanding ourselves better in health and in disease. Potential applications of this understanding are to the treatment of the brain in disease, and to artificial intelligence, which will benefit from knowledge of how the brain performs many of its extraordinarily impressive functions. This book is pioneering in taking this approach to brain function - considering what is computed by many of our brain systems, and how it is computed - and it updates, with much new evidence including the connectivity of the human brain, the earlier book Rolls (2021) Brain Computations: What and How, Oxford University Press. Brain Computations and Connectivity will be of interest to all scientists interested in brain function and how the brain works, whether they are from neuroscience; from the medical sciences, including neurology and psychiatry; from computational science, including machine learning and artificial intelligence; or from areas such as theoretical physics.

    Improving gesture recognition through spatial focus of attention

    Gestures are a common form of human communication and important for human-computer interfaces (HCI). Most recent approaches to gesture recognition use deep learning within multi-channel architectures. We show that when spatial attention is focused on the hands, gesture recognition improves significantly, particularly when the channels are fused using a sparse network. We propose an architecture (FOANet) that divides processing among four modalities (RGB, depth, RGB flow, and depth flow) and three spatial focus-of-attention regions (global, left hand, and right hand). The resulting 12 channels are fused using sparse networks. This architecture improves performance on the ChaLearn IsoGD dataset from a previous best of 67.71% to 82.07%, and on the NVIDIA dynamic hand gesture dataset from 83.8% to 91.28%. We extend FOANet to perform gesture recognition on continuous streams of data. We show that the best temporal fusion strategy for multi-channel networks depends on the modality (RGB vs. depth vs. flow field) and target (global vs. left hand vs. right hand) of the channel. The extended architecture achieves optimum performance using Gaussian pooling for global channels, LSTMs for focused (left hand or right hand) flow field channels, and late pooling for focused RGB and depth channels. The resulting system achieves a mean Jaccard Index of 0.7740, compared to the previous best result of 0.6103, on the ChaLearn ConGD dataset without first pre-segmenting the videos into single-gesture clips. Human vision has α and β channels for processing different modalities, in addition to spatial attention similar to FOANet. However, unlike FOANet, attention is not implemented through separate neural channels; instead, attention is implemented through top-down excitation of neurons corresponding to specific spatial locations within the α and β channels.
Motivated by covert attention in human vision, we propose a new architecture called CANet (Covert Attention Net) that merges the spatial attention channels while preserving the concept of attention. The focus layers of CANet allow it to focus attention on the hands without having dedicated attention channels. CANet outperforms FOANet, achieving an accuracy of 84.79% on the ChaLearn IsoGD dataset while being efficient (≈35% of FOANet's parameters and ≈70% of its operations). In addition to producing state-of-the-art results on multiple gesture recognition datasets, this thesis also tries to understand the behavior of multi-channel networks (à la FOANet). Multi-channel architectures are becoming increasingly common, setting the state of the art for performance in gesture recognition and other domains. Unfortunately, we lack a clear explanation of why multi-channel architectures outperform single-channel ones. This thesis considers two hypotheses. The Bagging hypothesis says that multi-channel architectures succeed because they average the results of multiple unbiased weak estimators in the form of different channels. The Society of Experts (SoE) hypothesis suggests that multi-channel architectures succeed because the channels differentiate themselves, developing expertise with regard to different aspects of the data; fusion layers then get to combine complementary information. This thesis presents two sets of experiments to distinguish between these hypotheses, and both sets support the SoE hypothesis, suggesting that multi-channel architectures succeed because their channels become specialized. Finally, we demonstrate the practical impact of the gesture recognition techniques discussed in this thesis in the context of a sophisticated human-computer interaction system. We developed a prototype system with a limited form of peer-to-peer communication in the context of blocks world.
    The prototype allows users to communicate with the avatar using gestures and speech, and to make the avatar build virtual block structures.
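The late-fusion step that both hypotheses concern can be sketched in a few lines. Below, per-channel class probabilities are simply averaged (the bagging-style baseline); the channel logits are invented for illustration and do not come from FOANet:

```python
import numpy as np

def softmax(z):
    # Numerically stable row-wise softmax.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical per-channel class logits for one clip (3 channels, 4 classes).
# Each channel alone is noisy; the flow channel acts as an "expert" here.
logits = np.array([
    [2.0, 1.8, 0.5, 0.1],   # RGB channel: ambiguous between classes 0 and 1
    [1.9, 2.1, 0.3, 0.2],   # depth channel: slightly prefers class 1
    [0.5, 4.0, 0.2, 0.1],   # flow channel: confident in class 1
])

# Late fusion by averaging channel probabilities.
fused = softmax(logits).mean(axis=0)
print(fused.argmax())   # → 1
```

Under the SoE hypothesis the interesting case is exactly this one: individually ambiguous channels plus one specialized channel, where a learned (e.g., sparse) fusion layer can weight the expert more heavily than a plain average does.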

    Computer simulation of a neurological model of learning

    A number of problems in psychology and neurology are discussed to orient the reader to a theory of neural integration. The importance of comprehensive temporal and spatial integration of the sensory, motor and motivational aspects of brain function is stressed, and it is argued that an extended neural template theory could provide such an integration. Contemporary solutions to the problem of neural integration are discussed. The available knowledge concerning the structure of neural tissue leads to the description of a theory of neural integration which might provide such neural templates. Integrating neurons are suggested to be organised in columns or pools. Sub-sets of neurons are formed as a result of excitation and can preferentially exchange excitation. These sub-sets, or Linked Constellations, would act as spatial templates to be matched with subsequent states of excitation. Inhibition acts to restrict spike emission to the most highly activated sub-sets. An initial computer simulation represented a simple learning or classical conditioning situation. In a variety of test runs the performance confirmed the main predictions of the theoretical model. The model was then extended to include representation of instrumental, consummatory, motivational and other aspects of behaviour. The intention of these further simulations was not to demonstrate the predictions of prior formulations but rather to use the computer to develop simulations progressively able to represent behaviour. Difficulties were encountered which were remedied by incorporating rhythmic mechanisms. A number of different versions of the model were explored. It was shown that the models could be trained to produce a different response to discriminative cues when those cues had previously signalled different contingencies for obtaining the opportunity to perform consummatory behaviour.
    A published experiment on the Spiral Illusion is reported, which confirmed predictions suggested by the model.
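The principle that inhibition restricts spike emission to the most highly activated sub-sets can be sketched as an iterative lateral-inhibition competition. This is a hypothetical abstraction of that idea, not the published model, and the activity values are invented:

```python
import numpy as np

# Iterative lateral inhibition over a pool of units: on each step every
# unit is suppressed in proportion to the total activity of its
# competitors, so activity is progressively restricted to the most
# highly activated sub-set of units.
act = np.array([0.9, 0.85, 0.4, 0.3, 0.1])
inhibition = 0.5
for _ in range(20):
    total = act.sum()
    act = np.clip(act - inhibition * (total - act) * 0.1, 0.0, None)

survivors = np.flatnonzero(act > 0)   # indices of units still active
print(survivors)
```

After the competition only the two strongest units remain active, forming the "Linked Constellation" analogue of this toy: a sub-set selected by mutual survival under shared inhibition.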