102 research outputs found

    Second order scattering descriptors predict fMRI activity due to visual textures

    Second-layer scattering descriptors are known to provide good classification performance on natural quasi-stationary processes such as visual textures, owing to their sensitivity to higher-order moments and their continuity with respect to small deformations. In a functional Magnetic Resonance Imaging (fMRI) experiment we present visual textures to subjects and evaluate the predictive power of these descriptors against that of simple contour energy, the first scattering layer. We conclude not only that invariant second-layer scattering coefficients better encode voxel activity, but also that well-predicted voxels need not lie in known retinotopic regions. (Comment: 3rd International Workshop on Pattern Recognition in NeuroImaging, 2013)

    Seeing it all: Convolutional network layers map the function of the human visual system

    Convolutional networks used for computer vision represent candidate models for the computations performed in mammalian visual systems. We use them as a detailed model of human brain activity during the viewing of natural images by constructing predictive models based on their different layers and BOLD fMRI activations. Analyzing the predictive performance across layers yields characteristic fingerprints for each visual brain region: early visual areas are better described by lower-level convolutional net layers and later visual areas by higher-level net layers, exhibiting a progression across ventral and dorsal streams. Our predictive model generalizes beyond brain responses to natural images. We illustrate this on two experiments, namely retinotopy and face-place oppositions, by synthesizing brain activity and performing classical brain mapping upon it. The synthesis recovers the activations observed in the corresponding fMRI studies, showing that this deep encoding model captures representations of brain function that are universal across experimental paradigms.
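The layer-wise encoding approach described above can be sketched as a regularized linear regression from a layer's features to a voxel's BOLD response, scored on held-out data. The following is a minimal illustration on synthetic data; the array shapes, the ridge penalty, the closed-form solver, and the R² scoring are illustrative assumptions, not the authors' pipeline:

```python
# Minimal sketch of a layer-to-voxel encoding model (synthetic data).
import numpy as np

def ridge_fit(X, y, alpha=1.0):
    # Closed-form ridge regression: w = (X^T X + alpha I)^-1 X^T y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ y)

def layer_score(X_train, y_train, X_test, y_test, alpha=1.0):
    # R^2 on held-out data: how well this layer's features predict the voxel.
    w = ridge_fit(X_train, y_train, alpha)
    pred = X_test @ w
    ss_res = np.sum((y_test - pred) ** 2)
    ss_tot = np.sum((y_test - y_test.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

rng = np.random.default_rng(0)
n, d = 200, 10
X = rng.normal(size=(n, d))                # stand-in for one layer's features
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)  # stand-in for a voxel's BOLD signal
score = layer_score(X[:150], y[:150], X[150:], y[150:])
```

Repeating this scoring for each network layer and each voxel would yield the per-region "fingerprint" the abstract describes: the layer with the highest held-out R² characterizes that voxel.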

    Deep Learning in Medical Image Analysis

    The accelerating power of deep learning in diagnosing diseases will empower physicians and speed up decision making in clinical environments. Applications of modern medical instruments and digitalization of medical care have generated enormous amounts of medical images in recent years. In this big data arena, new deep learning methods and computational models for efficient data processing, analysis, and modeling of the generated data are crucially important for clinical applications and understanding the underlying biological process. This book presents and highlights novel algorithms, architectures, techniques, and applications of deep learning for medical image analysis

    Vision : a model to study cognition

    Our senses – vision, audition, touch, taste and smell – constantly receive a large amount of information. This information is processed and used to guide our actions. Cognitive science studies mental abilities through different disciplines, e.g. linguistics, neuropsychology, neuroscience or modelling. Each discipline considers mental phenomena and their physical substrate, the nervous system, as a tool that processes information in order to guide behavior adaptively (Collins, Andler, & Tallon-Baudry, 2018). Cognitive functions are a collection of processing systems serving different goals, whose interactions are key to the complexity of cognition. Studying cognition often implies operationalizing each of these functions separately. For example, memory allows us to store and reuse information, and attention allows us to select the information relevant to the task at hand and to facilitate its processing. To characterize the processes of a specific cognitive function, it is thus necessary to provide the studied subject – here human and non-human primates – with information to be processed, through different sensory modalities. In this essay, we focus on vision as a unique model for studying cognition across different fields of cognitive science, from cognitive psychology to neuroscience, also touching briefly on modelling and neuropsychology. Our objective is neither to give an exhaustive description of the visual system, nor to compare vision in detail with other sensory modalities, but to argue that the accumulated evidence on the visual system, together with its characteristic perceptual, algorithmic and physiological organization, makes it a particularly rich model for studying cognitive functions. After a brief presentation of some properties of vision, we illustrate our argument by focusing on a specific cognitive function, attention, and in particular its study in cognitive psychology and neuroscience. We discuss how our knowledge of vision has allowed us to understand the behavioral and neuronal mechanisms underlying attentional selection and the facilitation of information processing. We conclude that sensory systems can be used as models to study cognition in different fields of cognitive science.

    How touch and hearing influence visual processing in sensory substitution, synaesthesia and cross-modal correspondences

    Sensory substitution devices (SSDs) systematically turn visual dimensions into patterns of tactile or auditory stimulation. After training, a user of these devices learns to translate these audio or tactile sensations back into a mental visual picture. Most previous SSDs translate greyscale images using intuitive cross-sensory mappings to help users learn the devices. However, more recent SSDs have started to incorporate additional colour dimensions such as saturation and hue. Chapter two examines how previous SSDs have translated the complexities of colour into hearing or touch. The chapter explores whether colour is useful for SSD users, how SSD and veridical colour perception differ, and how optimal cross-sensory mappings might be chosen. After long-term training, some blind users of SSDs report visual sensations from tactile or auditory stimulation. A related phenomenon is synaesthesia, a condition where stimulation of one modality (e.g. touch) produces an automatic, consistent and vivid sensation in another modality (e.g. vision). Tactile-visual synaesthesia is an extremely rare variant that can shed light on how the tactile-visual system is altered when touch can elicit visual sensations. Chapter three reports a series of investigations on the tactile discrimination abilities and phenomenology of tactile-vision synaesthetes, alongside questionnaire data from synaesthetes unavailable for testing. Chapter four introduces a new SSD to test whether the presentation of colour information in sensory substitution affects object and colour discrimination. Chapter five presents experiments on intuitive auditory-colour mappings across a wide variety of sounds. These findings are used to predict the reported colour hallucinations resulting from LSD use while listening to these sounds. Chapter six uses a new sensory substitution device designed to test the utility of these intuitive sound-colour links for visual processing. These findings are discussed with reference to how cross-sensory links, LSD and synaesthesia can inform optimal SSD design for visual processing.

    Activity in area V3A predicts positions of moving objects

    No description supplied.

    Virtual Reality Games for Motor Rehabilitation

    This paper presents a fuzzy-logic-based method to track user satisfaction without the need for devices that monitor users' physiological conditions. User satisfaction is the key to any product's acceptance; computer applications and video games provide a unique opportunity to tailor the environment to each user to better suit their needs. We have implemented a non-adaptive fuzzy logic model of emotion, based on the emotional component of the Fuzzy Logic Adaptive Model of Emotion (FLAME) proposed by El-Nasr, to estimate player emotion in Unreal Tournament 2004. In this paper we describe the implementation of this system and present the results of one of several play tests. Our research contradicts the current literature, which suggests that physiological measurements are needed. We show that it is possible to use a software-only method to estimate user emotion.
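The first step of a fuzzy-logic model like the FLAME-based one above is fuzzification: mapping a crisp game variable to graded memberships of linguistic sets. The sketch below is purely illustrative; the variable (player health), the set names, and the triangular boundaries are assumptions for demonstration, not values from the paper:

```python
# Hypothetical fuzzification step for a fuzzy emotion model (illustrative only).
def triangular(x, a, b, c):
    """Triangular membership function: rises from a, peaks at b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def health_memberships(health):
    # Illustrative set boundaries on a 0-100 health scale; a real model
    # would tune these per game and feed the memberships into fuzzy rules.
    return {
        "low":    triangular(health, -1, 0, 50),
        "medium": triangular(health, 25, 50, 75),
        "high":   triangular(health, 50, 100, 101),
    }

m = health_memberships(40)  # a health of 40 is partly "low", mostly "medium"
```

Rules such as "IF health is low AND threat is high THEN fear is strong" would then combine these memberships to estimate the player's emotional state without any physiological sensors.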
