36 research outputs found

    Brain2Pix: Fully convolutional naturalistic video reconstruction from brain activity

    Reconstructing complex and dynamic visual perception from brain activity remains a major challenge in machine learning applications to neuroscience. Here we present a new method for reconstructing naturalistic images and videos from very large single-participant functional magnetic resonance imaging data that leverages the recent success of image-to-image transformation networks. This is achieved by exploiting spatial information obtained from retinotopic mappings across the visual system. More specifically, we first determine what position each voxel in a particular region of interest would represent in the visual field based on its corresponding receptive field location. Then, the 2D image representation of the brain activity on the visual field is passed to a fully convolutional image-to-image network trained to recover the original stimuli using VGG feature loss with an adversarial regularizer. In our experiments, we show that our method offers a significant improvement over existing video reconstruction techniques.
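    As a rough illustration of the retinotopic mapping step described above, the sketch below projects voxel activities onto a 2D visual-field grid using each voxel's receptive-field center. The pRF coordinates, grid size, and data are hypothetical placeholders, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): accumulate voxel activity into a
# grid_size x grid_size visual-field image based on receptive-field locations,
# producing an image-like input for an image-to-image network.
import numpy as np

def voxels_to_visual_field(prf_x, prf_y, activity, grid_size=96):
    """prf_x, prf_y: receptive-field centers in [0, 1] visual-field coordinates.
    activity: one value per voxel (e.g., a single fMRI volume)."""
    image = np.zeros((grid_size, grid_size), dtype=np.float32)
    counts = np.zeros_like(image)
    cols = np.clip((prf_x * (grid_size - 1)).astype(int), 0, grid_size - 1)
    rows = np.clip((prf_y * (grid_size - 1)).astype(int), 0, grid_size - 1)
    np.add.at(image, (rows, cols), activity)
    np.add.at(counts, (rows, cols), 1.0)
    return image / np.maximum(counts, 1.0)  # average where voxels overlap

# Hypothetical example: 5000 voxels with random pRF centers and activities.
rng = np.random.default_rng(0)
img = voxels_to_visual_field(rng.random(5000), rng.random(5000), rng.standard_normal(5000))
print(img.shape)  # (96, 96) image passed on to the convolutional decoder
```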

    Real-world indoor mobility with simulated prosthetic vision:The benefits and feasibility of contour-based scene simplification at different phosphene resolutions

    Neuroprosthetic implants are a promising technology for restoring some form of vision in people with visual impairments via electrical neurostimulation in the visual pathway. Although an artificially generated prosthetic percept is relatively limited compared with normal vision, it may provide some elementary perception of the surroundings, re-enabling daily living functionality. For mobility in particular, various studies have investigated the benefits of visual neuroprosthetics in a simulated prosthetic vision paradigm with varying outcomes. The previous literature suggests that scene simplification via image processing, and particularly contour extraction, may potentially improve the mobility performance in a virtual environment. In the current simulation study with sighted participants, we explore both the theoretically attainable benefits of strict scene simplification in an indoor environment by controlling the environmental complexity, as well as the practically achieved improvement with a deep learning-based surface boundary detection implementation compared with traditional edge detection. A simulated electrode resolution of 26 x 26 was found to provide sufficient information for mobility in a simple environment. Our results suggest that, for a lower number of implanted electrodes, the removal of background textures and within-surface gradients may be beneficial in theory. However, the deep learning-based implementation for surface boundary detection did not improve mobility performance in the current study. Furthermore, our findings indicate that, for a greater number of electrodes, the removal of within-surface gradients and background textures may deteriorate, rather than improve, mobility. Therefore, finding a balanced amount of scene simplification requires a careful tradeoff between informativity and interpretability that may depend on the number of implanted electrodes.
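    The sketch below illustrates, under simplified assumptions, how a simulated phosphene percept can be rendered from an edge map at a fixed electrode resolution (here 26 x 26). The synthetic scene, Gaussian phosphene model, and parameter values are illustrative and not the study's software.

```python
# Minimal sketch of simulated phosphene vision: an edge map is downsampled to
# the electrode grid and each active electrode is rendered as a Gaussian
# phosphene. All inputs and parameters below are illustrative assumptions.
import numpy as np

def phosphene_render(edge_map, n_electrodes=26, out_size=256, sigma=3.0):
    """Map an edge map onto an n_electrodes x n_electrodes phosphene pattern."""
    h, w = edge_map.shape
    cell_h, cell_w = h // n_electrodes, w // n_electrodes
    # Electrode activation = mean edge strength within each grid cell.
    act = edge_map[:cell_h * n_electrodes, :cell_w * n_electrodes]
    act = act.reshape(n_electrodes, cell_h, n_electrodes, cell_w).mean(axis=(1, 3))

    ys, xs = np.mgrid[0:out_size, 0:out_size]
    render = np.zeros((out_size, out_size))
    spacing = out_size / n_electrodes
    for i in range(n_electrodes):
        for j in range(n_electrodes):
            cy, cx = (i + 0.5) * spacing, (j + 0.5) * spacing
            render += act[i, j] * np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
    return render

# Illustrative input: gradient-magnitude "edge map" of a synthetic scene.
scene = np.zeros((260, 260)); scene[80:180, 100:200] = 1.0  # a bright rectangle
gy, gx = np.gradient(scene)
edges = np.hypot(gx, gy)
print(phosphene_render(edges).shape)  # (256, 256) simulated percept
```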

    Effect of in vitro gastrointestinal digestion on the total phenolic contents and antioxidant activity of wild Mediterranean edible plant extracts

    The recent interest in wild edible plants is associated with their health benefits, which are mainly due to their richness in antioxidant compounds, particularly phenolics. Nevertheless, some of these compounds are metabolized after ingestion, being transformed into metabolites frequently with lower antioxidant activity. The aim of the present study was to evaluate the influence of the digestive process on the total phenolic contents and antioxidant activity of extracts from four wild edible plants used in the Mediterranean diet (Beta maritima L., Plantago major L., Oxalis pes-caprae L. and Scolymus hispanicus L.). HPLC-DAD analysis revealed that S. hispanicus is characterized by the presence of caffeoylquinic acids, dicaffeoylquinic acids and flavonol derivatives, P. major by high amounts of verbascoside, B. maritima possesses 2,4-dihydroxybenzoic acid, 5-O-caffeoylquinic acid, quercetin derivatives and kaempferol-3-O-rutinoside, and O. pes-caprae extract contains hydroxycinnamic acids and flavone derivatives. Total phenolic contents were determined by the Folin-Ciocalteu assay, and antioxidant activity by the ABTS, DPPH, ORAC and FRAP assays. Phenolic contents of P. major and S. hispanicus extracts were not affected by digestion, but they significantly decreased in B. maritima after both phases of the digestion process and in O. pes-caprae after the gastric phase. The antioxidant activity results varied with the extract and the method used to evaluate the activity. Results showed that P. major extract has the highest total phenolic contents and antioxidant activity, with considerable values even after digestion, reinforcing the health benefits of this species. Funding: European Union (FEDER funds through COMPETE); Programa de Cooperación Interreg V-A España-Portugal (POCTEP) 2014-2020 [0377_IBERPHENOL_6_E]; INTERREG project MD.Net: When Brand Meets People; FCT - Portuguese Foundation for Science and Technology.

    Effects of complexity in perception: From construction to reconstruction

    Radboud University, 29 June 2018. Promotor: Bekkering, H. Co-promotor: Lier, R.J. van. 211 p.

    Decomposing complexity preferences for music

    Recently, we demonstrated complexity as a major factor for explaining individual differences in visual preferences for abstract digital art. We have shown that participants could best be separated into two groups based on their liking ratings for abstract digital art comprising geometric patterns: one group with a preference for complex visual patterns and another group with a preference for simple visual patterns. In the present study, building on these results, we extended our investigations of complexity preferences from highly controlled visual stimuli to ecologically valid stimuli in the auditory modality. Similar to visual preferences, we showed that music preferences are highly influenced by stimulus complexity. We demonstrated this by clustering a large number of participants based on their liking ratings for song excerpts from various musical genres. Our results show that, based on their liking ratings, participants can best be separated into two groups: one group with a preference for more complex songs and another group with a preference for simpler songs. Finally, we considered various demographic and personal characteristics to explore differences between the groups, and found that, at least for the current data set, age and gender are significant factors separating the two groups.
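    A minimal sketch of the clustering idea follows: participants are grouped by their liking ratings and the number of clusters is chosen by silhouette score. The rating matrix is simulated, and the exact procedure used in the study may differ.

```python
# Minimal sketch (not the authors' analysis): cluster participants by their
# liking ratings for song excerpts and pick the number of clusters by
# silhouette score. The data below are simulated.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
n_participants, n_songs = 200, 50
# Simulate two latent groups: one liking complex songs, one liking simple songs.
complexity = np.linspace(0, 1, n_songs)
group = rng.integers(0, 2, n_participants)
ratings = np.where(group[:, None] == 0, complexity, 1 - complexity)
ratings = ratings + 0.2 * rng.standard_normal((n_participants, n_songs))

# Compare candidate numbers of clusters; k = 2 typically scores highest here.
for k in (2, 3, 4):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(ratings)
    print(k, round(silhouette_score(ratings, labels), 3))
```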

    Reconstructing perceived faces from brain activations with deep adversarial neural decoding

    Here, we present a novel approach to solve the problem of reconstructing perceived stimuli from brain responses by combining probabilistic inference with deep learning. Our approach first inverts the linear transformation from latent features to brain responses with maximum a posteriori estimation and then inverts the nonlinear transformation from perceived stimuli to latent features with adversarial training of convolutional neural networks. We test our approach with a functional magnetic resonance imaging experiment and show that it can generate state-of-the-art reconstructions of perceived faces from brain activations. NIPS 2017: 31st Annual Conference on Neural Information Processing Systems (Long Beach, California, December 4-9, 2017).
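    The sketch below illustrates the first stage on simulated data, using ridge regression as a stand-in for the maximum a posteriori inversion of the linear latent-to-response mapping; the dimensions, noise level, and data are assumptions, not the authors' code.

```python
# Minimal sketch: decode latent face features from brain responses by inverting
# a simulated linear forward model. Ridge regression stands in for the MAP
# estimate under Gaussian assumptions; all data are simulated.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_trials, n_latent, n_voxels = 300, 99, 2000
Z = rng.standard_normal((n_trials, n_latent))         # latent features of stimuli
W = rng.standard_normal((n_latent, n_voxels))         # unknown forward weights
B = Z @ W + 0.5 * rng.standard_normal((n_trials, n_voxels))  # brain responses

# Decode latent features from held-out brain responses.
decoder = Ridge(alpha=10.0).fit(B[:250], Z[:250])
Z_hat = decoder.predict(B[250:])
print(np.corrcoef(Z_hat.ravel(), Z[250:].ravel())[0, 1])  # decoding accuracy
# The second stage (not shown) maps Z_hat back to face images with an
# adversarially trained convolutional generator.
```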

    Algorithmic composition of polyphonic music with the WaveCRF

    Here, we propose a new approach for modeling conditional probability distributions of polyphonic music by combining WaveNet and CRF-RNN variants, and show that this approach beats LSTM and WaveNet baselines that do not take into account the statistical dependencies between simultaneous notes. NIPS 2017: 31st Annual Conference on Neural Information Processing Systems (Long Beach, California, December 4-9, 2017).

    Representations of naturalistic stimulus complexity in early and associative visual and auditory cortices

    The complexity of sensory stimuli has an important role in perception and cognition. However, its neural representation is not well understood. Here, we characterize the representations of naturalistic visual and auditory stimulus complexity in early and associative visual and auditory cortices. This is realized by means of encoding and decoding analyses of two fMRI datasets in the visual and auditory modalities. Our results implicate most early and some associative sensory areas in representing the complexity of naturalistic sensory stimuli. For example, the parahippocampal place area, which was previously shown to represent scene features, is shown to also represent scene complexity. Similarly, posterior regions of the superior temporal gyrus and superior temporal sulcus, which were previously shown to represent syntactic (language) complexity, are shown to also represent music (auditory) complexity. Furthermore, our results suggest the existence of gradients in sensitivity to naturalistic sensory stimulus complexity in these areas.
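    A minimal sketch of an encoding analysis in this spirit is given below: a per-stimulus complexity feature is used to predict voxel responses, and prediction accuracy is evaluated on held-out stimuli. All values are simulated, and the study's actual pipeline may differ.

```python
# Minimal sketch of an encoding analysis: predict each voxel's response from a
# stimulus-complexity feature and test the fit on held-out data (simulated).
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_stimuli, n_voxels = 120, 500
complexity = rng.random((n_stimuli, 1))                # per-stimulus complexity
sensitivity = rng.standard_normal(n_voxels)            # per-voxel sensitivity
responses = complexity * sensitivity + 0.5 * rng.standard_normal((n_stimuli, n_voxels))

train, test = slice(0, 90), slice(90, None)
model = Ridge(alpha=1.0).fit(complexity[train], responses[train])
pred = model.predict(complexity[test])
# Per-voxel prediction accuracy: correlation between predicted and observed.
r = [np.corrcoef(pred[:, v], responses[test, v])[0, 1] for v in range(n_voxels)]
print(round(float(np.mean(r)), 3))
```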

    Emotion recognition with simulated phosphene vision

    Electrical stimulation of the retina, optic nerve, or cortex can elicit visual sensations, known as phosphenes. This allows visual prosthetics to partially restore vision by representing the visual field as a phosphene pattern. Since the resolution and performance of visual prostheses are limited, only a fraction of the information in a visual scene can be represented by phosphenes. Here, we propose a simple yet powerful image processing strategy for recognizing facial expressions with prosthetic vision, supporting communication and social interaction in the blind. A psychophysical study was conducted to investigate whether a landmark-based representation of facial expressions could improve emotion detection with prosthetic vision. Our approach was compared to edge detection, which is commonly used in current retinal prosthetic devices. Additionally, the relationship between the number of phosphenes and the accuracy of emotion recognition was studied. The landmark model improved the accuracy of emotion recognition, regardless of the number of phosphenes. Furthermore, accuracy improved with an increasing number of phosphenes up to a saturation point, and performance saturated with fewer phosphenes with the landmark model than with edge detection. These results suggest that landmark-based image pre-processing allows for a more efficient use of the limited information that can be stored in a phosphene pattern, providing a route towards more meaningful and higher-quality perceptual experience in subjects with prosthetic vision. MM '19: The 27th ACM International Conference on Multimedia (Nice, France, 21-25 October 2019).
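    The sketch below illustrates the landmark-based strategy under simplified assumptions: a handful of facial-landmark coordinates are rendered directly as Gaussian phosphenes instead of edge-detecting the whole face image. The landmark positions and rendering parameters are hypothetical; in practice the landmarks would come from a facial-landmark detector.

```python
# Minimal sketch (not the paper's software): render (x, y) facial landmarks in
# [0, 1] coordinates as Gaussian phosphenes on a fixed-size image.
import numpy as np

def landmarks_to_phosphenes(landmarks, size=128, sigma=2.0):
    ys, xs = np.mgrid[0:size, 0:size]
    image = np.zeros((size, size))
    for x, y in landmarks:
        cx, cy = x * (size - 1), y * (size - 1)
        image += np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))
    return image

# Hypothetical landmarks: eyes, nose tip, and mouth corners of a neutral face.
face = [(0.35, 0.4), (0.65, 0.4), (0.5, 0.55), (0.4, 0.7), (0.6, 0.7)]
print(landmarks_to_phosphenes(face).shape)  # (128, 128) phosphene image
```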

    Generative adversarial networks for reconstructing natural images from brain activity

    We explore a method for reconstructing visual stimuli from brain activity. Using large databases of natural images, we trained a deep convolutional generative adversarial network capable of generating grayscale photos similar to stimuli presented during two functional magnetic resonance imaging experiments. Using a linear model, we learned to predict the generative model's latent space from measured brain activity. The objective was to create an image similar to the presented stimulus image through the previously trained generator. Using this approach, we were able to reconstruct structural and some semantic features of a proportion of the natural images. A behavioural test showed that subjects were capable of identifying a reconstruction of the original stimulus in 67.2% and 66.4% of the cases in a pairwise comparison for the two natural image datasets, respectively. Our approach does not require end-to-end training of a large generative model on limited neuroimaging data. Rapid advances in generative modeling promise further improvements in reconstruction performance.
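    A minimal sketch of the decoding step follows: a linear (ridge) model maps simulated brain activity to the generator's latent space, and a stand-in generator function turns the predicted latent vector into an image. The `generator` here is a hypothetical placeholder for the pretrained DCGAN generator, and all data are simulated.

```python
# Minimal sketch (not the authors' code): predict the generator's latent vector
# from brain activity with a linear model, then render it with a stand-in
# generator. All data are simulated.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_trials, n_voxels, latent_dim = 200, 4000, 100
latents = rng.standard_normal((n_trials, latent_dim))          # z used to render stimuli
brain = latents @ rng.standard_normal((latent_dim, n_voxels))  # simulated responses
brain += 0.5 * rng.standard_normal((n_trials, n_voxels))

# Learn voxels -> latent mapping on training trials, predict for a test trial.
model = Ridge(alpha=100.0).fit(brain[:180], latents[:180])
z_hat = model.predict(brain[180:181])

def generator(z):  # hypothetical stand-in for the pretrained DCGAN generator
    return np.tanh(z @ rng.standard_normal((latent_dim, 64 * 64))).reshape(64, 64)

print(generator(z_hat[0]).shape)  # (64, 64) grayscale reconstruction
```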