
    Machine Learning for Neuroimaging with Scikit-Learn

    Statistical machine learning methods are increasingly used for neuroimaging data analysis. Their main virtue is their ability to model high-dimensional datasets, e.g. multivariate analysis of activation images or resting-state time series. Supervised learning is typically used in decoding or encoding settings to relate brain images to behavioral or clinical observations, while unsupervised learning can uncover hidden structure in sets of images (e.g. resting-state functional MRI) or find sub-populations in large cohorts. Considering different functional neuroimaging applications, we illustrate how scikit-learn, a Python machine learning library, can be used to perform key analysis steps. Scikit-learn contains a very large set of statistical learning algorithms, both supervised and unsupervised, and its application to neuroimaging data provides a versatile tool to study the brain.
    Comment: Frontiers in Neuroscience, Frontiers Research Foundation, 2013, pp. 1
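The decoding setting described above can be sketched with scikit-learn in a few lines. This is a minimal illustration on synthetic data standing in for voxel features (the data generation, dimensions, and injected signal are assumptions, not the paper's pipeline); a penalized linear model with cross-validation is a standard choice when voxels far outnumber samples.

```python
# Minimal decoding sketch: synthetic "brain images" vs. binary labels.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples, n_voxels = 100, 500           # high-dimensional, few samples
y = rng.integers(0, 2, n_samples)        # behavioral labels (2 conditions)
X = rng.normal(size=(n_samples, n_voxels))
X[y == 1, :10] += 1.0                    # weak signal in 10 "voxels"

# L2-penalized logistic regression: shrinkage keeps the fit stable
# when the feature count exceeds the sample count.
clf = LogisticRegression(penalty="l2", C=1.0, max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)
print(f"decoding accuracy: {scores.mean():.2f}")
```

With the injected signal, cross-validated accuracy lands well above the 0.5 chance level; with pure noise it would hover around chance.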

    Curiosity cloning: neural analysis of scientific knowledge

    Event-related potentials (ERPs) are indicators of brain activity related to cognitive processes. They can be detected from EEG signals and thus constitute an attractive non-invasive option to study cognitive information processing. The P300 wave is probably the most celebrated example of an event-related potential, and it is classically studied in connection with the odd-ball experimental protocol, which consistently provokes the brain wave. We propose the use of P300 detection to identify scientific interest in a large set of images, training a computer with machine learning algorithms using the subject's responses to the stimuli as the training data set. As a first step, we here describe a number of experiments designed to relate the P300 brain wave to the cognitive processes involved in placing a scientific judgment on a picture, and to study the number of images per second that can be processed by such a system.
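The target-vs-nontarget classification step can be illustrated on toy data. Everything here is synthetic and assumed (epoch count, a Gaussian bump standing in for the P300 deflection, noise level); real ERP decoding would work on bandpass-filtered, epoched EEG. Linear discriminant analysis is a common baseline classifier for single-trial ERP detection.

```python
# Toy P300-style detection on synthetic single-channel epochs.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_epochs, n_times = 200, 64
y = rng.integers(0, 2, n_epochs)         # 1 = "interesting" stimulus
X = rng.normal(size=(n_epochs, n_times)) # background EEG noise

# Add a P300-like positive deflection (Gaussian bump) for targets only.
p300 = np.exp(-0.5 * ((np.arange(n_times) - 30) / 4.0) ** 2)
X[y == 1] += 2.0 * p300

clf = LinearDiscriminantAnalysis()       # common ERP decoding baseline
scores = cross_val_score(clf, X, y, cv=5)
print(f"target-vs-nontarget accuracy: {scores.mean():.2f}")
```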

    Representation Learning by Learning to Count

    We introduce a novel method for representation learning that uses an artificial supervision signal based on counting visual primitives. This supervision signal is obtained from an equivariance relation, which does not require any manual annotation. We relate transformations of images to transformations of the representations. More specifically, we look for the representation that satisfies such a relation rather than the transformations that match a given representation. In this paper, we use two image transformations in the context of counting: scaling and tiling. The first transformation exploits the fact that the number of visual primitives should be invariant to scale. The second transformation allows us to equate the total number of visual primitives in each tile to that in the whole image. These two transformations are combined in one constraint and used to train a neural network with a contrastive loss. The proposed task produces representations that perform on par with or exceed the state of the art in transfer learning benchmarks.
    Comment: ICCV 2017 (oral)
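The tiling constraint can be sketched numerically: the "count" of primitives over the whole image should equal the sum of counts over its tiles, while counts of different images should stay apart. This toy uses a trivial summing feature in place of the paper's CNN, and omits the downscaling step, so treat the shapes, margin, and `phi` as illustrative assumptions.

```python
# Toy version of the counting constraint from self-supervised counting.
import numpy as np

def phi(x):
    """Stand-in 'counting' feature: channel-wise sums over a hypothetical
    primitive-detector map (the real phi is a trained CNN)."""
    return x.reshape(-1, x.shape[-1]).sum(axis=0)

rng = np.random.default_rng(0)
img = rng.random((4, 4, 8))              # 4x4 map, 8 primitive channels
# Partition the image into four non-overlapping 2x2 tiles.
tiles = [img[i:i + 2, j:j + 2] for i in (0, 2) for j in (0, 2)]

count_whole = phi(img)                   # count on the full image
count_tiles = sum(phi(t) for t in tiles) # sum of per-tile counts

# Squared loss enforcing "sum of tile counts == whole-image count".
loss_same = float(np.sum((count_whole - count_tiles) ** 2))

other = rng.random((4, 4, 8))            # a different image
# Contrastive hinge term: counts of different images should differ.
M = 10.0                                 # margin (illustrative value)
loss_diff = max(0.0, M - float(np.sum((count_whole - phi(other)) ** 2)))
print(loss_same, loss_diff)
```

Because the tiles exactly partition the image and `phi` is a plain sum, `loss_same` is zero here; a learned network only approaches this under training pressure from the combined loss.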

    Sun as a Star: Science Learning Activities for Afterschool

    This educator's guide features eight activities in which younger students use brainstorming, observations, and experiments to learn about the Sun. They will begin by learning that light is our means of studying the Sun, use spectroscopes to separate white light into its component colors, and learn that there are other forms of light outside the visible spectrum. Then the students will conduct experiments to learn how light travels and set up an outdoor investigation to find out how the size and position of shadows relate to the position of the Sun in the sky. In the final activities, they will construct a model to simulate the motion of the Sun relative to the Earth, view satellite images taken by the SOHO satellite, and extend their knowledge of the Sun as a star by observing images of stars and recording their ideas on whether all stars are like the Sun. Educational levels: Primary elementary, Intermediate elementary, Middle school

    The Shape of Art History in the Eyes of the Machine

    How does the machine classify styles in art? And how does this relate to art historians' methods for analyzing style? Several studies have shown the ability of the machine to learn and predict style categories, such as Renaissance, Baroque, Impressionism, etc., from images of paintings. This implies that the machine can learn an internal representation encoding discriminative features through its visual analysis. However, such a representation is not necessarily interpretable. We conducted a comprehensive study of several state-of-the-art convolutional neural networks applied to the task of style classification on 77K images of paintings, and analyzed the learned representation through correlation analysis with concepts derived from art history. Surprisingly, the networks could place the works of art in a smooth temporal arrangement based mainly on learning style labels, without any a priori knowledge of time of creation, the historical time and context of styles, or relations between styles. The learned representations showed that there are a few underlying factors that explain the visual variations of style in art. Some of these factors were found to correlate with style patterns suggested by Heinrich Wölfflin (1864-1945). The learned representations also consistently highlighted certain artists as the extreme, distinctive representatives of their styles, which quantitatively confirms art historians' observations.