    Drawing cartoon faces - a functional imaging study of the cognitive neuroscience of drawing

    We report a functional imaging study of drawing cartoon faces. Normal, untrained participants were scanned while viewing simple black-and-white cartoon line drawings of human faces, retaining them over a short memory interval, and then drawing them without vision of their hand or the paper. Specific encoding and retention of information about the faces was tested by contrasting these two stages (with display of cartoon faces) against the exploration and retention of random-dot stimuli. Drawing was contrasted between a condition in which only memory of a previously viewed face was available, a condition in which both memory and simultaneous viewing of the cartoon were possible, and drawing of a new, previously unseen face. We show that encoding cartoon faces powerfully activates the face-sensitive areas of the lateral occipital cortex and the fusiform gyrus, but that there is no significant activation in these areas during the retention interval. Activity in both areas was also high when drawing the displayed cartoons. Drawing from memory activates areas in posterior parietal cortex and frontal areas. This activity is consistent with the encoding and retention of spatial information about the face to be drawn as a visuo-motor action plan, representing either a series of targets for ocular fixation or spatial targets for the drawing action.

    Space and time in the parietal cortex: fMRI evidence for a neural asymmetry

    How are space and time related in the brain? This study contrasts two proposals that make different predictions about the interaction between spatial and temporal magnitudes. Whereas ATOM implies that space and time are symmetrically related, Metaphor Theory claims they are asymmetrically related. Here we investigated whether space and time activate the same neural structures in the inferior parietal cortex (IPC) and whether the activation is symmetric or asymmetric across domains. We measured participants' neural activity while they made temporal and spatial judgments on the same visual stimuli. The behavioral results replicated earlier observations of a space-time asymmetry: temporal judgments were more strongly influenced by irrelevant spatial information than vice versa. The BOLD fMRI data indicated that space and time activated overlapping clusters in the IPC and that, consistent with Metaphor Theory, this activation was asymmetric: the shared region of IPC was activated more strongly during temporal judgments than during spatial judgments. We consider three possible interpretations of this neural asymmetry, based on three possible functions of IPC.

    Scalable Image Retrieval by Sparse Product Quantization

    Fast Approximate Nearest Neighbor (ANN) search for high-dimensional feature indexing and retrieval is the crux of large-scale image retrieval. A recent promising technique is Product Quantization, which indexes high-dimensional image features by decomposing the feature space into a Cartesian product of low-dimensional subspaces and quantizing each of them separately. Despite the promising results reported, this quantization approach follows the typical hard assignment of traditional quantization methods, which may result in large quantization errors and thus inferior search performance. Unlike existing approaches, in this paper we propose a novel approach called Sparse Product Quantization (SPQ) that encodes high-dimensional feature vectors into sparse representations. We optimize the sparse representations of the feature vectors by minimizing their quantization errors, making the resulting representation essentially close to the original data in practice. Experiments show that the proposed SPQ technique not only compresses data but is also an effective encoding technique. We obtain state-of-the-art results for ANN search on four public image datasets, and the promising results of content-based image retrieval further validate the efficacy of our proposed method. Comment: 12 pages
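    The hard-assignment baseline that SPQ improves on can be sketched in a few lines: split each vector into M subspaces, and replace each chunk with the index of its nearest centroid in a per-subspace codebook. This is a minimal illustrative sketch, not the paper's method; the dimensions, codebook sizes, and random "pre-trained" codebooks are all hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy product quantization with hard assignment (the baseline the
    # abstract contrasts SPQ against). All sizes are illustrative.
    D, M, K = 8, 2, 4        # vector dim, number of subspaces, centroids per subspace
    sub = D // M             # dimension of each subspace

    X = rng.normal(size=(100, D))             # database vectors
    codebooks = rng.normal(size=(M, K, sub))  # stand-in for k-means-trained codebooks

    def pq_encode(x):
        """Return M codebook indices, one per subspace (hard assignment)."""
        codes = []
        for m in range(M):
            chunk = x[m * sub:(m + 1) * sub]
            dists = np.linalg.norm(codebooks[m] - chunk, axis=1)
            codes.append(int(np.argmin(dists)))
        return codes

    def pq_decode(codes):
        """Reconstruct an approximate vector from its M codes."""
        return np.concatenate([codebooks[m][c] for m, c in enumerate(codes)])

    codes = pq_encode(X[0])
    approx = pq_decode(codes)
    ```

    Each vector is thus compressed to M small integers; SPQ's point is that replacing this single hard index per subspace with a sparse combination of centroids reduces the quantization error `‖x − decode(encode(x))‖`.
    
    
    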

    Sparsity in Variational Autoencoders

    Working in high-dimensional latent spaces, the internal encoding of data in Variational Autoencoders becomes naturally sparse. We discuss this known but controversial phenomenon, sometimes referred to as overpruning to emphasize the under-use of the model's capacity. In fact, it is an important form of self-regularization, with all the typical benefits associated with sparsity: it forces the model to focus on the really important features, greatly reducing the risk of overfitting. In particular, it is a major methodological guide for the correct tuning of the model capacity, either progressively augmenting it to attain sparsity, or conversely reducing the dimension of the network by removing links to zeroed-out neurons. The degree of sparsity crucially depends on the network architecture: for instance, convolutional networks typically show less sparsity, likely due to the tighter relation of features to different spatial regions of the input. Comment: An Extended Abstract of this survey will be presented at the 1st International Conference on Advances in Signal Processing and Artificial Intelligence (ASPAI' 2019), 20-22 March 2019, Barcelona, Spain.