    Distinguishing between cognitive explanations of the problem size effect in mental arithmetic via representational similarity analysis of fMRI data

    Not all researchers interested in human behavior remain convinced that modern neuroimaging techniques have much to contribute to distinguishing between competing cognitive models of human behavior, especially if one takes reverse inference off the table. Here, we took up this challenge in an attempt to distinguish between two competing accounts of the problem size effect (PSE), a robust finding in investigations of mathematical cognition. The PSE occurs when people solve arithmetic problems and indicates that numerically large problems are solved more slowly and with more errors than small problems. Neurocognitive explanations for the PSE can be categorized into representation-based and process-based views. Behavioral and traditional univariate neural measures have struggled to distinguish between these accounts. By contrast, a representational similarity analysis (RSA) approach with fMRI data provides competing hypotheses that can distinguish between accounts without recourse to reverse inference. To that end, our RSA (but not univariate) results provided clear evidence in favor of the representation-based over the process-based account of the PSE in multiplication; for addition, the results were less clear. Post-hoc similarity analysis distinguished still further between competing representation-based theoretical accounts. Namely, the data favored the notion that individual multiplication problems are stored as individual memory traces sensitive to input frequency over a strictly magnitude-based account of memory encoding. Together, these results provide an example of how human neuroimaging evidence can directly inform cognitive-level explanations of a common behavioral phenomenon, the problem size effect. More broadly, these data may expand our understanding of calculation and memory systems in general.
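The RSA logic described in this abstract can be illustrated with a minimal toy sketch: build a representational dissimilarity matrix (RDM) from voxel patterns, build a model RDM from a candidate account's predictions, and correlate their off-diagonal entries. This is not the authors' fMRI pipeline; the data, dimensions, and function names below are all illustrative.

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between activation patterns (one row per arithmetic problem)."""
    return 1.0 - np.corrcoef(patterns)

def compare_rdms(neural_rdm, model_rdm):
    """Correlate the upper triangles of a neural and a model RDM."""
    iu = np.triu_indices_from(neural_rdm, k=1)
    return np.corrcoef(neural_rdm[iu], model_rdm[iu])[0, 1]

rng = np.random.default_rng(0)
n_problems, n_voxels = 8, 50
# hypothetical voxel patterns for 8 arithmetic problems
patterns = rng.normal(size=(n_problems, n_voxels))
neural = rdm(patterns)
# model RDM predicted by one candidate account (random here, for illustration;
# in the paper each cognitive account would predict its own dissimilarity structure)
model = rdm(rng.normal(size=(n_problems, n_voxels)))
score = compare_rdms(neural, model)
```

In an actual analysis, each competing account (representation-based vs. process-based) would supply its own model RDM, and the one correlating best with the neural RDM would be favored.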

    Integrating Evolutionary Computation with Neural Networks

    There is tremendous interest in the development of evolutionary computation techniques, as they are well suited to optimizing functions with a large number of variables. This paper presents a brief review of evolutionary computing techniques. It also briefly discusses the hybridization of evolutionary computation and neural networks, and presents a solution to a classical problem using neural computing and evolutionary computing techniques.
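One common hybridization the abstract alludes to is using an evolutionary algorithm to optimize a neural network's weights instead of gradient descent. The sketch below is a generic illustration of that idea, not the paper's method: a small mutation-plus-elitism search over the nine weights of a 2-2-1 network fitting XOR.

```python
import numpy as np

rng = np.random.default_rng(42)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([0, 1, 1, 0], float)

def forward(w, X):
    # 2-2-1 network; w packs both weight matrices and biases (9 params)
    W1, b1 = w[:4].reshape(2, 2), w[4:6]
    W2, b2 = w[6:8], w[8]
    h = np.tanh(X @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

def fitness(w):
    return -np.mean((forward(w, X) - y) ** 2)  # negative MSE

pop = rng.normal(size=(50, 9))                  # random initial population
for gen in range(200):
    scores = np.array([fitness(w) for w in pop])
    elite = pop[np.argsort(scores)[-10:]]       # selection: keep the 10 best
    # variation: mutate copies of elite parents with Gaussian noise
    children = elite[rng.integers(0, 10, 40)] + rng.normal(0, 0.3, (40, 9))
    pop = np.vstack([elite, children])          # elitism preserves the best
best = pop[np.argmax([fitness(w) for w in pop])]
```

Because elitism never discards the best individual, fitness is monotonically non-decreasing across generations; no gradients are required, which is the usual selling point of the hybrid approach.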

    Convolutional Networks for Object Category and 3D Pose Estimation from 2D Images

    Current CNN-based algorithms for recovering the 3D pose of an object in an image assume knowledge of both the object category and its 2D localization in the image. In this paper, we relax one of these constraints and propose to solve the task of joint object category and 3D pose estimation from an image, assuming known 2D localization. We design a new architecture for this task composed of a feature network that is shared between subtasks, an object categorization network built on top of the feature network, and a collection of category-dependent pose regression networks. We also introduce suitable loss functions and a training method for the new architecture. Experiments on the challenging PASCAL3D+ dataset show state-of-the-art performance on the joint categorization and pose estimation task. Moreover, our performance on the joint task is comparable to that of state-of-the-art methods on the simpler 3D pose estimation task with known object category.
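The dispatch structure of the described architecture (shared features, a categorization head, and one pose regressor per category) can be sketched as below. This is a toy numpy illustration with random weights, not the paper's network; dimensions and the quaternion pose parameterization are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
in_dim, feat_dim, n_categories, pose_dim = 32, 16, 3, 4  # pose as a quaternion

# shared feature network (stand-in: a single random linear layer + ReLU)
W_feat = rng.normal(size=(in_dim, feat_dim))
# object categorization head on top of the shared features
W_cls = rng.normal(size=(feat_dim, n_categories))
# one pose-regression head per category
W_pose = rng.normal(size=(n_categories, feat_dim, pose_dim))

def predict(x):
    f = np.maximum(x @ W_feat, 0)        # shared features
    logits = f @ W_cls
    cat = int(np.argmax(logits))         # predicted category
    pose = f @ W_pose[cat]               # category-dependent pose regressor
    q = pose / np.linalg.norm(pose)      # normalize to a unit quaternion
    return cat, q

cat, q = predict(rng.normal(size=in_dim))
```

The key design point the abstract describes is that the category prediction selects which pose head is used, so pose regression can specialize per category while the expensive feature extraction is computed once.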

    Domain Generalization by Solving Jigsaw Puzzles

    Human adaptability relies crucially on the ability to learn and merge knowledge from both supervised and unsupervised learning: parents point out a few important concepts, but then children fill in the gaps on their own. This is particularly effective because supervised learning can never be exhaustive, and learning autonomously allows the learner to discover invariances and regularities that help generalization. In this paper we propose to apply a similar approach to the task of object recognition across domains: our model learns the semantic labels in a supervised fashion, and broadens its understanding of the data by learning from self-supervised signals how to solve a jigsaw puzzle on the same images. This secondary task helps the network learn the concept of spatial correlation while acting as a regularizer for the classification task. Multiple experiments on the PACS, VLCS, Office-Home and digits datasets confirm our intuition and show that this simple method outperforms previous domain generalization and adaptation solutions. An ablation study further illustrates the inner workings of our approach.
    Comment: Accepted at CVPR 2019 (oral)
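The jigsaw pretext task described here can be sketched in a few lines: split an image into a grid of tiles, shuffle them according to one permutation from a small fixed set, and use the index of the applied permutation as the self-supervised classification target. This toy sketch only shows the data side of the task, not the paper's network or training setup; the grid size and permutation set are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
# a toy "image" split into a 3x3 grid of 2x2 tiles
img = np.arange(36).reshape(6, 6)
tiles = [img[r*2:(r+1)*2, c*2:(c+1)*2] for r in range(3) for c in range(3)]

# a small fixed set of permutations; the index of the applied
# permutation is the self-supervised classification label
permutations = [np.arange(9)] + [rng.permutation(9) for _ in range(4)]
label = int(rng.integers(len(permutations)))
shuffled = [tiles[i] for i in permutations[label]]
# a network would receive the shuffled tiles and be trained to predict `label`,
# alongside the ordinary supervised object-classification loss
```

Because predicting the permutation requires reasoning about which tile goes where, the auxiliary task encourages spatially aware features, which is the regularization effect the abstract describes.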