
    A method to determine algebraically integral Cayley digraphs on finite Abelian groups

    Eigenvalues of Cayley digraphs and graphs have been studied extensively in the past. We are interested in characterizing the Cayley digraphs on a finite Abelian group G whose eigenvalues are algebraic integers in a given number field K. We obtain such a method by proving Theorem 1, and we also compute the number of such Cayley digraphs. Comment: 9 pages.
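
    For context only (this is not the paper's Theorem 1): on a finite Abelian group, every character chi of G gives an eigenvalue of Cay(G, S) equal to the character sum of chi over the connection set S, so for a small cyclic group one can check integrality over Q directly. The sketch below assumes that standard fact; the group Z_6 and connection sets {1, 5} and {1} are illustrative choices, not examples from the paper.

        # A sketch using the standard fact that, for an Abelian group G, each
        # character chi yields an eigenvalue lambda_chi = sum_{s in S} chi(s)
        # of Cay(G, S). Here G = Z_n, so chi_k(s) = exp(2*pi*i*k*s/n). This
        # only checks the case K = Q (all eigenvalues rational integers); it
        # is not the paper's general method.
        import numpy as np

        def cayley_eigenvalues(n, S):
            """Eigenvalues of Cay(Z_n, S) as character sums over the connection set S."""
            k = np.arange(n)
            return sum(np.exp(2j * np.pi * k * s / n) for s in S)

        def is_integral_over_Q(n, S, tol=1e-9):
            """True if every eigenvalue is (numerically) a rational integer."""
            ev = cayley_eigenvalues(n, S)
            return bool(np.all(np.abs(ev.imag) < tol)
                        and np.all(np.abs(ev.real - np.round(ev.real)) < tol))

        print(is_integral_over_Q(6, {1, 5}))  # True:  eigenvalues 2*cos(pi*k/3), all integers
        print(is_integral_over_Q(6, {1}))     # False: e.g. exp(i*pi/3) is not an integer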

    Weight hierarchies of a family of linear codes associated with degenerate quadratic forms

    We restrict a degenerate quadratic form f over a finite field of odd characteristic to subspaces. This leads us to introduce a quotient space associated with f, on which f induces a non-degenerate quadratic form. We obtain some related results on the subspaces and the quotient space, and based on these we solve the weight hierarchies of a family of linear codes related to f. Comment: 12 pages.
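
    As background on the term "weight hierarchy" (the definition below is standard; the tiny ternary generator matrix is purely illustrative and not one of the paper's codes): the r-th generalized Hamming weight d_r of an [n, k] linear code is the smallest support size of an r-dimensional subcode, and for very small codes the hierarchy d_1, ..., d_k can be found by brute force.

        # Brute-force weight hierarchy of a small linear code over GF(p), p prime.
        # d_r = min{ |supp(D)| : D an r-dimensional subcode }, where supp(D) is the
        # set of coordinates at which some codeword of D is nonzero.
        from itertools import combinations, product

        def codewords(G, p):
            """All codewords generated by the rows of G over GF(p)."""
            k, n = len(G), len(G[0])
            return {tuple(sum(c * G[i][j] for i, c in enumerate(coeffs)) % p
                          for j in range(n))
                    for coeffs in product(range(p), repeat=k)}

        def rank_gf(vectors, p):
            """Rank of a list of vectors over GF(p) by Gaussian elimination."""
            rows = [[x % p for x in v] for v in vectors]
            rank, n = 0, len(rows[0])
            for col in range(n):
                pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
                if pivot is None:
                    continue
                rows[rank], rows[pivot] = rows[pivot], rows[rank]
                inv = pow(rows[rank][col], -1, p)
                rows[rank] = [(x * inv) % p for x in rows[rank]]
                for i in range(len(rows)):
                    if i != rank and rows[i][col]:
                        f = rows[i][col]
                        rows[i] = [(a - f * b) % p for a, b in zip(rows[i], rows[rank])]
                rank += 1
            return rank

        def support_size(basis, p):
            """Number of coordinates where the subcode spanned by `basis` is nonzero."""
            n = len(basis[0])
            return len({j for w in codewords(list(basis), p) for j in range(n) if w[j]})

        def weight_hierarchy(G, p):
            """d_1, ..., d_k of the code generated by G (feasible only for tiny codes)."""
            k = rank_gf(G, p)
            nonzero = [w for w in codewords(G, p) if any(w)]
            return [min(support_size(basis, p)
                        for basis in combinations(nonzero, r)
                        if rank_gf(basis, p) == r)
                    for r in range(1, k + 1)]

        # Illustrative ternary [4, 2] code (not taken from the paper):
        G = [[1, 0, 1, 1],
             [0, 1, 1, 2]]
        print(weight_hierarchy(G, 3))  # [3, 4]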

    Task-set switching with natural scenes: Measuring the cost of deploying top-down attention

    In many everyday situations, we bias our perception from the top down, based on a task or an agenda. Frequently, this entails shifting attention to a specific attribute of a particular object or scene. To explore the cost of shifting top-down attention to a different stimulus attribute, we adopt the task-set switching paradigm, in which switch trials are contrasted with repeat trials in mixed-task blocks and with single-task blocks. Using two tasks that relate to the content of a natural scene in a gray-level photograph and two tasks that relate to the color of the frame around the image, we were able to distinguish switch costs with and without shifts of attention. We found a significant cost in reaction time of 23–31 ms for switches that require shifting attention to other stimulus attributes, but no significant switch cost for switching the task set within an attribute. We conclude that deploying top-down attention to a different attribute incurs a significant cost in reaction time, but that biasing to a different feature value within the same stimulus attribute is effortless.
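
    As a minimal illustration of how such costs are typically quantified (hypothetical reaction times, not the study's data): the switch cost is the mean reaction-time difference between switch and repeat trials within mixed-task blocks, while comparing repeat trials against single-task blocks gives a separate mixed-versus-single-block cost.

        import numpy as np

        # Hypothetical reaction times in milliseconds (illustrative numbers only).
        rt_switch = np.array([612, 655, 630, 641])   # switch trials, mixed-task blocks
        rt_repeat = np.array([598, 610, 605, 602])   # repeat trials, mixed-task blocks
        rt_single = np.array([580, 575, 590, 585])   # single-task blocks

        switch_cost = rt_switch.mean() - rt_repeat.mean()   # within mixed blocks
        block_cost = rt_repeat.mean() - rt_single.mean()    # mixed vs. single-task blocks
        print(f"switch cost: {switch_cost:.1f} ms, mixed-vs-single cost: {block_cost:.1f} ms")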

    Recurrent Attention Models for Depth-Based Person Identification

    We present an attention-based model that reasons on human body shape and motion dynamics to identify individuals in the absence of RGB information, hence in the dark. Our approach leverages unique 4D spatio-temporal signatures to address the identification problem across days. Formulated as a reinforcement learning task, our model is based on a combination of convolutional and recurrent neural networks with the goal of identifying small, discriminative regions indicative of human identity. We demonstrate that our model produces state-of-the-art results on several published datasets given only depth images. We further study the robustness of our model towards viewpoint, appearance, and volumetric changes. Finally, we share insights gleaned from interpretable 2D, 3D, and 4D visualizations of our model's spatio-temporal attention. Comment: Computer Vision and Pattern Recognition (CVPR) 201
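
    The abstract includes no code; the following is a rough PyTorch sketch of the general ingredients it names (a convolutional glimpse encoder feeding a recurrent core that predicts both an identity and where to attend next), with illustrative layer sizes and without the reinforcement-learning training loop. It should not be read as the authors' architecture.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class RecurrentAttention(nn.Module):
            """Glimpse CNN -> LSTM core -> (next-location, identity) heads. Sketch only."""
            def __init__(self, num_ids, glimpse=32, hidden=256):
                super().__init__()
                self.glimpse = glimpse
                self.encoder = nn.Sequential(          # small CNN over a depth glimpse
                    nn.Conv2d(1, 16, 5, stride=2, padding=2), nn.ReLU(),
                    nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
                    nn.Flatten(),
                    nn.Linear(32 * (glimpse // 4) ** 2, hidden), nn.ReLU(),
                )
                self.core = nn.LSTMCell(hidden, hidden)
                self.loc_head = nn.Linear(hidden, 2)       # where to look next, (x, y) in [-1, 1]
                self.id_head = nn.Linear(hidden, num_ids)  # identity logits

            def extract_glimpse(self, frame, loc):
                """Differentiably crop a glimpse around `loc` with grid_sample."""
                B = frame.shape[0]
                lin = torch.linspace(-0.25, 0.25, self.glimpse, device=frame.device)
                gy, gx = torch.meshgrid(lin, lin, indexing="ij")
                grid = torch.stack([gx, gy], dim=-1).unsqueeze(0) + loc.view(B, 1, 1, 2)
                return F.grid_sample(frame, grid, align_corners=False)

            def forward(self, frames):
                """frames: (B, T, 1, H, W) depth sequence -> identity logits, visited locations."""
                B, T = frames.shape[:2]
                h = c = torch.zeros(B, self.core.hidden_size, device=frames.device)
                loc = torch.zeros(B, 2, device=frames.device)  # start at the centre
                locs = []
                for t in range(T):
                    feat = self.encoder(self.extract_glimpse(frames[:, t], loc))
                    h, c = self.core(feat, (h, c))
                    loc = torch.tanh(self.loc_head(h))
                    locs.append(loc)
                return self.id_head(h), torch.stack(locs, dim=1)

        # Example usage with random data (shapes are illustrative):
        # model = RecurrentAttention(num_ids=100)
        # logits, locs = model(torch.randn(4, 8, 1, 128, 128))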

    One-shot learning of object categories

    Learning visual models of object categories notoriously requires hundreds or thousands of training examples. We show that it is possible to learn much information about a category from just one, or a handful, of images. The key insight is that, rather than learning from scratch, one can take advantage of knowledge coming from previously learned categories, no matter how different these categories might be. We explore a Bayesian implementation of this idea. Object categories are represented by probabilistic models. Prior knowledge is represented as a probability density function on the parameters of these models. The posterior model for an object category is obtained by updating the prior in the light of one or more observations. We test a simple implementation of our algorithm on a database of 101 diverse object categories. We compare category models learned by an implementation of our Bayesian approach to models learned by maximum likelihood (ML) and maximum a posteriori (MAP) methods. We find that on a database of more than 100 categories, the Bayesian approach produces informative models when the number of training examples is too small for other methods to operate successfully.
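
    A toy illustration of the prior-to-posterior step (a 1-D Gaussian stand-in with made-up numbers, not the paper's object models): the means of previously learned categories supply a prior on a new category's mean, a single example updates it by the conjugate Gaussian rule, and the resulting predictive distribution stays sensible where a one-example maximum-likelihood fit cannot even estimate the spread.

        import numpy as np

        rng = np.random.default_rng(0)

        # "Previously learned" category means -> empirical Gaussian prior on the mean.
        old_means = rng.normal(0.0, 2.0, size=50)
        mu0, tau2 = old_means.mean(), old_means.var()   # prior N(mu0, tau2)

        sigma2 = 1.0    # assumed known within-category variance (illustrative)
        x = 3.1         # the single training example for the new category

        # Conjugate update: the posterior over the new category's mean is Gaussian.
        post_var = 1.0 / (1.0 / tau2 + 1.0 / sigma2)
        post_mean = post_var * (mu0 / tau2 + x / sigma2)

        # Predictive distribution for a new observation from this category.
        pred_mean, pred_var = post_mean, post_var + sigma2

        # ML from one example puts the mean at x but gives no estimate of the spread,
        # which is the failure mode the Bayesian treatment avoids.
        print(f"Bayesian predictive: N({pred_mean:.2f}, {pred_var:.2f})")
        print(f"ML estimate of the mean from one example: {x:.2f}")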