
    View-tolerant face recognition and Hebbian learning imply mirror-symmetric neural tuning to head orientation

    The primate brain contains a hierarchy of visual areas, dubbed the ventral stream, which rapidly computes object representations that are both specific for object identity and relatively robust against identity-preserving transformations like depth rotations. Current computational models of object recognition, including recent deep learning networks, generate these properties through a hierarchy of alternating selectivity-increasing filtering and tolerance-increasing pooling operations, similar to simple- and complex-cell operations. While simulations of these models recapitulate the ventral stream's progression from early view-specific to late view-tolerant representations, they fail to generate the most salient property of the intermediate representation for faces found in the brain: mirror-symmetric tuning of the neural population to head orientation. Here we prove that a class of hierarchical architectures and a broad set of biologically plausible learning rules can provide approximate invariance at the top level of the network. While most of the learning rules do not yield mirror-symmetry in the mid-level representations, we characterize a specific biologically plausible Hebb-type learning rule that is guaranteed to generate mirror-symmetric tuning to faces at intermediate levels of the architecture.
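
    For intuition about the Hebb-type plasticity invoked above, here is a minimal sketch of Oja's rule, a normalized Hebbian update whose weight vector converges to the leading principal component of its inputs. The synthetic data, dimensions, and learning rate are placeholders; this is not the paper's actual simulation of face-view tuning.

        import numpy as np

        rng = np.random.default_rng(0)
        scales = np.r_[3.0, np.ones(19)]      # inputs with one dominant direction (placeholder data)
        X = rng.normal(size=(10000, 20)) * scales
        w = rng.normal(size=20)
        w /= np.linalg.norm(w)
        eta = 0.01                            # learning rate (placeholder)

        for x in X:
            y = w @ x                         # post-synaptic response
            w += eta * y * (x - y * w)        # Oja's rule: Hebbian term with built-in normalization

        print(abs(w[0]))                      # close to 1: w has aligned with the leading principal component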

    EFFICIENT DEPTH MAP COMPRESSION EXPLOITING CORRELATION WITH TEXTURE DATA IN MULTIRESOLUTION PREDICTIVE IMAGE CODERS

    New 3D applications such as 3DTV and FVV require not only a large amount of data, but also high-quality visual rendering. Based on one or several depth maps, intermediate views can be synthesized using a depth image-based rendering technique. Many compression schemes have been proposed for texture-plus-depth data, but exploiting the correlation between the two representations to improve compression performance is still an open research issue. In this paper, we present a novel compression scheme that improves depth coding through a joint depth/texture coding scheme. The method is an extension of the LAR (Locally Adaptive Resolution) codec, initially designed for 2D images. The LAR coding framework provides functionalities such as lossy/lossless compression, low complexity, resolution and quality scalability, and quality control. Experimental results address both lossless and lossy compression, against state-of-the-art techniques in the two domains (JPEG-LS, JPEG-XR). Subjective results on intermediate view synthesis after depth map coding show that the proposed method significantly improves visual quality.
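
    As background on the depth image-based rendering step mentioned above, the sketch below performs the simplest rectified-camera warp: each pixel of the reference texture is shifted horizontally by a disparity derived from its depth. The focal length, baseline, and virtual-view fraction are hypothetical parameters, and a real DIBR pipeline would additionally handle occlusion ordering and hole filling.

        import numpy as np

        def dibr_warp(texture, depth, focal, baseline, alpha=0.5):
            """Warp a reference view a fraction `alpha` along the baseline (rectified setup).
            texture: (H, W) or (H, W, 3) array; depth: (H, W) array of positive depths."""
            h, w = depth.shape
            disparity = focal * baseline * alpha / depth             # horizontal shift in pixels
            out = np.zeros_like(texture)
            cols = np.arange(w)
            for row in range(h):
                target = np.clip(np.round(cols - disparity[row]).astype(int), 0, w - 1)
                out[row, target] = texture[row, cols]                # nearest-pixel forward warp; holes stay empty
            return out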

    Towards Understanding Hierarchical Learning: Benefits of Neural Representations

    Deep neural networks can empirically perform efficient hierarchical learning, in which the layers learn useful representations of the data. However, how they make use of the intermediate representations is not explained by recent theories that relate them to "shallow learners" such as kernels. In this work, we demonstrate that intermediate neural representations add more flexibility to neural networks and can be advantageous over raw inputs. We consider a fixed, randomly initialized neural network as a representation function fed into another trainable network. When the trainable network is the quadratic Taylor model of a wide two-layer network, we show that the neural representation achieves an improved sample complexity compared with the raw input: for learning a low-rank degree-$p$ polynomial ($p \geq 4$) in $d$ dimensions, the neural representation requires only $\tilde{O}(d^{\lceil p/2 \rceil})$ samples, while the best-known sample complexity upper bound for the raw input is $\tilde{O}(d^{p-1})$. We contrast our result with a lower bound showing that neural representations do not improve over the raw input (in the infinite-width limit) when the trainable network is instead a neural tangent kernel. Our results characterize when neural representations are beneficial, and may provide a new perspective on why depth is important in deep learning. Comment: 41 pages, published in NeurIPS 202
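
    To make the setup concrete, the following toy comparison (with invented dimensions, a simple degree-4 monomial target, and closed-form ridge regression standing in for the paper's trainable quadratic Taylor model) trains the same head once on the raw input and once on a fixed, randomly initialized ReLU representation:

        import numpy as np

        rng = np.random.default_rng(0)
        d, m, n = 20, 512, 2000                        # input dim, random-feature width, sample count (placeholders)

        # Fixed, randomly initialized one-hidden-layer network used purely as a representation.
        W = rng.normal(size=(m, d)) / np.sqrt(d)
        phi = lambda Z: np.maximum(Z @ W.T, 0.0)       # representation function, never trained

        target = lambda Z: Z[:, 0] * Z[:, 1] * Z[:, 2] * Z[:, 3]   # low-rank degree-4 polynomial (illustrative)
        X, X_test = rng.normal(size=(n, d)), rng.normal(size=(500, d))
        y, y_test = target(X), target(X_test)

        def ridge_fit_predict(F_train, y_train, F_test, lam=1e-2):
            # Closed-form ridge regression: the trainable "head" on top of a given representation.
            A = F_train.T @ F_train + lam * np.eye(F_train.shape[1])
            return F_test @ np.linalg.solve(A, F_train.T @ y_train)

        for name, feats in [("raw input", lambda Z: Z), ("neural representation", phi)]:
            pred = ridge_fit_predict(feats(X), y, feats(X_test))
            print(name, "test MSE:", np.mean((pred - y_test) ** 2))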

    An Object-Oriented Framework for Explicit-State Model Checking

    This paper presents a conceptual architecture for an object-oriented framework to support the development of formal verification tools (i.e. model checkers). The objective of the architecture is to support the reuse of algorithms and to encourage a modular design of tools. The conceptual framework is accompanied by a C++ implementation which provides reusable algorithms for the simulation and verification of explicit-state models, as well as a model representation for simple models based on guard-based process descriptions. The framework has been successfully used to develop a model checker for a subset of PROMELA.
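
    The paper's reusable algorithms are implemented in C++, but the core idea of explicit-state exploration over guard-based process descriptions can be sketched in a few lines. The Transition class, the breadth-first search, and the toy counter property below are illustrative inventions, not the framework's API.

        from collections import deque

        # A guarded transition: `action` may fire on a state whenever `guard` holds.
        class Transition:
            def __init__(self, guard, action):
                self.guard, self.action = guard, action

        def explore(initial, transitions, is_error):
            """Breadth-first explicit-state search; returns a reachable error state, or None."""
            seen = {initial}
            frontier = deque([initial])
            while frontier:
                state = frontier.popleft()
                if is_error(state):
                    return state
                for t in transitions:
                    if t.guard(state):
                        succ = t.action(state)
                        if succ not in seen:
                            seen.add(succ)
                            frontier.append(succ)
            return None

        # Example: a counter that must never exceed 3.
        transitions = [Transition(lambda s: s < 5, lambda s: s + 1)]
        print(explore(0, transitions, is_error=lambda s: s > 3))   # -> 4 (property violated)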