
    Divide and Fuse: A Re-ranking Approach for Person Re-identification

    As re-ranking is a necessary procedure for boosting person re-identification (re-ID) performance on large-scale datasets, the diversity of features becomes crucial to person re-ID, both for designing pedestrian descriptors and for re-ranking based on feature fusion. However, in many circumstances, only one type of pedestrian feature is available. In this paper, we propose a "Divide and Fuse" re-ranking framework for person re-ID. It exploits the diversity within different parts of a high-dimensional feature vector for fusion-based re-ranking when no other features are accessible. Specifically, given an image, the extracted feature is divided into sub-features. The contextual information of each sub-feature is then iteratively encoded into a new feature. Finally, the new features from the same image are fused into one vector for re-ranking. Experimental results on two person re-ID benchmarks demonstrate the effectiveness of the proposed framework. In particular, our method outperforms the state-of-the-art on the Market-1501 dataset. Comment: Accepted by BMVC201
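    As a rough Python sketch of the idea described above: split each feature vector into equal sub-features, re-encode every sub-feature from its neighbourhood in the gallery, and fuse the results for re-ranking. The nearest-neighbour averaging stands in for the paper's iterative contextual encoding, and the function names, the value of k and the equal-split assumption are all illustrative rather than the authors' exact procedure.

    import numpy as np

    def encode_context(sub_feats, k=5):
        # For each row, average its k nearest rows (itself included) --
        # a simple stand-in for the paper's iterative contextual encoding.
        d = np.linalg.norm(sub_feats[:, None, :] - sub_feats[None, :, :], axis=-1)
        nn = np.argsort(d, axis=1)[:, :k]
        return sub_feats[nn].mean(axis=1)

    def divide_and_fuse_rerank(query, gallery, num_parts=4, k=5):
        # query: (D,), gallery: (N, D); D must be divisible by num_parts.
        # Returns gallery indices ranked by distance to the query after
        # divide-and-fuse encoding.
        feats = np.vstack([query[None, :], gallery])        # query first
        parts = feats.reshape(len(feats), num_parts, -1)    # divide into sub-features
        encoded = [encode_context(parts[:, p, :], k) for p in range(num_parts)]
        fused = np.concatenate(encoded, axis=1)             # fuse new features
        dists = np.linalg.norm(fused[1:] - fused[0], axis=1)
        return np.argsort(dists)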

    Deep View-Sensitive Pedestrian Attribute Inference in an end-to-end Model

    Pedestrian attribute inference is a demanding problem in visual surveillance that can facilitate person retrieval, search and indexing. To exploit semantic relations between attributes, recent research treats it as a multi-label image classification task. The visual cues hinting at attributes can be strongly localized, and inference of person attributes such as hair, backpack, shorts, etc. is highly dependent on the acquired view of the pedestrian. In this paper we account for this dependence in an end-to-end learning framework and show that view-sensitive attribute inference learns better attribute predictions. Our proposed model jointly predicts the coarse pose (view) of the pedestrian and learns specialized view-specific multi-label attribute predictions. In an extensive evaluation on three challenging datasets (PETA, RAP and WIDER), we show that our end-to-end view-aware attribute prediction model provides competitive performance and improves on the published state-of-the-art on these datasets. Comment: accepted BMVC 201
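    A minimal PyTorch sketch of the joint view-and-attribute design, assuming a shared feature extractor, a coarse-view head, and one attribute head per view whose logits are mixed by the predicted view posterior; the backbone, layer sizes and mixing rule are assumptions for illustration, not the paper's architecture.

    import torch
    import torch.nn as nn

    class ViewSensitiveAttributeNet(nn.Module):
        def __init__(self, in_dim=3 * 128 * 64, feat_dim=512, num_views=4, num_attrs=35):
            super().__init__()
            # Stand-in feature extractor; a CNN backbone would be used in practice.
            self.backbone = nn.Sequential(nn.Flatten(), nn.Linear(in_dim, feat_dim), nn.ReLU())
            self.view_head = nn.Linear(feat_dim, num_views)   # coarse pose (view)
            self.attr_heads = nn.ModuleList(
                [nn.Linear(feat_dim, num_attrs) for _ in range(num_views)])

        def forward(self, x):
            f = self.backbone(x)
            view_logits = self.view_head(f)                                     # (B, V)
            view_prob = view_logits.softmax(dim=1)
            attr_logits = torch.stack([h(f) for h in self.attr_heads], dim=1)   # (B, V, A)
            # Mix the view-specific predictions by the predicted view posterior.
            attrs = (view_prob.unsqueeze(-1) * attr_logits).sum(dim=1)          # (B, A)
            return view_logits, attrs

    # Training would pair a cross-entropy loss on view_logits with a
    # multi-label BCE-with-logits loss on the mixed attribute scores.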

    COUPLING CHIRAL BOSONS TO GRAVITY

    The chiral boson actions of Floreanini and Jackiw (FJ), and of McClain, Wu and Yu (MWY), have recently been shown to be different representations of the same chiral boson theory. MWY displays manifest covariance and also a (gauge) symmetry that is hidden on the FJ side, which, on the other hand, displays the physical spectrum in a simple manner. We make use of the covariance of the MWY representation of the chiral boson to couple it to background gravity, showing explicitly the equivalence with the previous results for the FJ representation. Comment: 8 pages, LaTeX, no figures
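    For orientation only, the flat-space Floreanini-Jackiw Lagrangian referred to above has the standard first-order form below (up to normalization conventions); this is textbook background rather than a formula taken from the paper, and its equation of motion enforces the chirality condition up to a function of time alone.

    \mathcal{L}_{\mathrm{FJ}} = \partial_x\phi \,\partial_t\phi - (\partial_x\phi)^2,
    \qquad
    \partial_x\bigl(\partial_t\phi - \partial_x\phi\bigr) = 0 .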

    Audrey Yu, Oboe Performance

    Fantasia No. 7 / G.P. Telemann; Sonata for Oboe and Piano / F. Poulenc; Oboe Concerto in C Major, RV 447 / A. Vivaldi; Pas de deux / A. Y

    Venereau-type polynomials as potential counterexamples

    We study some properties of the Venereau polynomials b_m = y + x^m(xz + y(yu + z^2)), a sequence of proposed counterexamples to the Abhyankar-Sathaye embedding conjecture and the Dolgachev-Weisfeiler conjecture. It is well known that these are hyperplanes and residual coordinates, and that for m at least 3 they are C[x]-coordinates. For m = 1, 2, it is only known that they are 1-stable C[x]-coordinates. We show that b_2 is in fact a C[x]-coordinate. We introduce the notion of Venereau-type polynomials and show that these are all hyperplanes and residual coordinates. We show that some of these Venereau-type polynomials are in fact C[x]-coordinates; the rest remain potential counterexamples to the embedding and other conjectures. For those that we show to be coordinates, we also show that any automorphism with one of them as a component is stably tame. The remainder are stably tame, 1-stable C[x]-coordinates. Comment: 15 pages; to appear in J. Pure and Applied Algebra
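    Written out in display form, the family under study (exactly as given in the abstract, as polynomials in x, y, z, u) is:

    b_m = y + x^{m}\bigl(xz + y\,(yu + z^{2})\bigr) \in \mathbb{C}[x, y, z, u], \qquad m \ge 1 .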

    Multidimensional C^0 transversality

    In 1994, Sakai introduced the property of C^0 transversality for two smooth curves in a two-dimensional manifold. This property was related to various shadowing properties of dynamical systems. In this short note, we generalize this property to arbitrary continuous mappings of topological spaces into topological manifolds. We prove a sufficient condition for the C^0 transversality of two submanifolds of a topological manifold and a necessary condition of C^0 transversality for mappings of metric spaces into \mathbb{R}^n. Comment: 9 pages

    Sparse 3D convolutional neural networks

    We have implemented a convolutional neural network designed for processing sparse three-dimensional input data. The world we live in is three-dimensional, so there are many potential applications, including 3D object recognition and analysis of space-time objects. In the quest for efficiency, we experiment with CNNs on the 2D triangular lattice and the 3D tetrahedral lattice. Comment: BMVC 201
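    A toy Python sketch of the sparse-convolution idea: store only the active voxels and evaluate the filter only where input sites exist, so cost scales with the number of active sites rather than the full grid. It uses a plain cubic lattice and is not the authors' implementation (the paper experiments with triangular and tetrahedral lattices); all names here are illustrative.

    import numpy as np
    from collections import defaultdict

    def sparse_conv3d(active, kernel):
        # active: {(x, y, z): input feature vector} for active voxels only.
        # kernel: {(dx, dy, dz): (out_ch, in_ch) weight matrix}.
        # Returns output features only at sites receiving a contribution.
        out = defaultdict(lambda: None)
        for (x, y, z), feat in active.items():
            for (dx, dy, dz), w in kernel.items():
                site = (x + dx, y + dy, z + dz)
                contrib = w @ feat
                out[site] = contrib if out[site] is None else out[site] + contrib
        return dict(out)

    # Example: one active voxel and a 3x3x3 kernel with 8 output channels.
    rng = np.random.default_rng(0)
    active = {(10, 4, 7): rng.normal(size=3)}
    kernel = {(dx, dy, dz): rng.normal(size=(8, 3))
              for dx in (-1, 0, 1) for dy in (-1, 0, 1) for dz in (-1, 0, 1)}
    print(len(sparse_conv3d(active, kernel)))   # 27 output sites touched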

    A. Yu. Krymskyi on the history of the consonantal system of the Ukrainian language

    A. Yu. Krymskyi's views on the historical development of the Ukrainian consonantal system are examined. The scholar's claims are analysed in the broad context of linguistics from the 1870s to the 1930s, and the paper identifies which of his theses have retained their relevance for modern linguistics.

    Unsupervised learning of generative topic saliency for person re-identification

    © 2014. The copyright of this document resides with its authors. It may be distributed unchanged freely in print or electronic forms. Existing approaches to person re-identification (re-id) are dominated by supervised learning methods that focus on learning optimal similarity distance metrics. However, supervised models require a large number of manually labelled pairs of person images across every pair of camera views, which limits their ability to scale to large camera networks. To overcome this problem, this paper proposes a novel unsupervised re-id modelling approach that explores generative probabilistic topic modelling. Given abundant unlabelled data, our topic model learns simultaneously to (1) discover localised person foreground appearance saliency (salient image patches) that is more informative for re-id matching, and (2) remove busy background clutter surrounding a person. Extensive experiments demonstrate that the proposed model outperforms existing unsupervised learning re-id methods with significantly simplified model complexity, while retaining re-id accuracy comparable to state-of-the-art supervised methods without any need for pair-wise labelled training data.
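    As a conceptual sketch only, the snippet below fits an off-the-shelf LDA topic model to bag-of-visual-words histograms of local image patches, a rough stand-in for the paper's custom generative saliency model; the patch descriptors, vocabulary size, topic count and any foreground/background split heuristic are assumptions for illustration.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.decomposition import LatentDirichletAllocation

    def patch_histograms(patch_descriptors_per_image, vocab_size=64):
        # Quantise local patch descriptors into visual words and build
        # one word-count histogram per unlabelled image.
        all_desc = np.vstack(patch_descriptors_per_image)
        vocab = KMeans(n_clusters=vocab_size, n_init=4).fit(all_desc)
        hists = [np.bincount(vocab.predict(d), minlength=vocab_size)
                 for d in patch_descriptors_per_image]
        return np.array(hists)

    def fit_topic_model(hists, num_topics=8):
        # Learn patch "topics" from unlabelled data; topics dominated by
        # repetitive background words can be discarded, and the remaining
        # (salient) topics kept for re-id matching.
        lda = LatentDirichletAllocation(n_components=num_topics, random_state=0)
        doc_topics = lda.fit_transform(hists)   # per-image topic mixtures
        return lda, doc_topics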